Updates from: 07/24/2023 01:13:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Inbound Provisioning Api Configure App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-configure-app.md
Depending on the app you selected, use one of the following sections to complete the configuration.
## Start accepting provisioning requests

1. Open the provisioning application's **Provisioning** -> **Overview** page.
+ :::image type="content" source="media/inbound-provisioning-api-configure-app/provisioning-api-endpoint.png" alt-text="Screenshot of Provisioning API endpoint." lightbox="media/inbound-provisioning-api-configure-app/provisioning-api-endpoint.png":::
1. On this page, you can take the following actions:
   - **Start provisioning** control button – Click on this button to place the provisioning job in **listen mode** to process inbound bulk upload request payloads.
   - **Stop provisioning** control button – Use this option to pause/stop the provisioning job.
   - **Restart provisioning** control button – Use this option to purge any existing request payloads pending processing and start a new provisioning cycle.
   - **Edit provisioning** control button – Use this option to edit the job settings, attribute mappings and to customize the provisioning schema.
-  - **Provision on demand** control button – This feature is not yet enabled in private preview.
+  - **Provision on demand** control button – This feature is not supported for API-driven inbound provisioning.
   - **Provisioning API Endpoint** URL text – Copy the HTTPS URL value shown and save it in a Notepad or OneNote for use later with the API client.
1. Expand the **Statistics to date** > **View technical information** panel and copy the **Provisioning API Endpoint** URL. Share this URL with your API developer after [granting access permission](inbound-provisioning-api-grant-access.md) to invoke the API.
active-directory Inbound Provisioning Api Curl Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-curl-tutorial.md
## Prerequisites

* You have configured an [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md).
-* You have [configured a service principal and it has access](inbound-provisioning-api-grant-access.md) to the inbound provisioning API.
+* You have [configured a service principal and it has access](inbound-provisioning-api-grant-access.md) to the inbound provisioning API. Make note of the `ClientId` and `ClientSecret` of your service principal app for use in this tutorial.
-## Upload user data to the inbound provisioning API using cURL
+## Upload user data to the inbound provisioning API
1. Retrieve the **client_id** and **client_secret** of the service principal that has access to the inbound provisioning API.
1. Use OAuth **client_credentials** grant flow to get an access token. Replace the variables `[yourClientId]`, `[yourClientSecret]` and `[yourTenantId]` with values applicable to your setup and run the following cURL command. Copy the access token value generated.
   ```
   curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "client_id=[yourClientId]&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default&client_secret=[yourClientSecret]&grant_type=client_credentials" "https://login.microsoftonline.com/[yourTenantId]/oauth2/v2.0/token"
   ```
-1. Copy the bulk request payload from the example [Bulk upload using SCIM core user and enterprise user schema](/graph/api/synchronization-synchronizationjob-post-bulkupload#example-1-bulk-upload-using-scim-core-user-and-enterprise-user-schema) and save the contents in a file called scim-bulk-upload-users.json.
+1. Copy the [bulk request with SCIM Enterprise User Schema](#bulk-request-with-scim-enterprise-user-schema) and save the contents in a file called scim-bulk-upload-users.json.
1. Replace the variable `[InboundProvisioningAPIEndpoint]` with the provisioning API endpoint associated with your provisioning app. Use the `[AccessToken]` value from the previous step and run the following curl command to upload the bulk request to the provisioning API endpoint.
   ```
   curl -v "[InboundProvisioningAPIEndpoint]" -d @scim-bulk-upload-users.json -H "Authorization: Bearer [AccessToken]" -H "Content-Type: application/scim+json"
   ```
   * The **Provision User** step calls out the final processing step and changes applied to the user account.
   * Use the **Modified properties** tab to view attribute updates.
+## Appendix
+
+### Bulk request with SCIM Enterprise User Schema
+The bulk request shown below uses the SCIM standard Core User and Enterprise User schema.
+
+**Request body**
+# [HTTP](#tab/http)
+<!-- {
+ "blockType": "request",
+ "name": "Quick_start_with_curl"
+}-->
+```http
+{
+ "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
+ "Operations": [
+ {
+ "method": "POST",
+ "bulkId": "897401c2-2de4-4b87-a97f-c02de3bcfc61",
+ "path": "/Users",
+ "data": {
+ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User",
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"],
+ "externalId": "701984",
+ "userName": "bjensen@example.com",
+ "name": {
+ "formatted": "Ms. Barbara J Jensen, III",
+ "familyName": "Jensen",
+ "givenName": "Barbara",
+ "middleName": "Jane",
+ "honorificPrefix": "Ms.",
+ "honorificSuffix": "III"
+ },
+ "displayName": "Babs Jensen",
+ "nickName": "Babs",
+ "emails": [
+ {
+ "value": "bjensen@example.com",
+ "type": "work",
+ "primary": true
+ }
+ ],
+ "addresses": [
+ {
+ "type": "work",
+ "streetAddress": "100 Universal City Plaza",
+ "locality": "Hollywood",
+ "region": "CA",
+ "postalCode": "91608",
+ "country": "USA",
+ "formatted": "100 Universal City Plaza\nHollywood, CA 91608 USA",
+ "primary": true
+ }
+ ],
+ "phoneNumbers": [
+ {
+ "value": "555-555-5555",
+ "type": "work"
+ }
+ ],
+ "userType": "Employee",
+ "title": "Tour Guide",
+ "preferredLanguage": "en-US",
+ "locale": "en-US",
+ "timezone": "America/Los_Angeles",
+ "active":true,
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
+ "employeeNumber": "701984",
+ "costCenter": "4130",
+ "organization": "Universal Studios",
+ "division": "Theme Park",
+ "department": "Tour Operations",
+ "manager": {
+ "value": "89607",
+ "displayName": "John Smith"
+ }
+ }
+ }
+ },
+ {
+ "method": "POST",
+ "bulkId": "897401c2-2de4-4b87-a97f-c02de3bcfc61",
+ "path": "/Users",
+ "data": {
+ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User",
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"],
+ "externalId": "701985",
+ "userName": "Kjensen@example.com",
+ "name": {
+ "formatted": "Ms. Kathy J Jensen, III",
+ "familyName": "Jensen",
+ "givenName": "Kathy",
+ "middleName": "Jane",
+ "honorificPrefix": "Ms.",
+ "honorificSuffix": "III"
+ },
+ "displayName": "Kathy Jensen",
+ "nickName": "Kathy",
+ "emails": [
+ {
+ "value": "kjensen@example.com",
+ "type": "work",
+ "primary": true
+ }
+ ],
+ "addresses": [
+ {
+ "type": "work",
+ "streetAddress": "100 Oracle City Plaza",
+ "locality": "Hollywood",
+ "region": "CA",
+ "postalCode": "91618",
+ "country": "USA",
+ "formatted": "100 Oracle City Plaza\nHollywood, CA 91618 USA",
+ "primary": true
+ }
+ ],
+ "phoneNumbers": [
+ {
+ "value": "555-555-5545",
+ "type": "work"
+ }
+ ],
+ "userType": "Employee",
+ "title": "Tour Lead",
+ "preferredLanguage": "en-US",
+ "locale": "en-US",
+ "timezone": "America/Los_Angeles",
+ "active":true,
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
+ "employeeNumber": "701985",
+ "costCenter": "4130",
+ "organization": "Universal Studios",
+ "division": "Theme Park",
+ "department": "Tour Operations",
+ "manager": {
+ "value": "701984",
+ "displayName": "Barbara Jensen"
+ }
+ }
+ }
+ }
+],
+ "failOnErrors": null
+}
+```
## Next steps

- [Troubleshoot issues with the inbound provisioning API](inbound-provisioning-api-issues.md)
- [API-driven inbound provisioning concepts](inbound-provisioning-api-concepts.md)
active-directory Inbound Provisioning Api Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-faqs.md
Yes, the provisioning API supports on-premises AD domains as a target.
## How do we get the /bulkUpload API endpoint for our provisioning app?
-The /bulkUpload API is available only for apps of the type: "API-driven inbound provisioning to Azure AD" and "API-driven inbound provisioning to on-premises Active Directory". You can retrieve the unique API endpoint for each provisioning app from the Provisioning blade home page. In **Statistics to date** > **View technical information**,copy the **Provisioning API Endpoint** URL. It has the format:
+The /bulkUpload API is available only for apps of the type: "API-driven inbound provisioning to Azure AD" and "API-driven inbound provisioning to on-premises Active Directory". You can retrieve the unique API endpoint for each provisioning app from the Provisioning blade home page. In **Statistics to date** > **View technical information**, copy the **Provisioning API Endpoint** URL.
+ :::image type="content" source="media/inbound-provisioning-api-configure-app/provisioning-api-endpoint.png" alt-text="Screenshot of Provisioning API endpoint." lightbox="media/inbound-provisioning-api-configure-app/provisioning-api-endpoint.png":::
+
+It has the format:
```http
https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/bulkUpload
```
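If you prefer to look up these identifiers programmatically rather than from the portal, a rough sketch with the Microsoft Graph PowerShell SDK follows. The display name filter, the assumption that the first returned job is the API-driven provisioning job, and the scopes are all assumptions to adapt to your tenant.

```powershell
# Sketch: resolve the provisioning app's service principal and synchronization job,
# then compose the /bulkUpload URL. Display name and scopes are assumptions.
Connect-MgGraph -Scopes "Application.Read.All", "Synchronization.Read.All"

$sp = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals?`$filter=displayName eq '<your provisioning app name>'"
$spId = $sp.value[0].id

$jobs = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$spId/synchronization/jobs"
$jobId = $jobs.value[0].id

"https://graph.microsoft.com/beta/servicePrincipals/$spId/synchronization/jobs/$jobId/bulkUpload"
```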
If the attribute is set to **true**, the default mapping rule enables the account.
## Can we soft-delete a user in Azure AD using /bulkUpload provisioning API?
-No. Currently the provisioning service only supports enabling or disabling an account in Azure AD/on-premises AD.
+Yes, you can soft-delete a user by using the **DELETE** method in the bulk request operation. Refer to the [bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API spec doc for an example request.
## How can we prevent accidental disabling/deletion of users?
-You can enable accidental deletion prevention. See [Enable accidental deletions prevention in the Azure AD provisioning service](accidental-deletions.md)
+To prevent and recover from accidental deletions, we recommend [configuring an accidental deletion threshold](accidental-deletions.md) in the provisioning app and [enabling the on-premises Active Directory recycle bin](../hybrid/connect/how-to-connect-sync-recycle-bin.md). In your provisioning app's **Attribute Mapping** blade, under **Target object actions**, disable the **Delete** operation.
+
+**Recovering deleted accounts**
+* If the target directory for the operation is Azure AD, then the matched user is soft-deleted. The user can be seen on the Microsoft Azure portal **Deleted users** page for the next 30 days and can be restored during that time.
+* If the target directory for the operation is on-premises Active Directory, then the matched user is hard-deleted. If the **Active Directory Recycle Bin** is enabled, you can restore the deleted on-premises AD user object.
## Do we need to send all users from the HR system in every request?
active-directory Inbound Provisioning Api Grant Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-grant-access.md
This configuration registers an app in Azure AD that represents the external API
1. Search for and select the permissions **AuditLog.Read.All** and **SynchronizationData-User.Upload**.
1. Click on **Grant admin consent** on the next screen to complete the permission assignment. Click **Yes** on the confirmation dialog. Your app should have the following permission sets.
   [![Screenshot of app permissions.](media/inbound-provisioning-api-grant-access/api-client-permissions.png)](media/inbound-provisioning-api-grant-access/api-client-permissions.png#lightbox)
-1. You're now ready to use the service principal with your API client.
+1. You're now ready to use the service principal with your API client.
+1. For production workloads, we recommend using [client certificate-based authentication](../develop/howto-authenticate-service-principal-powershell.md) with the service principal or managed identities.
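As an illustration only, a minimal sketch of calling the bulk upload endpoint with certificate-based authentication through the Microsoft Graph PowerShell SDK could look like the following; the tenant ID, client ID, certificate thumbprint, endpoint IDs, and payload file name are all placeholders.

```powershell
# Sketch: authenticate as the service principal with a client certificate (no client secret)
# and post a SCIM bulk request to the provisioning API endpoint copied from the app overview page.
Connect-MgGraph -TenantId "<tenant-id>" -ClientId "<client-id>" -CertificateThumbprint "<cert-thumbprint>"

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/<servicePrincipalId>/synchronization/jobs/<jobId>/bulkUpload" `
    -ContentType "application/scim+json" `
    -Body (Get-Content -Raw ".\scim-bulk-upload-users.json")
```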
## Configure a managed identity
active-directory Inbound Provisioning Api Graph Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-graph-explorer.md
+
+ Title: Quickstart API-driven inbound provisioning with Graph Explorer
+description: Learn how to get started quickly with API-driven inbound provisioning using Graph Explorer
+Last updated: 07/18/2023
+# Quickstart API-driven inbound provisioning with Graph Explorer (Public preview)
+
+This tutorial describes how you can quickly test [API-driven inbound provisioning](inbound-provisioning-api-concepts.md) with Microsoft Graph Explorer.
+
+## Prerequisites
+
+* You have configured an [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md).
+
+> [!NOTE]
+> This provisioning API is primarily meant for use within an application or service. Tenant admins can either configure a service principal or managed identity to grant permission to perform the upload. There is no separate user-assignable Azure AD built-in directory role for this API. Outside of applications that have acquired `SynchronizationData-User.Upload` permission with admin consent, only admin users with Global Administrator role can invoke the API. This tutorial shows how you can test the API with a global administrator role in your test setup.
+
+## Upload user data to the inbound provisioning API
+
+1. Open a new browser tab or browser window.
+1. Launch the URL https://aka.ms/ge to access Microsoft Graph Explorer.
+1. Click on the user profile icon to sign in.
+
+ [![Image showing the user profile icon.](media/inbound-provisioning-api-graph-explorer/provisioning-user-profile-icon.png)](media/inbound-provisioning-api-graph-explorer/provisioning-user-profile-icon.png#lightbox)
+1. Complete the login process with a user account that has *Global Administrator* role.
+1. Upon successful login, the Tenant information shows your tenant name.
+
+ [![Screenshot of Tenant name.](media/inbound-provisioning-api-graph-explorer/provisioning-tenant-name.png)](media/inbound-provisioning-api-graph-explorer/provisioning-tenant-name.png#lightbox)
+
+ You're now ready to invoke the API.
+1. In the API request panel, set the HTTP request type to **POST**.
+1. Copy and paste the provisioning API endpoint retrieved from the provisioning app overview page.
+1. Under the Request headers panel, add a new key value pair of **Content-Type = application/scim+json**.
+ [![Screenshot of request header panel.](media/inbound-provisioning-api-graph-explorer/provisioning-request-header-panel.png)](media/inbound-provisioning-api-graph-explorer/provisioning-request-header-panel.png#lightbox)
+1. Under the **Request body** panel, copy-paste the [bulk request with SCIM Enterprise User Schema](#bulk-request-with-scim-enterprise-user-schema)
+1. Click on the **Run query** button to send the request to the provisioning API endpoint.
+1. If the request is sent successfully, you'll get a `202 Accepted` response from the API endpoint.
+1. Open the **Response headers** panel and copy the URL value of the location attribute. This points to the provisioning logs API endpoint that you can query to check the provisioning status of users present in the bulk request.
+
+## Verify processing of bulk request payload
+
+You can verify the processing either from the Microsoft Entra portal or using Graph Explorer.
+
+### Verify processing from Microsoft Entra portal
+1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
+1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
+1. Under all applications, use the search filter text box to find and open your API-driven provisioning application.
+1. Open the Provisioning blade. The landing page displays the status of the last run.
+1. Click on **View provisioning logs** to open the provisioning logs blade. Alternatively, you can click on the menu option **Monitor -> Provisioning logs**.
+
+ [![Screenshot of provisioning logs in menu.](media/inbound-provisioning-api-curl-tutorial/access-provisioning-logs.png)](media/inbound-provisioning-api-curl-tutorial/access-provisioning-logs.png#lightbox)
+1. Click on any record in the provisioning logs to view additional processing details.
+1. The provisioning log details screen displays all the steps executed for a specific user.
+ [![Screenshot of provisioning logs details.](media/inbound-provisioning-api-curl-tutorial/provisioning-log-details.png)](media/inbound-provisioning-api-curl-tutorial/provisioning-log-details.png#lightbox)
+ * Under the **Import from API** step, see details of user data extracted from the bulk request.
+ * The **Match user** step shows details of any user match based on the matching identifier. If a user match happens, then the provisioning service performs an update operation. If there is no user match, then the provisioning service performs a create operation.
+    * The **Determine if User is in scope** step shows details of scoping filter evaluation. By default, all users are processed. If you have set a scoping filter (for example, process only users belonging to the Sales department), the evaluation details of the scoping filter display in this step.
+ * The **Provision User** step calls out the final processing step and changes applied to the user account.
+ * Use the **Modified properties** tab to view attribute updates.
+
+### Verify processing using provisioning logs API in Graph Explorer
+
+You can inspect the processing using the provisioning logs API URL returned as part of the location response header in the provisioning API call.
+
+1. In Graph Explorer, copy-paste into the **Request URL** text box the location URL returned by the provisioning API endpoint, or construct it using the format: `https://graph.microsoft.com/beta/auditLogs/provisioning/?$filter=jobid eq '<jobId>'` where you can retrieve the `jobId` from the provisioning app overview page.
+1. Use the method **GET** and click **Run query** to retrieve the provisioning logs. By default, the response returned contains all log records.
+1. You can set more filters to only retrieve data after a certain time frame or with a specific status value.
+   `https://graph.microsoft.com/beta/auditLogs/provisioning/?$filter=jobid eq '<jobId>' and statusInfo/status eq 'failure' and activityDateTime ge 2022-10-10T09:47:34Z`
+   You can also check the status of a user by the `externalId` value used in your source system as the source anchor / joining property.
+ `https://graph.microsoft.com/beta/auditLogs/provisioning/?$filter=jobid eq '<jobId>' and sourceIdentity/id eq '701984'`
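+If you later want to run the same checks from a script instead of Graph Explorer, the identical calls work with the Microsoft Graph PowerShell SDK. The sketch below is an assumption-laden example: it presumes the SDK is installed, that your signed-in account can consent to `AuditLog.Read.All`, and that you substitute your own `jobId`.
+
+```powershell
+# Sketch: query the provisioning logs for a specific provisioning job.
+Connect-MgGraph -Scopes "AuditLog.Read.All"
+$uri  = "https://graph.microsoft.com/beta/auditLogs/provisioning/?`$filter=jobid eq '<jobId>'"
+$logs = Invoke-MgGraphRequest -Method GET -Uri $uri
+
+# Summarize each log record by time and status.
+$logs.value | ForEach-Object {
+    "{0}  {1}" -f $_.activityDateTime, $_.statusInfo.status
+}
+```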
+
+## Appendix
+
+### Bulk request with SCIM Enterprise User Schema
+The bulk request shown below uses the SCIM standard Core User and Enterprise User schema.
+
+**Request body**
+# [HTTP](#tab/http)
+<!-- {
+ "blockType": "request",
+ "name": "Quick_start_with_Graph_Explorer"
+}-->
+```http
+{
+ "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
+ "Operations": [
+ {
+ "method": "POST",
+ "bulkId": "897401c2-2de4-4b87-a97f-c02de3bcfc61",
+ "path": "/Users",
+ "data": {
+ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User",
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"],
+ "externalId": "701984",
+ "userName": "bjensen@example.com",
+ "name": {
+ "formatted": "Ms. Barbara J Jensen, III",
+ "familyName": "Jensen",
+ "givenName": "Barbara",
+ "middleName": "Jane",
+ "honorificPrefix": "Ms.",
+ "honorificSuffix": "III"
+ },
+ "displayName": "Babs Jensen",
+ "nickName": "Babs",
+ "emails": [
+ {
+ "value": "bjensen@example.com",
+ "type": "work",
+ "primary": true
+ }
+ ],
+ "addresses": [
+ {
+ "type": "work",
+ "streetAddress": "100 Universal City Plaza",
+ "locality": "Hollywood",
+ "region": "CA",
+ "postalCode": "91608",
+ "country": "USA",
+ "formatted": "100 Universal City Plaza\nHollywood, CA 91608 USA",
+ "primary": true
+ }
+ ],
+ "phoneNumbers": [
+ {
+ "value": "555-555-5555",
+ "type": "work"
+ }
+ ],
+ "userType": "Employee",
+ "title": "Tour Guide",
+ "preferredLanguage": "en-US",
+ "locale": "en-US",
+ "timezone": "America/Los_Angeles",
+ "active":true,
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
+ "employeeNumber": "701984",
+ "costCenter": "4130",
+ "organization": "Universal Studios",
+ "division": "Theme Park",
+ "department": "Tour Operations",
+ "manager": {
+ "value": "89607",
+ "displayName": "John Smith"
+ }
+ }
+ }
+ },
+ {
+ "method": "POST",
+ "bulkId": "897401c2-2de4-4b87-a97f-c02de3bcfc61",
+ "path": "/Users",
+ "data": {
+ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User",
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"],
+ "externalId": "701985",
+ "userName": "Kjensen@example.com",
+ "name": {
+ "formatted": "Ms. Kathy J Jensen, III",
+ "familyName": "Jensen",
+ "givenName": "Kathy",
+ "middleName": "Jane",
+ "honorificPrefix": "Ms.",
+ "honorificSuffix": "III"
+ },
+ "displayName": "Kathy Jensen",
+ "nickName": "Kathy",
+ "emails": [
+ {
+ "value": "kjensen@example.com",
+ "type": "work",
+ "primary": true
+ }
+ ],
+ "addresses": [
+ {
+ "type": "work",
+ "streetAddress": "100 Oracle City Plaza",
+ "locality": "Hollywood",
+ "region": "CA",
+ "postalCode": "91618",
+ "country": "USA",
+ "formatted": "100 Oracle City Plaza\nHollywood, CA 91618 USA",
+ "primary": true
+ }
+ ],
+ "phoneNumbers": [
+ {
+ "value": "555-555-5545",
+ "type": "work"
+ }
+ ],
+ "userType": "Employee",
+ "title": "Tour Lead",
+ "preferredLanguage": "en-US",
+ "locale": "en-US",
+ "timezone": "America/Los_Angeles",
+ "active":true,
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
+ "employeeNumber": "701985",
+ "costCenter": "4130",
+ "organization": "Universal Studios",
+ "division": "Theme Park",
+ "department": "Tour Operations",
+ "manager": {
+ "value": "701984",
+ "displayName": "Barbara Jensen"
+ }
+ }
+ }
+ }
+],
+ "failOnErrors": null
+}
+```
+## Next steps
+- [Troubleshoot issues with the inbound provisioning API](inbound-provisioning-api-issues.md)
+- [API-driven inbound provisioning concepts](inbound-provisioning-api-concepts.md)
+- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
active-directory Inbound Provisioning Api Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-logic-apps.md
+
+ Title: API-driven inbound provisioning with Azure Logic Apps (Public preview)
+description: Learn how to implement API-driven inbound provisioning with Azure Logic Apps.
+Last updated: 07/18/2023
+# API-driven inbound provisioning with Azure Logic Apps (Public preview)
+
+This tutorial describes how to use an Azure Logic Apps workflow to implement Microsoft Entra ID [API-driven inbound provisioning](inbound-provisioning-api-concepts.md). Using the steps in this tutorial, you can convert a CSV file containing HR data into a bulk request payload and send it to the Microsoft Entra ID provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint.
+
+## Integration scenario
+
+This tutorial addresses the following integration scenario:
++
+* Your system of record generates periodic CSV file exports containing worker data, which is made available in an Azure file share.
+* You want to use an Azure Logic Apps workflow to automatically provision records from the CSV file to your target directory (on-premises Active Directory or Microsoft Entra ID).
+* The Azure Logic Apps workflow simply reads data from the CSV file and uploads it to the provisioning API endpoint. The API-driven inbound provisioning app configured in Microsoft Entra ID performs the task of applying your IT managed provisioning rules to create/update/enable/disable accounts in the target directory.
+
+This tutorial uses the Logic Apps deployment template published in the [Microsoft Entra ID inbound provisioning GitHub repository](https://github.com/AzureAD/entra-id-inbound-provisioning/tree/main/LogicApps/CSV2SCIMBulkUpload). It has logic for handling large CSV files and chunking the bulk request to send 50 records in each request.
+
+> [!NOTE]
+> The sample Azure Logic Apps workflow is provided "as-is" for implementation reference. If you have questions related to it or if you'd like to enhance it, please use the [GitHub project repository](https://github.com/AzureAD/entra-id-inbound-provisioning).
+
+## Step 1: Create an Azure Storage account to host the CSV file
+The steps documented in this section are optional. If you already have an existing storage account, or would like to read the CSV file from another source such as a SharePoint site or Blob storage, you can tweak the Logic App to use your connector of choice.
+
+1. Log in to your Azure portal as administrator.
+1. Search for "Storage accounts" and create a new storage account.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/storage-accounts.png" alt-text="Screenshot of creating new storage account." lightbox="media/inbound-provisioning-api-logic-apps/storage-accounts.png":::
+1. Assign a resource group and give it a name.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/assign-resource-group.png" alt-text="Screenshot of resource group assignment." lightbox="media/inbound-provisioning-api-logic-apps/assign-resource-group.png":::
+1. After the storage account is created, go to the resource.
+1. Click on "File share" menu option and create a new file share.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/create-new-file-share.png" alt-text="Screenshot of creating new file share." lightbox="media/inbound-provisioning-api-logic-apps/create-new-file-share.png":::
+1. Verify that the file share creation is successful.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/verify-file-share-creation.png" alt-text="Screenshot of file share created." lightbox="media/inbound-provisioning-api-logic-apps/verify-file-share-creation.png":::
+1. Upload a sample CSV file to the file share using the upload option.
+1. Here is a screenshot of the columns in the CSV file.
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/columns.png" alt-text="Screenshot of columns in Excel." lightbox="./media/inbound-provisioning-api-powershell/columns.png":::
+
+## Step 2: Configure Azure Function CSV2JSON converter
+
+1. In the browser associated with your Azure portal login, open the GitHub repository URL - https://github.com/joelbyford/CSVtoJSONcore.
+1. Click on the link "Deploy to Azure" to deploy this Azure Function to your Azure tenant.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/deploy-azure-function.png" alt-text="Screenshot of deploying Azure Function." lightbox="media/inbound-provisioning-api-logic-apps/deploy-azure-function.png":::
+1. Specify the resource group under which to deploy this Azure function.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/azure-function-resource-group.png" alt-text="Screenshot of configuring Azure Function resource group." lightbox="media/inbound-provisioning-api-logic-apps/azure-function-resource-group.png":::
+
+ If you get the error "[This region has quota of 0 instances](/answers/questions/751909/azure-function-app-region-has-quota-of-0-instances)", try selecting a different region.
+1. Ensure that the deployment of the Azure Function as an App Service is successful.
+1. Go to the resource group and open the WebApp configuration. Ensure it is in "Running" state. Copy the default domain name associated with the Web App.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/web-app-domain-name.png" alt-text="Screenshot of Azure Function Web App domain name." lightbox="media/inbound-provisioning-api-logic-apps/web-app-domain-name.png":::
+1. Open the Postman client to test whether the CSVtoJSON endpoint works as expected. Paste the domain name copied from the previous step. Use a Content-Type of "text/csv" and post a sample CSV file in the request body to the endpoint: `https://[your-domain-name]/csvtojson` (If you prefer the command line, a PowerShell alternative is sketched after this list.)
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/postman-call-to-azure-function.png" alt-text="Screenshot of Postman client calling the Azure Function." lightbox="media/inbound-provisioning-api-logic-apps/postman-call-to-azure-function.png":::
+1. If the Azure Function deployment is successful, then in the response you'll get a JSON version of the CSV file with status 200 OK.
+
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/azure-function-response.png" alt-text="Screenshot of Azure Function response." lightbox="media/inbound-provisioning-api-logic-apps/azure-function-response.png":::
+1. To allow Logic Apps to invoke this Azure Function, in the CORS settings for the web app, enter an asterisk (*) and save the configuration.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/azure-function-cors-setting.png" alt-text="Screenshot of Azure Function CORS setting." lightbox="media/inbound-provisioning-api-logic-apps/azure-function-cors-setting.png":::
+
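+If you prefer the command line over Postman for the endpoint test in the step list above, a minimal PowerShell sketch is shown below; the domain name and CSV file path are placeholders for your own values.
+
+```powershell
+# Sketch: post a sample CSV file to the deployed CSVtoJSON function and inspect the JSON response.
+$response = Invoke-RestMethod -Method Post `
+    -Uri "https://<your-domain-name>/csvtojson" `
+    -ContentType "text/csv" `
+    -InFile ".\sample-worker-data.csv"
+
+# Pretty-print the converted records.
+$response | ConvertTo-Json -Depth 5
+```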
+## Step 3: Configure API-driven inbound user provisioning
+
+* Configure [API-driven inbound user provisioning](inbound-provisioning-api-configure-app.md).
+
+## Step 4: Configure your Azure Logic Apps workflow
+
+1. Click on the button below to deploy the Azure Resource Manager template for the CSV2SCIMBulkUpload Logic Apps workflow.
+
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzureAD%2Fentra-id-inbound-provisioning%2Fmain%2FLogicApps%2FCSV2SCIMBulkUpload%2Fcsv2scimbulkupload-template.json)
+
+1. Under instance details, update the highlighted items, copy-pasting values from the previous steps.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/logic-apps-instance-details.png" alt-text="Screenshot of Azure Logic Apps instance details." lightbox="media/inbound-provisioning-api-logic-apps/logic-apps-instance-details.png":::
+1. For the `Azurefile_access Key` parameter, open your Azure file storage account and copy the access key present under "Security and Networking".
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/azure-file-access-keys.png" alt-text="Screenshot of Azure File access keys." lightbox="media/inbound-provisioning-api-logic-apps/azure-file-access-keys.png":::
+1. Click on "Review and Create" option to start the deployment.
+1. Once the deployment is complete, you'll see the following message.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/logic-apps-deployment-complete.png" alt-text="Screenshot of Azure Logic Apps deployment complete." lightbox="media/inbound-provisioning-api-logic-apps/logic-apps-deployment-complete.png":::
+
+## Step 5: Configure system assigned managed identity
+
+1. Visit the Settings -> Identity blade of your Logic Apps workflow.
+1. Enable **System assigned managed identity**.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/enable-managed-identity.png" alt-text="Screenshot of enabling managed identity." lightbox="media/inbound-provisioning-api-logic-apps/enable-managed-identity.png":::
+1. You'll get a prompt to confirm the use of the managed identity. Click on **Yes**.
+1. Grant the managed identity [permissions to perform bulk upload](inbound-provisioning-api-grant-access.md#configure-a-managed-identity).
+
+## Step 6: Review and adjust the workflow steps
+
+1. Open the Logic App in the designer view.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/designer-view.png" alt-text="Screenshot of Azure Logic Apps designer view." lightbox="media/inbound-provisioning-api-logic-apps/designer-view.png":::
+1. Review the configuration of each step in the workflow to make sure it is correct.
+1. Open the "Get file content using path" step and correct it to browse to the Azure File Storage in your tenant.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/get-file-content.png" alt-text="Screenshot of get file content." lightbox="media/inbound-provisioning-api-logic-apps/get-file-content.png":::
+1. Update the connection if required.
+1. Make sure your "Convert CSV to JSON" step is pointing to the right Azure Function Web App instance.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/convert-file-format.png" alt-text="Screenshot of Azure Function call invocation to convert from CSV to JSON." lightbox="media/inbound-provisioning-api-logic-apps/convert-file-format.png":::
+1. If your CSV file content / headers are different, update the "Parse JSON" step with the JSON output that you can retrieve from your API call to the Azure Function. Use the Postman output from Step 2.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/parse-json-step.png" alt-text="Screenshot of Parse JSON step." lightbox="media/inbound-provisioning-api-logic-apps/parse-json-step.png":::
+1. In the step "Construct SCIMUser", ensure that the CSV fields map correctly to the SCIM attributes that will be used for processing.
+
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/construct-scim-user.png" alt-text="Screenshot of Construct SCIM user step." lightbox="media/inbound-provisioning-api-logic-apps/construct-scim-user.png":::
+1. In the step "Send SCIMBulkPayload to API endpoint" ensure you are using the right API endpoint and authentication mechanism.
+
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/invoke-bulk-upload-api.png" alt-text="Screenshot of invoking bulk upload API with managed identity." lightbox="media/inbound-provisioning-api-logic-apps/invoke-bulk-upload-api.png":::
+
+## Step 7: Run trigger and test your Logic Apps workflow
+
+1. In the "Generally Available" version of the Logic Apps designer, click on Run Trigger to manually execute the workflow.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/run-logic-app.png" alt-text="Screenshot of running the Logic App." lightbox="media/inbound-provisioning-api-logic-apps/run-logic-app.png":::
+1. After the execution is complete, you can review what action Logic Apps performed in each iteration.
+1. In the final iteration, you should see the Logic Apps workflow upload data to the inbound provisioning API endpoint. Look for the `202 Accepted` status code. You can copy-paste and verify the bulk upload request.
+ :::image type="content" source="media/inbound-provisioning-api-logic-apps/execution-results.png" alt-text="Screenshot of the Logic Apps execution result." lightbox="media/inbound-provisioning-api-logic-apps/execution-results.png":::
+
+## Next steps
+- [Troubleshoot issues with the inbound provisioning API](inbound-provisioning-api-issues.md)
+- [API-driven inbound provisioning concepts](inbound-provisioning-api-concepts.md)
+- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
active-directory Inbound Provisioning Api Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-postman.md
+
+ Title: Quickstart API-driven inbound provisioning with Postman
+description: Learn how to get started quickly with API-driven inbound provisioning using Postman
+Last updated: 07/19/2023
+# Quickstart API-driven inbound provisioning with Postman (Public preview)
+
+This tutorial describes how you can quickly test [API-driven inbound provisioning](inbound-provisioning-api-concepts.md) with Postman.
+
+## Prerequisites
+
+* You have configured an [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md).
+* You have [configured a service principal and it has access](inbound-provisioning-api-grant-access.md) to the inbound provisioning API. Make note of the `TenantId`, `ClientId` and `ClientSecret` of your service principal app for use in this tutorial.
++
+## Upload user data to the inbound provisioning API
+In this step, you'll configure the Postman app and invoke the API using the configured service account.
+
+1. Download and install the [Postman app](https://www.postman.com/downloads/).
+1. Open the Postman desktop app.
+1. From the **Workspaces** menu, select **Create Workspace** to create a new Workspace called **Microsoft Entra ID Provisioning API**.
+1. Download the following Postman collection and environment files and save them in your local directory.
+ - [Entra ID Inbound Provisioning.postman_collection.json](https://github.com/AzureAD/entra-id-inbound-provisioning/blob/main/Postman/Entra%20ID%20Inbound%20Provisioning.postman_collection.json) (Request collection)
+    - [Test-API2AAD.postman_environment.json](https://github.com/AzureAD/entra-id-inbound-provisioning/blob/main/Postman/Test-API2AAD.postman_environment.json) (Environment collection for API-driven provisioning to Azure AD)
+ - [Test-API2AD.postman_environment.json](https://github.com/AzureAD/entra-id-inbound-provisioning/blob/main/Postman/Test-API2AD.postman_environment.json) (Environment collection for API-driven provisioning to on-premises AD)
+1. Use the **Import** option in Postman to import these files into your Workspace.
+ :::image type="content" source="media/inbound-provisioning-api-postman/postman-import-elements.png" alt-text="Screenshot of Postman Import elements." lightbox="media/inbound-provisioning-api-postman/postman-import-elements.png":::
+1. Click the **Environments** menu and open the **Test-API2AAD** environment.
+1. Retrieve the values of **client_id**, **client_secret**, and **token_endpoint** from your registered app.
+ :::image type="content" source="media/inbound-provisioning-api-postman/retrieve-authentication-details.png" alt-text="Screenshot of registered app." lightbox="media/inbound-provisioning-api-postman/retrieve-authentication-details.png":::
+1. Paste the values in the table for each variable under the column **Initial value** and **Current value**.
+ :::image type="content" source="media/inbound-provisioning-api-postman/postman-authentication-variables.png" alt-text="Screenshot of authentication variables" lightbox="media/inbound-provisioning-api-postman/postman-authentication-variables.png":::
+1. Open your provisioning app landing page and copy-paste the value of **Job ID** for the `jobId` variable and the value of **Provisioning API endpoint** for the `bulk_upload_endpoint` variable
+ :::image type="content" source="media/inbound-provisioning-api-configure-app/provisioning-api-endpoint.png" alt-text="Screenshot of Provisioning API endpoint." lightbox="media/inbound-provisioning-api-configure-app/provisioning-api-endpoint.png":::
+1. Leave the value of **ms_graph_resource_id** unchanged and save the environment collection. Make sure that both **Initial value** and **Current value** columns are populated.
+1. Next, open the collection **Entra ID Inbound Provisioning**.
+1. From the **Environment** dropdown, select **Test-API2AAD**.
+1. Select the **Authorization** tab associated with the collection.
+1. Make sure that authorization is configured to use OAuth settings.
+ :::image type="content" source="media/inbound-provisioning-api-postman/provisioning-oauth-configuration.png" alt-text="Screenshot of Provisioning OAuth configuration." lightbox="media/inbound-provisioning-api-postman/provisioning-oauth-configuration.png":::
+1. The **Advanced options** section should show the following configuration:
+ :::image type="content" source="media/inbound-provisioning-api-postman/provisioning-advanced-options.png" alt-text="Screenshot of Provisioning Advanced options." lightbox="media/inbound-provisioning-api-postman/provisioning-advanced-options.png":::
+1. Click on **Get New Access Token** to initiate the process to procure an access token.
+1. Select the option **Use Token** to use the access token with all requests in this collection.
+ >[!NOTE]
+    >The OAuth access token generated using `client_credentials` grant type is valid for one hour. You can decode the token using [https://jwt.io](https://jwt.io) and check when it expires, or use the PowerShell sketch shown after this procedure. Requests fail after the token expires. If your access token has expired, click **Get New Access Token** in Postman to get a new access token.
+ The token is automatically copied into the **Current token** section of the Authorization tab. You can now use the token to make API calls. Let's start with the first call in this collection.
+1. Open the request **SCIM bulk request upload**.
+1. Under the **Authorization tab**, make sure that type is set to **Inherit auth from parent**.
+1. Change to the **Request body** tab, to view and edit the sample SCIM bulk request. When you're done editing, click **Send**.
+
+If the API invocation is successful, you see the message `202 Accepted`. Under **Headers**, the **Location** attribute points to the provisioning logs API endpoint.
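+As mentioned in the note earlier in this procedure, the access token expires after about an hour. If you'd rather check the expiry locally than paste the token into jwt.io, the following sketch decodes the token's payload; the token value is a placeholder.
+
+```powershell
+# Sketch: decode the JWT payload of the access token and print its expiry time.
+$token   = "<paste-access-token-here>"
+$payload = $token.Split('.')[1].Replace('-', '+').Replace('_', '/')
+switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
+$claims  = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($payload)) | ConvertFrom-Json
+[DateTimeOffset]::FromUnixTimeSeconds($claims.exp).LocalDateTime
+```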
+
+## Verify processing of bulk request payload
+You can verify the processing either from the Microsoft Entra portal or using Postman.
+
+### Verify processing from Microsoft Entra portal
+1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
+1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
+1. Under all applications, use the search filter text box to find and open your API-driven provisioning application.
+1. Open the Provisioning blade. The landing page displays the status of the last run.
+1. Click on **View provisioning logs** to open the provisioning logs blade. Alternatively, you can click on the menu option **Monitor -> Provisioning logs**.
+
+ [![Screenshot of provisioning logs in menu.](media/inbound-provisioning-api-curl-tutorial/access-provisioning-logs.png)](media/inbound-provisioning-api-curl-tutorial/access-provisioning-logs.png#lightbox)
+
+1. Click on any record in the provisioning logs to view additional processing details.
+1. The provisioning log details screen displays all the steps executed for a specific user.
+ [![Screenshot of provisioning logs details.](media/inbound-provisioning-api-curl-tutorial/provisioning-log-details.png)](media/inbound-provisioning-api-curl-tutorial/provisioning-log-details.png#lightbox)
+ * Under the **Import from API** step, see details of user data extracted from the bulk request.
+ * The **Match user** step shows details of any user match based on the matching identifier. If a user match happens, then the provisioning service performs an update operation. If there is no user match, then the provisioning service performs a create operation.
+    * The **Determine if User is in scope** step shows details of scoping filter evaluation. By default, all users are processed. If you have set a scoping filter (for example, process only users belonging to the Sales department), the evaluation details of the scoping filter display in this step.
+ * The **Provision User** step calls out the final processing step and changes applied to the user account.
+ * Use the **Modified properties** tab to view attribute updates.
+
+### Verify processing using provisioning logs API in Postman
+This section shows how you can query provisioning logs in Postman using the same service account (service principal) that you configured.
+
+1. Open the workspace **Microsoft Entra ID Provisioning API** in your Postman desktop app.
+2. The collection **Entra ID Inbound Provisioning** contains three sample requests that enable you to query the provisioning logs.
+3. You can open any of these predefined requests.
+4. If you don't have a valid access token or you're not sure if the access token is still valid, go to the collection object's root Authorization tab and use the option **Get New Access Token** to get a fresh token.
+5. Click **Send** to get provisioning log records.
+Upon successful execution, you'll get an HTTP `200 OK` response from the server along with the provisioning log records.
+
+## Appendix
+
+### Bulk request with SCIM Enterprise User Schema
+The bulk request shown below uses the SCIM standard Core User and Enterprise User schema.
+
+**Request body**
+# [HTTP](#tab/http)
+<!-- {
+ "blockType": "request",
+ "name": "Quick_start_with_Postman"
+}-->
+```http
+{
+ "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
+ "Operations": [
+ {
+ "method": "POST",
+ "bulkId": "897401c2-2de4-4b87-a97f-c02de3bcfc61",
+ "path": "/Users",
+ "data": {
+ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User",
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"],
+ "externalId": "701984",
+ "userName": "bjensen@example.com",
+ "name": {
+ "formatted": "Ms. Barbara J Jensen, III",
+ "familyName": "Jensen",
+ "givenName": "Barbara",
+ "middleName": "Jane",
+ "honorificPrefix": "Ms.",
+ "honorificSuffix": "III"
+ },
+ "displayName": "Babs Jensen",
+ "nickName": "Babs",
+ "emails": [
+ {
+ "value": "bjensen@example.com",
+ "type": "work",
+ "primary": true
+ }
+ ],
+ "addresses": [
+ {
+ "type": "work",
+ "streetAddress": "100 Universal City Plaza",
+ "locality": "Hollywood",
+ "region": "CA",
+ "postalCode": "91608",
+ "country": "USA",
+ "formatted": "100 Universal City Plaza\nHollywood, CA 91608 USA",
+ "primary": true
+ }
+ ],
+ "phoneNumbers": [
+ {
+ "value": "555-555-5555",
+ "type": "work"
+ }
+ ],
+ "userType": "Employee",
+ "title": "Tour Guide",
+ "preferredLanguage": "en-US",
+ "locale": "en-US",
+ "timezone": "America/Los_Angeles",
+ "active":true,
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
+ "employeeNumber": "701984",
+ "costCenter": "4130",
+ "organization": "Universal Studios",
+ "division": "Theme Park",
+ "department": "Tour Operations",
+ "manager": {
+ "value": "89607",
+ "displayName": "John Smith"
+ }
+ }
+ }
+ },
+ {
+ "method": "POST",
+ "bulkId": "897401c2-2de4-4b87-a97f-c02de3bcfc61",
+ "path": "/Users",
+ "data": {
+ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User",
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"],
+ "externalId": "701985",
+ "userName": "Kjensen@example.com",
+ "name": {
+ "formatted": "Ms. Kathy J Jensen, III",
+ "familyName": "Jensen",
+ "givenName": "Kathy",
+ "middleName": "Jane",
+ "honorificPrefix": "Ms.",
+ "honorificSuffix": "III"
+ },
+ "displayName": "Kathy Jensen",
+ "nickName": "Kathy",
+ "emails": [
+ {
+ "value": "kjensen@example.com",
+ "type": "work",
+ "primary": true
+ }
+ ],
+ "addresses": [
+ {
+ "type": "work",
+ "streetAddress": "100 Oracle City Plaza",
+ "locality": "Hollywood",
+ "region": "CA",
+ "postalCode": "91618",
+ "country": "USA",
+ "formatted": "100 Oracle City Plaza\nHollywood, CA 91618 USA",
+ "primary": true
+ }
+ ],
+ "phoneNumbers": [
+ {
+ "value": "555-555-5545",
+ "type": "work"
+ }
+ ],
+ "userType": "Employee",
+ "title": "Tour Lead",
+ "preferredLanguage": "en-US",
+ "locale": "en-US",
+ "timezone": "America/Los_Angeles",
+ "active":true,
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
+ "employeeNumber": "701985",
+ "costCenter": "4130",
+ "organization": "Universal Studios",
+ "division": "Theme Park",
+ "department": "Tour Operations",
+ "manager": {
+ "value": "701984",
+ "displayName": "Barbara Jensen"
+ }
+ }
+ }
+ }
+],
+ "failOnErrors": null
+}
+```
+
+## Next steps
+- [Troubleshoot issues with the inbound provisioning API](inbound-provisioning-api-issues.md)
+- [API-driven inbound provisioning concepts](inbound-provisioning-api-concepts.md)
+- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
active-directory Inbound Provisioning Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-powershell.md
+
+ Title: API-driven inbound provisioning with PowerShell script (Public preview)
+description: Learn how to implement API-driven inbound provisioning with a PowerShell script.
+Last updated: 07/18/2023
+# API-driven inbound provisioning with PowerShell script (Public preview)
+
+This tutorial describes how to use a PowerShell script to implement Microsoft Entra ID [API-driven inbound provisioning](inbound-provisioning-api-concepts.md). Using the steps in this tutorial, you can convert a CSV file containing HR data into a bulk request payload and send it to the Microsoft Entra ID provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint.
+
+## How to use this tutorial
+
+This tutorial addresses the following integration scenario:
+* Your system of record generates periodic CSV file exports containing worker data.
+* You want to use an unattended PowerShell script to automatically provision records from the CSV file to your target directory (on-premises Active Directory or Microsoft Entra ID).
+* The PowerShell script simply reads data from the CSV file and uploads it to the provisioning API endpoint. The API-driven inbound provisioning app configured in Microsoft Entra ID performs the task of applying your IT managed provisioning rules to create/update/enable/disable accounts in the target directory.
++
+Here is a list of automation tasks associated with this integration scenario and how you can implement them by customizing the sample script published in the [Microsoft Entra ID inbound provisioning GitHub repository](https://github.com/AzureAD/entra-id-inbound-provisioning/tree/main/PowerShell/CSV2SCIM).
+
+> [!NOTE]
+> The sample PowerShell script is provided "as-is" for implementation reference. If you have questions related to the script or if you'd like to enhance it, please use the [GitHub project repository](https://github.com/AzureAD/entra-id-inbound-provisioning).
+
+|# | Automation task | Implementation guidance |
+||||
+|1 | Read worker data from the CSV file. | [Download the PowerShell script](#download-the-powershell-script). It has out-of-the-box logic to read data from any CSV file. Refer to [CSV2SCIM PowerShell usage details](#csv2scim-powershell-usage-details) to get familiar with the different execution modes of this script. |
+|2 | Pre-process and convert data to SCIM format. | By default, the PowerShell script converts each record in the CSV file to a SCIM Core User + Enterprise User representation. Follow the steps in the section [Generate bulk request payload with standard schema](#generate-bulk-request-payload-with-standard-schema) to get familiar with this process. If your CSV file has different fields, tweak the [AttributeMapping.psd file](#attributemappingpsd-file) to generate a valid SCIM user. You can also [generate bulk request with custom SCIM schema](#generate-bulk-request-with-custom-scim-schema). Update the PowerShell script to include any custom CSV data validation logic. |
+|3 | Use a certificate for authentication to Entra ID. | [Create a service principal that can access](inbound-provisioning-api-grant-access.md) the inbound provisioning API. Refer to steps in the section [Configure client certificate for service principal authentication](#configure-client-certificate-for-service-principal-authentication) to learn how to use client certificate for authentication. If you'd like to use managed identity instead of a service principal for authentication, then review the use of `Connect-MgGraph` in the sample script and update it to use [managed identities](/powershell/microsoftgraph/authentication-commands#using-managed-identity). |
+|4 | Provision accounts in on-premises Active Directory or Microsoft Entra ID. | Configure [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md). This will generate a unique [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint. Refer to the steps in the section [Generate and upload bulk request payload as admin user](#generate-and-upload-bulk-request-payload-as-admin-user) to learn how to upload data to this endpoint. Once the data is uploaded, the provisioning service applies the attribute mapping rules to automatically provision accounts in your target directory. If you plan to [use bulk request with custom SCIM schema](#generate-bulk-request-with-custom-scim-schema), then [extend the provisioning app schema](#extending-provisioning-job-schema) to include your custom SCIM schema elements. Validate the attribute flow and customize the attribute mappings per your integration requirements. To run the script using a service principal with certificate-based authentication, refer to the steps in the section [Upload bulk request payload using client certificate authentication](#upload-bulk-request-payload-using-client-certificate-authentication) |
+|5 | Scan the provisioning logs and retry provisioning for failed records. | Refer to the steps in the section [Get provisioning logs of the latest sync cycles](#get-provisioning-logs-of-the-latest-sync-cycles) to learn how to fetch and analyze provisioning log data. Identify failed user records and include them in the next upload cycle. |
+|6 | Deploy your PowerShell based automation to production. | Once you have verified your API-driven provisioning flow and customized the PowerShell script to meet your requirements, you can deploy the automation as a [PowerShell Workflow runbook in Azure Automation](../../automation/learn/automation-tutorial-runbook-textual.md). |
++
+## Download the PowerShell script
+
+1. Access the GitHub repository https://github.com/AzureAD/entra-id-inbound-provisioning.
+1. Use the **Code** -> **Clone** or **Code** -> **Download ZIP** option to copy contents of this repository into your local folder.
+1. Navigate to the folder **PowerShell/CSV2SCIM**. It has the following directory structure:
+ - src
+ - CSV2SCIM.ps1 (main script)
+ - ScimSchemaRepresentations (folder containing standard SCIM schema definitions for validating AttributeMapping.psd1 files)
+ - EnterpriseUser.json, Group.json, Schema.json, User.json
+ - Samples
+ - AttributeMapping.psd1 (sample mapping of columns in CSV file to standard SCIM attributes)
+ - csv-with-2-records.csv (sample CSV file with two records)
+ - csv-with-1000-records.csv (sample CSV file with 1000 records)
+ - Test-ScriptCommands.ps1 (sample usage commands)
+ - UseClientCertificate.ps1 (script to generate self-signed certificate and upload it as service principal credential for use in OAuth flow)
+ - `Sample1` (folder with more examples of how CSV file columns can be mapped to SCIM standard attributes. If you get different CSV files for employees, contractors, interns, you can create a separate AttributeMapping.psd1 file for each entity.)
+1. Download and install the latest version of PowerShell.
+1. Run the command to enable execution of remote signed scripts:
+ ```powershell
+    Set-ExecutionPolicy RemoteSigned
+ ```
+1. Install the following prerequisite modules:
+ ```powershell
+ Install-Module -Name Microsoft.Graph.Applications,Microsoft.Graph.Reports
+ ```
+
+## Generate bulk request payload with standard schema
+
+This section explains how to generate a bulk request payload with standard SCIM Core User and Enterprise User attributes from a CSV file.
+To illustrate the procedure, let's use the CSV file `Samples/csv-with-2-records.csv`.
+
+1. Open the CSV file `Samples/csv-with-2-records.csv` in Notepad++ or Excel to check the columns present in the file.
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/columns.png" alt-text="Screenshot of columns in Excel." lightbox="./media/inbound-provisioning-api-powershell/columns.png":::
+
+1. In Notepad++ or a source code editor like Visual Studio Code, open the PowerShell data file `Samples/AttributeMapping.psd1`, which maps CSV file columns to SCIM standard schema attributes. The file that's shipped out-of-the-box already has a pre-configured mapping of CSV file columns to corresponding SCIM schema attributes. (A hypothetical example of such a mapping file is shown after this procedure.)
+1. Open PowerShell and change to the directory **CSV2SCIM\src**.
+1. Run the following command to initialize the `AttributeMapping` variable.
+
+ ```powershell
+ $AttributeMapping = Import-PowerShellDataFile '..\Samples\AttributeMapping.psd1'
+ ```
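+    Because `Import-PowerShellDataFile` returns a hashtable, you can optionally peek at which SCIM attributes are mapped and which CSV column an attribute reads from. This is an exploratory sketch, not a required step:
+
+    ```powershell
+    # Top-level SCIM attributes defined in the mapping file
+    $AttributeMapping.Keys
+    # CSV column mapped to a specific attribute, for example externalId
+    $AttributeMapping['externalId']
+    ```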
+
+1. Run the following command to validate that the `AttributeMapping` file contains valid SCIM schema attributes. This command returns **True** if the validation is successful.
+
+ ```powershell
+ .\CSV2SCIM.ps1 -Path '..\Samples\csv-with-2-records.csv' -AttributeMapping $AttributeMapping -ValidateAttributeMapping
+ ```
+
+1. If the `AttributeMapping` file has an invalid SCIM attribute, for example **userId**, the `ValidateAttributeMapping` mode displays the following error.
+
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/mapping-error.png" alt-text="Screenshot of a mapping error." lightbox="./media/inbound-provisioning-api-powershell/mapping-error.png":::
+
+1. Once you've verified that the `AttributeMapping` file is valid, run the following command to generate a bulk request in the file `BulkRequestPayload.json` that includes the two records present in the CSV file.
++
+ ```powershell
+ .\CSV2SCIM.ps1 -Path '..\Samples\csv-with-2-records.csv' -AttributeMapping $AttributeMapping > BulkRequestPayload.json
+ ```
+
+1. Open the file `BulkRequestPayload.json` to verify that the SCIM attributes are set according to the mapping defined in the file `AttributeMapping.psd1`.
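+    To check it from the console instead, here's a minimal sketch that parses the generated file, assuming it follows the SCIM bulk request format with an `Operations` collection:
+
+    ```powershell
+    # Load the generated bulk request and summarize it
+    $payload = Get-Content '.\BulkRequestPayload.json' -Raw | ConvertFrom-Json
+    $payload.schemas              # SCIM BulkRequest schema URN
+    $payload.Operations.Count     # one operation per CSV record (2 in this example)
+    $payload.Operations[0].data   # SCIM attributes generated for the first record
+    ```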
+
+1. You can post the file generated above as-is to the [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint associated with your provisioning app using Graph Explorer, Postman, or cURL. Reference:
+
+ - [Quick start with Graph Explorer](inbound-provisioning-api-graph-explorer.md)
+ - [Quick start with Postman](inbound-provisioning-api-postman.md)
+ - [Quick start with cURL](inbound-provisioning-api-curl-tutorial.md)
+
+1. To directly upload the generated payload to the API endpoint using the same PowerShell script, refer to the next section.
++
+## Generate and upload bulk request payload as admin user
+
+This section explains how to send the generated bulk request payload to your inbound provisioning API endpoint.
+
+1. Sign in to the Entra portal as an *Application Administrator* or *Global Administrator*.
+1. Copy the `ServicePrincipalId` associated with your provisioning app from **Provisioning App** > **Properties** > **Object ID**.
+
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/object-id.png" alt-text="Screenshot of the Object ID." lightbox="./media/inbound-provisioning-api-powershell/object-id.png":::
+
+1. As a user with the *Global Administrator* role, run the following command, providing the correct values for `ServicePrincipalId` and `TenantId`. The script prompts you for authentication if an authenticated session doesn't already exist for this tenant. Consent to the permissions requested during authentication.
+
+ ```powershell
+ .\CSV2SCIM.ps1 -Path '..\Samples\csv-with-2-records.csv' -AttributeMapping $AttributeMapping -ServicePrincipalId <servicePrincipalId> -TenantId "contoso.onmicrosoft.com"
+ ```
+1. Visit the **Provisioning logs** blade of your provisioning app to verify the processing of the above request.
++
+## Configure client certificate for service principal authentication
+
+> [!NOTE]
+> The instructions here show how to generate a self-signed certificate. Self-signed certificates are not trusted by default and they can be difficult to maintain. Also, they may use outdated hash and cipher suites that may not be strong. For better security, purchase a certificate signed by a well-known certificate authority.
+
+1. Run the following PowerShell script to generate a new self-signed certificate. You can skip this step if you have purchased a certificate signed by a well-known certificate authority.
+ ```powershell
+ $ClientCertificate = New-SelfSignedCertificate -Subject 'CN=CSV2SCIM' -KeyExportPolicy 'NonExportable' -CertStoreLocation Cert:\CurrentUser\My
+ $ThumbPrint = $ClientCertificate.ThumbPrint
+ ```
+    The generated certificate is stored under **Current User\Personal\Certificates**. You can view it using the **Control Panel** -> **Manage user certificates** option.
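+    You can also confirm the certificate from PowerShell. This quick sketch reuses the `$ThumbPrint` variable set in the previous step:
+
+    ```powershell
+    # Show the newly created certificate in the current user's personal store
+    Get-ChildItem "Cert:\CurrentUser\My\$ThumbPrint" | Format-List Subject, Thumbprint, NotAfter
+    ```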
+1. To associate this certificate with a valid service principal, sign in to the Entra portal as an *Application Administrator*.
+1. Open [the service principal you configured](inbound-provisioning-api-grant-access.md#configure-a-service-principal) under **App Registrations**.
+1. Copy the **Object ID** from the **Overview** blade and use it to replace the string `<AppObjectId>` in the following commands. Also copy the **Application (client) ID**; it's used later and referenced as `<AppClientId>`.
+1. Run the following command to upload your certificate to the registered service principal.
+ ```powershell
+ Connect-MgGraph -Scopes "Application.ReadWrite.All"
+ Update-MgApplication -ApplicationId '<AppObjectId>' -KeyCredentials @{
+ Type = "AsymmetricX509Cert"
+ Usage = "Verify"
+ Key = $ClientCertificate.RawData
+ }
+ ```
+ You should see the certificate under the **Certificates & secrets** blade of your registered app.
+ :::image type="content" source="media/inbound-provisioning-api-powershell/client-certificate.png" alt-text="Screenshot of client certificate." lightbox="media/inbound-provisioning-api-powershell/client-certificate.png":::
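+    Optionally, you can confirm the uploaded key credential from PowerShell as well. This sketch reuses the `<AppObjectId>` placeholder from the previous step:
+
+    ```powershell
+    # List the key credentials now attached to the app registration
+    Get-MgApplication -ApplicationId '<AppObjectId>' |
+        Select-Object -ExpandProperty KeyCredentials |
+        Format-Table DisplayName, KeyId, Type, StartDateTime, EndDateTime
+    ```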
+1. Add the following two **Application** permission scopes to the service principal app: **Application.Read.All** and **Synchronization.Read.All**. These are required for the PowerShell script to look up the provisioning app by `ServicePrincipalId` and fetch the provisioning `JobId`.
+
+## Upload bulk request payload using client certificate authentication
+
+This section explains how to send the generated bulk request payload to your inbound provisioning API endpoint using a trusted client certificate.
+
+1. Open the API-driven provisioning app that you [configured](inbound-provisioning-api-configure-app.md). Copy the `ServicePrincipalId` associated with your provisioning app from **Provisioning App** > **Properties** > **Object ID**.
+
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/object-id.png" alt-text="Screenshot of the Object ID." lightbox="./media/inbound-provisioning-api-powershell/object-id.png":::
+
+1. Run the following command by providing the correct values for `ServicePrincipalId`, `ClientId` and `TenantId`.
+
+ ```powershell
+ $ClientCertificate = Get-ChildItem -Path cert:\CurrentUser\my\ | Where-Object {$_.Subject -eq "CN=CSV2SCIM"}
+ $ThumbPrint = $ClientCertificate.ThumbPrint
+
+ .\CSV2SCIM.ps1 -Path '..\Samples\csv-with-2-records.csv' -AttributeMapping $AttributeMapping -TenantId "contoso.onmicrosoft.com" -ServicePrincipalId "<ProvisioningAppObjectId>" -ClientId "<AppClientId>" -ClientCertificate (Get-ChildItem Cert:\CurrentUser\My\$ThumbPrint)
+ ```
+1. Visit the **Provisioning logs** blade of your provisioning app to verify the processing of the above request.
+
+## Generate bulk request with custom SCIM schema
+
+This section describes how to generate a bulk request with a custom SCIM schema namespace consisting of the fields in the CSV file.
+
+1. In Notepad++ or a source code editor like Visual Studio Code, open the PowerShell data file `Samples/AttributeMapping.psd1`, which maps CSV file columns to SCIM standard schema attributes. The file shipped out of the box already contains a preconfigured mapping of CSV file columns to their corresponding SCIM schema attributes.
+1. Open PowerShell and change to the directory **CSV2SCIM\src**.
+1. Run the following command to initialize the `AttributeMapping` variable.
+
+ ```powershell
+ $AttributeMapping = Import-PowerShellDataFile '..\Samples\AttributeMapping.psd1'
+ ```
+
+1. Run the following command to validate that the `AttributeMapping` file contains valid SCIM schema attributes. This command returns **True** if the validation is successful.
+
+ ```powershell
+ .\CSV2SCIM.ps1 -Path '..\Samples\csv-with-2-records.csv' -AttributeMapping $AttributeMapping -ValidateAttributeMapping
+ ```
+
+1. To get a flat list of all CSV fields under a custom SCIM schema namespace `urn:ietf:params:scim:schemas:extension:contoso:1.0:User`, in addition to the SCIM Core User and Enterprise User attributes, run the following command.
+ ```powershell
+ .\CSV2SCIM.ps1 -Path '..\Samples\csv-with-2-records.csv' -AttributeMapping $AttributeMapping -ScimSchemaNamespace "urn:ietf:params:scim:schemas:extension:contoso:1.0:User" > BulkRequestPayloadWithCustomNamespace.json
+ ```
+ The CSV fields will show up under the custom SCIM schema namespace.
+ :::image type="content" source="media/inbound-provisioning-api-powershell/user-details-under-custom-schema.png" alt-text="Screenshot of user details under custom schema." lightbox="media/inbound-provisioning-api-powershell/user-details-under-custom-schema.png":::
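+    To spot-check the generated file from the console, you can read the custom namespace attributes of the first operation. This is a sketch that assumes the standard SCIM bulk request structure with an `Operations` collection:
+
+    ```powershell
+    # Inspect the custom schema extension generated for the first record
+    $payload = Get-Content '.\BulkRequestPayloadWithCustomNamespace.json' -Raw | ConvertFrom-Json
+    $payload.Operations[0].data.'urn:ietf:params:scim:schemas:extension:contoso:1.0:User'
+    ```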
+
+## Extending provisioning job schema
+
+Often the data file sent by HR teams contains attributes that don't have a direct representation in the standard SCIM schema. To represent such attributes, we recommend creating a SCIM extension schema and adding attributes under this namespace.
+
+The CSV2SCIM script provides an execution mode called `UpdateSchema` which reads all columns in the CSV file, adds them under an extension schema namespace, and updates the provisioning app schema.
+
+> [!NOTE]
+> If the attribute extensions are already present in the provisioning app schema, then this mode only emits a warning that the attribute extension already exists. So, there is no issue running the CSV2SCIM script in the **UpdateSchema** mode if new fields are added to the CSV file and you want to add them as an extension.
+
+To illustrate the procedure, we'll use the CSV file `Samples/csv-with-2-records.csv` present in the **CSV2SCIM** folder.
+
+1. Open the CSV file `Samples/csv-with-2-records.csv` in Notepad, Excel, or TextPad to check the columns present in the file.
+
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/check-columns.png" alt-text="Screenshot of how to check CSV columns." lightbox="./media/inbound-provisioning-api-powershell/check-columns.png":::
+
+1. Run the following command:
+
+ ```powershell
+ .\CSV2SCIM.ps1 -Path '..\Samples\csv-with-2-records.csv' -UpdateSchema -ServicePrincipalId <servicePrincipalId> -TenantId "contoso.onmicrosoft.com" -ScimSchemaNamespace "urn:ietf:params:scim:schemas:extension:contoso:1.0:User"
+ ```
+
+1. You can verify the update to your provisioning app schema by opening the **Attribute Mapping** page and accessing the **Edit attribute list for API** option under **Advanced options**.
+
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/advanced-options.png" alt-text="Screenshot of Attribute Mapping in Advanced options." lightbox="./media/inbound-provisioning-api-powershell/advanced-options.png":::
+
+1. The **Attribute List** shows attributes under the new namespace.
+
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/attribute-list.png" alt-text="Screenshot of the attribute list." lightbox="./media/inbound-provisioning-api-powershell/attribute-list.png":::
+++
+## Get provisioning logs of the latest sync cycles
+
+After sending the bulk request, you can query the logs of the latest sync cycles processed by Azure AD. You can retrieve the sync statistics and processing details with the PowerShell script and save them for analysis.
+
+1. To view the log details and sync statistics on the console, run the following command:
+
+ ```powershell
+ .\CSV2SCIM.ps1 -ServicePrincipalId <servicePrincipalId> -TenantId "contoso.onmicrosoft.com" -GetPreviousCycleLogs -NumberOfCycles 1
+ ```
+
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/stats.png" alt-text="Screenshot of sync statistics." lightbox="./media/inbound-provisioning-api-powershell/stats.png":::
+
+ > [!NOTE]
+    > `NumberOfCycles` is 1 by default. Specify a higher number to retrieve more sync cycles.
+
+1. To view sync statistics on the console and save the log details to a variable, run the following command:
+
+ ```powershell
+ $logs=.\CSV2SCIM.ps1 -ServicePrincipalId <servicePrincipalId> -TenantId "contoso.onmicrosoft.com" -GetPreviousCycleLogs
+ ```
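+    To keep the retrieved details for offline analysis, a minimal sketch that exports the collection to a JSON file (the file name is illustrative):
+
+    ```powershell
+    # Save the retrieved log details to a file for later analysis
+    $logs | ConvertTo-Json -Depth 10 | Out-File '.\ProvisioningLogs.json'
+    ```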
+
+    To use client certificate authentication, run the command with the correct values for `ServicePrincipalId`, `ClientId`, and `TenantId`:
+ ```powershell
+ $ClientCertificate = Get-ChildItem -Path cert:\CurrentUser\my\ | Where-Object {$_.Subject -eq "CN=CSV2SCIM"}
+ $ThumbPrint = $ClientCertificate.ThumbPrint
+
+ $logs=.\CSV2SCIM.ps1 -ServicePrincipalId "<ProvisioningAppObjectId>" -TenantId "contoso.onmicrosoft.com" -ClientId "<AppClientId>" -ClientCertificate (Get-ChildItem Cert:\CurrentUser\My\$ThumbPrint) -GetPreviousCycleLogs -NumberOfCycles 1
+ ```
+
+    - To see the details of a specific record, you can loop through the collection or select a specific index, for example: `$logs[0]`
+
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/index.png" alt-text="Screenshot of a selected index.":::
+
+    - You can also use the `Where-Object` cmdlet to search for a specific record using the `sourceId` or `DisplayName`. The **ProvisioningLogs** property contains all the details of the operation performed for that specific record.
+ ```powershell
+    $user = $logs | Where-Object sourceId -eq '1222'
+    $user.ProvisioningLogs | Format-List
+ ```
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/logs.png" alt-text="Screenshot of provisioning logs.":::
+
+    - You can see the properties affected for the specific user in the **ModifiedProperties** attribute: `$user.ProvisioningLogs.ModifiedProperties`
+
+ :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/properties.png" alt-text="Screenshot of properties.":::
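+    To scan the changes more easily, you can format them as a table. This is a sketch; the property names follow the Microsoft Graph provisioning log schema and may differ in your output:
+
+    ```powershell
+    # Tabulate the attribute changes recorded for this user
+    $user.ProvisioningLogs.ModifiedProperties |
+        Format-Table DisplayName, OldValue, NewValue
+    ```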
+
+## Appendix
+
+### CSV2SCIM PowerShell usage details
+
+Here is a list of command-line parameters accepted by the CSV2SCIM PowerShell script.
+
+```powershell
+PS > CSV2SCIM.ps1 -Path <path-to-csv-file>
+[-ScimSchemaNamespace <customSCIMSchemaNamespace>]
+[-AttributeMapping $AttributeMapping]
+[-ServicePrincipalId <spn-guid>]
+[-TenantId <tenant-id-or-domain>]
+[-ValidateAttributeMapping]
+[-UpdateSchema]
+[-ClientId <client-id>]
+[-ClientCertificate <certificate-object>]
+[-GetPreviousCycleLogs]
+[-NumberOfCycles <number-of-cycles>]
+[-RestartService]
+```
+
+> [!NOTE]
+> The `AttributeMapping` and `ValidateAttributeMapping` command-line parameters refer to the mapping of CSV column attributes to the standard SCIM schema elements.
+> They don't refer to the attribute mappings that you perform in the Entra portal provisioning app between source SCIM schema elements and target Azure AD/on-premises AD attributes.
+
+| Parameter | Description | Processing remarks |
+|-|-|--|
+| Path | The full or relative path to the CSV file. For example: `.\Samples\csv-with-1000-records.csv` | Mandatory: Yes |
+|ScimSchemaNamespace | The custom SCIM schema namespace to use when sending all columns in the CSV file as custom SCIM attributes belonging to a specific namespace. For example, `urn:ietf:params:scim:schemas:extension:csv:1.0:User` | Mandatory: Only when you want to:</br>- Update the provisioning app schema, or</br>- Include custom SCIM attributes in the payload. |
+| AttributeMapping | Points to a PowerShell data (.psd1 extension) file that maps columns in the CSV file to SCIM Core User and Enterprise User attributes. See the example in the [AttributeMapping.psd file](#attributemappingpsd-file) section. For example: `$AttributeMapping = Import-PowerShellDataFile '.\Samples\AttributeMapping.psd1'` followed by `-AttributeMapping $AttributeMapping` | Mandatory: Yes </br> The only scenario when you don't need to specify this is when using the `UpdateSchema` switch.|
+| ValidateAttributeMapping |Use this switch flag to validate that the AttributeMapping file contains attributes that comply with the SCIM Core and Enterprise User schema. | Mandatory: No</br> We recommend using it to ensure compliance. |
+| ServicePrincipalId |The GUID value of your provisioning app's service principal ID that you can retrieve from the **Provisioning App** > **Properties** > **Object ID**| Mandatory: Only when you want to: </br>- Update the provisioning app schema, or</br>- Send the generated bulk request to the API endpoint. |
+| UpdateSchema |Use this switch to instruct the script to read the CSV columns and add them as custom SCIM attributes in your provisioning app schema.|
+| ClientId |The Client ID of an Azure AD registered app to use for OAuth authentication flow. This app must have valid certificate credentials. | Mandatory: Only when performing certificate-based authentication. |
+| ClientCertificate |The Client Authentication Certificate to use during OAuth flow. | Mandatory: Only when performing certificate-based authentication.|
+| GetPreviousCycleLogs |To get the provisioning logs of the latest sync cycles. |
+| NumberOfCycles | To specify how many sync cycles should be retrieved. This value is 1 by default.|
+| RestartService | With this option, the script temporarily pauses the provisioning job before uploading the data, uploads the data, and then starts the job again to ensure immediate processing of the payload. | Use this option only during testing. |
+
+### AttributeMapping.psd file
+
+This file maps columns in the CSV file to standard SCIM Core User and Enterprise User schema attributes. The script uses this mapping to generate an appropriate representation of the CSV file contents as a bulk request payload.
+
+In the following example, the columns in the CSV file are mapped to their counterpart SCIM Core User and Enterprise User attributes.
++
+```powershell
+ @{
+ externalId = 'WorkerID'
+ name = @{
+ familyName = 'LastName'
+ givenName = 'FirstName'
+ }
+ active = { $_.'WorkerStatus' -eq 'Active' }
+ userName = 'UserID'
+ displayName = 'FullName'
+ nickName = 'UserID'
+ userType = 'WorkerType'
+ title = 'JobTitle'
+ addresses = @(
+ @{
+ type = { 'work' }
+ streetAddress = 'StreetAddress'
+ locality = 'City'
+ postalCode = 'ZipCode'
+ country = 'CountryCode'
+ }
+ )
+ phoneNumbers = @(
+ @{
+ type = { 'work' }
+ value = 'OfficePhone'
+ }
+ )
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User" = @{
+ employeeNumber = 'WorkerID'
+ costCenter = 'CostCenter'
+ organization = 'Company'
+ division = 'Division'
+ department = 'Department'
+ manager = @{
+ value = 'ManagerID'
+ }
+ }
+}
+```
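+
+Most values in the mapping are plain CSV column names, while a script block value (like `active` above) computes the attribute from each CSV row. As a hypothetical illustration, if your CSV also had a `WorkEmail` column (not part of the shipped sample), you could extend the mapping at run time before generating the payload:
+
+```powershell
+# Hypothetical addition: map a 'WorkEmail' CSV column to the SCIM emails attribute
+$AttributeMapping = Import-PowerShellDataFile '..\Samples\AttributeMapping.psd1'
+$AttributeMapping['emails'] = @(
+    @{
+        type  = { 'work' }
+        value = 'WorkEmail'
+    }
+)
+```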
+
+## Next steps
+- [Troubleshoot issues with the inbound provisioning API](inbound-provisioning-api-issues.md)
+- [API-driven inbound provisioning concepts](inbound-provisioning-api-concepts.md)
+- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
active-directory Users Bulk Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-add.md
The rows in a downloaded CSV template are as follows:
## To create users in bulk
-1. [Sign in to the Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization.
1. Browse to **Azure Active Directory** > **Users** > **Bulk create**. 1. On the **Bulk create user** page, select **Download** to receive a valid comma-separated values (CSV) file of user properties, and then add users you want to create.
Next, you can check to see that the users you created exist in the Azure AD orga
## Verify users in the Azure portal
-1. [Sign in to the Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization.
1. Browse to **Azure Active Directory** > **Users**. 1. Under **Show**, select **All users** and verify that the users you created are listed.
active-directory Users Bulk Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-delete.md
The rows in a downloaded CSV template are as follows:
## To bulk delete users
-1. [Sign in to the Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization.
1. Browse to **Azure Active Directory** > **Users** > **Bulk operations** > **Bulk delete**. 1. On the **Bulk delete user** page, select **Download** to download the latest version of the CSV template. 1. Open the CSV file and add a line for each user you want to delete. The only required value is **User principal name**. Save the file.
active-directory Users Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-download.md
Both admin and non-admin users can download user lists.
## To download a list of users
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Navigate to **Azure Active Directory** > **Users**. 3. In Azure AD, select **Users** > **Download users**. By default, all user profiles are exported. 4. On the **Download users** page, select **Start** to receive a CSV file listing user profile properties. If there are errors, you can download and view the results file on the **Bulk operation results** page. The file contains the reason for each error.
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
The rows in a downloaded CSV template are as follows:
## To bulk restore users
-1. [Sign in to the Azure portal](https://portal.azure.com) with an account that is a User Administrator in the Azure AD organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User Administrator in the Azure AD organization.
1. Browse to **Azure Active Directory** > **Users** > **Deleted**. 1. On the **Deleted users** page, select **Bulk restore** to upload a valid CSV file of properties of the users to restore.
Next, you can check to see that the users you restored exist in the Azure AD org
## View restored users in the Azure portal
-1. [Sign in to the Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User Administrator in the organization.
1. In the navigation pane, select **Azure Active Directory**. 1. Under **Manage**, select **Users**. 1. Under **Show**, select **All users** and verify that the users you restored are listed.
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
To publish your application in the gallery, you must first read and agree to spe
- For password SSO, make sure that your application supports form authentication so that password vaulting can be used. - For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/). Enterprise gallery applications must support multiple user configurations and not any specific user. - For federated applications (OpenID and SAML/WS-Fed), the application can be single **or** multitenanted
- - For Open ID Connect, the application must be multitenanted and the [Azure AD consent framework](../develop/consent-framework.md) must be correctly implemented.
+ - For OpenID Connect, if the application is multitenanted, the [Azure AD consent framework](../develop/consent-framework.md) must be correctly implemented.
- Provisioning is optional yet highly recommended. To learn more about Azure AD SCIM, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md). You can sign up for a free, test Development account. It's free for 90 days and you get all of the premium Azure AD features with it. You can also extend the account if you use it for development work: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md
This article is for eligible members or owners who want to activate their group
When you need to take on a group membership or ownership, you can request activation by using the **My roles** navigation option in PIM.
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management -> My roles -> Groups**. >[!NOTE]
If the [role requires approval](pim-resource-roles-approval-workflow.md) to acti
You can view the status of your pending requests to activate. It is specifically important when your requests undergo approval of another person.
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management -> My requests -> Groups**.
You can view the status of your pending requests to activate. It is specifically
## Cancel a pending request
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management -> My requests -> Groups**.
When you select **Cancel**, the request will be canceled. To activate the role a
## Next steps - [Approve activation requests for group members and owners](groups-approval-workflow.md)-
active-directory Groups Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md
Follow the steps in this article to approve or deny requests for group membershi
As a delegated approver, you receive an email notification when an Azure resource role request is pending your approval. You can view pending requests in Privileged Identity Management.
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management** > **Approve requests** > **Groups**.
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
Follow these steps to make a user eligible member or owner of a group. You will
> [!NOTE] > Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management -> Groups** and view groups that are already enabled for PIM for Groups.
Follow these steps to update or remove an existing role assignment. You will nee
> [!NOTE] > Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
-1. [Sign in to the Azure portal](https://portal.azure.com) with appropriate role permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com) with appropriate role permissions.
1. Select **Azure AD Privileged Identity Management -> Groups** and view groups that are already enabled for PIM for Groups.
Follow these steps to update or remove an existing role assignment. You will nee
- [Activate your group membership or ownership in Privileged Identity Management](groups-activate-roles.md) - [Approve activation requests for group members and owners](groups-approval-workflow.md)-
active-directory Groups Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-audit.md
Follow these steps to view the audit history for groups in Privileged Identity M
**Resource audit** gives you a view of all activity associated with groups in PIM.
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management -> Groups**.
Follow these steps to view the audit history for groups in Privileged Identity M
**My audit** enables you to view your personal role activity for groups in PIM.
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management -> Groups**.
active-directory Groups Discover Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md
You need appropriate permissions to bring groups in Azure AD PIM. For role-assig
> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management -> Groups** and view groups that are already enabled for PIM for Groups.
You need appropriate permissions to bring groups in Azure AD PIM. For role-assig
- [Assign eligibility for a group in Privileged Identity Management](groups-assign-member-owner.md) - [Activate your group membership or ownership in Privileged Identity Management](groups-activate-roles.md)-- [Approve activation requests for group members and owners](groups-approval-workflow.md)
+- [Approve activation requests for group members and owners](groups-approval-workflow.md)
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md
Role settings are defined per role per group. All assignments for the same role
To open the settings for a group role:
-1. [Sign in to the Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management** > **Groups**.
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
PIM role settings are also known as PIM policies.
To open the settings for an Azure AD role:
-1. [Sign in to the Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management** > **Azure AD Roles** > **Roles**. This page shows a list of Azure AD roles available in the tenant, including built-in and custom roles. :::image type="content" source="media/pim-how-to-change-default-settings/role-settings.png" alt-text="Screenshot that shows the list of Azure AD roles available in the tenant, including built-in and custom roles." lightbox="media/pim-how-to-change-default-settings/role-settings.png":::
active-directory Pim Resource Roles Configure Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md
PIM role settings are also known as PIM policies.
To open the settings for an Azure resource role:
-1. [Sign in to the Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure AD Privileged Identity Management** > **Azure Resources**. This page shows a list of Azure resources discovered in Privileged Identity Management. Use the **Resource type** filter to select all required resource types.
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and ha
Learn more about [Redis Cache Server - TLSVersion (TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses.)](https://aka.ms/TLSVersions).
-## Cognitive Services
+## Azure AI services
### Upgrade to the latest version of the Immersive Reader SDK We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. Using the latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience.
-Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade to the latest version of the Immersive Reader SDK)](https://aka.ms/ImmersiveReaderAzureAdvisorSDKLearnMore).
+Learn more about [Azure AI Immersive Reader](/azure/ai-services/immersive-reader/).
## Compute
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
The latest version of Azure Front Door Standard and Premium Client Library or SD
Learn more about [Front Door Profile - UpgradeCDNToLatestSDKLanguage (Upgrade SDK version recommendation)](https://aka.ms/afd/tiercomparison).
-## Cognitive Services
+## Azure AI services
### 429 Throttling Detected on this resource We observed that there have been 1,000 or more 429 throttling errors on this resource in a one day timeframe. Consider enabling autoscale to better handle higher call volumes and reduce the number of 429 errors.
-Learn more about [Cognitive Service - AzureAdvisor429LimitHit (429 Throttling Detected on this resource)](/azure/cognitive-services/autoscale?tabs=portal).
+Learn more about [Azure AI services autoscale](/azure/ai-services/autoscale?tabs=portal).
-### Upgrade to the latest Cognitive Service Text Analytics API version
+### Upgrade to the latest Azure AI Language SDK version
-Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability. Also there are new features available as new endpoints starting from V3.0 such as personally identifiable information recognition, entity recognition and entity linking available as separate endpoints. In terms of changes in preview endpoints we have opinion mining in SA endpoint, redacted text property in personally identifiable information endpoint
+Upgrade to the latest SDK version to get the best results in terms of model quality, performance, and service availability. New features are available as separate endpoints starting from V3.0, such as personally identifiable information recognition, entity recognition, and entity linking. Changes in the preview endpoints include opinion mining in the sentiment analysis endpoint and a redacted text property in the personally identifiable information endpoint.
-Learn more about [Cognitive Service - UpgradeToLatestAPI (Upgrade to the latest Cognitive Service Text Analytics API version)](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api).
-
-### Upgrade to the latest API version of Azure Cognitive Service for Language
-
-Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability.
-
-Learn more about [Cognitive Service - UpgradeToLatestAPILanguage (Upgrade to the latest API version of Azure Cognitive Service for Language)](https://aka.ms/language-api).
-
-### Upgrade to the latest Cognitive Service Text Analytics SDK version
-
-Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability. Also there are new features available as new endpoints starting from V3.0 such as personally identifiable information recognition, Entity recognition and entity linking available as separate endpoints. In terms of changes in preview endpoints we have Opinion Mining in SA endpoint, redacted text property in personally identifiable information endpoint
-
-Learn more about [Cognitive Service - UpgradeToLatestSDK (Upgrade to the latest Cognitive Service Text Analytics SDK version)](/azure/cognitive-services/text-analytics/quickstarts/text-analytics-sdk?tabs=version-3-1&pivots=programming-language-csharp).
-
-### Upgrade to the latest Cognitive Service Language SDK version
-
-Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability.
-
-Learn more about [Cognitive Service - UpgradeToLatestSDKLanguage (Upgrade to the latest Cognitive Service Language SDK version)](https://aka.ms/language-api).
+Learn more about [Azure AI Language](/azure/ai-services/language-service/language-detection/overview).
## Communication services
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/how-to/call-api.md
The labels are *positive*, *negative*, and *neutral*. At the document level, the
| At least one `negative` sentence and at least one `positive` sentence are in the document. | `mixed` | | All sentences in the document are `neutral`. | `neutral` |
-Confidence scores range from 1 to 0. Scores closer to 1 indicate a higher confidence in the label's classification, while lower scores indicate lower confidence. For each document or each sentence, the predicted scores associated with the labels (positive, negative, and neutral) add up to 1. For more information, see the [Responsible AI transparency note](/legal/cognitive-services/text-analytics/transparency-note?context=/azure/cognitive-services/text-analytics/context/context).
+Confidence scores range from 1 to 0. Scores closer to 1 indicate a higher confidence in the label's classification, while lower scores indicate lower confidence. For each document or each sentence, the predicted scores associated with the labels (positive, negative, and neutral) add up to 1. For more information, see the [Responsible AI transparency note](/legal/cognitive-services/text-analytics/transparency-note?context=/azure/ai-services/text-analytics/context/context).
## Opinion Mining
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/overview.md
As you use Custom sentiment analysis, see the following reference documentation
## Responsible AI
-An AI system includes not only the technology, but also the people who use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for sentiment analysis](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for sentiment analysis](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
## Next steps
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
When functions are provided, by default the `function_call` will be set to `"aut
import os import openai
-openai.api_key = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
openai.api_version = "2023-07-01-preview" openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_KEY")
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
messages= [ {"role": "user", "content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."}
ai-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-speech-overview.md
Here's more information about the sequence of steps shown in the previous diagra
An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/ai-services/speech-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/ai-services/speech-service/context/context)
## Next steps
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md
An AI system includes not only the technology, but also the people who will use
### Speech to text
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/ai-services/speech-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/ai-services/speech-service/context/context)
### Pronunciation Assessment
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/ai-services/speech-service/context/context)
### Custom Neural Voice
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Responsible deployment of synthetic speech](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure of voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure of design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure of design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context)
-* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Responsible deployment of synthetic speech](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure of voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure of design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure of design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/ai-services/speech-service/context/context)
+* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
### Speaker Recognition
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
## Next steps
ai-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/pronunciation-assessment-tool.md
After you press the stop button, you can see **Pronunciation score**, **Accuracy
An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/ai-services/speech-service/context/context)
## Next steps
ai-services Speaker Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speaker-recognition-overview.md
As with all of the Azure AI services resources, developers who use the speaker r
An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
## Next steps
ai-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-to-text.md
Customization options vary by language or locale. To verify support, see [Langua
An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/ai-services/speech-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/ai-services/speech-service/context/context)
## Next steps
ai-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech.md
Custom Neural Voice (CNV) endpoint hosting is measured by the actual time (hour)
An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-* [Transparency note and use cases for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access to Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Guidelines for responsible deployment of synthetic voice technology](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context)
-* [Code of Conduct for Text to speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Transparency note and use cases for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Limited access to Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Guidelines for responsible deployment of synthetic voice technology](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/ai-services/speech-service/context/context)
+* [Code of Conduct for Text to speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
## Next steps
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
This article uses the Azure Marketplace offer for Open/WebSphere Liberty to acce
* If running the commands in this guide locally (instead of Azure Cloud Shell): * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, Azure Linux, macOS, Windows Subsystem for Linux).
- * Install a Java SE implementation (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
+ * Install a Java SE implementation, version 17 or later (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
* Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher. * Install [Docker](https://docs.docker.com/get-docker/) for your OS. * Make sure you've been assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify it by following steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group).
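A quick way to confirm the local toolchain meets these requirements before continuing (a sketch; output formats vary by OS and distribution):

```bash
# Verify local prerequisites for running this guide outside Azure Cloud Shell
java -version      # expect a Java SE 17 or later runtime
mvn -v             # expect Maven 3.5.0 or higher
docker --version   # any recent Docker release
az --version       # Azure CLI, if you plan to run the Azure CLI commands locally
```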
Clone the sample code for this guide. The sample is on [GitHub](https://github.c
There are a few samples in the repository. We'll use *java-app/*. Here's the file structure of the application.
+```azurecli-interactive
+git clone https://github.com/Azure-Samples/open-liberty-on-aks.git
+cd open-liberty-on-aks
+git checkout 20230723
+```
+
+If you see a message about being in a "detached HEAD" state, it's safe to ignore. It just means you have checked out a tag.
+ ``` java-app ├─ src/main/
The directories *java*, *resources*, and *webapp* contain the source code of the
In the *aks* directory, we placed three deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication-agic.yaml* is used to deploy the application image. In the *docker* directory, there are two files to create the application image with either Open Liberty or WebSphere Liberty.
-In directory *liberty/config*, the *server.xml* FILE is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
+In directory *liberty/config*, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
### Build the project
api-management Api Management Howto Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache.md
To complete this tutorial:
With caching policies shown in this example, the first request to the **GetSpeakers** operation returns a response from the backend service. This response is cached, keyed by the specified headers and query string parameters. Subsequent calls to the operation, with matching parameters, will have the cached response returned, until the cache duration interval has expired.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Browse to your APIM instance. 3. Select the **API** tab. 4. Click **Demo Conference API** from your API list.
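To observe the caching behavior described above, one approach is to call the same operation twice with identical parameters and compare response times; the second call should be served from the cache until the cache duration expires. A sketch, assuming a subscription key and the Demo Conference API's default `conference` URL suffix (adjust both for your instance):

```bash
# First call: answered by the backend service and stored in the cache
curl -s -o /dev/null -w "first:  %{time_total}s\n" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  "https://<your-apim-instance>.azure-api.net/conference/speakers"

# Second call with matching parameters: returned from the cache
curl -s -o /dev/null -w "second: %{time_total}s\n" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  "https://<your-apim-instance>.azure-api.net/conference/speakers"
```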
application-gateway Create Url Route Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
In this example, you create three virtual machines to be used as backend servers for the application gateway. You also install IIS on the virtual machines to verify that the application gateway works as expected.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the Azure portal, select **Create a resource**. 2. Select **Windows Server 2016 Datacenter** in the Popular list. 3. Enter these values for the virtual machine:
application-gateway Alb Controller Backend Health Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-backend-health-metrics.md
+
+ Title: ALB Controller - Backend Health and Metrics
+description: Identify and troubleshoot issues using ALB Controller's backend health & metrics endpoints for Application Gateway for Containers.
+ Last updated: 07/24/2023
+# ALB Controller - Backend Health and Metrics
+
+Understanding the backend health of your Kubernetes services and pods is crucial for identifying issues and assisting with troubleshooting. To provide visibility into backend health, ALB Controller exposes backend health and metrics endpoints in all ALB Controller deployments.
+
+ALB Controller's backend health endpoint exposes three different experiences:
+1. Summarized backend health by Application Gateway for Containers resource
+2. Summarized backend health by Kubernetes service
+3. Detailed backend health for a specified Kubernetes service
+
+ALB Controller's metrics endpoint exposes both metrics and a summary of backend health. This endpoint can be scraped by Prometheus.
+
+These endpoints can be reached at the following URLs:
+- Backend Health - http://\<alb-controller-pod-ip\>:8000/backendHealth
+ - Output is in JSON format
+- Metrics - http://\<alb-controller-pod-ip\>:8001/metrics
+ - Output is in plain text format
+
+Any client or pod that has connectivity to this pod and port may access these endpoints. To limit exposure, we recommend using [Kubernetes network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) to restrict access to specific clients.
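+
+For example, a minimal sketch of such a policy, assuming the ALB Controller pods carry an `app=alb-controller` label (verify the labels used by your deployment) and that only pods labeled `role=monitoring` should reach the two ports:
+
+```bash
+# Restrict ingress to the backend health (8000) and metrics (8001) ports.
+# Enforcement requires a CNI that supports Kubernetes network policies.
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: restrict-alb-controller-diagnostics
+  namespace: azure-alb-system
+spec:
+  podSelector:
+    matchLabels:
+      app: alb-controller   # assumed label; confirm with 'kubectl get pods -n azure-alb-system --show-labels'
+  policyTypes:
+  - Ingress
+  ingress:
+  - from:
+    - namespaceSelector: {}
+      podSelector:
+        matchLabels:
+          role: monitoring  # hypothetical label for permitted clients
+    ports:
+    - protocol: TCP
+      port: 8000
+    - protocol: TCP
+      port: 8001
+EOF
+```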
+
+## Backend Health
+
+### Discovering backend health
+
+Run the following kubectl command to identify your ALB Controller pod and its corresponding IP address.
+
+```bash
+kubectl get pods -n azure-alb-system -o wide
+```
+
+Example output:
+
+| NAME | READY | STATUS | RESTARTS | AGE | IP | NODE | NOMINATED NODE | READINESS GATES |
+| ---- | ----- | ------ | -------- | --- | -- | ---- | -------------- | --------------- |
+| alb-controller-74df7896b-gfzfc | 1/1 | Running | 0 | 60m | 10.1.0.247 | aks-userpool-21921599-vmss000000 | \<none\> | \<none\> |
+| alb-controller-bootstrap-5f7f8f5d4f-gbstq | 1/1 | Running | 0 | 60m | 10.1.1.183 | aks-userpool-21921599-vmss000001 | \<none\> | \<none\> |
+
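+If you prefer to capture the pod IP programmatically rather than copying it from the output above, a small sketch (assuming the ALB Controller pods carry an `app=alb-controller` label, which may differ in your deployment):
+
+```bash
+# Store the ALB Controller pod IP for use in the commands that follow
+ALB_POD_IP=$(kubectl get pods -n azure-alb-system -l app=alb-controller \
+  -o jsonpath='{.items[0].status.podIP}')
+echo "$ALB_POD_IP"
+```
+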
+Once you have the IP address of your alb-controller pod, you may validate that the backend health service is running by browsing to http://\<pod-ip\>:8000.
+
+For example, the following command may be run:
+```bash
+curl http://10.1.0.247:8000
+```
+
+Example response:
+```
+Available paths:
+Path: /backendHealth
+Description: Prints the backend health of the ALB.
+Query Parameters:
+ detailed: if true, prints the detailed view of the backend health
+ alb-id: Resource ID of the Application Gateway for Containers to filter backend health for.
+ service-name: Service to filter backend health for. Expected format: \<namespace\>/\<service\>/\<service-port-number\>
+
+Path: /
+Description: Prints the help
+```
+
+### Summarized backend health by Application Gateway for Containers
+
+This experience summarizes all Kubernetes services that reference an Application Gateway for Containers resource, along with their corresponding health status.
+
+This experience may be accessed by specifying the Application Gateway for Containers resource ID in the query string of the request to the alb-controller pod.
+
+The following command can be used to probe backend health for the specified Application Gateway for Containers resource.
+```bash
+curl http://\<alb-controller-pod-ip-address\>:8000/backendHealth?alb-id=/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzzzzzz
+```
+
+Example output:
+```json
+{
+ "services": [
+ {
+ "serviceName": "default/service-hello-world/80",
+ "serviceHealth": [
+ {
+ "albId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzzzzzz",
+ "totalEndpoints": 1,
+ "totalHealthyEndpoints": 1,
+ "totalUnhealthyEndpoints": 0
+ }
+ ]
+ },
+ {
+ "serviceName": "default/service-contoso/443",
+ "serviceHealth": [
+ {
+ "albId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzzzzzz",
+ "totalEndpoints": 1,
+ "totalHealthyEndpoints": 1,
+ "totalUnhealthyEndpoints": 0
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Summarized backend health by Kubernetes service
+
+This experience retrieves the summarized health status of a given service.
+
+This experience may be accessed by specifying the name of the namespace, service, and port number of the service in the following format of the query string to the alb-controller pod: _\<namespace\>/\<service\>/\<service-port-number\>_
+
+The following command can be used to probe backend health for the specified Kubernetes service.
+```bash
+curl http://\<alb-controller-pod-ip-address\>:8000/backendHealth?service-name=default/service-hello-world/80
+```
+
+Example output:
+```json
+{
+ "services": [
+ {
+ "serviceName": "default/service-hello-world/80",
+ "serviceHealth": [
+ {
+ "albId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzzzzzz",
+ "totalEndpoints": 1,
+ "totalHealthyEndpoints": 1,
+ "totalUnhealthyEndpoints": 0
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Detailed backend health for a specified Kubernetes service
+
+This experience shows all endpoints that make up the service, including their corresponding health status and IP address. Endpoint status is reported as either _HEALTHY_ or _UNHEALTHY_.
+
+This experience may be accessed by specifying detailed=true in the query string to the alb-controller pod.
+
+For example, we can verify individual endpoint health by executing the following command:
+```bash
+curl http://\<alb-controller-pod-ip-address\>:8000/backendHealth?service-name=default/service-hello-world/80\&detailed=true
+```
+
+Example output:
+```json
+{
+ "services": [
+ {
+ "serviceName": "default/service-hello-world/80",
+ "serviceHealth": [
+ {
+ "albId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzzzzzz",
+ "totalEndpoints": 1,
+ "totalHealthyEndpoints": 1,
+ "totalUnhealthyEndpoints": 0,
+ "endpoints": [
+ {
+ "address": "10.1.1.22",
+ "health": {
+ "status": "HEALTHY"
+ }
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Metrics
+
+ALB Controller currently surfaces metrics in the Prometheus [text-based format](https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format) so they can be scraped by Prometheus.
+
+The following Application Gateway for Containers-specific metrics are currently available:
+
+| Metric Name | Description |
+| -- | - |
+| alb_connection_status | Connection status to an Application Gateway for Containers resource |
+| alb_reconnection_count | Number of reconnection attempts to an Application Gateway for Containers resource |
+| total_config_updates | Number of service routing config operations |
+| total_endpoint_updates | Number of backend pool config operations |
+| total_deployments | Number of Application Gateway for Containers resource deployments |
+| total_endpoints | Number of endpoints in a service |
+| total_healthy_endpoints | Number of healthy endpoints in a service |
+| total_unhealthy_endpoints | Number of unhealthy endpoints in a service |
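+
+To inspect these metrics directly (outside of a Prometheus scrape), you can query the metrics endpoint using the pod IP identified earlier; this example filters for the endpoint health counters:
+
+```bash
+# Fetch the Prometheus text-format metrics and filter the endpoint health counters
+curl -s http://10.1.0.247:8001/metrics | grep -E 'total_(un)?healthy_endpoints'
+```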
application-gateway Alb Controller Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-release-notes.md
+
+ Title: Release notes for ALB Controller
+description: This article lists updates made to the Application Gateway for Containers ALB Controller
+ Last updated: 07/24/2023
+# Release notes for ALB Controller
+
+This article provides details about changes to the ALB Controller for Application Gateway for Containers.
+
+The ALB Controller is a Kubernetes deployment that orchestrates configuration and deployment of Application Gateway for Containers. It uses both ARM and configuration APIs to propagate configuration to the Application Gateway for Containers Azure deployment.
+
+Each release of ALB Controller has a documented helm chart version and supported Kubernetes cluster version.
+
+Instructions for new or existing deployments of ALB Controller are found in the following links:
+- [New deployment of ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#for-new-deployments)
+- [Upgrade existing ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#for-existing-deployments)
+
+## Release history
+
+July 24, 2023 - 0.4.023921 - Initial release of ALB Controller
+* Minimum supported Kubernetes version: v1.25
application-gateway Api Specification Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/api-specification-kubernetes.md
++
+ Title: Application Gateway for Containers API Specification for Kubernetes (preview)
+description: This article provides documentation for Application Gateway for Containers' API specification for Kubernetes.
+ Last updated: 7/24/2023
+# Application Gateway for Containers API specification for Kubernetes (preview)
+
+## Packages
+
+Package v1 is the v1 version of the API.
+
+### alb.networking.azure.io/v1
+This document defines each of the resource types for `alb.networking.azure.io/v1`.
+
+### Resource Types:
+<h3 id="alb.networking.azure.io/v1.AffinityType">AffinityType
+(<code>string</code> alias)</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.SessionAffinity">SessionAffinity</a>)
+</p>
+<div>
+<p>AffinityType defines the affinity type for the Service</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;application-cookie&#34;</p></td>
+<td><p>AffinityTypeApplicationCookie is a session affinity type for an application cookie</p>
+</td>
+</tr><tr><td><p>&#34;managed-cookie&#34;</p></td>
+<td><p>AffinityTypeManagedCookie is a session affinity type for a managed cookie</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.AlbConditionReason">AlbConditionReason
+(<code>string</code> alias)</h3>
+<div>
+<p>AlbConditionReason defines the set of reasons that explain
+why a particular condition type has been raised on the Application Gateway for Containers resource.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>AlbReasonAccepted indicates the Application Gateway for Containers resource
+has been accepted by the controller.</p>
+</td>
+</tr><tr><td><p>&#34;Ready&#34;</p></td>
+<td><p>AlbReasonDeploymentReady indicates the Application Gateway for Containers resource
+deployment status.</p>
+</td>
+</tr><tr><td><p>&#34;InProgress&#34;</p></td>
+<td><p>AlbReasonInProgress indicates whether the Application Gateway for Containers resource
+is in the process of being created, updated or deleted.</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.AlbConditionType">AlbConditionType
+(<code>string</code> alias)</h3>
+<div>
+<p>AlbConditionType is a type of condition associated with an Application Gateway for Containers resource. This type should be used with the AlbStatus.Conditions
+field.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>AlbConditionTypeAccepted indicates whether the Application Gateway for Containers resource
+has been accepted by the controller.</p>
+</td>
+</tr><tr><td><p>&#34;Deployment&#34;</p></td>
+<td><p>AlbConditionTypeDeployment indicates the deployment status of the Application Gateway for Containers resource.</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.AlbSpec">AlbSpec
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.ApplicationLoadBalancer">ApplicationLoadBalancer</a>)
+</p>
+<div>
+<p>AlbSpec defines the specifications for Application Gateway for Containers resource.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>associations</code><br/>
+<em>
+[]string
+</em>
+</td>
+<td>
+<p>Associations are subnet resource IDs the Application Gateway for Containers resource will be associated with.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.AlbStatus">AlbStatus
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.ApplicationLoadBalancer">ApplicationLoadBalancer</a>)
+</p>
+<div>
+<p>AlbStatus defines the observed state of Application Gateway for Containers resource.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>conditions</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Condition">
+[]Kubernetes meta/v1.Condition
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Known condition types are:</p>
+<ul>
+<li>&ldquo;Accepted&rdquo;</li>
+<li>&ldquo;Ready&rdquo;</li>
+</ul>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.ApplicationLoadBalancer">ApplicationLoadBalancer
+</h3>
+<div>
+<p>ApplicationLoadBalancer is the schema for the Application Gateway for Containers resource.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>metadata</code><br/>
+<em>
+<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
+Kubernetes meta/v1.ObjectMeta
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Object&rsquo;s metadata.</p>
+Refer to the Kubernetes API documentation for the fields of the
+<code>metadata</code> field.
+</td>
+</tr>
+<tr>
+<td>
+<code>spec</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.AlbSpec">
+AlbSpec
+</a>
+</em>
+</td>
+<td>
+<p>Spec is the specifications for Application Gateway for Containers resource.</p>
+<br/>
+<br/>
+<table>
+<tr>
+<td>
+<code>associations</code><br/>
+<em>
+[]string
+</em>
+</td>
+<td>
+<p>Associations are subnet resource IDs the Application Gateway for Containers resource will be associated with.</p>
+</td>
+</tr>
+</table>
+</td>
+</tr>
+<tr>
+<td>
+<code>status</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.AlbStatus">
+AlbStatus
+</a>
+</em>
+</td>
+<td>
+<p>Status defines the current state of Application Gateway for Containers resource.</p>
+</td>
+</tr>
+</tbody>
+</table>
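+
+For illustration only, a minimal ApplicationLoadBalancer manifest assembled from the fields above (the namespace and subnet resource ID are placeholders; substitute the subnet used for your association):
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: ApplicationLoadBalancer
+metadata:
+  name: alb-example
+  namespace: alb-example-ns
+spec:
+  associations:
+  - /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyyyyy/providers/Microsoft.Network/virtualNetworks/vnet-example/subnets/subnet-alb
+EOF
+```
+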
+<h3 id="alb.networking.azure.io/v1.BackendTLSPolicy">BackendTLSPolicy
+</h3>
+<div>
+<p>BackendTLSPolicy is the schema for the BackendTLSPolicy API</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>metadata</code><br/>
+<em>
+<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
+Kubernetes meta/v1.ObjectMeta
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Object&rsquo;s metadata.</p>
+Refer to the Kubernetes API documentation for the fields of the
+<code>metadata</code> field.
+</td>
+</tr>
+<tr>
+<td>
+<code>spec</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.BackendTLSPolicySpec">
+BackendTLSPolicySpec
+</a>
+</em>
+</td>
+<td>
+<p>Spec is the BackendTLSPolicy specification.</p>
+<br/>
+<br/>
+<table>
+<tr>
+<td>
+<code>targetRef</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.PolicyTargetReference">
+Gateway API .PolicyTargetReference
+</a>
+</em>
+</td>
+<td>
+<p>TargetRef identifies an API object to apply policy to.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>override</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.BackendTLSPolicyConfig">
+BackendTLSPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Override defines policy configuration that should override policy
+configuration attached below the targeted resource in the hierarchy.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>default</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.BackendTLSPolicyConfig">
+BackendTLSPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Default defines default policy configuration for the targeted resource.</p>
+</td>
+</tr>
+</table>
+</td>
+</tr>
+<tr>
+<td>
+<code>status</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.BackendTLSPolicyStatus">
+BackendTLSPolicyStatus
+</a>
+</em>
+</td>
+<td>
+<p>Status defines the current state of BackendTLSPolicy.</p>
+</td>
+</tr>
+</tbody>
+</table>
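+
+For illustration only, a BackendTLSPolicy manifest assembled from the fields above (the service, secret, and SNI values are placeholders):
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: BackendTLSPolicy
+metadata:
+  name: backend-tls-example
+  namespace: default
+spec:
+  targetRef:
+    group: ""
+    kind: Service
+    name: service-contoso
+    namespace: default
+  default:
+    sni: service-contoso.default.svc.cluster.local
+    ports:
+    - port: 443
+    verify:
+      caCertificateRef:
+        name: ca-bundle-secret
+        group: ""
+        kind: Secret
+EOF
+```
+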
+<h3 id="alb.networking.azure.io/v1.BackendTLSPolicyConditionReason">BackendTLSPolicyConditionReason
+(<code>string</code> alias)</h3>
+<div>
+<p>BackendTLSPolicyConditionReason defines the set of reasons that explain why a
+particular BackendTLSPolicy condition type has been raised.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;InvalidCertificateRef&#34;</p></td>
+<td><p>BackendTLSPolicyInvalidCertificateRef is used when an invalid certificate is referenced</p>
+</td>
+</tr><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>BackendTLSPolicyReasonAccepted is used to set the BackendTLSPolicyConditionReason to Accepted
+When the given BackendTLSPolicy is correctly configured</p>
+</td>
+</tr><tr><td><p>&#34;InvalidBackendTLSPolicy&#34;</p></td>
+<td><p>BackendTLSPolicyReasonInvalid is the reason when the BackendTLSPolicy isn't Accepted</p>
+</td>
+</tr><tr><td><p>&#34;InvalidKind&#34;</p></td>
+<td><p>BackendTLSPolicyReasonInvalidKind is used when the kind/group is invalid</p>
+</td>
+</tr><tr><td><p>&#34;NoTargetReference&#34;</p></td>
+<td><p>BackendTLSPolicyReasonNoTargetReference is used when there is no target reference</p>
+</td>
+</tr><tr><td><p>&#34;RefNotPermitted&#34;</p></td>
+<td><p>BackendTLSPolicyReasonRefNotPermitted is used when the ref isn't permitted</p>
+</td>
+</tr><tr><td><p>&#34;ServiceNotFound&#34;</p></td>
+<td><p>BackendTLSPolicyReasonServiceNotFound is used when the ref service isn't found</p>
+</td>
+</tr><tr><td><p>&#34;Degraded&#34;</p></td>
+<td><p>ReasonDegraded is the backendTLSPolicyConditionReason when the backendTLSPolicy has been incorrectly programmed</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.BackendTLSPolicyConditionType">BackendTLSPolicyConditionType
+(<code>string</code> alias)</h3>
+<div>
+<p>BackendTLSPolicyConditionType is a type of condition associated with a
+BackendTLSPolicy. This type should be used with the BackendTLSPolicyStatus.Conditions
+field.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>BackendTLSPolicyConditionAccepted is used to set the BackendTLSPolicyCondition to Accepted</p>
+</td>
+</tr><tr><td><p>&#34;Ready&#34;</p></td>
+<td><p>BackendTLSPolicyConditionReady is used to set the condition to Ready</p>
+</td>
+</tr><tr><td><p>&#34;ResolvedRefs&#34;</p></td>
+<td><p>BackendTLSPolicyConditionResolvedRefs is used to set the BackendTLSPolicyCondition to ResolvedRefs
+This is used with the following Reasons :
+*BackendTLSPolicyReasonRefNotPermitted
+*BackendTLSPolicyReasonInvalidKind
+*BackendTLSPolicyReasonServiceNotFound
+*BackendTLSPolicyInvalidCertificateRef
+*ReasonDegraded</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.BackendTLSPolicyConfig">BackendTLSPolicyConfig
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.BackendTLSPolicySpec">BackendTLSPolicySpec</a>)
+</p>
+<div>
+<p>BackendTLSPolicyConfig defines the policy specification for the Backend TLS
+Policy.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>CommonTLSPolicy</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.CommonTLSPolicy">
+CommonTLSPolicy
+</a>
+</em>
+</td>
+<td>
+<p>
+(Members of <code>CommonTLSPolicy</code> are embedded into this type.)
+</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>sni</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Sni is the server name to use for the TLS connection to the backend.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>ports</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.BackendTLSPolicyPort">
+[]BackendTLSPolicyPort
+</a>
+</em>
+</td>
+<td>
+<p>Ports specifies the list of ports where the policy is applied.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>clientCertificateRef</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.SecretObjectReference">
+Gateway API .SecretObjectReference
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>ClientCertificateRef is the reference to the client certificate to
+use for the TLS connection to the backend.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.BackendTLSPolicyPort">BackendTLSPolicyPort
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.BackendTLSPolicyConfig">BackendTLSPolicyConfig</a>)
+</p>
+<div>
+<p>BackendTLSPolicyPort defines the port to use for the TLS connection to the backend</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>port</code><br/>
+<em>
+int
+</em>
+</td>
+<td>
+<p>Port is the port to use for the TLS connection to the backend</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.BackendTLSPolicySpec">BackendTLSPolicySpec
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.BackendTLSPolicy">BackendTLSPolicy</a>)
+</p>
+<div>
+<p>BackendTLSPolicySpec defines the desired state of BackendTLSPolicy</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>targetRef</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.PolicyTargetReference">
+Gateway API .PolicyTargetReference
+</a>
+</em>
+</td>
+<td>
+<p>TargetRef identifies an API object to apply policy to.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>override</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.BackendTLSPolicyConfig">
+BackendTLSPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Override defines policy configuration that should override policy
+configuration attached below the targeted resource in the hierarchy.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>default</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.BackendTLSPolicyConfig">
+BackendTLSPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Default defines default policy configuration for the targeted resource.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.BackendTLSPolicyStatus">BackendTLSPolicyStatus
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.BackendTLSPolicy">BackendTLSPolicy</a>)
+</p>
+<div>
+<p>BackendTLSPolicyStatus defines the observed state of BackendTLSPolicy.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>conditions</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Condition">
+[]Kubernetes meta/v1.Condition
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Conditions describe the current conditions of the BackendTLSPolicy.</p>
+<p>Implementations should prefer to express BackendTLSPolicy conditions
+using the <code>BackendTLSPolicyConditionType</code> and <code>BackendTLSPolicyConditionReason</code>
+constants so that operators and tools can converge on a common
+vocabulary to describe BackendTLSPolicy state.</p>
+<p>Known condition types are:</p>
+<ul>
+<li>&ldquo;Accepted&rdquo;</li>
+<li>&ldquo;Ready&rdquo;</li>
+</ul>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.CommonTLSPolicy">CommonTLSPolicy
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.BackendTLSPolicyConfig">BackendTLSPolicyConfig</a>)
+</p>
+<div>
+<p>CommonTLSPolicy is the schema for the CommonTLSPolicy API</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>verify</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.CommonTLSPolicyVerify">
+CommonTLSPolicyVerify
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Verify provides the options to verify the backend certificate</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.CommonTLSPolicyVerify">CommonTLSPolicyVerify
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.CommonTLSPolicy">CommonTLSPolicy</a>)
+</p>
+<div>
+<p>CommonTLSPolicyVerify defines the schema for the CommonTLSPolicyVerify API</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>caCertificateRef</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.SecretObjectReference">
+Gateway API .SecretObjectReference
+</a>
+</em>
+</td>
+<td>
+<p>CaCertificateRef is the CA certificate used to verify peer certificate of
+the backend.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>subjectAltName</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>SubjectAltName is the subject alternative name used to verify peer
+certificate of the backend.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.CustomTargetRef">CustomTargetRef
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.FrontendTLSPolicySpec">FrontendTLSPolicySpec</a>)
+</p>
+<div>
+<p>CustomTargetRef is a reference to a custom resource that isn't part of the
+Kubernetes core API.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>name</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>Name is the name of the referent.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>kind</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>Kind is the kind of the referent.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>gateway</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>Gateway is the name of the Gateway.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>listeners</code><br/>
+<em>
+[]string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Listener is the name of the Listener.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>namespace</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.Namespace">
+Gateway API .Namespace
+</a>
+</em>
+</td>
+<td>
+<p>Namespace is the namespace of the referent. When unspecified, the local namespace is inferred.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>group</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.Group">
+Gateway API .Group
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Group is the group of the referent.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.FrontendTLSPolicy">FrontendTLSPolicy
+</h3>
+<div>
+<p>FrontendTLSPolicy is the schema for the FrontendTLSPolicy API</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>metadata</code><br/>
+<em>
+<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
+Kubernetes meta/v1.ObjectMeta
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Object&rsquo;s metadata.</p>
+Refer to the Kubernetes API documentation for the fields of the
+<code>metadata</code> field.
+</td>
+</tr>
+<tr>
+<td>
+<code>spec</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.FrontendTLSPolicySpec">
+FrontendTLSPolicySpec
+</a>
+</em>
+</td>
+<td>
+<p>Spec is the FrontendTLSPolicy specification.</p>
+<br/>
+<br/>
+<table>
+<tr>
+<td>
+<code>targetRef</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.CustomTargetRef">
+CustomTargetRef
+</a>
+</em>
+</td>
+<td>
+<p>TargetRef identifies an API object to apply policy to.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>default</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.FrontendTLSPolicyConfig">
+FrontendTLSPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Default defines default policy configuration for the targeted resource.</p>
+</td>
+</tr>
+</table>
+</td>
+</tr>
+<tr>
+<td>
+<code>status</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.FrontendTLSPolicyStatus">
+FrontendTLSPolicyStatus
+</a>
+</em>
+</td>
+<td>
+<p>Status defines the current state of FrontendTLSPolicy.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.FrontendTLSPolicyConditionReason">FrontendTLSPolicyConditionReason
+(<code>string</code> alias)</h3>
+<div>
+<p>FrontendTLSPolicyConditionReason defines the set of reasons that explain why a
+particular FrontTLSPolicy condition type has been raised.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;InvalidGroup&#34;</p></td>
+<td><p>FrontTLSPolicyReasonInvalidGroup is used when the group is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidKind&#34;</p></td>
+<td><p>FrontTLSPolicyReasonInvalidKind is used when the kind/group is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidName&#34;</p></td>
+<td><p>FrontTLSPolicyReasonInvalidName is used when the name is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidPolicyName&#34;</p></td>
+<td><p>FrontTLSPolicyReasonInvalidPolicyName is used when the name is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidPolicyType&#34;</p></td>
+<td><p>FrontTLSPolicyReasonInvalidPolicyType is used when the type is invalid</p>
+</td>
+</tr><tr><td><p>&#34;NoTargetReference&#34;</p></td>
+<td><p>FrontTLSPolicyReasonNoTargetReference is used when there is no target reference</p>
+</td>
+</tr><tr><td><p>&#34;RefNotPermitted&#34;</p></td>
+<td><p>FrontTLSPolicyReasonRefNotPermitted is used when the ref isn't permitted</p>
+</td>
+</tr><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>FrontendTLSPolicyReasonAccepted is used to set the FrontTLSPolicyConditionReason to Accepted
+When the given FrontTLSPolicy is correctly configured</p>
+</td>
+</tr><tr><td><p>&#34;InvalidFrontendTLSPolicy&#34;</p></td>
+<td><p>FrontendTLSPolicyReasonInvalid is the reason when the FrontendTLSPolicy isn't Accepted</p>
+</td>
+</tr><tr><td><p>&#34;InvalidGateway&#34;</p></td>
+<td><p>FrontendTLSPolicyReasonInvalidGateway is used when the gateway is invalid</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.FrontendTLSPolicyConditionType">FrontendTLSPolicyConditionType
+(<code>string</code> alias)</h3>
+<div>
+<p>FrontendTLSPolicyConditionType is a type of condition associated with a
+FrontendTLSPolicy. This type should be used with the FrontendTLSPolicyStatus.Conditions
+field.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>FrontendTLSPolicyConditionAccepted is used to set the FrontendTLSPolicyCondition to Accepted</p>
+</td>
+</tr><tr><td><p>&#34;ResolvedRefs&#34;</p></td>
+<td><p>FrontendTLSPolicyConditionResolvedRefs is used to set the FrontendTLSPolicyCondition to ResolvedRefs</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.FrontendTLSPolicyConfig">FrontendTLSPolicyConfig
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.FrontendTLSPolicySpec">FrontendTLSPolicySpec</a>)
+</p>
+<div>
+<p>FrontendTLSPolicyConfig defines the policy specification for the Frontend TLS
+Policy.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>policyType</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.PolicyType">
+PolicyType
+</a>
+</em>
+</td>
+<td>
+<p>Type is the type of the policy.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.FrontendTLSPolicySpec">FrontendTLSPolicySpec
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.FrontendTLSPolicy">FrontendTLSPolicy</a>)
+</p>
+<div>
+<p>FrontendTLSPolicySpec defines the desired state of FrontendTLSPolicy</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>targetRef</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.CustomTargetRef">
+CustomTargetRef
+</a>
+</em>
+</td>
+<td>
+<p>TargetRef identifies an API object to apply policy to.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>default</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.FrontendTLSPolicyConfig">
+FrontendTLSPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Default defines default policy configuration for the targeted resource.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.FrontendTLSPolicyStatus">FrontendTLSPolicyStatus
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.FrontendTLSPolicy">FrontendTLSPolicy</a>)
+</p>
+<div>
+<p>FrontendTLSPolicyStatus defines the observed state of FrontendTLSPolicy.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>conditions</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Condition">
+[]Kubernetes meta/v1.Condition
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Conditions describe the current conditions of the FrontendTLSPolicy.</p>
+<p>Implementations should prefer to express FrontendTLSPolicy conditions
+using the <code>FrontendTLSPolicyConditionType</code> and <code>FrontendTLSPolicyConditionReason</code>
+constants so that operators and tools can converge on a common
+vocabulary to describe FrontendTLSPolicy state.</p>
+<p>Known condition types are:</p>
+<ul>
+<li>&ldquo;Accepted&rdquo;</li>
+<li>&ldquo;Ready&rdquo;</li>
+</ul>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.FrontendTLSPolicyType">FrontendTLSPolicyType
+(<code>string</code> alias)</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.PolicyType">PolicyType</a>)
+</p>
+<div>
+<p>FrontendTLSPolicyType is the type of the Frontend TLS Policy.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;predefined&#34;</p></td>
+<td><p>PredefinedFrontendTLSPolicyType is the type of the predefined Frontend TLS Policy.</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.HTTPMatch">HTTPMatch
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HTTPSpecifiers">HTTPSpecifiers</a>)
+</p>
+<div>
+<p>HTTPMatch defines the HTTP matchers to use for HealthCheck checks.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>body</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Body defines the HTTP body matchers to use for HealthCheck checks.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>statusCodes</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.StatusCodes">
+[]StatusCodes
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>StatusCodes defines the HTTP status code matchers to use for HealthCheck checks.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.HTTPSpecifiers">HTTPSpecifiers
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">HealthCheckPolicyConfig</a>)
+</p>
+<div>
+<p>HTTPSpecifiers defines the schema for HTTP HealthCheck check specification</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>host</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Host is the host header value to use for HealthCheck checks.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>path</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Path is the path to use for HealthCheck checks.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>match</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HTTPMatch">
+HTTPMatch
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Match defines the HTTP matchers to use for HealthCheck checks.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.HealthCheckPolicy">HealthCheckPolicy
+</h3>
+<div>
+<p>HealthCheckPolicy is the schema for the HealthCheckPolicy API</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>metadata</code><br/>
+<em>
+<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
+Kubernetes meta/v1.ObjectMeta
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Object&rsquo;s metadata.</p>
+Refer to the Kubernetes API documentation for the fields of the
+<code>metadata</code> field.
+</td>
+</tr>
+<tr>
+<td>
+<code>spec</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HealthCheckPolicySpec">
+HealthCheckPolicySpec
+</a>
+</em>
+</td>
+<td>
+<p>Spec is the HealthCheckPolicy specification.</p>
+<br/>
+<br/>
+<table>
+<tr>
+<td>
+<code>targetRef</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.PolicyTargetReference">
+Gateway API .PolicyTargetReference
+</a>
+</em>
+</td>
+<td>
+<p>TargetRef identifies an API object to apply policy to.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>override</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">
+HealthCheckPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Override defines policy configuration that should override policy
+configuration attached below the targeted resource in the hierarchy.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>default</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">
+HealthCheckPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Default defines default policy configuration for the targeted resource.</p>
+</td>
+</tr>
+</table>
+</td>
+</tr>
+<tr>
+<td>
+<code>status</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HealthCheckPolicyStatus">
+HealthCheckPolicyStatus
+</a>
+</em>
+</td>
+<td>
+<p>Status defines the current state of HealthCheckPolicy.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.HealthCheckPolicyConditionReason">HealthCheckPolicyConditionReason
+(<code>string</code> alias)</h3>
+<div>
+<p>HealthCheckPolicyConditionReason defines the set of reasons that explain why a
+particular HealthCheckPolicy condition type has been raised.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;InvalidReference&#34;</p></td>
+<td><p>HealthCheckPolicyInvalidReference is used when the reference is invalid</p>
+</td>
+</tr><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>HealthCheckPolicyReasonAccepted is used to set the HealthCheckPolicyConditionReason to Accepted
+When the given HealthCheckPolicy is correctly configured</p>
+</td>
+</tr><tr><td><p>&#34;InvalidHealthCheckPolicy&#34;</p></td>
+<td><p>HealthCheckPolicyReasonInvalid is the reason when the HealthCheckPolicy isn't Accepted</p>
+</td>
+</tr><tr><td><p>&#34;InvalidServiceReference&#34;</p></td>
+<td><p>HealthCheckPolicyReasonInvalidServiceReference is used when the service is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidTargetReference&#34;</p></td>
+<td><p>HealthCheckPolicyReasonInvalidTargetReference is used when the target is invalid</p>
+</td>
+</tr><tr><td><p>&#34;NoTargetReference&#34;</p></td>
+<td><p>HealthCheckPolicyReasonNoTargetReference is used when the target isn't found</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.HealthCheckPolicyConditionType">HealthCheckPolicyConditionType
+(<code>string</code> alias)</h3>
+<div>
+<p>HealthCheckPolicyConditionType is a type of condition associated with a
+HealthCheckPolicy. This type should be used with the HealthCheckPolicyStatus.Conditions
+field.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>HealthCheckPolicyConditionAccepted is used to set the HealthCheckPolicyConditionType to Accepted</p>
+</td>
+</tr><tr><td><p>&#34;ResolvedRefs&#34;</p></td>
+<td><p>HealthCheckPolicyConditionResolvedRefs is used to set the HealthCheckPolicyCondition to ResolvedRefs</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.HealthCheckPolicyConfig">HealthCheckPolicyConfig
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicySpec">HealthCheckPolicySpec</a>, <a href="#alb.networking.azure.io/v1.IngressBackendSettings">IngressBackendSettings</a>)
+</p>
+<div>
+<p>HealthCheckPolicyConfig defines the schema for HealthCheck check specification</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>port</code><br/>
+<em>
+int32
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Port is the port to use for HealthCheck checks.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>protocol</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.Protocol">
+Protocol
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Protocol is the protocol to use for HealthCheck checks.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>interval</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
+Kubernetes meta/v1.Duration
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Interval is the number of seconds between HealthCheck checks.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>timeout</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
+Kubernetes meta/v1.Duration
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Timeout is the number of seconds after which the HealthCheck check is
+considered failed.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>unhealthyThreshold</code><br/>
+<em>
+int32
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>UnhealthyThreshold is the number of consecutive failed HealthCheck checks</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>healthyThreshold</code><br/>
+<em>
+int32
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>HealthyThreshold is the number of consecutive successful HealthCheck checks</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>http</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HTTPSpecifiers">
+HTTPSpecifiers
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>HTTP defines the HTTP constraint specification for the HealthCheck of a
+target resource.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.HealthCheckPolicySpec">HealthCheckPolicySpec
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicy">HealthCheckPolicy</a>)
+</p>
+<div>
+<p>HealthCheckPolicySpec defines the desired state of HealthCheckPolicy</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>targetRef</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.PolicyTargetReference">
+Gateway API .PolicyTargetReference
+</a>
+</em>
+</td>
+<td>
+<p>TargetRef identifies an API object to apply policy to.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>override</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">
+HealthCheckPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Override defines policy configuration that should override policy
+configuration attached below the targeted resource in the hierarchy.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>default</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">
+HealthCheckPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Default defines default policy configuration for the targeted resource.</p>
+</td>
+</tr>
+</tbody>
+</table>
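+
+For illustration only, a HealthCheckPolicy manifest assembled from the fields above (the target service, probe path, and thresholds are placeholders):
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: HealthCheckPolicy
+metadata:
+  name: health-check-example
+  namespace: default
+spec:
+  targetRef:
+    group: ""
+    kind: Service
+    name: service-hello-world
+    namespace: default
+  default:
+    interval: 5s
+    timeout: 3s
+    healthyThreshold: 1
+    unhealthyThreshold: 3
+    port: 80
+    protocol: HTTP
+    http:
+      host: contoso.com
+      path: /healthz
+EOF
+```
+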
+<h3 id="alb.networking.azure.io/v1.HealthCheckPolicyStatus">HealthCheckPolicyStatus
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicy">HealthCheckPolicy</a>)
+</p>
+<div>
+<p>HealthCheckPolicyStatus defines the observed state of HealthCheckPolicy.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>conditions</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Condition">
+[]Kubernetes meta/v1.Condition
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Conditions describe the current conditions of the HealthCheckPolicy.</p>
+<p>Implementations should prefer to express HealthCheckPolicy conditions
+using the <code>HealthCheckPolicyConditionType</code> and <code>HealthCheckPolicyConditionReason</code>
+constants so that operators and tools can converge on a common
+vocabulary to describe HealthCheckPolicy state.</p>
+<p>Known condition types are:</p>
+<ul>
+<li>&ldquo;Accepted&rdquo;</li>
+</ul>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressBackendOverride">IngressBackendOverride
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerSetting">IngressListenerSetting</a>)
+</p>
+<div>
+<p>IngressBackendOverride allows a user to change the hostname on a request before it is sent to a backend service</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>service</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>Service is the name of a backend service that this override refers to.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>backendHost</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>BackendHost is the hostname that an incoming request will be mutated to use before being forwarded to the backend</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressBackendPort">IngressBackendPort
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressBackendSettings">IngressBackendSettings</a>)
+</p>
+<div>
+<p>IngressBackendPort describes a port on a backend.
+Only one of Name/Number should be defined.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>number</code><br/>
+<em>
+int32
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Number indicates the TCP port number being referred to</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>name</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Name must refer to a name on a port on the backend service</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>protocol</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.Protocol">
+Protocol
+</a>
+</em>
+</td>
+<td>
+<p>Protocol should be one of &ldquo;HTTP&rdquo;, &ldquo;HTTPS&rdquo;</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressBackendSettings">IngressBackendSettings
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressExtensionSpec">IngressExtensionSpec</a>)
+</p>
+<div>
+<p>IngressBackendSettings provides extended configuration options for a backend service</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>service</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>Service is the name of a backend service that this configuration applies to</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>ports</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressBackendPort">
+[]IngressBackendPort
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Ports can be used to indicate if the backend service is listening on HTTP or HTTPS</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>trustedRootCertificate</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>TrustedRootCertificate can be used to supply a certificate for the gateway to trust when communicating to the
+backend on a port specified as https</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>pathPrefixOverride</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>PathPrefixOverride will mutate requests going to the backend to be prefixed with this value</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>sessionAffinity</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.SessionAffinity">
+SessionAffinity
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>SessionAffinity allows client requests to be consistently given to the same backend</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>timeouts</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressTimeouts">
+IngressTimeouts
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Timeouts define a set of timeout parameters to be applied to an Ingress</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>healthCheck</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">
+HealthCheckPolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>HealthCheck defines a health probe which is used to determine if a backend is healthy</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressCertificate">IngressCertificate
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerTLS">IngressListenerTLS</a>)
+</p>
+<div>
+<p>IngressCertificate defines a certificate and private key to be used with TLS.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>type</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>Type indicates where the Certificate is stored.
+Can be KubernetesSecret, or KeyVaultCertificate</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>name</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Name is the name of a KubernetesSecret containing the TLS cert and key</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>secretId</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>SecretID is the resource ID of a KeyVaultCertificate</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressExtension">IngressExtension
+</h3>
+<div>
+<p>IngressExtension is the schema for the IngressExtension API</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>metadata</code><br/>
+<em>
+<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
+Kubernetes meta/v1.ObjectMeta
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Object&rsquo;s metadata.</p>
+Refer to the Kubernetes API documentation for the fields of the
+<code>metadata</code> field.
+</td>
+</tr>
+<tr>
+<td>
+<code>spec</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressExtensionSpec">
+IngressExtensionSpec
+</a>
+</em>
+</td>
+<td>
+<p>Spec is the IngressExtension specification.</p>
+<br/>
+<br/>
+<table>
+<tr>
+<td>
+<code>listenerSettings</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressListenerSetting">
+[]IngressListenerSetting
+</a>
+</em>
+</td>
+<td>
+<p>Listeners defines a list of listeners to configure</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>backendSettings</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressBackendSettings">
+[]IngressBackendSettings
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>BackendSettings defines a set of configuration options for Ingress service backends</p>
+</td>
+</tr>
+</table>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressExtensionSpec">IngressExtensionSpec
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressExtension">IngressExtension</a>)
+</p>
+<div>
+<p>IngressExtensionSpec defines the desired configuration of IngressExtension</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>listenerSettings</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressListenerSetting">
+[]IngressListenerSetting
+</a>
+</em>
+</td>
+<td>
+<p>Listeners defines a list of listeners to configure</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>backendSettings</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressBackendSettings">
+[]IngressBackendSettings
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>BackendSettings defines a set of configuration options for Ingress service backends</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressListenerPort">IngressListenerPort
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerSetting">IngressListenerSetting</a>)
+</p>
+<div>
+<p>IngressListenerPort describes a port a listener will listen on.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>port</code><br/>
+<em>
+int32
+</em>
+</td>
+<td>
+<p>Port defines what TCP port the listener will listen on</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>protocol</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.Protocol">
+Protocol
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Protocol indicates if the port will be used for HTTP or HTTPS traffic.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>sslRedirectTo</code><br/>
+<em>
+int32
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>SSLRedirectTo can be used to redirect HTTP traffic to HTTPS on the indicated port</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressListenerSetting">IngressListenerSetting
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressExtensionSpec">IngressExtensionSpec</a>)
+</p>
+<div>
+<p>IngressListenerSetting provides configuration options for listeners</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>host</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>Host is used to match against Ingress rules with the same hostname in order to identify which rules are affected by these settings</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>tls</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressListenerTLS">
+IngressListenerTLS
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>TLS defines TLS settings for the Listener</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>additionalHostnames</code><br/>
+<em>
+[]string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>AdditionalHostnames specifies additional hostnames to listen on</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>ports</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressListenerPort">
+[]IngressListenerPort
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Defines what ports and protocols a listener should listen on</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>overrideBackendHostnames</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressBackendOverride">
+[]IngressBackendOverride
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>OverrideBackendHostnames is a list of services on which incoming requests will have the value of the host header changed</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressListenerTLS">IngressListenerTLS
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerSetting">IngressListenerSetting</a>)
+</p>
+<div>
+<p>IngressListenerTLS provides options for configuring TLS settings on a listener</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>certificate</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressCertificate">
+IngressCertificate
+</a>
+</em>
+</td>
+<td>
+<p>Certificate specifies a TLS Certificate to configure a Listener with</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>policy</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.IngressTLSPolicy">
+IngressTLSPolicy
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Policy configures a particular TLS Policy</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.IngressTLSPolicy">IngressTLSPolicy
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressListenerTLS">IngressListenerTLS</a>)
+</p>
+<div>
+<p>IngressTLSPolicy describes cipher suites and related TLS configuration options</p>
+</div>
+<h3 id="alb.networking.azure.io/v1.IngressTimeouts">IngressTimeouts
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressBackendSettings">IngressBackendSettings</a>)
+</p>
+<div>
+<p>IngressTimeouts can be used to configure timeout properties for an Ingress</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>requestTimeout</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
+Kubernetes meta/v1.Duration
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>RequestTimeout defines the timeout used by the load balancer when forwarding requests to a backend service</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.PolicyType">PolicyType
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.FrontendTLSPolicyConfig">FrontendTLSPolicyConfig</a>)
+</p>
+<div>
+<p>PolicyType is the type of the policy.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>name</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<p>Name is the name of the policy.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>type</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.FrontendTLSPolicyType">
+FrontendTLSPolicyType
+</a>
+</em>
+</td>
+<td>
+<p>PredefinedFrontendTLSPolicyType is the type of the predefined Frontend TLS Policy.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.Protocol">Protocol
+(<code>string</code> alias)</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">HealthCheckPolicyConfig</a>, <a href="#alb.networking.azure.io/v1.IngressBackendPort">IngressBackendPort</a>, <a href="#alb.networking.azure.io/v1.IngressListenerPort">IngressListenerPort</a>)
+</p>
+<div>
+<p>Protocol defines the protocol used for certain properties.
+Valid Protocol values are:</p>
+<ul>
+<li>HTTP</li>
+<li>HTTPS</li>
+<li>TCP</li>
+</ul>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;HTTP&#34;</p></td>
+<td><p>HTTP implies that the service will use HTTP</p>
+</td>
+</tr><tr><td><p>&#34;HTTPS&#34;</p></td>
+<td><p>HTTPS implies that the service will use HTTPS</p>
+</td>
+</tr><tr><td><p>&#34;TCP&#34;</p></td>
+<td><p>TCP implies that the service will use plain TCP</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.RoutePolicy">RoutePolicy
+</h3>
+<div>
+<p>RoutePolicy is the schema for the RoutePolicy API</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>metadata</code><br/>
+<em>
+<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
+Kubernetes meta/v1.ObjectMeta
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Object&rsquo;s metadata.</p>
+Refer to the Kubernetes API documentation for the fields of the
+<code>metadata</code> field.
+</td>
+</tr>
+<tr>
+<td>
+<code>spec</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.RoutePolicySpec">
+RoutePolicySpec
+</a>
+</em>
+</td>
+<td>
+<p>Spec is the RoutePolicy specification.</p>
+<br/>
+<br/>
+<table>
+<tr>
+<td>
+<code>targetRef</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.PolicyTargetReference">
+Gateway API .PolicyTargetReference
+</a>
+</em>
+</td>
+<td>
+<p>TargetRef identifies an API object to apply policy to.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>override</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.RoutePolicyConfig">
+RoutePolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Override defines policy configuration that should override policy
+configuration attached below the targeted resource in the hierarchy.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>default</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.RoutePolicyConfig">
+RoutePolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Default defines default policy configuration for the targeted resource.</p>
+</td>
+</tr>
+</table>
+</td>
+</tr>
+<tr>
+<td>
+<code>status</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.RoutePolicyStatus">
+RoutePolicyStatus
+</a>
+</em>
+</td>
+<td>
+<p>Status defines the current state of RoutePolicy.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.RoutePolicyConditionReason">RoutePolicyConditionReason
+(<code>string</code> alias)</h3>
+<div>
+<p>RoutePolicyConditionReason defines the set of reasons that explain why a
+particular RoutePolicy condition type has been raised.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>RoutePolicyReasonAccepted is used to set the RoutePolicyConditionReason to Accepted
+When the given RoutePolicy is correctly configured</p>
+</td>
+</tr><tr><td><p>&#34;InvalidRoutePolicy&#34;</p></td>
+<td><p>RoutePolicyReasonInvalid is the reason when the RoutePolicy isn't Accepted</p>
+</td>
+</tr><tr><td><p>&#34;InvalidHTTPRoute&#34;</p></td>
+<td><p>RoutePolicyReasonInvalidHTTPRoute is used when the HTTPRoute is invalid</p>
+</td>
+</tr><tr><td><p>&#34;InvalidTargetReference&#34;</p></td>
+<td><p>RoutePolicyReasonInvalidTargetReference is used when there is no target reference</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.RoutePolicyConditionType">RoutePolicyConditionType
+(<code>string</code> alias)</h3>
+<div>
+<p>RoutePolicyConditionType is a type of condition associated with a
+RoutePolicy. This type should be used with the RoutePolicyStatus.Conditions
+field.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody><tr><td><p>&#34;Accepted&#34;</p></td>
+<td><p>RoutePolicyConditionAccepted is used to set the RoutePolicyConditionType to Accepted</p>
+</td>
+</tr><tr><td><p>&#34;ResolvedRefs&#34;</p></td>
+<td><p>RoutePolicyConditionResolvedRefs is used to set the RoutePolicyCondition to ResolvedRefs</p>
+</td>
+</tr></tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.RoutePolicyConfig">RoutePolicyConfig
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.RoutePolicySpec">RoutePolicySpec</a>)
+</p>
+<div>
+<p>RoutePolicyConfig defines the schema for RoutePolicy specification. This allows the specification of the following attributes:
+* Timeouts
+* Session Affinity</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>timeouts</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.RouteTimeouts">
+RouteTimeouts
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Custom Timeouts
+Timeout for the target resource.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>sessionAffinity</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.SessionAffinity">
+SessionAffinity
+</a>
+</em>
+</td>
+<td>
+<p>SessionAffinity defines the schema for Session Affinity specification</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.RoutePolicySpec">RoutePolicySpec
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.RoutePolicy">RoutePolicy</a>)
+</p>
+<div>
+<p>RoutePolicySpec defines the desired state of RoutePolicy</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>targetRef</code><br/>
+<em>
+<a href="https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.PolicyTargetReference">
+Gateway API .PolicyTargetReference
+</a>
+</em>
+</td>
+<td>
+<p>TargetRef identifies an API object to apply policy to.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>override</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.RoutePolicyConfig">
+RoutePolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Override defines policy configuration that should override policy
+configuration attached below the targeted resource in the hierarchy.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>default</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.RoutePolicyConfig">
+RoutePolicyConfig
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Default defines default policy configuration for the targeted resource.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.RoutePolicyStatus">RoutePolicyStatus
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.RoutePolicy">RoutePolicy</a>)
+</p>
+<div>
+<p>RoutePolicyStatus defines the observed state of RoutePolicy.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>conditions</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Condition">
+[]Kubernetes meta/v1.Condition
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Conditions describe the current conditions of the RoutePolicy.</p>
+<p>Implementations should prefer to express RoutePolicy conditions
+using the <code>RoutePolicyConditionType</code> and <code>RoutePolicyConditionReason</code>
+constants so that operators and tools can converge on a common
+vocabulary to describe RoutePolicy state.</p>
+<p>Known condition types are:</p>
+<ul>
+<li>&ldquo;Accepted&rdquo;</li>
+</ul>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.RouteTimeouts">RouteTimeouts
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.RoutePolicyConfig">RoutePolicyConfig</a>)
+</p>
+<div>
+<p>RouteTimeouts defines the schema for Timeouts specification</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>routeTimeout</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
+Kubernetes meta/v1.Duration
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>RouteTimeout is the timeout for the route.</p>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.SessionAffinity">SessionAffinity
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressBackendSettings">IngressBackendSettings</a>, <a href="#alb.networking.azure.io/v1.RoutePolicyConfig">RoutePolicyConfig</a>)
+</p>
+<div>
+<p>SessionAffinity defines the schema for Session Affinity specification</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>affinityType</code><br/>
+<em>
+<a href="#alb.networking.azure.io/v1.AffinityType">
+AffinityType
+</a>
+</em>
+</td>
+<td>
+</td>
+</tr>
+<tr>
+<td>
+<code>cookieName</code><br/>
+<em>
+string
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+</td>
+</tr>
+<tr>
+<td>
+<code>cookieDuration</code><br/>
+<em>
+<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
+Kubernetes meta/v1.Duration
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+</td>
+</tr>
+</tbody>
+</table>
+<h3 id="alb.networking.azure.io/v1.StatusCodes">StatusCodes
+</h3>
+<p>
+(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HTTPMatch">HTTPMatch</a>)
+</p>
+<div>
+<p>StatusCodes defines the HTTP status code matchers to use for HealthCheck checks.</p>
+</div>
+<table>
+<thead>
+<tr>
+<th>Field</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>
+<code>start</code><br/>
+<em>
+int32
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Start defines the start of the range of status codes to use for HealthCheck checks.</p>
+</td>
+</tr>
+<tr>
+<td>
+<code>end</code><br/>
+<em>
+int32
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>End defines the end of the range of status codes to use for HealthCheck checks.</p>
+</td>
+</tr>
+</tbody>
+</table>
application-gateway Application Gateway For Containers Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/application-gateway-for-containers-components.md
++
+ Title: Application Gateway for Containers components (preview)
+description: This article provides information about how Application Gateway for Containers accepts incoming requests and routes them to a backend target.
+++++ Last updated : 07/24/2023+++
+# Application Gateway for Containers components (preview)
+
+This article provides detailed descriptions and requirements for components of Application Gateway for Containers. Information about how Application Gateway for Containers accepts incoming requests and routes them to a backend target is provided. For a general overview of Application Gateway for Containers, see [What is Application Gateway for Containers?](overview.md).
+
+### Core components
+- Application Gateway for Containers is an Azure parent resource that deploys the control plane
+- The control plane is responsible for orchestrating proxy configuration based on customer intent.
+- Application Gateway for Containers has two child resources: associations and frontends
+ - Child resources are exclusive to their parent Application Gateway for Containers and may not be referenced by another Application Gateway for Containers resource
+
+### Application Gateway for Containers frontends
+- An Application Gateway for Containers frontend resource is an Azure child resource of the Application Gateway for Containers parent resource
+- An Application Gateway for Containers frontend defines the entry point at which client traffic is received by a given Application Gateway for Containers
+ - A frontend can't be associated with multiple Application Gateway for Containers resources
+ - Each frontend provides a unique FQDN that can be referenced by a customer's CNAME record
+ - Private IP addresses are currently unsupported
+- A single Application Gateway for Containers can support multiple frontends
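+
+For instance, a CNAME record that points a custom domain at a frontend's generated FQDN can be created in an Azure DNS zone. The following sketch assumes an Azure DNS zone named contoso.com; all names are placeholders.
+
+```bash
+# Point www.contoso.com at the frontend's generated FQDN (values are placeholders)
+az network dns record-set cname set-record \
+  --resource-group <resource-group> \
+  --zone-name contoso.com \
+  --record-set-name www \
+  --cname <frontend-fqdn>
+```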
+
+### Application Gateway for Containers associations
+- An Application Gateway for Containers association resource is an Azure child resource of the Application Gateway for Containers parent resource
+- An Application Gateway for Containers association defines a connection point into a virtual network. An association is a 1:1 mapping of an association resource to an Azure Subnet that has been delegated.
+- Application Gateway for Containers is designed to allow for multiple associations
+ - At this time, the number of associations is limited to 1
+- During creation of an association, the underlying data plane is provisioned and connected to the delegated subnet within the defined virtual network
+- Each association should assume at least 256 addresses are available in the subnet at time of provisioning.
+ - This implies a minimum subnet size of /24 for a new deployment (assuming nothing has previously been provisioned in the subnet).
+ - If n Application Gateway for Containers resources are provisioned, each containing one association, and the intent is to share the same subnet, the number of available addresses required is n*256.
+ - All Application Gateway for Containers association resources should be in the same region as the Application Gateway for Containers parent resource
+
+### Application Gateway for Containers ALB Controller
+- An Application Gateway for Containers ALB Controller is a Kubernetes deployment that orchestrates configuration and deployment of Application Gateway for Containers by watching both Kubernetes Custom Resources and resource configurations, such as, but not limited to, Ingress, Gateway, and ApplicationLoadBalancer. It uses both ARM and Application Gateway for Containers configuration APIs to propagate configuration to the Application Gateway for Containers Azure deployment.
+- ALB Controller is deployed / installed via Helm (see the sketch after this list)
+- ALB Controller consists of two running pods
+ - alb-controller pod is responsible for orchestrating customer intent to Application Gateway for Containers load balancing configuration
+ - alb-controller-bootstrap pod is responsible for management of CRDs
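+
+A minimal sketch of the Helm install follows. The chart location and value names reflect the ALB Controller quickstart at the time of writing and should be treated as assumptions; defer to that article for the authoritative command and current version.
+
+```bash
+# Install ALB Controller into the cluster (placeholders for resource names and version)
+helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
+  --version <alb-controller-version> \
+  --set albController.podIdentity.clientID=$(az identity show \
+      --resource-group <resource-group> \
+      --name <alb-controller-identity> \
+      --query clientId -o tsv)
+```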
+
+## Azure / general concepts
+
+### Private IP address
+- A private IP address isn't explicitly defined as an Azure Resource Manager resource. A private IP address would refer to a specific host address within a given virtual network's subnet.
+
+### Subnet delegation
+
+- Microsoft.ServiceNetworking/trafficControllers is the namespace adopted by Application Gateway for Containers and may be delegated to a virtual network's subnet.
+- When delegation occurs, provisioning of Application Gateway for Containers resources doesn't happen, nor is there an exclusive mapping to an Application Gateway for Containers association resource.
+- Any number of subnets can have a subnet delegation that is the same as or different from Application Gateway for Containers. Once defined, no resources other than the defined service can be provisioned into the subnet unless explicitly allowed by the service's implementation.
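+
+A delegation can be added to an existing subnet with the Azure CLI, for example (a sketch; the resource names are placeholders):
+
+```bash
+# Delegate an existing subnet to Application Gateway for Containers
+az network vnet subnet update \
+  --resource-group <resource-group> \
+  --vnet-name <vnet-name> \
+  --name <subnet-name> \
+  --delegations 'Microsoft.ServiceNetworking/trafficControllers'
+```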
+
+### User-assigned managed identity
+
+- Managed identities for Azure resources eliminate the need to manage credentials in code.
+- A user-assigned managed identity is required for each ALB Controller to make changes to Application Gateway for Containers
+- _AppGw for Containers Configuration Manager_ is a built-in RBAC role that allows ALB Controller to access and configure the Application Gateway for Containers resource.
+
+> [!Note]
+> The _AppGw for Containers Configuration Manager_ role has [data action permissions](../../role-based-access-control/role-definitions.md#control-and-data-actions) that the Owner and Contributor roles do not have. It is critical that proper permissions are delegated to prevent issues with ALB Controller making changes to the Application Gateway for Containers service.
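+
+The role assignment can be created with the Azure CLI, for example (a sketch; the identity and resource names are placeholders):
+
+```bash
+# Grant the ALB Controller's managed identity the AppGw for Containers Configuration Manager
+# role on the Application Gateway for Containers resource
+PRINCIPAL_ID=$(az identity show --resource-group <resource-group> --name <alb-controller-identity> --query principalId -o tsv)
+AGC_ID=$(az network alb show --resource-group <resource-group> --name <agc-name> --query id -o tsv)
+
+az role assignment create \
+  --assignee-object-id $PRINCIPAL_ID \
+  --assignee-principal-type ServicePrincipal \
+  --role "AppGw for Containers Configuration Manager" \
+  --scope $AGC_ID
+```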
+
+## How Application Gateway for Containers accepts a request
+
+Each Application Gateway for Containers frontend provides a generated Fully Qualified Domain Name managed by Azure. The FQDN may be used as-is or customers may opt to mask the FQDN with a CNAME record.
+
+Before a client sends a request to Application Gateway for Containers, the client uses a DNS server to resolve either a custom domain whose CNAME record points to the frontend's FQDN, or the frontend's FQDN itself.
+
+The DNS resolver translates the DNS record to an IP address.
+
+When the client initiates the request, the DNS name specified is passed as a host header to Application Gateway for Containers on the defined frontend.
+
+A set of routing rules then evaluates how the request for that hostname should be routed to a defined backend target.
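+
+For example, the same request can be made against either the generated FQDN or a custom domain that CNAMEs to it; the hostname the client used is what the routing rules evaluate. A sketch, with placeholder names:
+
+```bash
+# Resolve the frontend's generated FQDN to an IP address (value is a placeholder)
+nslookup xxxx.yyyy.alb.azure.com
+
+# Call the frontend directly; the FQDN is sent as the Host header and matched
+# against the configured routing rules
+curl http://xxxx.yyyy.alb.azure.com/
+
+# Or call a custom domain whose CNAME record points to the frontend FQDN
+curl http://www.contoso.com/
+```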
+
+## How Application Gateway for Containers routes a request
+
+### Modifications to the request
+
+Application Gateway for Containers inserts three additional headers to all requests before requests are initiated from Application Gateway for Containers to a backend target:
+- x-forwarded-for
+- x-forwarded-proto
+- x-request-id
+
+**x-forwarded-for** is the original requestor's client IP address. If the request comes through a proxy, the header value appends each address received, comma delimited. For example: 1.2.3.4,5.6.7.8, where 1.2.3.4 is the client IP address presented to the proxy in front of Application Gateway for Containers, and 5.6.7.8 is the address of the proxy forwarding traffic to Application Gateway for Containers.
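+
+A backend that only needs the original client address can take the left-most entry of the comma-delimited list, for example (a sketch using the sample value above):
+
+```bash
+# Sample x-forwarded-for value as described above
+XFF="1.2.3.4,5.6.7.8"
+
+# The left-most entry is the original client; entries to the right are intermediate proxies
+CLIENT_IP="${XFF%%,*}"
+echo "$CLIENT_IP"   # prints 1.2.3.4
+```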
+
+**x-forwarded-proto** returns the protocol received by Application Gateway for Containers from the client. The value is either http or https.
+
+**x-request-id** is a unique GUID generated by Application Gateway for Containers for each client request and presented in the forwarded request to the backend target. The GUID consists of 32 alphanumeric characters, separated by dashes (for example: d23387ab-e629-458a-9c93-6108d374bc75). This GUID can be used to correlate a request received by Application Gateway for Containers and initiated to a backend target as defined in access logs.
+
application-gateway Application Gateway For Containers Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/application-gateway-for-containers-metrics.md
+
+ Title: Azure Monitor metrics for Application Gateway for Containers
+description: Learn how to use metrics to monitor performance of Application Gateway for Containers
+++++ Last updated : 07/24/2023+++
+# Metrics for Application Gateway for Containers
+
+Application Gateway for Containers publishes data points to [Azure Monitor](../../azure-monitor/overview.md) for the performance of your Application Gateway for Containers and backend instances. These data points are called metrics, and are numerical values in an ordered set of time-series data. Metrics describe some aspect of your Application Gateway for Containers at a particular time. If there are requests flowing through Application Gateway for Containers, it measures and sends its metrics in 60-second intervals. If there are no requests flowing through Application Gateway for Containers or no data for a metric, the metric isn't reported. For more information, see [Azure Monitor metrics](../../azure-monitor/essentials/data-platform-metrics.md).
+
+## Metrics supported by Application Gateway for Containers
+
+| Metric Name | Description | Aggregation Type | Dimensions |
+| -- | -- | - | - |
+| Backend Connection Timeouts | Count of requests that timed out waiting for a response from the backend target (includes all retry requests initiated from Application Gateway for Containers to the backend target) | Total | Backend Service |
+| Backend Healthy Targets | Count of healthy backend targets | Avg | Backend Service |
+| Backend HTTP Response Status | HTTP response status returned by the backend target to Application Gateway for Containers | Total | Backend Service, HTTP Response Code |
+| Connection Timeouts | Count of connections closed due to timeout between clients and Application Gateway for Containers | Total | Frontend |
+| HTTP Response Status | HTTP response status returned by Application Gateway for Containers | Total | Frontend, HTTP Response Code |
+| Total Connection Idle Timeouts | Count of connections closed, between client and Application Gateway for Containers frontend, due to exceeding idle timeout | Total | Frontend |
+| Total Requests | Count of requests Application Gateway for Containers has served | Total | Frontend |
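+
+Metrics can also be retrieved programmatically with the Azure CLI. The following sketch lists the available metric definitions and then queries one of them; the value passed to `--metric` is an assumption, so confirm the exact name against the `list-definitions` output first.
+
+```bash
+AGC_ID=$(az network alb show --resource-group <resource-group> --name <agc-name> --query id -o tsv)
+
+# Discover the exact metric names exposed by the resource
+az monitor metrics list-definitions --resource $AGC_ID -o table
+
+# Query a metric (name assumed; replace with a value from the previous command)
+az monitor metrics list --resource $AGC_ID --metric "TotalRequests" --interval PT1M -o table
+```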
+
+## View Application Gateway for Containers metrics
+
+Use the following steps to view Application Gateway for Containers metrics in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+2. In **Search resources, service, and docs**, type **Application Gateways for Containers** and select your Application Gateway for Containers name.
+3. Under **Monitoring**, select **Metrics**.
+4. Next to **Chart Title**, enter a title for your metrics view.
+5. **Scope** and **Metric Namespace** are automatically populated. Under **Metric**, select a metric such as **Total Requests**. For the **Total Requests** metric, the **Aggregation** is set to **Sum**.
+6. Select **Add filter**. **Property** is set to **Frontend**. Choose the **=** (equals) **Operator**.
+7. Enter values to use for filtering under **Values**. For example:
+ - **frontend-primary:80**
+ - **ingress-frontend:443**
+ - **ingress-frontend:80**
+8. Select the values you want to actively filter from the entries you create.
+9. Choose **Apply Splitting**, select **Frontend**, and accept default values for **Limit** and **Sort**. See the following example:
+
+ **Total Requests**
+
+ ![Application Gateway for Containers metrics total requests](./media/application-gateway-for-containers-metrics/total-requests.png)
+
+ The following are some other useful chart examples:
+
+ **HTTP Response Status**
+
+   In the HTTP response status example shown, a filter can be applied to quickly identify trends in response codes returned to client requests. Optionally, a filter may be applied to further monitor a specific grouping of HTTP response codes.
+
+ ![Application Gateway for Containers metrics HTTP response status](./media/application-gateway-for-containers-metrics/http-response-status.png)
+
+   **Backend Healthy Targets**
+
+   In the backend healthy targets example shown, the number of healthy backend targets increases from 1 to 2 for Kubernetes service _echo_ and then decreases to 1. This validates that Application Gateway for Containers is able to detect the new replica, begin load balancing to the replica, and then remove the replica when the replica count is decreased in Kubernetes.
+
+ ![Application Gateway for Containers metrics backend healthy targets](./media/application-gateway-for-containers-metrics/backend-healthy-targets.png)
++
+## Next steps
+
+* [Using Azure Log Analytics in Power BI](/power-bi/transform-model/log-analytics/desktop-log-analytics-overview)
+* [Configure Azure Log Analytics for Power BI](/power-bi/transform-model/log-analytics/desktop-log-analytics-configure)
+* [Visualize Azure Cognitive Search Logs and Metrics with Power BI](/azure/search/search-monitor-logs-powerbi)
application-gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/diagnostics.md
+
+ Title: Diagnostic logs for Application Gateway for Containers (preview)
+description: Learn how to enable access logs for Application Gateway for Containers
+++++ Last updated : 07/24/2023+++
+# Diagnostic logs for Application Gateway for Containers (preview)
+
+Learn how to troubleshoot common problems in Application Gateway for Containers.
+
+You can monitor Azure Application Gateway for Containers resources in the following ways:
+
+* Logs: Logs allow for performance, access, and other data to be saved or consumed from a resource for monitoring purposes.
+
+* Metrics: Application Gateway for Containers has several metrics that help you verify your system is performing as expected.
+
+## Diagnostic logs
+
+You can use different types of logs in Azure to manage and troubleshoot Application Gateway for Containers. You can access some of these logs through the portal. All logs can be extracted from Azure Blob storage and viewed in different tools, such as [Azure Monitor logs](../../azure-monitor/logs/data-platform-logs.md), Excel, and Power BI. You can learn more about the different types of logs from the following list:
+
+* **Activity log**: You can use [Azure activity logs](../../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. Activity log entries are collected by default, and you can view them in the Azure portal.
+* **Access log**: You can use this log to view Application Gateway for Containers access patterns and analyze important information. This includes the caller's IP, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. The data may be stored in a storage account, Log Analytics workspace, or event hub that is specified when logging is enabled.
+
+### Configure access log
+
+Activity logging is automatically enabled for every Resource Manager resource. You must enable access logging to start collecting the data available through those logs. To enable logging, you may configure diagnostic settings in Azure Monitor.
+
+ # [Azure portal](#tab/configure-log-portal)
+
+ Use the following steps to enable all logging to a storage account for Application Gateway for Containers using the Azure portal. You must have an available storage account in the same region as your Application Gateway for Containers.
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+ 2. In **Search resources, service, and docs**, type **Application Gateways for Containers** and select your Application Gateway for Containers name.
+ 3. Under **Monitoring**, select **Diagnostic settings**.
+ 4. Select **Add diagnostic setting**.
+ 5. Enter a **Diagnostic setting name** (for example: agfc-logs), choose the logs and metrics to save, and choose a destination, such as **Archive to a storage account**. To save all logs, select **allLogs** and **AllMetrics**.
+ 6. Select **Save** to save your settings. See the following example:
+
+ ![Configure diagnostic logs](./media/diagnostics/enable-diagnostic-logs.png)
+
+ # [PowerShell](#tab/configure-log-powershell)
+
+ The following PowerShell sample enables all logging to a storage account for Application Gateway for Containers. Replace the resource group name, storage account name, and subscription ID with your own values. The storage account and resource group must be in the same region as your Application Gateway for Containers.
+
+ ```PowerShell
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName acctest5097 -Name centraluseuaptclogs
+ $metric = @()
+ $log = @()
+ $metric += New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category AllMetrics -RetentionPolicyDay 30 -RetentionPolicyEnabled $true
+ $log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup allLogs -RetentionPolicyDay 30 -RetentionPolicyEnabled $true
+ New-AzDiagnosticSetting -Name 'AppGWForContainersLogs' -ResourceId "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/acctest5097/providers/Microsoft.ServiceNetworking/trafficControllers/myagfc" -StorageAccountId $storageAccount.Id -Log $log -Metric $metric
+ ```
+> [!Note]
+> After initially enabling diagnostic logs, it may take up to one hour before logs are available at your selected destination.
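+
+If you prefer the Azure CLI, a roughly equivalent configuration is sketched below; the JSON shapes accepted by `--logs` and `--metrics` can vary by CLI version, and the resource names are placeholders.
+
+```bash
+AGC_ID=$(az network alb show --resource-group <resource-group> --name <agc-name> --query id -o tsv)
+STORAGE_ID=$(az storage account show --resource-group <resource-group> --name <storage-account> --query id -o tsv)
+
+# Send all logs and metrics for the Application Gateway for Containers resource to a storage account
+az monitor diagnostic-settings create \
+  --name AppGWForContainersLogs \
+  --resource $AGC_ID \
+  --storage-account $STORAGE_ID \
+  --logs '[{"categoryGroup":"allLogs","enabled":true}]' \
+  --metrics '[{"category":"AllMetrics","enabled":true}]'
+```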
+
+For more information and Azure Monitor deployment tutorials, see [Diagnostic settings in Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md).
+
+### Access log format
+
+Each access log entry in Application Gateway for Containers contains the following information.
+
+| Value | Description |
+| -- | -- |
+| backendHost | Address of backend target with appended port. For example \<ip\>:\<port\> |
+| backendIp | IP address of backend target Application Gateway for Containers proxies the request to. |
+| backendPort | Port number of the backend target. |
+| backendResponseLatency | Time in milliseconds to receive first byte from Application Gateway for Containers to the backend target. |
+| backendTimeTaken | Time in milliseconds for the response to be transmitted from the backend target to Application Gateway for Containers. |
+| clientIp | IP address of the client initiating the request to the frontend of Application Gateway for Containers |
+| frontendName | Name of the Application Gateway for Containers frontend that received the request from the client |
+| frontendPort | Port number the request was listened on by Application Gateway for Containers |
+| hostName | Host header value received from the client by Application Gateway for Containers |
+| httpMethod | HTTP Method of the request received from the client by Application Gateway for Containers as per [RFC 7231](https://datatracker.ietf.org/doc/html/rfc7231#section-4.3). |
+| httpStatusCode | HTTP Status code returned from Application Gateway for Containers to the client |
+| httpVersion | HTTP version of the request received from the client by Application Gateway for Containers  |
+| referrer | Referrer header of the request received from the client by Application Gateway for Containers  |
+| requestBodyBytes | Size in bytes of the body payload of the request received from the client by Application Gateway for Containers  |
+| requestHeaderBytes | Size in bytes of the headers of the request received from the client by Application Gateway for Containers  |
+| requestUri | URI of the request received from the client by Application Gateway for Containers (everything after \<protocol\>://\<host\> of the URL)  |
+| responseBodyBytes | Size in bytes of the body payload of the response returned to the client by Application Gateway for Containers |
+| responseHeaderBytes | Size in bytes of the headers of the response returned to the client by Application Gateway for Containers |
+| timeTaken | Time in milliseconds between the client request being received by Application Gateway for Containers and the last byte being returned to the client from Application Gateway for Containers |
+| tlsCipher | TLS cipher suite negotiated between the client and Application Gateway for Containers frontend |
+| tlsProtocol | TLS version negotiated between the client and Application Gateway for Containers frontend |
+| trackingId | Generated guid by Application Gateway for Containers to help with tracking and debugging. This value correlates to the x-request-id header returned to the client from Application Gateway for Containers. |
+| userAgent | User-Agent header of the request received from the client by Application Gateway for Containers |
+
+Here is an example of an access log entry emitted in JSON format to a storage account.
+```JSON
+{
+ "category": "TrafficControllerAccessLog",
+ "operationName": "ReqRespLogs",
+ "properties": {
+ "backendHost": "10.1.0.15:80",
+ "backendIp": "10.1.0.15",
+ "backendPort": "80",
+ "backendResponseLatency": "2",
+ "backendTimeTaken": "-",
+ "clientIp": "xxx.xxx.xxx.xxx:52526",
+ "frontendName": "frontend-primary",
+ "frontendPort": "80",
+ "hostName": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.fzXX.alb.azure.com",
+ "httpMethod": "GET",
+ "httpStatusCode": "200",
+ "httpVersion": "HTTP\/1.1",
+ "referer": "-",
+ "requestBodyBytes": "0",
+ "requestHeaderBytes": "223",
+ "requestUri": "\/index.php",
+ "responseBodyBytes": "91",
+ "responseHeaderBytes": "190",
+ "timeTaken": "2",
+ "tlsCipher": "-",
+ "tlsProtocol": "-",
+ "trackingId": "0ef125db-7fb7-48a0-b3fe-03fe0ffed873",
+ "userAgent": "curl\/7.81.0"
+ },
+ "resourceId": "/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/YYYYYY/PROVIDERS/MICROSOFT.SERVICENETWORKING/TRAFFICCONTROLLERS/ZZZZZZZ",
+ "time": "2023-07-22T06:26:58.895Z",
+ "location": "northcentralus"
+}
+```
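+
+When the logs are archived to a storage account, each blob typically contains one JSON record per line in the format shown above. A quick way to summarize a downloaded blob locally is with `jq`; a sketch, assuming the file is saved as `access-log.json`:
+
+```bash
+# Count requests per HTTP status code from a downloaded access log blob
+jq -r 'select(.category == "TrafficControllerAccessLog") | .properties.httpStatusCode' access-log.json \
+  | sort | uniq -c | sort -rn
+```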
application-gateway How To Backend Mtls Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-backend-mtls-gateway-api.md
++
+ Title: Backend MTLS with Application Gateway for Containers - Gateway API (preview)
+description: Learn how to configure Application Gateway for Containers with support for backend MTLS authentication.
+++++ Last updated : 07/24/2023+++
+# Backend MTLS with Application Gateway for Containers - Gateway API (preview)
+
+This document helps set up an example application that uses the following resources from Gateway API:
+- [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) - creating a gateway with one https listener
+- [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/httproute/) - creating an HTTP route that references a backend service
+- [BackendTLSPolicy](api-specification-kubernetes.md#alb.networking.azure.io/v1.BackendTLSPolicy) - creating a backend TLS policy that has a client and CA certificate for the backend service referenced in the HTTPRoute
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application
+ Apply the following deployment.yaml file on your cluster to create a sample web application and deploy sample secrets to demonstrate backend mutual authentication (mTLS).
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/end-to-end-ssl-with-backend-mtls/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - 1 service called `mtls-app` in the `test-infra` namespace
+ - 1 deployment called `mtls-app` in the `test-infra` namespace
+ - 1 config map called `mtls-app-nginx-cm` in the `test-infra` namespace
+ - 4 secrets called `backend.com`, `frontend.com`, `gateway-client-cert`, and `ca.bundle` in the `test-infra` namespace
+
+## Deploy the required Gateway API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+Create a gateway:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+ alb.networking.azure.io/alb-name: alb-test
+spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: https-listener
+ port: 443
+ protocol: HTTPS
+ allowedRoutes:
+ namespaces:
+ from: Same
+ tls:
+ mode: Terminate
+ certificateRefs:
+ - kind : Secret
+ group: ""
+ name: frontend.com
+EOF
+```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create a Gateway
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: https-listener
+ port: 443
+ protocol: HTTPS
+ allowedRoutes:
+ namespaces:
+ from: Same
+ tls:
+ mode: Terminate
+ certificateRefs:
+ - kind : Secret
+ group: ""
+ name: frontend.com
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+EOF
+```
+++
+Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+```bash
+kubectl get gateway gateway-01 -n test-infra -o yaml
+```
+
+Example output of successful gateway creation.
+```yaml
+status:
+ addresses:
+ - type: IPAddress
+ value: xxxx.yyyy.alb.azure.com
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Valid Gateway
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ listeners:
+ - attachedRoutes: 0
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Listener is accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ name: https-listener
+ supportedKinds:
+ - group: gateway.networking.k8s.io
+ kind: HTTPRoute
+```
+
+Once the gateway has been created, create an HTTPRoute
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+ name: https-route
+ namespace: test-infra
+spec:
+ parentRefs:
+ - name: gateway-01
+ rules:
+ - backendRefs:
+ - name: mtls-app
+ port: 443
+EOF
+```
+
+Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+```bash
+kubectl get httproute https-route -n test-infra -o yaml
+```
+
+Verify the status of the Application Gateway for Containers resource has been successfully updated.
+
+```yaml
+status:
+ parents:
+ - conditions:
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Route is Accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ controllerName: alb.networking.azure.io/alb-controller
+ parentRef:
+ group: gateway.networking.k8s.io
+ kind: Gateway
+ name: gateway-01
+ namespace: test-infra
+ ```
+
+Create a BackendTLSPolicy
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: BackendTLSPolicy
+metadata:
+ name: mtls-app-tls-policy
+ namespace: test-infra
+spec:
+ targetRef:
+ group: ""
+ kind: Service
+ name: mtls-app
+ namespace: test-infra
+ default:
+ sni: backend.com
+ ports:
+ - port: 443
+ clientCertificateRef:
+ name: gateway-client-cert
+ group: ""
+ kind: Secret
+ verify:
+ caCertificateRef:
+ name: ca.bundle
+ group: ""
+ kind: Secret
+ subjectAltName: backend.com
+EOF
+```
+
+Once the BackendTLSPolicy object has been created, check the status on the object to ensure that the policy is valid.
+
+```bash
+kubectl get backendtlspolicy -n test-infra mtls-app-tls-policy -o yaml
+```
+
+Example output of valid BackendTLSPolicy object creation.
+
+```yaml
+status:
+ conditions:
+ - lastTransitionTime: "2023-06-29T16:54:42Z"
+ message: Valid BackendTLSPolicy
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+```
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+
+```bash
+fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
+```
+
+Curling this FQDN should return responses from the backend as configured on the HTTPRoute.
+
+```bash
+curl --insecure https://$fqdn/
+```
+
+Congratulations, you have installed ALB Controller, deployed a backend application, and routed traffic to the application via the gateway on Application Gateway for Containers.
application-gateway How To Ssl Offloading Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-gateway-api.md
++
+ Title: SSL offloading with Application Gateway for Containers - Gateway API (preview)
+description: Learn how to configure SSL offloading with Application Gateway for Containers using the Gateway API.
+++++ Last updated : 07/24/2023+++
+# SSL offloading with Application Gateway for Containers - Gateway API (preview)
+
+This document helps set up an example application that uses the following resources from Gateway API:
+- [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) - creating a gateway with one https listener
+- [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/httproute/) - creating an HTTP route that references a backend service
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTPS application
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.
+
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - 1 service called `echo` in the `test-infra` namespace
+ - 1 deployment called `echo` in the `test-infra` namespace
+ - 1 secret called `listener-tls-secret` in the `test-infra` namespace
+
+## Deploy the required Gateway API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+1. Create a Gateway
+ ```bash
+ kubectl apply -f - <<EOF
+ apiVersion: gateway.networking.k8s.io/v1beta1
+ kind: Gateway
+ metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+ alb.networking.azure.io/alb-name: alb-test
+ spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: https-listener
+ port: 443
+ protocol: HTTPS
+ allowedRoutes:
+ namespaces:
+ from: Same
+ tls:
+ mode: Terminate
+ certificateRefs:
+ - kind : Secret
+ group: ""
+ name: listener-tls-secret
+ EOF
+ ```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create a Gateway
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: https-listener
+ port: 443
+ protocol: HTTPS
+ allowedRoutes:
+ namespaces:
+ from: Same
+ tls:
+ mode: Terminate
+ certificateRefs:
+ - kind : Secret
+ group: ""
+ name: listener-tls-secret
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+EOF
+```
+++
+Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+```bash
+kubectl get gateway gateway-01 -n test-infra -o yaml
+```
+
+Example output of successful gateway creation.
+```yaml
+status:
+ addresses:
+ - type: IPAddress
+ value: xxxx.yyyy.alb.azure.com
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Valid Gateway
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ listeners:
+ - attachedRoutes: 0
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Listener is accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ name: https-listener
+ supportedKinds:
+ - group: gateway.networking.k8s.io
+ kind: HTTPRoute
+```
+
+Once the gateway has been created, create an HTTPRoute
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+ name: https-route
+ namespace: test-infra
+spec:
+ parentRefs:
+ - name: gateway-01
+ rules:
+ - backendRefs:
+ - name: echo
+ port: 80
+EOF
+```
+
+Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+```bash
+kubectl get httproute https-route -n test-infra -o yaml
+```
+
+Verify the status of the Application Gateway for Containers resource has been successfully updated.
+
+```yaml
+status:
+  parents:
+  - conditions:
+    - lastTransitionTime: "2023-06-19T22:18:23Z"
+      message: ""
+      observedGeneration: 1
+      reason: ResolvedRefs
+      status: "True"
+      type: ResolvedRefs
+    - lastTransitionTime: "2023-06-19T22:18:23Z"
+      message: Route is Accepted
+      observedGeneration: 1
+      reason: Accepted
+      status: "True"
+      type: Accepted
+    - lastTransitionTime: "2023-06-19T22:18:23Z"
+      message: Application Gateway For Containers resource has been successfully updated.
+      observedGeneration: 1
+      reason: Programmed
+      status: "True"
+      type: Programmed
+    controllerName: alb.networking.azure.io/alb-controller
+    parentRef:
+      group: gateway.networking.k8s.io
+      kind: Gateway
+      name: gateway-01
+      namespace: test-infra
+```
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+
+```bash
+fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
+```
+
+Curling this FQDN should return responses from the backend as configured on the HTTPRoute.
+
+```bash
+curl --insecure https://$fqdn/
+```
+
+Congratulations, you have installed ALB Controller, deployed a backend application, and routed traffic to the application via the gateway on Application Gateway for Containers.
application-gateway How To Ssl Offloading Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-ingress-api.md
++
+ Title: SSL offloading with Application Gateway for Containers - Ingress API (preview)
+description: Learn how to configure SSL offloading with Application Gateway for Containers using the Ingress API.
+++++ Last updated : 07/24/2023+++
+# SSL offloading with Application Gateway for Containers - Ingress API (preview)
+
+This document helps you set up an example application that uses the _Ingress_ resource from the [Ingress API](https://kubernetes.io/docs/concepts/services-networking/ingress/) to demonstrate SSL offloading.
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTPS application
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.
+
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml
+ ```
+
+ This command creates the following on your cluster (a verification sketch follows the list):
+ - a namespace called `test-infra`
+ - 1 service called `echo` in the `test-infra` namespace
+ - 1 deployment called `echo` in the `test-infra` namespace
+ - 1 secret called `listener-tls-secret` in the `test-infra` namespace
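+
+ To confirm these objects exist before continuing, you can list them with kubectl. The names and the `test-infra` namespace come from the sample deployment applied above; this is only a quick verification sketch.
+
+ ```bash
+ kubectl get deployment,service,secret -n test-infra
+ ```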
+
+## Deploy the required Ingress API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+1. Create an Ingress
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: ingress-01
+  namespace: test-infra
+  annotations:
+    alb.networking.azure.io/alb-name: alb-test
+    alb.networking.azure.io/alb-namespace: test-infra
+spec:
+  ingressClassName: azure-alb-external
+  tls:
+  - hosts:
+    - example.com
+    secretName: listener-tls-secret
+  rules:
+  - host: example.com
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: echo
+            port:
+              number: 80
+EOF
+```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create an Ingress
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: ingress-01
+  namespace: test-infra
+  annotations:
+    alb.networking.azure.io/alb-id: $RESOURCE_ID
+    alb.networking.azure.io/alb-frontend: $FRONTEND_NAME
+spec:
+  ingressClassName: azure-alb-external
+  tls:
+  - hosts:
+    - example.com
+    secretName: listener-tls-secret
+  rules:
+  - host: example.com
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: echo
+            port:
+              number: 80
+EOF
+```
+++
+Once the ingress resource has been created, ensure the status shows the hostname of your load balancer and that both ports are listening for requests.
+```bash
+kubectl get ingress ingress-01 -n test-infra -o yaml
+```
+
+Example output of successful Ingress creation.
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  annotations:
+    alb.networking.azure.io/alb-frontend: FRONTEND_NAME
+    alb.networking.azure.io/alb-id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name":"ingress-01","namespace":"test-infra"},"spec":{"ingressClassName":"azure-alb-external","rules":[{"host":"example.com","http":{"paths":[{"backend":{"service":{"name":"echo","port":{"number":80}}},"path":"/","pathType":"Prefix"}]}}],"tls":[{"hosts":["example.com"],"secretName":"listener-tls-secret"}]}}
+  creationTimestamp: "2023-07-22T18:02:13Z"
+  generation: 2
+  name: ingress-01
+  namespace: test-infra
+  resourceVersion: "278238"
+  uid: 17c34774-1d92-413e-85ec-c5a8da45989d
+spec:
+  ingressClassName: azure-alb-external
+  rules:
+  - host: example.com
+    http:
+      paths:
+      - backend:
+          service:
+            name: echo
+            port:
+              number: 80
+        path: /
+        pathType: Prefix
+  tls:
+  - hosts:
+    - example.com
+    secretName: listener-tls-secret
+status:
+  loadBalancer:
+    ingress:
+    - hostname: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.fzyy.alb.azure.com
+      ports:
+      - port: 80
+        protocol: TCP
+      - port: 443
+        protocol: TCP
+```
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the command below to get the FQDN.
+
+```bash
+fqdn=$(kubectl get ingress ingress-01 -n test-infra -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+```
+
+Curling this FQDN should return responses from the backend as configured on the Ingress.
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -vik --resolve example.com:443:$fqdnIp https://example.com
+```
+
+Congratulations, you have installed ALB Controller, deployed a backend application and routed traffic to the application via Ingress on Application Gateway for Containers.
application-gateway How To Traffic Splitting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-traffic-splitting-gateway-api.md
++
+ Title: Traffic Splitting with Application Gateway for Containers - Gateway API (preview)
+description: Learn how to configure traffic splitting / weighted round robin with Application Gateway for Containers.
+++++ Last updated : 07/24/2023+++
+# Traffic splitting with Application Gateway for Containers - Gateway API (preview)
+
+This document helps set up an example application that uses the following resources from Gateway API:
+- [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) - creating a gateway with one http listener
+- [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/httproute/) - creating an HTTP route that references two backend services having different weights
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate traffic splitting / weighted round robin support.
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+
+## Deploy the required Gateway API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+Create a gateway:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: gateway-01
+  namespace: test-infra
+  annotations:
+    alb.networking.azure.io/alb-namespace: alb-test-infra
+    alb.networking.azure.io/alb-name: alb-test
+spec:
+  gatewayClassName: azure-alb-external
+  listeners:
+  - name: http
+    port: 80
+    protocol: HTTP
+    allowedRoutes:
+      namespaces:
+        from: Same
+EOF
+```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create a Gateway
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: gateway-01
+  namespace: test-infra
+  annotations:
+    alb.networking.azure.io/alb-id: $RESOURCE_ID
+spec:
+  gatewayClassName: azure-alb-external
+  listeners:
+  - name: http
+    port: 80
+    protocol: HTTP
+    allowedRoutes:
+      namespaces:
+        from: Same
+  addresses:
+  - type: alb.networking.azure.io/alb-frontend
+    value: $FRONTEND_NAME
+EOF
+```
+++
+Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+```bash
+kubectl get gateway gateway-01 -n test-infra -o yaml
+```
+
+Example output of successful gateway creation.
+```yaml
+status:
+  addresses:
+  - type: IPAddress
+    value: xxxx.yyyy.alb.azure.com
+  conditions:
+  - lastTransitionTime: "2023-06-19T21:04:55Z"
+    message: Valid Gateway
+    observedGeneration: 1
+    reason: Accepted
+    status: "True"
+    type: Accepted
+  - lastTransitionTime: "2023-06-19T21:04:55Z"
+    message: Application Gateway For Containers resource has been successfully updated.
+    observedGeneration: 1
+    reason: Programmed
+    status: "True"
+    type: Programmed
+  listeners:
+  - attachedRoutes: 0
+    conditions:
+    - lastTransitionTime: "2023-06-19T21:04:55Z"
+      message: ""
+      observedGeneration: 1
+      reason: ResolvedRefs
+      status: "True"
+      type: ResolvedRefs
+    - lastTransitionTime: "2023-06-19T21:04:55Z"
+      message: Listener is accepted
+      observedGeneration: 1
+      reason: Accepted
+      status: "True"
+      type: Accepted
+    - lastTransitionTime: "2023-06-19T21:04:55Z"
+      message: Application Gateway For Containers resource has been successfully updated.
+      observedGeneration: 1
+      reason: Programmed
+      status: "True"
+      type: Programmed
+    name: gateway-01-http
+    supportedKinds:
+    - group: gateway.networking.k8s.io
+      kind: HTTPRoute
+```
+
+Once the gateway has been created, create an HTTPRoute
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: traffic-split-route
+  namespace: test-infra
+spec:
+  parentRefs:
+  - name: gateway-01
+  rules:
+  - backendRefs:
+    - name: backend-v1
+      port: 8080
+      weight: 50
+    - name: backend-v2
+      port: 8080
+      weight: 50
+EOF
+```
+
+Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+```bash
+kubectl get httproute traffic-split-route -n test-infra -o yaml
+```
+
+Verify the status of the Application Gateway for Containers resource has been successfully updated.
+
+```yaml
+status:
+  parents:
+  - conditions:
+    - lastTransitionTime: "2023-06-19T22:18:23Z"
+      message: ""
+      observedGeneration: 1
+      reason: ResolvedRefs
+      status: "True"
+      type: ResolvedRefs
+    - lastTransitionTime: "2023-06-19T22:18:23Z"
+      message: Route is Accepted
+      observedGeneration: 1
+      reason: Accepted
+      status: "True"
+      type: Accepted
+    - lastTransitionTime: "2023-06-19T22:18:23Z"
+      message: Application Gateway For Containers resource has been successfully updated.
+      observedGeneration: 1
+      reason: Programmed
+      status: "True"
+      type: Programmed
+    controllerName: alb.networking.azure.io/alb-controller
+    parentRef:
+      group: gateway.networking.k8s.io
+      kind: Gateway
+      name: gateway-01
+      namespace: test-infra
+```
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the command below to get the FQDN.
+
+```bash
+fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
+```
+
+Curling this FQDN should return responses from the backends/pods as configured on the HTTPRoute.
+
+```bash
+# this curl command will return 50% of the responses from backend-v1
+# and the remaining 50% of the responses from backend-v2
+watch -n 1 curl http://$fqdn
+```
+
+Congratulations, you have installed ALB Controller, deployed a backend application, and routed traffic to the application via the gateway on Application Gateway for Containers.
application-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/overview.md
+
+ Title: What is Application Gateway for Containers? (preview)
+description: Overview of Azure Application Load Balancer Application Gateway for Containers features, resources, architecture, and implementation. Learn how Application Gateway for Containers works and how to use Application Gateway for Containers resources in Azure.
++++++ Last updated : 07/24/2023+++
+# What is Application Gateway for Containers? (preview)
+
+Application Gateway for Containers is a new application (layer 7) [load balancing](/azure/architecture/guide/technology-choices/load-balancing-overview) and dynamic traffic management product for workloads running in a Kubernetes cluster. It extends Azure's Application Load Balancing portfolio and is a new offering under the Application Gateway product family.
+
+Application Gateway for Containers is the evolution of the [Application Gateway Ingress Controller](../ingress-controller-overview.md) (AGIC), a [Kubernetes](/azure/aks) application that enables Azure Kubernetes Service (AKS) customers to use Azure's native Application Gateway application load-balancer. In its current form, AGIC monitors a subset of Kubernetes Resources for changes and applies them to the Application Gateway, utilizing Azure Resource Manager (ARM).
+
+## How does it work?
+
+Application Gateway for Containers is made up of three components:
+- Application Gateway for Containers
+- Frontends
+- Associations
+
+The following dependencies are also referenced in an Application Gateway for Containers deployment:
+- Private IP address
+- Subnet Delegation
+- User-assigned Managed Identity
+
+The architecture of Application Gateway for Containers is summarized in the following figure:
+
+![Diagram depicting traffic from the Internet ingressing into Application Gateway for Containers and being sent to backend pods in AKS.](./media/overview/application-gateway-for-containers-kubernetes-conceptual.png)
+
+For details about how Application Gateway for Containers accepts incoming requests and routes them to a backend target, see [Application Gateway for Containers components](application-gateway-for-containers-components.md).
+
+## Features and benefits
+
+Application Gateway for Containers offers some entirely new features at release, such as:
+- Traffic splitting / Weighted round robin
+- Mutual authentication to the backend target
+- Kubernetes support for Ingress and Gateway API
+- Flexible [deployment strategies](#deployment-strategies)
+- Increased performance, offering near real-time updates to add or move pods, routes, and probes
+
+Application Gateway for Containers offers an elastic and scalable ingress to AKS clusters. It comprises a new data plane and control plane with a [new set of ARM APIs](#implementation-of-gateway-api) that are different from the existing Application Gateway implementation. Application Gateway for Containers is outside the AKS cluster data plane and is responsible for ingress. The service is managed by an ALB Controller component that runs inside the AKS cluster and adheres to the Kubernetes Gateway API.
+
+### Load balancing features
+
+Application Gateway for Containers supports the following features for traffic management (a brief request-matching sketch follows this list):
+- Layer 7 HTTP/HTTPS request forwarding based on prefix/exact match on:
+ - Hostname
+ - Path
+ - Headers
+ - Query string match
+ - Methods
+ - Ports (80/443)
+- HTTPS traffic management:
+ - SSL termination
+ - End to End SSL
+- Ingress and Gateway API support
+- Traffic Splitting / weighted round robin
+- Mutual Authentication (mTLS) to backend target
+- Health checks: Application Gateway for Containers determines the health of a backend before it registers it as healthy and capable of handling traffic
+- Automatic retries
+- TLS Policies
+- Autoscaling
+- Availability zone resiliency
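+
+As an illustration of the request matching capabilities listed above, the following is a minimal, hypothetical HTTPRoute sketch using the Gateway API. The gateway name, backend service names, and namespace are placeholders and aren't part of any quickstart in this article.
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: matching-example
+  namespace: test-infra
+spec:
+  parentRefs:
+  - name: gateway-01
+  rules:
+  # Requests to /api carrying the header x-version: beta go to the beta backend.
+  - matches:
+    - path:
+        type: PathPrefix
+        value: /api
+      headers:
+      - name: x-version
+        value: beta
+    backendRefs:
+    - name: backend-beta
+      port: 8080
+  # All other requests go to the stable backend.
+  - backendRefs:
+    - name: backend-stable
+      port: 8080
+```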
+
+### Deployment strategies
+
+There are two deployment strategies for management of Application Gateway for Containers:
+
+- **Bring your own (BYO) deployment:** In this deployment strategy, deployment and lifecycle of the Application Gateway for Containers resource, Association resource, and Frontend resource are managed via Azure portal, CLI, PowerShell, Terraform, etc. and referenced in configuration within Kubernetes.
+  - **In Gateway API:** Every time you wish to create a new Gateway resource in Kubernetes, a Frontend resource should be provisioned in Azure beforehand and referenced by the Gateway resource. Deletion of the Frontend resource is the responsibility of the Azure administrator; the Frontend isn't deleted when the Gateway resource in Kubernetes is deleted.
+- **Managed by ALB Controller:** In this deployment strategy, ALB Controller deployed in Kubernetes is responsible for the lifecycle of the Application Gateway for Containers resource and its sub resources. ALB Controller creates the Application Gateway for Containers resource when an ApplicationLoadBalancer custom resource is defined on the cluster, and its lifecycle is based on the lifecycle of the custom resource.
+  - **In Gateway API:** Every time a Gateway resource is created referencing the ApplicationLoadBalancer resource, ALB Controller provisions a new Frontend resource and manages its lifecycle based on the lifecycle of the Gateway resource. A sketch contrasting the Gateway annotations used by the two strategies follows this list.
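+
+The following fragment contrasts the Gateway annotations used by the two strategies. It's a sketch based on the how-to guides in this documentation set; the resource names and IDs are placeholders.
+
+```yaml
+# Managed by ALB Controller: the Gateway references the ApplicationLoadBalancer custom resource.
+metadata:
+  annotations:
+    alb.networking.azure.io/alb-name: alb-test
+    alb.networking.azure.io/alb-namespace: alb-test-infra
+---
+# Bring your own: the Gateway references the Azure resource ID; the Frontend resource is
+# referenced by name under spec.addresses (type alb.networking.azure.io/alb-frontend).
+metadata:
+  annotations:
+    alb.networking.azure.io/alb-id: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ServiceNetworking/trafficControllers/alb-test
+```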
+
+### Supported regions
+
+Application Gateway for Containers is currently offered in the following regions:
+- Australia East
+- Central US
+- East Asia
+- East US
+- East US 2
+- North Central US
+- North Europe
+- South Central US
+- Southeast Asia
+- UK South
+- West US
+- West Europe
+
+### Implementation of Gateway API
+
+ALB Controller implements version [v1beta1](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1) of the [Gateway API](https://gateway-api.sigs.k8s.io/).
+
+| Gateway API Resource | Support | Comments |
+| - | - | - |
+| [GatewayClass](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io%2fv1beta1.GatewayClass) | Yes | |
+| [Gateway](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io%2fv1beta1.Gateway) | Yes | Support for HTTP and HTTPS protocol on the listener. The only ports allowed on the listener are 80 and 443. |
+| [HTTPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io%2fv1beta1.HTTPRoute) | Yes | Currently doesn't support [HTTPRouteFilter](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPRouteFilter) |
+| [ReferenceGrant](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io%2fv1alpha2.ReferenceGrant) | Yes | Currently supports version v1alpha1 of this API |
+
+### Implementation of Ingress API
+
+ALB Controller implements support for the [Ingress API](https://kubernetes.io/docs/concepts/services-networking/ingress/).
+
+| Ingress API Resource | Support | Comments |
+| - | - | - |
+| [Ingress](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#ingress-v1-networking-k8s-io) | Yes | Support for HTTP and HTTPS protocol on the listener. |
+
+## Report issues and provide feedback
+
+For feedback, post a new idea at [feedback.azure.com](https://feedback.azure.com/d365community/forum/8ae9bf04-8326-ec11-b6e6-000d3a4f0789?&c=69637543-1829-ee11-bdf4-000d3a1ab360).
+For issues, raise a support request via the Azure portal on your Application Gateway for Containers resource.
+
+## Pricing and SLA
+
+For Application Gateway for Containers pricing information, see [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/).
+
+While in Public Preview, Application Gateway for Containers follows [Preview supplemental terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## What's new
+
+To learn what's new with Application Gateway for Containers, see [Azure updates](https://azure.microsoft.com/updates/?category=networking&query=Application%20Gateway%20for%20Containers).
+
+## Next steps
+
+- [Concepts: Application Gateway for Containers components](application-gateway-for-containers-components.md)
+- [Quickstart: Deploy Application Gateway for Containers ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
application-gateway Quickstart Create Application Gateway For Containers Byo Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-create-application-gateway-for-containers-byo-deployment.md
+
+ Title: 'Quickstart: Create Application Gateway for Containers - bring your own deployment (preview)'
+description: In this quickstart, you learn how to provision and manage the Application Gateway for Containers Azure resources independent from Kubernetes configuration.
+++++ Last updated : 07/24/2023+++
+# Quickstart: Create Application Gateway for Containers - bring your own deployment (preview)
+
+This guide assumes you're following the **bring your own** [deployment strategy](overview.md#deployment-strategies), where ALB Controller references the Application Gateway for Containers resources precreated in Azure. It's assumed that resource lifecycles are managed in Azure, independent from what is defined within Kubernetes.
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Ensure you have first deployed ALB Controller into your Kubernetes cluster. You may follow the [Quickstart: Deploy Application Gateway for Containers ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) guide if you haven't already deployed the ALB Controller.
+
+## Create the Application Gateway for Containers resource
+
+Execute the following command to create the Application Gateway for Containers resource.
+
+```azurecli-interactive
+RESOURCE_GROUP='<your resource group name>'
+AGFC_NAME='alb-test' # Name of the Application Gateway for Containers resource to be created
+az network alb create -g $RESOURCE_GROUP -n $AGFC_NAME
+```
+
+## Create a frontend resource
+
+Execute the following command to create the Application Gateway for Containers frontend resource.
+
+```azurecli-interactive
+FRONTEND_NAME='test-frontend'
+az network alb frontend create -g $RESOURCE_GROUP -n $FRONTEND_NAME --alb-name $AGFC_NAME
+```
+
+## Create an association resource
+
+### Delegate a subnet to association resource
+
+To create an association resource, you first need to reference a subnet for Application Gateway for Containers to establish connectivity to. Ensure the subnet for an Application Gateway for Containers association is at least a class C or larger (/24 or smaller CIDR prefix). For this step, you may either reuse an existing subnet and enable subnet delegation on it, or create a new VNet and subnet and enable subnet delegation.
+
+# [Reference existing VNet and Subnet](#tab/existing-vnet-subnet)
+To reference an existing subnet, execute the following command to set the variables for reference to the subnet in later steps.
+```azurecli-interactive
+VNET_NAME='<name of the virtual network to use>'
+VNET_RESOURCE_GROUP='<the resource group of your VNET>'
+ALB_SUBNET_NAME='subnet-alb' # subnet name can be any non-reserved subnet name (i.e. GatewaySubnet, AzureFirewallSubnet, AzureBastionSubnet would all be invalid)
+```
+
+# [New VNet and Subnet](#tab/new-vnet-subnet)
+If you would like to use a new virtual network for the Application Gateway for Containers association resource, you can create a new VNet with the following commands.
+
+> [!WARNING]
+> Upon creation of the virtual network, ensure you establish connectivity between this virtual network/subnet and the AKS node pool to enable communication between Application Gateway for Containers and the pods running in AKS. This may be achieved by establishing [virtual network peering](../../virtual-network/virtual-network-peering-overview.md) between both virtual networks.
+
+```azurecli-interactive
+VNET_NAME='<name of the virtual network to use>'
+VNET_RESOURCE_GROUP='<the resource group of your VNET>'
+VNET_ADDRESS_PREFIX='<address space of the vnet that will contain various subnets. The vnet must be able to handle at least 250 available addresses (/24 or smaller cidr prefix for the subnet)>'
+SUBNET_ADDRESS_PREFIX='<an address space under the vnet that has at least 250 available addresses (/24 or smaller cidr prefix for the subnet)>'
+ALB_SUBNET_NAME='subnet-alb' # subnet name can be any non-reserved subnet name (i.e. GatewaySubnet, AzureFirewallSubnet, AzureBastionSubnet would all be invalid)
+az network vnet create \
+ --name $VNET_NAME \
+ --resource-group $VNET_RESOURCE_GROUP \
+ --address-prefix $VNET_ADDRESS_PREFIX \
+ --subnet-name $ALB_SUBNET_NAME \
+ --subnet-prefixes $SUBNET_ADDRESS_PREFIX
+```
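+
+As noted in the preceding warning, this new virtual network needs connectivity to the virtual network that contains your AKS node pool. One common approach is virtual network peering; the following is a sketch only, where `AKS_VNET_NAME` and `AKS_VNET_RESOURCE_GROUP` are placeholders for your AKS cluster's virtual network.
+
+```azurecli-interactive
+# Placeholder values for the virtual network that contains your AKS node pool.
+AKS_VNET_NAME='<name of the AKS virtual network>'
+AKS_VNET_RESOURCE_GROUP='<resource group of the AKS virtual network>'
+AKS_VNET_ID=$(az network vnet show --name $AKS_VNET_NAME --resource-group $AKS_VNET_RESOURCE_GROUP --query id -o tsv)
+ALB_VNET_ID=$(az network vnet show --name $VNET_NAME --resource-group $VNET_RESOURCE_GROUP --query id -o tsv)
+
+# Peer in both directions so traffic can flow between the two virtual networks.
+az network vnet peering create --name alb-to-aks --resource-group $VNET_RESOURCE_GROUP --vnet-name $VNET_NAME --remote-vnet $AKS_VNET_ID --allow-vnet-access
+az network vnet peering create --name aks-to-alb --resource-group $AKS_VNET_RESOURCE_GROUP --vnet-name $AKS_VNET_NAME --remote-vnet $ALB_VNET_ID --allow-vnet-access
+```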
+++
+Enable subnet delegation for the Application Gateway for Containers service. The delegation for Application Gateway for Containers is identified by the _Microsoft.ServiceNetworking/trafficControllers_ resource type.
+```azurecli-interactive
+az network vnet subnet update \
+ --resource-group $VNET_RESOURCE_GROUP \
+ --name $ALB_SUBNET_NAME \
+ --vnet-name $VNET_NAME \
+ --delegations 'Microsoft.ServiceNetworking/trafficControllers'
+ALB_SUBNET_ID=$(az network vnet subnet list --resource-group $VNET_RESOURCE_GROUP --vnet-name $VNET_NAME --query "[?name=='$ALB_SUBNET_NAME'].id" --output tsv)
+echo $ALB_SUBNET_ID
+```
+
+### Delegate permissions to managed identity
+
+ALB Controller will need the ability to provision new Application Gateway for Containers resources as well as join the subnet intended for the Application Gateway for Containers association resource.
+
+In this example, we delegate the _AppGW for Containers Configuration Manager_ role to the resource group containing the Application Gateway for Containers resource, and the _Network Contributor_ role, which contains the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission, to the subnet used for the Application Gateway for Containers association.
+
+If desired, you can [create and assign a custom role](../../role-based-access-control/custom-roles-portal.md) with the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission to eliminate other permissions contained in the _Network Contributor_ role. Learn more about [managing subnet permissions](../../virtual-network/virtual-network-manage-subnet.md#permissions).
+
+```azurecli-interactive
+IDENTITY_RESOURCE_NAME='azure-alb-identity'
+
+resourceGroupId=$(az group show --name $RESOURCE_GROUP --query id -otsv)
+principalId=$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query principalId -otsv)
+
+# Delegate AppGw for Containers Configuration Manager role to RG containing Application Gateway for Containers resource
+az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $resourceGroupId --role "fbc52c3f-28ad-4303-a892-8a056630b8f1"
+
+# Delegate Network Contributor permission for join to association subnet
+az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $ALB_SUBNET_ID --role "4d97b98b-1d4f-4787-a291-c67834d212e7"
+```
+
+### Create an association resource
+
+Execute the following command to create the association resource and connect it to the referenced subnet. It can take 5-6 minutes for the Application Gateway for Containers association to be created.
+
+```azurecli-interactive
+ASSOCIATION_NAME='association-test'
+az network alb association create -g $RESOURCE_GROUP -n $ASSOCIATION_NAME --alb-name $AGFC_NAME --subnet $ALB_SUBNET_ID
+```
+
+## Next steps
+
+Congratulations, you have installed ALB Controller on your cluster and deployed the Application Gateway for Containers resources in Azure!
+
+Try out a few of the how-to guides to deploy a sample application, demonstrating some of Application Gateway for Container's load balancing concepts.
+- [Backend MTLS](how-to-backend-mtls-gateway-api.md?tabs=byo)
+- [SSL/TLS Offloading](how-to-ssl-offloading-gateway-api.md?tabs=byo)
+- [Traffic Splitting / Weighted Round Robin](how-to-traffic-splitting-gateway-api.md?tabs=byo)
application-gateway Quickstart Create Application Gateway For Containers Managed By Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md
+
+ Title: 'Quickstart: Create Application Gateway for Containers managed by ALB Controller (preview)'
+description: In this quickstart, you learn how to provision the Application Gateway for Containers resources via Kubernetes definition.
+++++ Last updated : 07/24/2023+++
+# Quickstart: Create Application Gateway for Containers managed by ALB Controller (preview)
+
+This guide assumes you're following the **managed by ALB controller** [deployment strategy](overview.md#deployment-strategies), where all the Application Gateway for Containers resources are managed by ALB controller. Lifecycle is determined by the resources defined in Kubernetes. ALB Controller creates the Application Gateway for Containers resource when an _ApplicationLoadBalancer_ custom resource is defined on the cluster. The Application Gateway for Containers lifecycle is based on the lifecycle of the custom resource.
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Ensure you have first deployed ALB Controller into your Kubernetes cluster. See [Quickstart: Deploy Application Gateway for Containers ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) if you haven't already deployed the ALB Controller.
+
+### Prepare your virtual network / subnet for Application Gateway for Containers
+
+If you don't have a subnet available with at least 250 available IP addresses and delegated to the Application Gateway for Containers resource, use the following steps to create a new subnet and enable subnet delegation. The new subnet address space can't overlap any existing subnets in the VNet.
+
+# [New subnet in AKS managed virtual network](#tab/new-subnet-aks-vnet)
+If you wish to deploy Application Gateway for Containers into the virtual network containing your AKS cluster, run the following command to find and assign the cluster's virtual network. This information is used in the next step.
+```azurecli-interactive
+AKS_NAME='<your cluster name>'
+RESOURCE_GROUP='<your resource group name>'
+
+MC_RESOURCE_GROUP=$(az aks show --name $AKS_NAME --resource-group $RESOURCE_GROUP --query "nodeResourceGroup" -o tsv)
+CLUSTER_SUBNET_ID=$(az vmss list --resource-group $MC_RESOURCE_GROUP --query '[0].virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].subnet.id' -o tsv)
+read -d '' VNET_NAME VNET_RESOURCE_GROUP VNET_ID <<< $(az network vnet show --ids $CLUSTER_SUBNET_ID --query '[name, resourceGroup, id]' -o tsv)
+```
+
+# [New subnet in non-AKS managed virtual network](#tab/new-subnet-non-aks-vnet)
+If you wish to create a subnet in an existing virtual network, run the following command to set the variables for reference to the vnet and subnet prefix to be used during creation.
+
+> [!WARNING]
+> Upon creation of the subnet in the next step, ensure you establish connectivity between this virtual network/subnet and the AKS node pool to enable communication between Application Gateway for Containers and the pods running in AKS.
+
+```azurecli-interactive
+VNET_RESOURCE_GROUP=<resource group name of the virtual network>
+VNET_NAME=<name of the virtual network to use>
+```
+++
+Run the following command to create a new subnet containing at least 250 available IP addresses and enable subnet delegation for the Application Gateway for Containers association resource:
+```azurecli-interactive
+SUBNET_ADDRESS_PREFIX='<network address and prefix for an address space under the vnet that has at least 250 available addresses (/24 or larger subnet)>'
+ALB_SUBNET_NAME='subnet-alb' # subnet name can be any non-reserved subnet name (i.e. GatewaySubnet, AzureFirewallSubnet, AzureBastionSubnet would all be invalid)
+az network vnet subnet create \
+ --resource-group $VNET_RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --name $ALB_SUBNET_NAME \
+ --address-prefixes $SUBNET_ADDRESS_PREFIX \
+ --delegations 'Microsoft.ServiceNetworking/trafficControllers'
+ALB_SUBNET_ID=$(az network vnet subnet show --name $ALB_SUBNET_NAME --resource-group $VNET_RESOURCE_GROUP --vnet-name $VNET_NAME --query '[id]' --output tsv)
+```
+
+## Delegate permissions to managed identity
+
+ALB Controller needs the ability to provision new Application Gateway for Containers resources and to join the subnet intended for the Application Gateway for Containers association resource.
+
+In this example, we delegate the _AppGW for Containers Configuration Manager_ role to the AKS managed cluster resource group and the _Network Contributor_ role, which contains the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission, to the subnet used for the Application Gateway for Containers association.
+
+If desired, you can [create and assign a custom role](../../role-based-access-control/custom-roles-portal.md) with the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission to eliminate other permissions contained in the _Network Contributor_ role. Learn more about [managing subnet permissions](../../virtual-network/virtual-network-manage-subnet.md#permissions).
+
+```azurecli-interactive
+IDENTITY_RESOURCE_NAME='azure-alb-identity'
+
+MC_RESOURCE_GROUP=$(az aks show --name $AKS_NAME --resource-group $RESOURCE_GROUP --query "nodeResourceGroup" -otsv | tr -d '\r')
+
+mcResourceGroupId=$(az group show --name $MC_RESOURCE_GROUP --query id -otsv)
+principalId=$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query principalId -otsv)
+
+# Delegate AppGw for Containers Configuration Manager role to AKS Managed Cluster RG
+az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $mcResourceGroupId --role "fbc52c3f-28ad-4303-a892-8a056630b8f1"
+
+# Delegate Network Contributor permission for join to association subnet
+az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $ALB_SUBNET_ID --role "4d97b98b-1d4f-4787-a291-c67834d212e7"
+```
+
+## Create ApplicationLoadBalancer Kubernetes resource
+
+1. Define the Kubernetes namespace for the ApplicationLoadBalancer resource
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: alb-test-infra
+EOF
+```
+
+2. Define the _ApplicationLoadBalancer_ resource, specifying the subnet ID the Application Gateway for Containers association resource should deploy into. The association establishes connectivity from Application Gateway for Containers to the defined subnet (and connected networks where applicable) to be able to proxy traffic to a defined backend.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: ApplicationLoadBalancer
+metadata:
+  name: alb-test
+  namespace: alb-test-infra
+spec:
+  associations:
+  - $ALB_SUBNET_ID
+EOF
+```
+
+## Validate creation of the Application Gateway for Containers resources
+
+Once the _ApplicationLoadBalancer_ resource has been created, you can track deployment progress of the Application Gateway for Containers resources. The deployment transitions from _InProgress_ to _Programmed_ state when provisioning has completed. It can take 5-6 minutes for the Application Gateway for Containers resources to be created.
+
+You can check the status of the _ApplicationLoadBalancer_ resource by running the following command:
+
+```bash
+kubectl get applicationloadbalancer alb-test -n alb-test-infra -o yaml -w
+```
+
+Example output of a successful provisioning of the Application Gateway for Containers resource from Kubernetes.
+```yaml
+status:
+  conditions:
+  - lastTransitionTime: "2023-06-19T21:03:29Z"
+    message: Valid Application Gateway for Containers resource
+    observedGeneration: 1
+    reason: Accepted
+    status: "True"
+    type: Accepted
+  - lastTransitionTime: "2023-06-19T21:03:29Z"
+    message: alb-id=/subscriptions/xxx/resourceGroups/yyy/providers/Microsoft.ServiceNetworking/trafficControllers/alb-zzz
+    observedGeneration: 1
+    reason: Ready
+    status: "True"
+    type: Deployment
+```
+
+## Next steps
+
+Congratulations, you have installed ALB Controller on your cluster and deployed the Application Gateway for Containers resources in Azure!
+
+Try out a few of the how-to guides to deploy a sample application, demonstrating some of Application Gateway for Container's load balancing concepts.
+- [Backend MTLS](how-to-backend-mtls-gateway-api.md?tabs=alb-managed)
+- [SSL/TLS Offloading](how-to-ssl-offloading-gateway-api.md?tabs=alb-managed)
+- [Traffic Splitting / Weighted Round Robin](how-to-traffic-splitting-gateway-api.md?tabs=alb-managed)
application-gateway Quickstart Deploy Application Gateway For Containers Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md
+
+ Title: 'Quickstart: Deploy Application Gateway for Containers ALB Controller (preview)'
+
+description: In this quickstart, you learn how to provision the Application Gateway for Containers ALB Controller in an AKS cluster.
+++++ Last updated : 07/24/2023+++
+# Quickstart: Deploy Application Gateway for Containers ALB Controller (preview)
+
+The [ALB Controller](application-gateway-for-containers-components.md#application-gateway-for-containers-alb-controller) is responsible for translating Gateway API and Ingress API configuration within Kubernetes to load balancing rules within Application Gateway for Containers. The following guide walks through the steps needed to provision an ALB Controller into a new or existing AKS cluster.
+
+## Prerequisites
+
+You need to complete the following tasks prior to deploying Application Gateway for Containers on Azure and installing ALB Controller on your cluster:
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. Prepare your Azure subscription and your `az-cli` client.
+
+ ```azurecli-interactive
+ # Sign in to your Azure subscription.
+ SUBSCRIPTION_ID='<your subscription id>'
+ az login
+ az account set --subscription $SUBSCRIPTION_ID
+
+ # Register required resource providers on Azure.
+ az provider register --namespace Microsoft.ContainerService
+ az provider register --namespace Microsoft.Network
+ az provider register --namespace Microsoft.NetworkFunction
+ az provider register --namespace Microsoft.ServiceNetworking
+
+ # Install Azure CLI extensions.
+ az extension add --name alb
+ ```
+
+2. Set an AKS cluster for your workload.
+
+ > [!NOTE]
+ > The AKS cluster needs to be in a [region where Application Gateway for Containers is available](overview.md#supported-regions)
+ > AKS cluster should use [Azure CNI](../../aks/configure-azure-cni.md).
+ > AKS cluster should have the workload identity feature enabled. [Learn how](../../aks/workload-identity-deploy-cluster.md#update-an-existing-aks-cluster) to enable workload identity on an existing AKS cluster.
+
+ If using an existing cluster, ensure you enable Workload Identity support on your AKS cluster. Workload identities can be enabled via the following:
+
+ ```azurecli-interactive
+ AKS_NAME='<your cluster name>'
+ RESOURCE_GROUP='<your resource group name>'
+ az aks update -g $RESOURCE_GROUP -n $AKS_NAME --enable-oidc-issuer --enable-workload-identity --no-wait
+ ```
+
+ If you don't have an existing cluster, use the following commands to create a new AKS cluster with Azure CNI and workload identity enabled.
+
+ ```azurecli-interactive
+ AKS_NAME='<your cluster name>'
+ RESOURCE_GROUP='<your resource group name>'
+ LOCATION='northeurope' # The list of available regions may grow as we roll out to more preview regions
+ VM_SIZE='<the size of the vm in AKS>' # The size needs to be available in your location
+
+ az group create --name $RESOURCE_GROUP --location $LOCATION
+ az aks create \
+ --resource-group $RESOURCE_GROUP \
+ --name $AKS_NAME \
+ --location $LOCATION \
+ --node-vm-size $VM_SIZE \
+ --network-plugin azure \
+ --enable-oidc-issuer \
+ --enable-workload-identity \
+ --generate-ssh-key
+ ```
+
+3. Install Helm
+
+ [Helm](https://github.com/helm/helm) is an open-source packaging tool that is used to install ALB controller.
+
+ > [!NOTE]
+ > Helm is already available in Azure Cloud Shell. If you are using Azure Cloud Shell, no additional Helm installation is necessary.
+
+ You can also use the following steps to install Helm on a local device running Windows or Linux. Ensure that you have the latest version of Helm installed.
+
+ # [Windows](#tab/install-helm-windows)
+ See the [instructions for installation](https://github.com/helm/helm#install) for various options of installation. Similarly, if your version of Windows has [Windows Package Manager winget](/windows/package-manager/winget/) installed, you may execute the following command:
+ ```powershell
+ winget install helm.helm
+ ```
+
+ # [Linux](#tab/install-helm-linux)
+ The following command can be used to install Helm. Commands that use Helm with Azure CLI in this article can also be run using Bash.
+ ```bash
+ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
+ ```
++
+## Install the ALB Controller
+
+1. Create a user managed identity for ALB controller and federate the identity as Pod Identity to use in the AKS cluster.
+
+ ```azurecli-interactive
+ RESOURCE_GROUP='<your resource group name>'
+ AKS_NAME='<your aks cluster name>'
+ IDENTITY_RESOURCE_NAME='azure-alb-identity'
+
+ mcResourceGroup=$(az aks show --resource-group $RESOURCE_GROUP --name $AKS_NAME --query "nodeResourceGroup" -o tsv)
+ mcResourceGroupId=$(az group show --name $mcResourceGroup --query id -otsv)
+
+ echo "Creating identity $IDENTITY_RESOURCE_NAME in resource group $RESOURCE_GROUP"
+ az identity create --resource-group $RESOURCE_GROUP --name $IDENTITY_RESOURCE_NAME
+ principalId="$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query principalId -otsv)"
+
+ echo "Waiting 60 seconds to allow for replication of the identity..."
+ sleep 60
+
+ echo "Apply Reader role to the AKS managed cluster resource group for the newly provisioned identity"
+ az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $mcResourceGroupId --role "acdd72a7-3385-48ef-bd42-f606fba81ae7" # Reader role
+
+ echo "Set up federation with AKS OIDC issuer"
+ AKS_OIDC_ISSUER="$(az aks show -n "$AKS_NAME" -g "$RESOURCE_GROUP" --query "oidcIssuerProfile.issuerUrl" -o tsv)"
+ az identity federated-credential create --name "azure-alb-identity" \
+ --identity-name "$IDENTITY_RESOURCE_NAME" \
+ --resource-group $RESOURCE_GROUP \
+ --issuer "$AKS_OIDC_ISSUER" \
+ --subject "system:serviceaccount:azure-alb-system:alb-controller-sa"
+ ```
+ ALB Controller requires a federated credential with the name of _azure-alb-identity_. Any other federated credential name is unsupported.
+
+ > [!Note]
+ > Assignment of the managed identity immediately after creation may result in an error that the principalId does not exist. Allow about a minute of time to elapse for the identity to replicate in Azure AD prior to delegating the identity.
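+
+ To confirm the federated credential was created with the expected name, you can list the federated credentials on the identity. This is a quick verification sketch using the variables defined in the preceding step.
+
+ ```azurecli-interactive
+ az identity federated-credential list --identity-name "$IDENTITY_RESOURCE_NAME" --resource-group $RESOURCE_GROUP -o table
+ ```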
+
+2. Install ALB Controller using Helm
+
+ ### For new deployments
+ ALB Controller can be installed by running the following commands:
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
+ helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
+ --version 0.4.023921 \
+ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
+ ```
+
+ > [!Note]
+ > ALB Controller will automatically be provisioned into a namespace called azure-alb-system. The namespace name may be changed by defining the _--namespace <namespace_name>_ parameter when executing the helm command. During upgrade, please ensure you specify the --namespace parameter.
+
+ ### For existing deployments
+ ALB can be upgraded by running the following commands (ensure you add the `--namespace namespace_name` parameter to define the namespace if the previous installation did not use the namespace _azure-alb-system_):
+ ```azurecli-interactive
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
+ helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
+ --version 0.4.023921 \
+ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
+ ```
+
+### Verify the ALB Controller installation
+
+1. Verify the ALB Controller pods are ready:
+
+ ```azurecli-interactive
+ kubectl get pods -n azure-alb-system
+ ```
+ You should see the following:
+
+ | NAME | READY | STATUS | RESTARTS | AGE |
+ | - | -- | - | -- | - |
+ | alb-controller-bootstrap-6648c5d5c-hrmpc | 1/1 | Running | 0 | 4d6h |
+ | alb-controller-6648c5d5c-au234 | 1/1 | Running | 0 | 4d6h |
+
+2. Verify GatewayClass `azure-alb-external` is installed on your cluster:
+
+ ```azurecli-interactive
+ kubectl get gatewayclass azure-alb-external -o yaml
+ ```
+ You should see that the GatewayClass has a condition that reads **Valid GatewayClass**. This indicates that a default GatewayClass has been set up and that any gateway resources that reference this GatewayClass are managed by ALB Controller automatically.
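+
+ If you only want to check the condition rather than the full YAML, a jsonpath query such as the following can be used. This is a sketch that assumes the condition type reported for the GatewayClass is `Accepted`.
+
+ ```azurecli-interactive
+ kubectl get gatewayclass azure-alb-external -o jsonpath='{.status.conditions[?(@.type=="Accepted")].message}'
+ ```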
+
+## Next Steps
+
+Now that you have successfully installed an ALB Controller on your cluster, you can provision the Application Gateway For Containers resources in Azure.
+
+The next step is to link your ALB controller to Application Gateway for Containers. How you create this link depends on your deployment strategy.
+
+There are two deployment strategies for management of Application Gateway for Containers:
+- **Bring your own (BYO) deployment:** In this deployment strategy, deployment and lifecycle of the Application Gateway for Containers resource, Association and Frontend resource is assumed via Azure portal, CLI, PowerShell, Terraform, etc. and referenced in configuration within Kubernetes.
+ - To use a BYO deployment, see [Create Application Gateway for Containers - bring your own deployment](quickstart-create-application-gateway-for-containers-byo-deployment.md)
+- **Managed by ALB controller:** In this deployment strategy, ALB Controller deployed in Kubernetes is responsible for the lifecycle of the Application Gateway for Containers resource and its sub resources. ALB Controller creates an Application Gateway for Containers resource when an **ApplicationLoadBalancer** custom resource is defined on the cluster. The service lifecycle is based on the lifecycle of the custom resource.
+ - To use an ALB managed deployment, see [Create Application Gateway for Containers managed by ALB Controller](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md)
+
+## Uninstall Application Gateway for Containers and ALB Controller
+
+If you wish to uninstall the ALB Controller, complete the following steps.
+
+1. To delete the Application Gateway for Containers resources, delete the resource group containing them:
+
+```azurecli-interactive
+az group delete --resource-group $RESOURCE_GROUP
+```
+
+2. To uninstall ALB Controller and its resources from your cluster, run the following commands:
+
+```azurecli-interactive
+helm uninstall alb-controller
+kubectl delete ns azure-alb-system
+kubectl delete gatewayclass azure-alb-external
+```
+> [!Note]
+> If a different namespace was used for alb-controller installation, ensure you specify the -n parameter on the helm uninstall command to define the proper namespace to be used. For example: `helm uninstall alb-controller -n unique-namespace`
application-gateway Tls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/tls-policy.md
+
+ Title: TLS policy overview for Azure Application Gateway for Containers
+description: Learn how to configure TLS policy for Azure Application Gateway for Containers.
+++++ Last updated : 07/24/2023+++
+# Application Gateway for Containers TLS policy overview
+
+You can use Azure Application Gateway for Containers to control TLS ciphers to meet compliance and security goals of the organization.
+
+TLS policy includes definition of the TLS protocol version, cipher suites, and order in which ciphers are preferred during a TLS handshake. Application Gateway for Containers currently offers two predefined policies to choose from.
+
+## Usage and version details
+
+- A custom TLS policy allows you to configure the minimum protocol version, ciphers, and elliptic curves for your gateway.
+- If no TLS policy is defined, a [default TLS policy](tls-policy.md#default-tls-policy) is used.
+- TLS cipher suites used for the connection are also based on the type of the certificate being used. The cipher suites negotiated between the client and Application Gateway for Containers are based on the _Gateway listener_ configuration as defined in YAML. The cipher suites used in establishing connections between Application Gateway for Containers and the backend target are based on the type of server certificates presented by the backend target. A sketch for checking the negotiated protocol and cipher follows this list.
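+
+To observe which protocol version and cipher suite are negotiated for a given frontend, you can use an external TLS client such as openssl. This is an illustrative sketch only; replace `<frontend-fqdn>` with the FQDN assigned to your frontend.
+
+```bash
+# Print the negotiated protocol and cipher for a TLS 1.2 handshake.
+openssl s_client -connect <frontend-fqdn>:443 -tls1_2 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
+```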
+
+## Predefined TLS policy
+
+Application Gateway for Containers offers two predefined security policies. You can choose either of these policies to achieve the appropriate level of security. Policy names are defined by year and month (YYYY-MM) of introduction. Additionally, an **-S** variant may exist to denote a more strict variant of ciphers that may be negotiated. Each policy offers different TLS protocol versions and cipher suites. These predefined policies are configured keeping in mind the best practices and recommendations from the Microsoft Security team. We recommend that you use the newest TLS policies to ensure the best TLS security.
+
+The following table shows the cipher suites and minimum protocol version support for each predefined policy, listed in the exact order in which they're offered. The ordering of the cipher suites determines the priority order during TLS negotiation.
+
+| Predefined policy names | 2023-06 | 2023-06-S |
+| - | - | - |
+| **Minimum protocol version** | TLS 1.2 | TLS 1.2 |
+| **Enabled protocol versions** | TLS 1.2 | TLS 1.2 |
+| TLS_AES_256_GCM_SHA384 | &check; | &check; |
+| TLS_AES_128_GCM_SHA256 | &check; | &check; |
+| TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 | &check; | &check; |
+| TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 | &check; | &check; |
+| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 | &check; | &check; |
+| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 | &check; | &check; |
+| TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 | &check; | &cross; |
+| TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 | &check; | &cross; |
+| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 | &check; | &cross; |
+| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 | &check; | &cross; |
+| **Elliptic curves** | | |
+| P-384 | &check; | &check; |
+| P-256 | &check; | &check; |
+
+Protocol versions, ciphers, and elliptic curves not specified in the preceding table aren't supported and won't be negotiated.
+
+### Default TLS policy
+
+When no TLS Policy is specified within your Kubernetes configuration, **predefined policy 2023-06** will be applied.
+
+## How to configure a TLS policy
+
+# [Gateway API](#tab/tls-policy-gateway-api)
+
+TLS policy can be defined in a [FrontendTLSPolicy](api-specification-kubernetes.md#alb.networking.azure.io/v1.FrontendTLSPolicy) resource, which targets defined gateway listeners. Specify a policyType of type `predefined` and choose either predefined policy name: `2023-06` or `2023-06-S`.
+
+Example command to create a new FrontendTLSPolicy resource with the predefined TLS policy 2023-06-S.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: FrontendTLSPolicy
+metadata:
+  name: policy-default
+  namespace: test-infra
+spec:
+  targetRef:
+    kind: Gateway
+    name: target-01
+    namespace: test-infra
+    gateway: gateway-01
+    listeners:
+    - https-listener
+    group: gateway.networking.k8s.io
+  default:
+    policyType:
+      type: predefined
+      name: 2023-06-S
+EOF
+```
+
+# [Ingress API](#tab/tls-policy-ingress-api)
+
+TLS policy is currently not supported for Ingress resources and will automatically be configured to use the default TLS policy `2023-06`.
+++++
application-gateway Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/troubleshooting-guide.md
+
+ Title: Troubleshoot Application Gateway for Containers (preview)
+description: Learn how to troubleshoot common issues with Application Gateway for Containers
+++++ Last updated : 07/24/2023+++
+# Troubleshooting in Application Gateway for Containers (preview)
+
+Learn how to troubleshoot common problems in Application Gateway for Containers.
+
+## Collect ALB Controller logs
+Logs may be collected from the ALB Controller by using the _kubectl logs_ command referencing the ALB Controller pod.
+
+1. Get the running ALB Controller pod name
+
+ Execute the following kubectl command:
+ ```bash
+ kubectl get pods -n azure-alb-system
+ ```
+
+ You should see output similar to the following table (pod names may differ slightly):
+
+ | NAME | READY | STATUS | RESTARTS | AGE |
+ | - | -- | - | -- | - |
+ | alb-controller-bootstrap-6648c5d5c-hrmpc | 1/1 | Running | 0 | 4d6h |
+ | alb-controller-6648c5d5c-au234 | 1/1 | Running | 0 | 4d6h |
+
+ Copy the name of the alb-controller pod (not the bootstrap pod, in this case, alb-controller-6648c5d5c-au234).
+
+2. Collect the logs
+ Logs from ALB Controller will be returned in JSON format.
+
+ Execute the following kubectl command, replacing the name with the pod name returned in step 1:
+ ```bash
+ kubectl logs -n azure-alb-system alb-controller-6648c5d5c-au234
+ ```
+
+ Similarly, you can redirect the output of the previous command to a file by specifying the greater-than (>) sign and the filename to write the logs to:
+ ```bash
+ kubectl logs -n azure-alb-system alb-controller-6648c5d5c-au234 > alb-controller-logs.json
+ ```
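+
+ Because each log entry is a JSON object, you can optionally filter the stream with a tool such as `jq`. This is a minimal sketch; it assumes `jq` is installed and that entries expose a `level` field, which may differ between ALB Controller versions:
+ ```bash
+ # Show only error-level entries from the ALB Controller log stream
+ kubectl logs -n azure-alb-system alb-controller-6648c5d5c-au234 | jq 'select(.level == "error")'
+ ```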
+
+## Configuration errors
+
+### Application Gateway for Containers returns 500 status code
+
+Scenarios in which you would notice a 500 error code on Application Gateway for Containers are as follows:
+1. __Invalid backend entries__: A backend is defined as invalid in the following scenarios:
+ - It refers to an unknown or unsupported kind of resource. In this case, the HTTPRoute's status has a condition with reason set to `InvalidKind` and the message explains which kind of resource is unknown or unsupported.
+ - It refers to a resource that doesn't exist. In this case, the HTTPRoute's status has a condition with reason set to `BackendNotFound` and the message explains that the resource doesn't exist.
+ - It refers to a resource in another namespace when the reference isn't explicitly allowed by a ReferenceGrant (or equivalent concept). In this case, the HTTPRoute's status has a condition with reason set to `RefNotPermitted` and the message explains which cross-namespace reference isn't allowed.
+
+ For instance, if an HTTPRoute has two backends specified with equal weights and one is invalid, 50 percent of the traffic receives a 500 error code (see the example HTTPRoute after this list). This behavior is based on the specification provided by the Gateway API [here](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io%2fv1beta1.HTTPRouteRule).
+
+2. __No endpoints found for all backends__: When no endpoints are found for all of the backends referenced in an HTTPRoute, a 500 error code is returned.
+
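+The following is a minimal sketch of the weighted-backend scenario described in the first item (the service, namespace, and gateway names are hypothetical): `valid-service` exists while `missing-service` doesn't, so roughly half of the requests receive a 500 response and the HTTPRoute status reports `BackendNotFound`.
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: weighted-route
+  namespace: test-infra
+spec:
+  parentRefs:
+  - name: gateway-01
+  rules:
+  - backendRefs:
+    - name: valid-service      # existing Service
+      port: 8080
+      weight: 50
+    - name: missing-service    # doesn't exist, so this backend is invalid
+      port: 8080
+      weight: 50
+```
+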
+### Kubernetes Gateway resource fails to get token from credential chain
+
+#### Symptoms
+
+No changes to HttpRoutes are being applied to Application Gateway for Containers.
+
+The following error message is returned on the Kubernetes Gateway resource, and no changes to HttpRoutes are applied:
+
+```YAML
+status:
+ conditions:
+ - lastTransitionTime: "2023-04-28T22:08:34Z"
+ message: The Gateway is not scheduled
+ observedGeneration: 2
+ reason: Scheduled
+ status: "False"
+ type: Scheduled
+ - lastTransitionTime: "2023-04-28T22:08:34Z"
+ message: "No addresses have been assigned to the Gateway : failed to get token
+ from credential chain: [FromAssertion(): http call(https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/oauth2/v2.0/token)(POST)
+ error: reply status code was 401:\n{\"error\":\"unauthorized_client\",\"error_description\":\"AADSTS70021:
+ No matching federated identity record found for presented assertion. Assertion
+ Issuer: 'https://azureregion.oic.prod-aks.azure.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/'.
+ Assertion Subject: 'system:serviceaccount:azure-application-lb-system:gateway-controller-sa'.
+ Assertion Audience: 'api://AzureADTokenExchange'. https://docs.microsoft.com/en-us/azure/active-directory/develop/workload-identity-federation\\r\\nTrace
+ ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\\r\\nCorrelation ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\\r\\nTimestamp:
+ 2023-04-28 22:08:46Z\",\"error_codes\":[70021],\"timestamp\":\"2023-04-28 22:08:46Z\",\"trace_id\":\"08079978-7238-4ae3-9406-ba3b479db000\",\"correlation_id\":\"b2f10283-8dc6-4493-bb0e-b0cd009b17fb\",\"error_uri\":\"https://login.microsoftonline.com/error?code=70021\"}
+ DefaultAzureCredential: failed to acquire a token.\nAttempted credentials:\n\tEnvironmentCredential:
+ incomplete environment variable configuration. Only AZURE_TENANT_ID and AZURE_CLIENT_ID
+ are set\n\tManagedIdentityCredential: IMDS token request timed out\n\tAzureCLICredential:
+ fork/exec /bin/sh: no such file or directory]"
+ observedGeneration: 2
+ reason: AddressNotAssigned
+ status: "False"
+ type: Ready
+```
+
+#### Solution
+
+Ensure that the federated credentials of the managed identity used by the ALB Controller to make changes to Application Gateway for Containers are configured in Azure. Instructions on how to configure federated credentials can be found in the quickstart guides:
+- [Quickstart: Deploy ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#install-the-alb-controller)
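+
+If the federated credential is missing, the following is a minimal sketch of creating one with the Azure CLI. The identity, cluster, and resource group names are placeholders, and the `--subject` value assumes the default ALB Controller namespace and service account; use the subject shown in your error message if it differs.
+
+```azurecli-interactive
+# Look up the OIDC issuer URL of the AKS cluster
+AKS_OIDC_ISSUER=$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -o tsv)
+
+# Create the federated credential on the managed identity used by the ALB Controller
+az identity federated-credential create \
+  --name "azure-alb-identity" \
+  --identity-name "azure-alb-identity" \
+  --resource-group myResourceGroup \
+  --issuer "$AKS_OIDC_ISSUER" \
+  --subject "system:serviceaccount:azure-alb-system:alb-controller-sa"
+```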
application-gateway Ingress Controller Expose Service Over Http Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-service-over-http-https.md
Previously updated : 04/27/2023 Last updated : 07/23/2023
These tutorials help illustrate the usage of [Kubernetes Ingress Resources](https://kubernetes.io/docs/concepts/services-networking/ingress/) to expose an example Kubernetes service through the [Azure Application Gateway](https://azure.microsoft.com/services/application-gateway/) over HTTP or HTTPS.
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
+ ## Prerequisites - Installed `ingress-azure` helm chart.
application-gateway Ingress Controller Expose Websocket Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-websocket-server.md
Previously updated : 11/4/2019 Last updated : 07/23/2023 # Expose a WebSocket server to Application Gateway
-As outlined in the Application Gateway v2 documentation - it [provides native support for the WebSocket and HTTP/2 protocols](features.md#websocket-and-http2-traffic). Please note, that for both Application Gateway and the Kubernetes Ingress - there is no user-configurable setting to selectively enable or disable WebSocket support.
+As outlined in the Application Gateway v2 documentation - it [provides native support for the WebSocket and HTTP/2 protocols](features.md#websocket-and-http2-traffic). Note that for both Application Gateway and the Kubernetes Ingress - there is no user-configurable setting to selectively enable or disable WebSocket support.
-The Kubernetes deployment YAML below shows the minimum configuration used to deploy a WebSocket server, which is the same as deploying a regular web server:
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
+
+The following Kubernetes deployment YAML shows the minimum configuration used to deploy a WebSocket server, which is the same as deploying a regular web server:
```yaml apiVersion: apps/v1 kind: Deployment
spec:
servicePort: 80 ```
-Given that all the prerequisites are fulfilled, and you have an Application Gateway controlled by a Kubernetes Ingress in your AKS, the deployment above would result in a WebSockets server exposed on port 80 of your Application Gateway's public IP and the `ws.contoso.com` domain.
+Given that all the prerequisites are fulfilled, and you have an Application Gateway controlled by a Kubernetes Ingress in your AKS, the deployment shown previously would result in a WebSockets server exposed on port 80 of your Application Gateway's public IP and the `ws.contoso.com` domain.
The following cURL command would test the WebSocket server deployment: ```shell
curl -i -N -H "Connection: Upgrade" \
If your deployment doesn't explicitly define health probes, Application Gateway would attempt an HTTP GET on your WebSocket server endpoint. Depending on the server implementation ([here is one we love](https://github.com/gorilla/websocket/blob/master/examples/chat/main.go)) WebSocket specific headers may be required (`Sec-Websocket-Version` for instance).
-Since Application Gateway doesn't add WebSocket headers, the Application Gateway's health probe response from your WebSocket server will most likely be `400 Bad Request`.
-As a result Application Gateway will mark your pods as unhealthy, which will eventually result in a `502 Bad Gateway` for the consumers of the WebSocket server.
-To avoid this you may need to add an HTTP GET handler for a health check to your server (`/health` for instance, which returns `200 OK`).
+Since Application Gateway doesn't add WebSocket headers, the Application Gateway's health probe response from your WebSocket server is most likely `400 Bad Request`.
+As a result Application Gateway marks your pods as unhealthy, which eventually results in a `502 Bad Gateway` for the consumers of the WebSocket server.
+To avoid this, you may need to add an HTTP GET handler for a health check to your server (`/health` for instance, which returns `200 OK`).
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
Previously updated : 05/25/2023 Last updated : 07/22/2023
The Application Gateway Ingress Controller (AGIC) is a pod within your Azure Kub
AGIC monitors the Kubernetes [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resources, and creates and applies Application Gateway config based on the status of the Kubernetes cluster.
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
+ ## Outline - [Prerequisites](#prerequisites)
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
Previously updated : 04/27/2023 Last updated : 07/22/2023
The instructions below assume Application Gateway Ingress Controller (AGIC) will be installed in an environment with no pre-existing components.
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
+ ## Required Command Line Tools We recommend the use of [Azure Cloud Shell](https://shell.azure.com/) for all command-line operations below. Launch your shell from shell.azure.com or by clicking the link:
application-gateway Ingress Controller Letsencrypt Certificate Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway.md
Previously updated : 04/27/2023 Last updated : 07/23/2023
This section configures your AKS to use [LetsEncrypt.org](https://letsencrypt.org/) and automatically obtain a TLS/SSL certificate for your domain. The certificate is installed on Application Gateway, which performs SSL/TLS termination for your AKS cluster. The setup described here uses the [cert-manager](https://github.com/jetstack/cert-manager) Kubernetes add-on, which automates the creation and management of certificates.
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
+ Use the following steps to install [cert-manager](https://docs.cert-manager.io) on your existing AKS cluster. 1. Helm Chart
Use the following steps to install [cert-manager](https://docs.cert-manager.io)
Create a `ClusterIssuer` resource. This is required by `cert-manager` to represent the `Lets Encrypt` certificate authority where the signed certificate is obtained.
- Using the non-namespaced `ClusterIssuer` resource, cert-manager issues certificates that can be consumed from multiple namespaces. `LetΓÇÖs Encrypt` uses the ACME protocol to verify that you control a given domain name and to issue a certificate. More details on configuring `ClusterIssuer` properties [here](https://docs.cert-manager.io/en/latest/tasks/issuers/https://docsupdatetracker.net/index.html). `ClusterIssuer` instructs `cert-manager` to issue certificates using the `Lets Encrypt` staging environment used for testing (the root certificate not present in browser/client trust stores).
+ The non-namespaced `ClusterIssuer` resource enables cert-manager to issue certificates that can be consumed from multiple namespaces. `LetΓÇÖs Encrypt` uses the ACME protocol to verify that you control a given domain name and to issue a certificate. More details on configuring `ClusterIssuer` properties [here](https://docs.cert-manager.io/en/latest/tasks/issuers/https://docsupdatetracker.net/index.html). `ClusterIssuer` instructs `cert-manager` to issue certificates using the `Lets Encrypt` staging environment used for testing (the root certificate not present in browser/client trust stores).
The default challenge type in the following YAML is `http01`. Other challenges are documented on [letsencrypt.org - Challenge Types](https://letsencrypt.org/docs/challenge-types/)
Use the following steps to install [cert-manager](https://docs.cert-manager.io)
4. Production Certificate
- Once your staging certificate is set up successfully you can switch to a production ACME server:
+ Once your staging certificate is set up successfully, you can switch to a production ACME server:
1. Replace the staging annotation on your Ingress resource with: `certmanager.k8s.io/cluster-issuer: letsencrypt-prod`
- 1. Delete the existing staging `ClusterIssuer` you created in the previous step and create a new one by replacing the ACME server from the ClusterIssuer YAML above with `https://acme-v02.api.letsencrypt.org/directory`
+ 1. Delete the existing staging `ClusterIssuer` you created in the previous step and create a new one by replacing the ACME server from the previous ClusterIssuer YAML with `https://acme-v02.api.letsencrypt.org/directory`
5. Certificate Expiration and Renewal
application-gateway Ingress Controller Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-migration.md
Previously updated : 03/02/2021 Last updated : 07/22/2023 # Migrate from AGIC Helm to AGIC add-on
-If you already have AGIC deployed through Helm but want to migrate to AGIC deployed as an AKS add-on, the following steps will help guide you through the migration process.
+If you already have AGIC deployed through Helm but want to migrate to AGIC deployed as an AKS add-on, the following steps help to guide you through the migration process.
+
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
## Prerequisites Before you start the migration process, there are a few things to check.
Before you start the migration process, there are a few things to check.
- Are you using more than one AGIC Helm deployment per AKS cluster? - Are you using multiple AGIC Helm deployments to target one Application Gateway?
-If you answered yes to any of the questions above, AGIC add-on won't support your use case yet so it will be best to continue using AGIC Helm in the meantime. Otherwise, continue with the migration process below during off-business hours.
+If you answered yes to any of the questions above, the AGIC add-on doesn't support your use case yet, so it's best to continue using AGIC Helm in the meantime. Otherwise, continue with the migration process below during off-business hours.
## Find the Application Gateway resource ID that AGIC Helm is currently targeting
-Navigate to the Application Gateway that your AGIC Helm deployment is targeting. Copy and save the resource ID of that Application Gateway. You will need the resource ID in a later step. The resource ID can be found in Portal, under the Properties tab of your Application Gateway or through Azure CLI. The following example saves the Application Gateway resource ID to *appgwId* for a gateway named *myApplicationGateway* in the resource group *myResourceGroup*.
+Navigate to the Application Gateway that your AGIC Helm deployment is targeting. Copy and save the resource ID of that Application Gateway. You need the resource ID in a later step. The resource ID can be found in Portal, under the Properties tab of your Application Gateway or through Azure CLI. The following example saves the Application Gateway resource ID to *appgwId* for a gateway named *myApplicationGateway* in the resource group *myResourceGroup*.
```azurecli-interactive appgwId=$(az network application-gateway show -n myApplicationGateway -g myResourceGroup -o tsv --query "id") ``` ## Delete AGIC Helm from your AKS cluster
-Through Azure CLI, delete your AGIC Helm deployment from your cluster. You'll need to delete the AGIC Helm deployment first before you can enable the AGIC AKS add-on. Please note that any changes that occur within your AKS cluster between the time of deleting your AGIC Helm deployment and the time you enable the AGIC add-on won't be reflected on your Application Gateway, and therefore this migration process should be done outside of business hours to minimize impact. Application Gateway will continue to have the last configuration applied by AGIC so existing routing rules won't be affected.
+Through Azure CLI, delete your AGIC Helm deployment from your cluster. You'll need to delete the AGIC Helm deployment first before you can enable the AGIC AKS add-on. Please note that any changes that occur within your AKS cluster between the time of deleting your AGIC Helm deployment and the time you enable the AGIC add-on won't be reflected on your Application Gateway, and therefore this migration process should be done outside of business hours to minimize impact. Application Gateway continues to have the last configuration applied by AGIC so existing routing rules won't be affected.
## Enable AGIC add-on using your existing Application Gateway You can now enable the AGIC add-on in your AKS cluster to target your existing Application Gateway through Azure CLI or Portal. Run the following Azure CLI command to enable the AGIC add-on in your AKS cluster. The example enables the add-on in a cluster called *myCluster*, in a resource group called *myResourceGroup*, using the Application Gateway resource ID *appgwId* we saved above in the earlier step.
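A minimal sketch of that command, assuming the names above and the `appgwId` variable saved earlier:

```azurecli-interactive
az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id $appgwId
```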
application-gateway Ingress Controller Multiple Namespace Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-multiple-namespace-support.md
Title: Enable multiple namespace supports for Application Gateway Ingress Controller
+ Title: Enable multiple namespace support for Application Gateway Ingress Controller
description: This article provides information on how to enable multiple namespace support in a Kubernetes cluster with an Application Gateway Ingress Controller. Previously updated : 11/4/2019 Last updated : 07/23/2023
infrastructure with finer controls of resources, security, configuration etc.
Kubernetes allows for one or more ingress resources to be defined independently within each namespace.
-As of version 0.7 [Azure Application Gateway Kubernetes
+As of version 0.7, [Azure Application Gateway Kubernetes
IngressController](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/README.md) (AGIC) can ingest events from and observe multiple namespaces. Should the AKS administrator decide to use [App Gateway](https://azure.microsoft.com/services/application-gateway/) as an
-ingress, all namespaces will use the same instance of Application Gateway. A single
-installation of Ingress Controller will monitor accessible namespaces and will
-configure the Application Gateway it is associated with.
+ingress, all namespaces use the same instance of Application Gateway. A single
+installation of Ingress Controller monitors accessible namespaces and
+configures the Application Gateway it is associated with.
-Version 0.7 of AGIC will continue to exclusively observe the `default`
+Version 0.7 of AGIC continues to exclusively observe the `default`
namespace, unless this is explicitly changed to one or more different
-namespaces in the Helm configuration (see section below).
+namespaces in the Helm configuration (see the following section).
+
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
## Enable multiple namespace support To enable multiple namespace support: 1. modify the [helm-config.yaml](#sample-helm-config-file) file in one of the following ways:
- - delete the `watchNamespace` key entirely from [helm-config.yaml](#sample-helm-config-file) - AGIC will observe all namespaces
- - set `watchNamespace` to an empty string - AGIC will observe all namespaces
- - add multiple namespaces separated by a comma (`watchNamespace: default,secondNamespace`) - AGIC will observe these namespaces exclusively
+ - delete the `watchNamespace` key entirely from [helm-config.yaml](#sample-helm-config-file) - AGIC observes all namespaces
+ - set `watchNamespace` to an empty string - AGIC observes all namespaces
+ - add multiple namespaces separated by a comma (`watchNamespace: default,secondNamespace`) - AGIC observes these namespaces exclusively
2. apply Helm template changes with: `helm install -f helm-config.yaml application-gateway-kubernetes-ingress/ingress-azure`
-Once deployed with the ability to observe multiple namespaces, AGIC will:
- - list ingress resources from all accessible namespaces
- - filter to ingress resources annotated with `kubernetes.io/ingress.class: azure/application-gateway`
- - compose combined [Application Gateway config](https://github.com/Azure/azure-sdk-for-go/blob/37f3f4162dfce955ef5225ead57216cf8c1b2c70/services/network/mgmt/2016-06-01/network/models.go#L1710-L1744)
- - apply the config to the associated Application Gateway via [ARM](../azure-resource-manager/management/overview.md)
+Once deployed with the ability to observe multiple namespaces, AGIC performs the following actions:
+ - lists ingress resources from all accessible namespaces
+ - filters to ingress resources annotated with `kubernetes.io/ingress.class: azure/application-gateway`
+ - composes combined [Application Gateway config](https://github.com/Azure/azure-sdk-for-go/blob/37f3f4162dfce955ef5225ead57216cf8c1b2c70/services/network/mgmt/2016-06-01/network/models.go#L1710-L1744)
+ - applies the config to the associated Application Gateway via [ARM](../azure-resource-manager/management/overview.md)
## Conflicting Configurations Multiple namespaced [ingress resources](https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource)
At the top of the hierarchy - **listeners** (IP address, port, and host) and **r
backend pool, and HTTP settings) could be created and shared by multiple namespaces/ingresses. On the other hand - paths, backend pools, HTTP settings, and TLS certificates could be created by one namespace only
-and duplicates will be removed.
+and duplicates are removed.
For example, consider the following duplicate ingress resources defined namespaces `staging` and `production` for `www.contoso.com`:
spec:
servicePort: 80 ```
-Despite the two ingress resources demanding traffic for `www.contoso.com` to be
+Despite the two ingress resources demanding traffic for `www.contoso.com` be
routed to the respective Kubernetes namespaces, only one backend can service the traffic. AGIC would create a configuration on "first come, first served" basis for one of the resources. If two ingresses resources are created at the
-same time, the one earlier in the alphabet will take
-precedence. From the example above we will only be able to create settings for
-the `production` ingress. Application Gateway will be configured with the following
+same time, the one earlier in the alphabet takes
+precedence. From the example above, we are only able to create settings for
+the `production` ingress. Application Gateway is configured with the following
resources: - Listener: `fl-www.contoso.com-80`
resources:
- HTTP Settings: `bp-production-contoso-web-service-80-80-websocket-ingress` - Health Probe: `pb-production-contoso-web-service-80-websocket-ingress`
-Note that except for *listener* and *routing rule*, the Application Gateway resources created include the name
+Note: Except for *listener* and *routing rule*, the Application Gateway resources created include the name
of the namespace (`production`) for which they were created. If the two ingress resources are introduced into the AKS cluster at different points in time, it is likely for AGIC to end up in a scenario where it
-reconfigures Application Gateway and re-routes traffic from `namespace-B` to
+reconfigures Application Gateway and reroutes traffic from `namespace-B` to
`namespace-A`.
-For example if you added `staging` first, AGIC will configure Application Gateway to route
+For example, if you added `staging` first, AGIC configures Application Gateway to route
traffic to the staging backend pool. At a later stage, introducing `production`
-ingress, will cause AGIC to reprogram Application Gateway, which will start routing traffic
+ingress, causes AGIC to reprogram Application Gateway, which starts routing traffic
to the `production` backend pool. ## Restrict Access to Namespaces
-By default AGIC will configure Application Gateway based on annotated Ingress within
+By default AGIC configures Application Gateway based on annotated Ingress within
any namespace. Should you want to limit this behavior you have the following options: - limit the namespaces, by explicitly defining namespaces AGIC should observe via the `watchNamespace` YAML key in [helm-config.yaml](#sample-helm-config-file)
options:
verbosityLevel: 3 ################################################################################
- # Specify which application gateway the ingress controller will manage
+ # Specify which application gateway the ingress controller manages
# appgw: subscriptionId: <subscriptionId> resourceGroup: <resourceGroupName> name: <applicationGatewayName>
- # Setting appgw.shared to "true" will create an AzureIngressProhibitedTarget CRD.
+ # Setting appgw.shared to "true" creates an AzureIngressProhibitedTarget CRD.
# This prohibits AGIC from applying config for any host/path. # Use "kubectl get AzureIngressProhibitedTargets" to view and change this. shared: false ################################################################################
- # Specify which kubernetes namespace the ingress controller will watch
+ # Specify which kubernetes namespace the ingress controller watches
# Default value is "default" # Leaving this variable out or setting it to blank or empty string would # result in Ingress Controller observing all acessible namespaces.
application-gateway Ingress Controller Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-overview.md
Previously updated : 03/02/2021 Last updated : 07/22/2023 # What is Application Gateway Ingress Controller?
-The Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) customers to leverage Azure's native [Application Gateway](https://azure.microsoft.com/services/application-gateway/) L7 load-balancer to expose cloud software to the Internet. AGIC monitors the Kubernetes cluster it is hosted on and continuously updates an Application Gateway, so that selected services are exposed to the Internet.
+The Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) customers to leverage Azure's native [Application Gateway](https://azure.microsoft.com/services/application-gateway/) L7 load-balancer to expose cloud software to the Internet. AGIC monitors the Kubernetes cluster it's hosted on and continuously updates an Application Gateway, so that selected services are exposed to the Internet.
The Ingress Controller runs in its own pod on the customerΓÇÖs AKS. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to Application Gateway specific configuration and applied to the [Azure Resource Manager (ARM)](../azure-resource-manager/management/overview.md).
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
+ ## Benefits of Application Gateway Ingress Controller AGIC helps eliminate the need to have another load balancer/public IP in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster. Application Gateway talks to pods using their private IP directly and doesn't require NodePort or KubeProxy services. This also brings better performance to your deployments.
-Ingress Controller is supported exclusively by Standard_v2 and WAF_v2 SKUs, which also brings you autoscaling benefits. Application Gateway can react in response to an increase or decrease in traffic load and scale accordingly, without consuming any resources from your AKS cluster.
+Ingress Controller is supported exclusively by Standard_v2 and WAF_v2 SKUs, which also enables autoscaling benefits. Application Gateway can react in response to an increase or decrease in traffic load and scale accordingly, without consuming any resources from your AKS cluster.
Using Application Gateway in addition to AGIC also helps protect your AKS cluster by providing TLS policy and Web Application Firewall (WAF) functionality. ![Azure Application Gateway + AKS](./media/application-gateway-ingress-controller-overview/architecture.png)
-AGIC is configured via the Kubernetes [Ingress resource](https://kubernetes.io/docs/concepts/services-networking/ingress/), along with Service and Deployments/Pods. It provides a number of features, leveraging AzureΓÇÖs native Application Gateway L7 load balancer. To name a few:
+AGIC is configured via the Kubernetes [Ingress resource](https://kubernetes.io/docs/concepts/services-networking/ingress/), along with Service and Deployments/Pods. It provides many features, using AzureΓÇÖs native Application Gateway L7 load balancer. To name a few:
- URL routing - Cookie-based affinity - TLS termination
There are two ways to deploy AGIC for your AKS cluster. The first way is through
The AGIC add-on is still deployed as a pod in the customer's AKS cluster, however, there are a few differences between the Helm deployment version and the add-on version of AGIC. Below is a list of differences between the two versions: - Helm deployment values can't be modified on the AKS add-on:
- - `verbosityLevel` will be set to 5 by default
- - `usePrivateIp` will be set to be false by default; this can be overwritten by the [use-private-ip annotation](ingress-controller-annotations.md#use-private-ip)
+ - `verbosityLevel` is set to 5 by default
+ - `usePrivateIp` is set to be false by default; this can be overwritten by the [use-private-ip annotation](ingress-controller-annotations.md#use-private-ip)
- `shared` isn't supported on add-on - `reconcilePeriodSeconds` isn't supported on add-on - `armAuth.type` isn't supported on add-on - AGIC deployed via Helm supports ProhibitedTargets, which means AGIC can configure the Application Gateway specifically for AKS clusters without affecting other existing backends. AGIC add-on doesn't currently support this.
- - Since AGIC add-on is a managed service, customers will automatically be updated to the latest version of AGIC add-on, unlike AGIC deployed through Helm where the customer must manually update AGIC.
+ - Since AGIC add-on is a managed service, customers are automatically updated to the latest version of AGIC add-on, unlike AGIC deployed through Helm where the customer must manually update AGIC.
> [!NOTE] > Customers can only deploy one AGIC add-on per AKS cluster, and each AGIC add-on currently can only target one Application Gateway. For deployments that require more than one AGIC per cluster or multiple AGICs targeting one Application Gateway, please continue to use AGIC deployed through Helm.
application-gateway Ingress Controller Private Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-private-ip.md
Previously updated : 04/27/2023 Last updated : 07/23/2023
This feature exposes the ingress endpoint within the `Virtual Network` using a private IP.
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
+ ## Prerequisites Application Gateway with a [Private IP configuration](./configure-application-gateway-with-private-frontend-ip.md)
application-gateway Ingress Controller Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md
Previously updated : 04/27/2023 Last updated : 07/23/2023 # Troubleshoot common questions or issues with Ingress Controller [Azure Cloud Shell](https://shell.azure.com/) is the most convenient way to troubleshoot any problems with your AKS
-and AGIC installation. Launch your shell from [shell.azure.com](https://shell.azure.com/) or by clicking the link:
+and AGIC installation. Launch your shell from [shell.azure.com](https://shell.azure.com/) or by selecting the link:
[![Embed launch](https://shell.azure.com/images/launchcloudshell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
## Test with a simple Kubernetes app
application-gateway Ingress Controller Update Ingress Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-update-ingress-controller.md
Previously updated : 04/27/2023 Last updated : 07/23/2023
The Azure Application Gateway Ingress Controller for Kubernetes (AGIC) can be upgraded using a Helm repository hosted on Azure Storage.
+> [!TIP]
+> Also see [What is Application Gateway for Containers?](for-containers/overview.md), currently in public preview.
+ Before beginning the upgrade procedure, ensure that you've added the required repository: - View your currently added Helm repositories with:
application-gateway Redirect Http To Https Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-portal.md
Export-PfxCertificate `
A virtual network is needed for communication between the resources that you create. Two subnets are created in this example: one for the application gateway, and the other for the backend servers. You can create a virtual network at the same time that you create the application gateway.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Click **Create a resource** found on the upper left-hand corner of the Azure portal. 3. Select **Networking** and then select **Application Gateway** in the Featured list. 4. Enter these values for the application gateway:
automation Automation Create Standalone Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-create-standalone-account.md
When you create an Automation account, Azure generates two 512-bit automation ac
### View Automation account keys To view and copy your Automation account access keys, follow these steps:
-1. In the [Azure portal](https://portal.azure.com/), go to your Automation account.
+1. In the [Azure portal](https://portal.azure.com), go to your Automation account.
1. Under **Account Settings**, select **Keys** to view your Automation account's primary and secondary access keys. You can use any of the two keys to access your Automation account. However, we recommend that you use the first key and reserve the use of second key.
Choose a client
# [Azure portal](#tab/azureportal) Follow these steps:
-1. Go to your Automation account in [Azure portal](https://portal.azure.com/).
+1. Go to your Automation account in the [Azure portal](https://portal.azure.com).
1. Under **Account Settings**, select **Keys**. 1. Select **Regenerate primary** to regenerate the primary access key for your Automation account. 1. Select the **Regenerate secondary** to regenerate the secondary access key.
azure-arc Identity Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/identity-access-overview.md
Title: "Azure Arc-enabled Kubernetes identity and access overview" Previously updated : 05/04/2023 Last updated : 07/21/2023 description: "Understand identity and access options for Arc-enabled Kubernetes clusters." # Azure Arc-enabled Kubernetes identity and access overview
-You can authenticate, authorize, and control access to your Azure Arc-enabled Kubernetes clusters. Kubernetes role-based access control (Kubernetes RBAC) lets you grant users, groups, and service accounts access to only the resources they need. You can further enhance the security and permissions structure by using Azure Active Directory and Azure role-based access control (Azure RBAC).
+You can authenticate, authorize, and control access to your Azure Arc-enabled Kubernetes clusters. This topic provides an overview of the options for doing so with your Arc-enabled Kubernetes clusters.
-While Kubernetes RBAC works only on Kubernetes resources within your cluster, Azure RBAC works on resources across your Azure subscription.
+This image shows the ways that these different options can be used:
-This topic provides an overview of these two RBAC systems and how you can use them with your Arc-enabled Kubernetes clusters.
-## Kubernetes RBAC
+You can also use both cluster connect and Azure RBAC together if that is most appropriate for your needs.
-[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) provides granular filtering of user actions. With Kubernetes RBAC, you assign users or groups permission to create and modify resources or view logs from running application workloads. You can create roles to define permissions, and then assign those roles to users with role bindings. Permissions may be scoped to a single namespace or across the entire cluster.
+## Connectivity options
+
+When planning how users will authenticate and access Arc-enabled Kubernetes clusters, the first decision is whether or not you want to use the cluster connect feature.
+
+### Cluster connect
+
+The Azure Arc-enabled Kubernetes [cluster connect](conceptual-cluster-connect.md) feature provides connectivity to the `apiserver` of the cluster. This connectivity doesn't require any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner.
+
+With cluster connect, your Arc-enabled clusters can be accessed either within Azure or from the internet. This feature can help enable interactive debugging and troubleshooting scenarios. Cluster connect may also require less interaction for updates when permissions are needed for new users. All of the authorization and authentication options described in this article work with cluster connect.
+
+Cluster connect is required if you want to use [custom locations](conceptual-custom-locations.md) or [viewing Kubernetes resources from Azure portal](kubernetes-resource-view.md).
+
+For more information, see [Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters](cluster-connect.md).
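+
+As a minimal sketch of what a cluster connect session looks like from the Azure CLI (the cluster and resource group names are placeholders, and the `connectedk8s` CLI extension is assumed to be installed):
+
+```bash
+# Open a cluster connect session; kubectl requests that use the merged kubeconfig context are routed through it
+az connectedk8s proxy -n myArcCluster -g myResourceGroup
+```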
+
+### Azure AD and Azure RBAC without cluster connect
+
+If you don't want to use cluster connect, you can authenticate and authorize users so they can access the connected cluster by using [Azure Active Directory (Azure AD)](/azure/active-directory/fundamentals/active-directory-whatis) and [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview). Using [Azure RBAC on Azure Arc-enabled Kubernetes (preview)](conceptual-azure-rbac.md) lets you control the access that's granted to users in your tenant, managing access directly from Azure using familiar Azure identity and access features. You can also configure roles at the subscription or resource group scope, letting them roll out to all connected clusters within that scope.
+
+Azure RBAC supports [conditional access](azure-rbac.md#use-conditional-access-with-azure-ad), allowing you to enable [just-in-time cluster access](azure-rbac.md#configure-just-in-time-cluster-access-with-azure-ad) or limit access to approved clients or devices.
+
+Azure RBAC also supports a [direct mode of communication](azure-rbac.md#use-a-shared-kubeconfig-file), using Azure AD identities to access connected clusters directly from within the datacenter, rather than requiring all connections to go through Azure.
+
+Azure RBAC on Arc-enabled Kubernetes is currently in public preview. For more information, see [Use Azure RBAC on Azure Arc-enabled Kubernetes clusters (preview)](azure-rbac.md).
+
+## Authentication options
-The Azure Arc-enabled Kubernetes cluster connect feature uses Kubernetes RBAC to provide connectivity to the `apiserver` of the cluster. This connectivity doesn't require any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner. Using the cluster connect feature helps enable interactive debugging and troubleshooting scenarios. It can also be used to provide cluster access to Azure services for [custom locations](conceptual-custom-locations.md).
+Authentication is the process of verifying a user's identity. There are two options for authenticating to an Arc-enabled Kubernetes cluster: Azure AD authentication and service account token authentication.
-For more information, see [Cluster connect access to Azure Arc-enabled Kubernetes clusters](conceptual-cluster-connect.md) and [Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters](cluster-connect.md).
+### Azure AD authentication
-## Azure RBAC
+The [Azure RBAC on Arc-enabled Kubernetes](conceptual-azure-rbac.md) feature (currently in public preview) lets you use [Azure Active Directory (Azure AD)](/azure/active-directory/fundamentals/active-directory-whatis) to allow users in your Azure tenant to access your connected Kubernetes clusters.
-[Azure role-based access control (RBAC)](../../role-based-access-control/overview.md) is an authorization system built on Azure Resource Manager and Azure Active Directory (Azure AD) that provides fine-grained access management of Azure resources.
+You can also use Azure Active Directory authentication with cluster connect. For more information, see [Azure Active Directory authentication option](cluster-connect.md#azure-active-directory-authentication-option).
-With Azure RBAC, role definitions outline the permissions to be applied. You assign these roles to users or groups via a role assignment for a particular scope. The scope can be across the entire subscription or limited to a resource group or to an individual resource such as a Kubernetes cluster.
+### Service token authentication
+
+With cluster connect, you can choose to authenticate via [service accounts](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens).
+
+For more information, see [Service account token authentication option](cluster-connect.md#service-account-token-authentication-option).
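+
+As a minimal sketch of this option (the service account name, namespace, and role binding are hypothetical and should be scoped to the least privilege you need):
+
+```bash
+# Create a service account and grant it access within the cluster
+kubectl create serviceaccount demo-user -n default
+kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
+
+# Issue a token for the service account (Kubernetes 1.24 and later)
+kubectl create token demo-user -n default
+```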
+
+## Authorization options
+
+Authorization grants an authenticated user the permission to perform specified actions. With Azure Arc-enabled Kubernetes, there are two authorization options, both of which use role-based access control (RBAC):
+
+- [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview) uses Azure AD and Azure Resource Manager to provide fine-grained access management to Azure resources. This allows the benefits of Azure role assignments, such as activity logs tracking all changes made, to be used with your Azure Arc-enabled Kubernetes clusters.
+- [Kubernetes role-based access control (Kubernetes RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) lets you dynamically configure policies through the Kubernetes API so that users, groups, and service accounts only have access to specific cluster resources.
+
+While Kubernetes RBAC works only on Kubernetes resources within your cluster, Azure RBAC works on resources across your Azure subscription.
+
+### Azure RBAC authorization
+
+[Azure role-based access control (RBAC)](../../role-based-access-control/overview.md) is an authorization system built on Azure Resource Manager and Azure AD that provides fine-grained access management of Azure resources. With Azure RBAC, role definitions outline the permissions to be applied. You assign these roles to users or groups via a role assignment for a particular scope. The scope can be across the entire subscription or limited to a resource group or to an individual resource such as a Kubernetes cluster.
+
+If you're using Azure AD authentication without cluster connect, then Azure RBAC authorization is your only option for authorization.
+
+If you're using cluster connect with Azure AD authentication, you have the option to use Azure RBAC for connectivity to the `apiserver` of the cluster. For more information, see [Azure Active Directory authentication option](cluster-connect.md#azure-active-directory-authentication-option).
+
+### Kubernetes RBAC authorization
+
+[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) provides granular filtering of user actions. With Kubernetes RBAC, you assign users or groups permission to create and modify resources or view logs from running application workloads. You can create roles to define permissions, and then assign those roles to users with role bindings. Permissions may be scoped to a single namespace or across the entire cluster.
-Using Azure RBAC with your Arc-enabled Kubernetes clusters allows the benefits of Azure role assignments, such as activity logs that show all Azure RBAC changes to an Azure resource.
+If you're using cluster connect with the [service account token authentication option](cluster-connect.md#service-account-token-authentication-option), you must use Kubernetes RBAC to provide connectivity to the `apiserver` of the cluster. This connectivity doesn't require any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner.
-For more information, see [Azure RBAC on Azure Arc-enabled Kubernetes (preview)](conceptual-azure-rbac.md) and [Use Azure RBAC on Azure Arc-enabled Kubernetes clusters (preview)](azure-rbac.md).
+If you're using [cluster connect with Azure AD authentication](cluster-connect.md#azure-active-directory-authentication-option), you also have the option to use Kubernetes RBAC instead of Azure RBAC.
## Next steps
+- Learn more about [Azure AD](/azure/active-directory/fundamentals/active-directory-whatis) and [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview).
- Learn about [cluster connect access to Azure Arc-enabled Kubernetes clusters](conceptual-cluster-connect.md). - Learn about [Azure RBAC on Azure Arc-enabled Kubernetes (preview)](conceptual-azure-rbac.md) - Learn about [access and identity options for Azure Kubernetes Service (AKS) clusters](../../aks/concepts-identity.md).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
Title: "Azure Arc-enabled Kubernetes validation" Previously updated : 12/07/2022 Last updated : 07/21/2023 description: "Describes Arc validation program for Kubernetes distributions" # Azure Arc-enabled Kubernetes validation
-Azure Arc-enabled Kubernetes works with any Kubernetes clusters that are certified by the Cloud Native Computing Foundation (CNCF). The Azure Arc team has also worked with key industry Kubernetes offering providers to validate Azure Arc-enabled Kubernetes with their Kubernetes distributions. Future major and minor versions of Kubernetes distributions released by these providers will be validated for compatibility with Azure Arc-enabled Kubernetes.
+The Azure Arc team works with key industry Kubernetes offering providers to validate Azure Arc-enabled Kubernetes with their Kubernetes distributions. Future major and minor versions of Kubernetes distributions released by these providers will be validated for compatibility with Azure Arc-enabled Kubernetes.
+
+> [!IMPORTANT]
+> Azure Arc-enabled Kubernetes works with any Kubernetes clusters that are certified by the Cloud Native Computing Foundation (CNCF), even if they haven't been validated through conformance tests and are not listed on this page.
## Validated distributions
The conformance tests run as part of the Azure Arc-enabled Kubernetes validation
2. Configuration: * Create configuration on top of Azure Arc-enabled Kubernetes resource.
- * [Flux](https://docs.fluxcd.io/), needed for setting up GitOps workflow, is deployed on the cluster.
+ * [Flux](https://docs.fluxcd.io/), needed for setting up [GitOps workflow](tutorial-use-gitops-flux2.md), is deployed on the cluster.
* Flux pulls manifests and Helm charts from demo Git repo and deploys to cluster. ## Next steps
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
Azure Functions integrates with Application Insights to better enable you to mon
You can use Application Insights without any custom configuration. The default configuration can result in high volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application Insights. For information about Application Insights costs, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing). For more information, see [Solutions with high-volume of telemetry](#solutions-with-high-volume-of-telemetry).
-Later in this article, you learn how to configure and customize the data that your functions send to Application Insights. Common logging configuration can be set in the *[host.json]* file. By default, these settings also govern custom logs emitted by your code, though in some cases this behavior can be disabled in favor of options that give you more control over logging. See [Custom logs](#custom-logs) for more information.
+Later in this article, you learn how to configure and customize the data that your functions send to Application Insights. Common logging configuration can be set in the *[host.json]* file. By default, these settings also govern custom logs emitted by your code, though in some cases this behavior can be disabled in favor of options that give you more control over logging. See [Custom application logs](#custom-application-logs) for more information.
> [!NOTE] > You can use specially configured application settings to represent specific settings in a *host.json* file for a specific environment. This lets you effectively change *host.json* settings without having to republish the *host.json* file in your project. For more information, see [Override host.json values](functions-host-json.md#override-hostjson-values).
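As a hedged illustration of that override pattern (example values only): a *host.json* setting such as the default log level can be expressed either in the file or as an application setting whose name joins the JSON path with double underscores.

```json
{
  "logging": {
    "logLevel": {
      "default": "Warning"
    }
  }
}
```

The equivalent application setting would be `AzureFunctionsJobHost__logging__logLevel__default` with a value of `Warning`.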
-## Custom logs
+## Custom application logs
-By default, custom logs you write are sent to the Functions host, which then sends them to Application Insights through the ["Worker" category](#configure-categories). Some language stacks allow you to instead send the logs directly to Application Insights, giving you full control over how logs you write are emitted. The following table summarizes the options available to each stack:
+By default, custom application logs you write are sent to the Functions host, which then sends them to Application Insights through the ["Worker" category](#configure-categories). Some language stacks allow you to instead send the logs directly to Application Insights, giving you full control over how logs you write are emitted. The logging pipeline changes from `worker -> Functions host -> Application Insights` to `worker -> Application Insights`.
+
+The following table summarizes the options available to each stack:
| Language stack | Configuration of custom logs | |-|-|
By default, custom logs you write are sent to the Functions host, which then sen
| Java | By default: `host.json`<br/>Option to send logs directly: [Configure the Application Insights Java agent](../azure-monitor/app/monitor-functions.md#distributed-tracing-for-java-applications) | | PowerShell | `host.json` |
-When custom logs are sent directly, the host no longer be emits them, and `host.json` no longer controls their behavior. Similarly, the options exposed by each stack only apply to custom logs, and they do not change the behavior of the other runtime logs described in this article. To control the behavior of all logs, you may need to make changes for both configurations.
+When custom application logs are sent directly, the host no longer be emits them, and `host.json` no longer controls their behavior. Similarly, the options exposed by each stack only apply to custom logs, and they do not change the behavior of the other runtime logs described in this article. To control the behavior of all logs, you may need to make changes for both configurations.
## Configure categories
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
Use the following table to compare feature and functional differences between th
| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | | Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | | Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported](durable/durable-functions-isolated-create-first-csharp.md?pivots=code-editor-visualstudio) |
-| Model types exposed by bindings | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient)<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>[Some service-specific SDK types](dotnet-isolated-process-guide.md#sdk-types-preview) |
+| Model types exposed by bindings | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types<sup>4</sup> | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>[Service SDK types](dotnet-isolated-process-guide.md#sdk-types)<sup>4</sup> |
| HTTP trigger model types| [HttpRequest] / [IActionResult] | [HttpRequestData] / [HttpResponseData]<br/>[HttpRequest] / [IActionResult] (as a [public preview extension][aspnetcore-integration])| | Output binding interactions | Return values (single output only)<br/>`out` parameters<br/>`IAsyncCollector` | Return values (expanded model with single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)) | | Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported | | Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](dotnet-isolated-process-guide.md#dependency-injection) | | Middleware | Not supported | [Supported](dotnet-isolated-process-guide.md#middleware) |
-| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)|
+| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) | [ILogger&lt;T&gt;]/[ILogger] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)|
| Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported (public preview)](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights) | | Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) | | Cold start times<sup>2</sup> | (Baseline) | Additionally includes process launch |
Use the following table to compare feature and functional differences between th
<sup>3</sup> C# Script functions also run in-process and use the same libraries as in-process class library functions. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md).
+<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, support from some extensions is currently in preview.
+ [HttpRequest]: /dotnet/api/microsoft.aspnetcore.http.httprequest [IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult [HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
Title: Guide for running C# Azure Functions in an isolated worker process
description: Learn how to use a .NET isolated worker process to run your C# functions in Azure, which supports non-LTS versions of .NET and .NET Framework apps. Previously updated : 01/16/2023 Last updated : 07/21/2023 recommendations: false #Customer intent: As a developer, I need to know how to create functions that run in an isolated worker process so that I can run my function code on current (not LTS) releases of .NET.
A .NET Functions isolated worker process project uses a unique set of packages,
The following packages are required to run your .NET functions in an isolated worker process:
-+ [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)
-+ [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/)
++ [Microsoft.Azure.Functions.Worker]
++ [Microsoft.Azure.Functions.Worker.Sdk]
### Extension packages
The trigger attribute specifies the trigger type and binds input data to a metho
The `Function` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name.
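For example, here's a minimal sketch of a queue-triggered function that follows these naming rules. The class, function name, and queue name are illustrative placeholders (not from the article), and the example assumes the Azure Queue storage extension package is referenced:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class OrderFunctions
{
    private readonly ILogger<OrderFunctions> _logger;

    public OrderFunctions(ILogger<OrderFunctions> logger) => _logger = logger;

    // The name passed to the Function attribute, not the C# method name,
    // identifies the function and must follow the naming rules above.
    [Function("process-orders-1")]
    public void Run([QueueTrigger("orders")] string message)
    {
        _logger.LogInformation("Processing queue message: {Message}", message);
    }
}
```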
-Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). You can also bind to [types from some service SDKs](#sdk-types-preview).
+Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). You can also bind to [types from some service SDKs](#sdk-types).
For HTTP triggers, you must use [HttpRequestData] and [HttpResponseData] to access the request and response data. This is because you don't have access to the original HTTP request and response objects when using .NET Functions isolated worker process.
The data written to an output binding is always the return value of the function
The response from an HTTP trigger is always considered an output, so a return value attribute isn't required.
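As a rough sketch of both points (the function name, route settings, and response text are hypothetical), an HTTP-triggered function can simply return an `HttpResponseData` instance:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class HttpExample
{
    // The HttpResponseData return value is treated as the function's output;
    // no explicit output binding attribute is needed.
    [Function("HttpExample")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req)
    {
        HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Hello from the isolated worker.");
        return response;
    }
}
```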
-### SDK types (preview)
+### SDK types
-For some service-specific binding types, binding data can be provided using types from service SDKs and frameworks. These provide additional capability beyond what a serialized string or plain-old CLR object (POCO) may offer. Support for SDK types is currently in preview with limited scenario coverage.
+For some service-specific binding types, binding data can be provided using types from service SDKs and frameworks. These provide additional capability beyond what a serialized string or plain-old CLR object (POCO) may offer. To use the newer types, your project needs to be updated to use newer versions of core dependencies.
-To use SDK type bindings, your project must reference [Microsoft.Azure.Functions.Worker 1.15.0-preview1 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.15.0-preview1) and [Microsoft.Azure.Functions.Worker.Sdk 1.11.0-preview1 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.11.0-preview1). Specific package versions will be needed for each of the service extensions as well. When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`.
+| Dependency | Version requirement |
+|-|-|
+|[Microsoft.Azure.Functions.Worker]| For **Generally Available** extensions in the table below: 1.18.0 or later<br/>For extensions that have **preview support**: 1.15.0-preview1 |
+|[Microsoft.Azure.Functions.Worker.Sdk]|For **Generally Available** extensions in the table below: 1.12.0 or later<br/>For extensions that have **preview support**: 1.11.0-preview1 |
-The following service-specific bindings are currently included in the preview:
+When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`.
+
+Each trigger and binding extension also has its own minimum version requirement, which is described in the extension reference articles. The following service-specific bindings offer additional SDK types:
| Service | Trigger | Input binding | Output binding | |-|-|-|-|
-| [Azure Blobs][blob-sdk-types] | **Preview support** | **Preview support** | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Blobs][blob-sdk-types] | **Generally Available** | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Queues][queue-sdk-types] | **Preview support** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Service Bus][servicebus-sdk-types] | **Preview support<sup>2</sup>** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Hubs][eventhub-sdk-types] | **Preview support** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Preview support** | _SDK types not recommended<.sup>1</sup>_ |
+| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Event Grid][eventgrid-sdk-types] | **Preview support** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
[blob-sdk-types]: ./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types [cosmos-sdk-types]: ./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cextensionv4&pivots=programming-language-csharp#binding-types
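As an illustrative sketch of binding to an SDK type (the container path and function name are placeholders, and the example assumes Blob storage extension and worker versions that meet the requirements above):

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class BlobSdkTypeExample
{
    private readonly ILogger<BlobSdkTypeExample> _logger;

    public BlobSdkTypeExample(ILogger<BlobSdkTypeExample> logger) => _logger = logger;

    // Binds the trigger blob as a BlobClient rather than a string or stream,
    // so the function can call back into the Blob service, for example to read properties.
    [Function("BlobSdkTypeExample")]
    public async Task Run(
        [BlobTrigger("samples-workitems/{name}")] BlobClient blobClient, string name)
    {
        var properties = await blobClient.GetPropertiesAsync();
        _logger.LogInformation("Blob {Name} is {Length} bytes.", name, properties.Value.ContentLength);
    }
}
```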
This section shows how to work with the underlying HTTP request and response obj
## Logging
-In .NET isolated, you can write to logs by using an [`ILogger`][ILogger] instance obtained from a [FunctionContext] object passed to your function. Call the [GetLogger] method, passing a string value that is the name for the category in which the logs are written. The category is usually the name of the specific function from which the logs are written. To learn more about categories, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
+In .NET isolated, you can write to logs by using an [`ILogger<T>`][ILogger&lt;T&gt;] or [`ILogger`][ILogger] instance. The logger can be obtained through [dependency injection](#dependency-injection) of an [`ILogger<T>`][ILogger&lt;T&gt;] or of an [ILoggerFactory]:
+
+```csharp
+public class MyFunction {
+
+ private readonly ILogger<MyFunction> _logger;
+
+ public MyFunction(ILogger<MyFunction> logger) {
+ _logger = logger;
+ }
+
+ [Function(nameof(MyFunction))]
+ public void Run([BlobTrigger("samples-workitems/{name}", Connection = "")] string myBlob, string name)
+ {
+ _logger.LogInformation($"C# Blob trigger function Processed blob\n Name: {name} \n Data: {myBlob}");
+ }
+
+}
+```
-The following example shows how to get an [`ILogger`][ILogger] and write logs inside a function:
+The logger can also be obtained from a [FunctionContext] object passed to your function. Call the [GetLogger&lt;T&gt;] or [GetLogger] method; for [GetLogger], pass a string value that is the name for the category in which the logs are written. The category is usually the name of the specific function from which the logs are written. To learn more about categories, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
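For instance, here's a minimal sketch of obtaining a category-named logger from the context (the function name and category string are illustrative only):

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

public class ContextLoggingExample
{
    [Function("ContextLoggingExample")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req,
        FunctionContext context)
    {
        // GetLogger takes the category name, which is usually the function name.
        ILogger logger = context.GetLogger("ContextLoggingExample");
        logger.LogInformation("Handling request.");

        return req.CreateResponse(HttpStatusCode.OK);
    }
}
```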
+Use the methods of [ILogger&lt;T&gt;] and [`ILogger`][ILogger] to write at various log levels, such as `LogWarning` or `LogError`. To learn more about log levels, see the [monitoring article](functions-monitoring.md#log-levels-and-categories). You can customize the log levels for components added to your code by registering filters as part of the `HostBuilder` configuration:
-Use various methods of [`ILogger`][ILogger] to write various log levels, such as `LogWarning` or `LogError`. To learn more about log levels, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
+```csharp
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Extensions.Logging;
-An [`ILogger`][ILogger] is also provided when using [dependency injection](#dependency-injection).
+var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults()
+ .ConfigureServices(services =>
+ {
+ // Registers IHttpClientFactory.
+ // By default this sends a lot of Information-level logs.
+ services.AddHttpClient();
+ })
+ .ConfigureLogging(logging =>
+ {
+ // Disable IHttpClientFactory Informational logs.
+ // Note -- you can also remove the handler that does the logging: https://github.com/aspnet/HttpClientFactory/issues/196#issuecomment-432755765
+ logging.AddFilter("System.Net.Http.HttpClient", LogLevel.Warning);
+ })
+ .Build();
+```
As part of configuring your app in `Program.cs`, you can also define the behavior for how errors are surfaced to your logs. By default, exceptions thrown by your code may end up wrapped in an `RpcException`. To remove this extra layer, set the `EnableUserCodeExceptions` property to "true" as part of configuring the builder: ```csharp
- var host = new HostBuilder()
- .ConfigureFunctionsWorkerDefaults(builder => {}, options =>
- {
- options.EnableUserCodeExceptions = true;
- })
- .Build();
+var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults(builder => {}, options =>
+ {
+ options.EnableUserCodeExceptions = true;
+ })
+ .Build();
``` ### Application Insights
-You can configure your isolated process application to emit logs directly [Application Insights](../azure-monitor/app/app-insights-overview.md?tabs=net), giving you control over how those logs are emitted. To do this, you will need to add a reference to [Microsoft.Azure.Functions.Worker.ApplicationInsights, version 1.0.0-preview5 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights/). You will also need to reference [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService). Add these packages to your isolated process project:
+You can configure your isolated process application to emit logs directly to [Application Insights](../azure-monitor/app/app-insights-overview.md?tabs=net), giving you control over how those logs are emitted. This replaces the default behavior of [relaying custom logs through the host](./configure-monitoring.md#custom-application-logs). To work with Application Insights directly, you will need to add a reference to [Microsoft.Azure.Functions.Worker.ApplicationInsights, version 1.0.0-preview5 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights/). You will also need to reference [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService). Add these packages to your isolated process project:
```dotnetcli dotnet add package Microsoft.ApplicationInsights.WorkerService
dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights --prerel
You then need to call to `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` during service configuration in your `Program.cs` file: ```csharp
- var host = new HostBuilder()
- .ConfigureFunctionsWorkerDefaults()
- .ConfigureServices(services => {
- services.AddApplicationInsightsTelemetryWorkerService();
- services.ConfigureFunctionsApplicationInsights();
- })
- .Build();
-
- host.Run();
+var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+
+host.Run();
``` The call to `ConfigureFunctionsApplicationInsights()` adds an `ITelemetryModule` listening to a Functions-defined `ActivitySource`. This creates dependency telemetry needed to support distributed tracing in Application Insights. To learn more about `AddApplicationInsightsTelemetryWorkerService()` and how to use it, see [Application Insights for Worker Service applications](../azure-monitor/app/worker-service.md).
The call to `ConfigureFunctionsApplicationInsights()` adds an `ITelemetryModule`
> [!IMPORTANT] > The Functions host and the isolated process worker have separate configuration for log levels, etc. Any [Application Insights configuration in host.json](./functions-host-json.md#applicationinsights) will not affect the logging from the worker, and similarly, configuration made in your worker code will not impact logging from the host. You may need to apply changes in both places if your scenario requires customization at both layers.
-The rest of your application continues to work with `ILogger`. However, by default, the Application Insights SDK adds a logging filter that instructs `ILogger` to capture only warnings and more severe logs. If you want to disable this behavior, remove the filter rule as part of service configuration:
+The rest of your application continues to work with `ILogger` and `ILogger<T>`. However, by default, the Application Insights SDK adds a logging filter that instructs the logger to capture only warnings and more severe logs. If you want to disable this behavior, remove the filter rule as part of service configuration:
```csharp
- var host = new HostBuilder()
- .ConfigureFunctionsWorkerDefaults()
- .ConfigureServices(services => {
- services.AddApplicationInsightsTelemetryWorkerService();
- services.ConfigureFunctionsApplicationInsights();
- services.Configure<LoggerFilterOptions>(options =>
+var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .ConfigureLogging(logging =>
+ {
+ logging.Services.Configure<LoggerFilterOptions>(options =>
+ {
+ LoggerFilterRule defaultRule = options.Rules.FirstOrDefault(rule => rule.ProviderName
+ == "Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider");
+ if (defaultRule is not null)
{
- LoggerFilterRule defaultRule = options.Rules.FirstOrDefault(rule => rule.ProviderName
- == "Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider");
- if (defaultRule is not null)
- {
- options.Rules.Remove(defaultRule);
- }
- });
- })
- .Build();
-
- host.Run();
+ options.Rules.Remove(defaultRule);
+ }
+ });
+ })
+ .Build();
+
+host.Run();
``` ## Debugging when targeting .NET Framework
Because your isolated worker process app runs outside the Functions runtime, you
[supported-versions]: #supported-versions+
+[Microsoft.Azure.Functions.Worker]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/
+[Microsoft.Azure.Functions.Worker.Sdk]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/
+ [HostBuilder]: /dotnet/api/microsoft.extensions.hosting.hostbuilder [IHost]: /dotnet/api/microsoft.extensions.hosting.ihost [ConfigureFunctionsWorkerDefaults]: /dotnet/api/microsoft.extensions.hosting.workerhostbuilderextensions.configurefunctionsworkerdefaults?view=azure-dotnet&preserve-view=true#Microsoft_Extensions_Hosting_WorkerHostBuilderExtensions_ConfigureFunctionsWorkerDefaults_Microsoft_Extensions_Hosting_IHostBuilder_
Because your isolated worker process app runs outside the Functions runtime, you
[FunctionContext]: /dotnet/api/microsoft.azure.functions.worker.functioncontext?view=azure-dotnet&preserve-view=true [ILogger]: /dotnet/api/microsoft.extensions.logging.ilogger [ILogger&lt;T&gt;]: /dotnet/api/microsoft.extensions.logging.ilogger-1
-[GetLogger]: /dotnet/api/microsoft.azure.functions.worker.functioncontextloggerextensions.getlogger?view=azure-dotnet&preserve-view=true
-[BlobClient]: /dotnet/api/azure.storage.blobs.blobclient?view=azure-dotnet&preserve-view=true
-[DocumentClient]: /dotnet/api/microsoft.azure.documents.client.documentclient
-[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
+[GetLogger]: /dotnet/api/microsoft.azure.functions.worker.functioncontextloggerextensions.getlogger
+[GetLogger&lt;T&gt;]: /dotnet/api/microsoft.azure.functions.worker.functioncontextloggerextensions.getlogger#microsoft-azure-functions-worker-functioncontextloggerextensions-getlogger-1
[HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true [HttpResponseData]: /dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata?view=azure-dotnet&preserve-view=true [HttpRequest]: /dotnet/api/microsoft.aspnetcore.http.httprequest
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Functions 1.x exposed types from the deprecated [Microsoft.WindowsAzure.Storage]
# [Extension 5.x and higher](#tab/extensionv5/isolated-process)
-The isolated worker process supports parameter types according to the tables below. Support for binding to `Stream`, and to types from [Azure.Storage.Blobs] is in preview.
+The isolated worker process supports parameter types according to the tables below.
**Blob trigger**
azure-health-insights Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/transparency-note.md
Organizations can use the Trial Matcher model to match patients to potentially s
Trial Matcher analyzes and matches clinical trial eligibility criteria and patients' clinical information. Clinical trial eligibility criteria are extracted from clinical trials available on clinicaltrials.gov or provided by the service user as a custom trial. Patient clinical information is provided either as unstructured clinical note, FHIR bundles or key-value schema.
-Trial Matcher uses [Text Analytics for health](https://docs.microsoft.com/azure/cognitive-services/language-service/text-analytics-for-health/overview?tabs=ner) to identify and extract medical entities in case the information provided is unstructured, either from clinical trial protocols from clinicaltrials.gov, custom trials and patient clinical notes.
+Trial Matcher uses [Text Analytics for health](/azure/ai-services/language-service/text-analytics-for-health/overview) to identify and extract medical entities in case the information provided is unstructured, either from clinical trial protocols from clinicaltrials.gov, custom trials and patient clinical notes.
When Trial Matcher is in patient centric mode, it returns a list of potentially suitable clinical trials, based on the patient clinical information. When Trial Matcher is in trial centric mode, it returns a list of patients who are potentially eligible for a clinical trial. The Trial Matcher results should be reviewed by a human decision maker for a further full qualification. Trial Matcher results also include an explainability layer. When a patient appears to be ineligible for a trial, Trial Matcher provides evidence of why the patient is not eligible to meet the criteria of the specific trial.
We encourage customers to leverage Trial Matcher in their innovative solutions o
### Technical limitations, operational factors, and ranges * Trial Matcher is available only in English.
-* Since Trial Matcher is based on TA4H for analyzing unstructured text, please refer to [Text Analytics for health Transparency Note](https://learn.microsoft.com/legal/cognitive-services/language-service/transparency-note-health)
+* Since Trial Matcher is based on TA4H for analyzing unstructured text, please refer to [Text Analytics for health Transparency Note](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/azure-health-insights/trial-matcher/context/context)
for further information.
for further information.
## Learn more about responsible AI * [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai). * [Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources).
-* [Microsoft Azure Learning courses on responsible AI](https://docs.microsoft.com/learn/paths/responsible-ai-business-principles/).
+* [Microsoft Azure Learning courses on responsible AI](/learn/paths/responsible-ai-business-principles/).
## Learn more about Text Analytics For Health
-[Text Analytics for Health Transparency Note](https://learn.microsoft.com/legal/cognitive-services/language-service/transparency-note-health).
--
+[Text Analytics for Health Transparency Note](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/azure-health-insights/trial-matcher/context/context).
## About this document
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
npm install @microsoft/applicationinsights-react-js
#### [React Native](#tab/reactnative)
-By default, this plugin relies on the [`react-native-device-info` package](https://www.npmjs.com/package/react-native-device-info). You must install and link to this package. Keep the `react-native-device-info` package up to date to collect the latest device names using your app.
+- **React Native Plugin**
-Since v3, support for accessing the DeviceInfo has been abstracted into an interface `IDeviceInfoModule` to enable you to use / set your own device info module. This interface uses the same function names and result `react-native-device-info`.
+ By default, the React Native Plugin relies on the [`react-native-device-info` package](https://www.npmjs.com/package/react-native-device-info). You must install and link to this package. Keep the `react-native-device-info` package up to date to collect the latest device names using your app.
-```zsh
+ Since v3, support for accessing the DeviceInfo has been abstracted into an interface `IDeviceInfoModule` to enable you to use / set your own device info module. This interface uses the same function names and results as `react-native-device-info`.
+
+ ```zsh
+
+ npm install --save @microsoft/applicationinsights-react-native @microsoft/applicationinsights-web
+ npm install --save react-native-device-info
+ react-native link react-native-device-info
+
+ ```
+
+- **React Native Manual Device Plugin**
+
+ If you're using React Native Expo, add the React Native Manual Device Plugin instead of the React Native Plugin. The React Native Plugin uses the `react-native-device-info` package, which React Native Expo doesn't support.
+
+ ```bash
+
+ npm install --save @microsoft/applicationinsights-react-native @microsoft/applicationinsights-web
+
+ ```
-npm install --save @microsoft/applicationinsights-react-native @microsoft/applicationinsights-web
-npm install --save react-native-device-info
-react-native link react-native-device-info
-```
#### [Angular](#tab/angular)
appInsights.loadAppInsights();
#### [React Native](#tab/reactnative)
-To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance.
+- **React Native Plug-in**
-> [!TIP]
-> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [RNPlugin]`.
+ To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance.
-```typescript
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
-// Add the Click Analytics plug-in.
-// import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
-var RNPlugin = new ReactNativePlugin();
-// Add the Click Analytics plug-in.
-/* var clickPluginInstance = new ClickAnalyticsPlugin();
-var clickPluginConfig = {
+ > [!TIP]
+ > If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [RNPlugin]`.
+
+ ```typescript
+ import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+ import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
+ // Add the Click Analytics plug-in.
+ // import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
+ var RNPlugin = new ReactNativePlugin();
+ // Add the Click Analytics plug-in.
+ /* var clickPluginInstance = new ClickAnalyticsPlugin();
+ var clickPluginConfig = {
autoCapture: true
-}; */
-var appInsights = new ApplicationInsights({
- config: {
- connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- // If you're adding the Click Analytics plug-in, delete the next line.
- extensions: [RNPlugin]
- // Add the Click Analytics plug-in.
- /* extensions: [RNPlugin, clickPluginInstance],
- extensionConfig: {
- [clickPluginInstance.identifier]: clickPluginConfig
- } */
- }
-});
-appInsights.loadAppInsights();
+ }; */
+ var appInsights = new ApplicationInsights({
+ config: {
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ // If you're adding the Click Analytics plug-in, delete the next line.
+ extensions: [RNPlugin]
+ // Add the Click Analytics plug-in.
+ /* extensions: [RNPlugin, clickPluginInstance],
+ extensionConfig: {
+ [clickPluginInstance.identifier]: clickPluginConfig
+ } */
+ }
+ });
+ appInsights.loadAppInsights();
-```
+ ```
-> [!TIP]
-> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
+ > [!TIP]
+ > If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
++
+- **React Native Manual Device Plugin**
+
+ To use this plugin, you must either disable automatic device info collection or use your own device info collection class after you add the extension to your code.
+
+ 1. Construct the plugin and add it as an `extension` to your existing Application Insights instance.
+
+ ```typescript
+ import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+ import { ReactNativeManualDevicePlugin } from '@microsoft/applicationinsights-react-native';
+
+ var RNMPlugin = new ReactNativeManualDevicePlugin();
+ var appInsights = new ApplicationInsights({
+ config: {
+ instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ extensions: [RNMPlugin]
+ }
+ });
+ appInsights.loadAppInsights();
+ ```
+
+ 1. Do one of the following:
+
+ - Disable automatic device info collection.
+
+ ```typescript
+ import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+ import { ReactNativeManualDevicePlugin } from '@microsoft/applicationinsights-react-native';
+
+ var RNMPlugin = new ReactNativeManualDevicePlugin();
+ var appInsights = new ApplicationInsights({
+ config: {
+ instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ disableDeviceCollection: true,
+ extensions: [RNMPlugin]
+ }
+ });
+ appInsights.loadAppInsights();
+ ```
+
+ - Use your own device info collection class.
+
+ ```typescript
+ import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+ import { ReactNativeManualDevicePlugin } from '@microsoft/applicationinsights-react-native';
+
+ // Simple inline constant implementation
+ const myDeviceInfoModule = {
+ getModel: () => "deviceModel",
+ getDeviceType: () => "deviceType",
+ // v5 returns a string while latest returns a promise
+ getUniqueId: () => "deviceId", // This "may" also return a Promise<string>
+ };
+
+ var RNMPlugin = new ReactNativeManualDevicePlugin();
+ RNMPlugin.setDeviceInfoModule(myDeviceInfoModule);
+
+ var appInsights = new ApplicationInsights({
+ config: {
+ instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ extensions: [RNMPlugin]
+ }
+ });
+
+ appInsights.loadAppInsights();
+ ```
#### [Angular](#tab/angular)
export class AppComponent {
This section covers configuration settings for the framework extensions for Application Insights JavaScript SDK.
+### Track router history
+
+#### [React](#tab/react)
+
+| Name | Type | Required? | Default | Description |
+|------|------|-----------|---------|-------------|
+| history | object | Optional | null | Track router history. For more information, see the [React router package documentation](https://reactrouter.com/en/main).<br><br>To track router history, most users can use the `enableAutoRouteTracking` field in the [JavaScript SDK configuration](./javascript-sdk-configuration.md#sdk-configuration). This field collects the same data for page views as the `history` object.<br><br>Use the `history` object when you're using a router implementation that doesn't update the browser URL, which is what the configuration listens to. You shouldn't enable both the `enableAutoRouteTracking` field and `history` object, because you'll get multiple page view events. |
+
+The following code example shows how to enable the `enableAutoRouteTracking` field.
+
+```javascript
+var reactPlugin = new ReactPlugin();
+var appInsights = new ApplicationInsights({
+ config: {
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ enableAutoRouteTracking: true,
+ extensions: [reactPlugin]
+ }
+});
+appInsights.loadAppInsights();
+```
+
+#### [React Native](#tab/reactnative)
+
+React Native doesn't track router changes but does track [page views](./api-custom-events-metrics.md#page-views).
+
+#### [Angular](#tab/angular)
+
+| Name | Type | Required? | Default | Description |
+|------|------|-----------|---------|-------------|
+| router | object | Optional | null | Angular router for enabling Application Insights PageView tracking. |
+
+The following code example shows how to enable tracking of router history.
+
+```javascript
+import { Component } from '@angular/core';
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
+import { Router } from '@angular/router';
+++
+@Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+})
+export class AppComponent {
+ constructor(
+ private router: Router
+ ){
+ var angularPlugin = new AngularPlugin();
+ const appInsights = new ApplicationInsights({ config: {
+ connectionString: 'YOUR_CONNECTION_STRING',
+ extensions: [angularPlugin],
+ extensionConfig: {
+ [angularPlugin.identifier]: { router: this.router }
+ }
+ } });
+ appInsights.loadAppInsights();
+ }
+}
+```
+++ ### Track exceptions #### [React](#tab/react)
-[React error boundaries](https://react.dev/reference/react/Component#catching-rendering-errors-with-an-error-boundary) provide a way to gracefully handle an exception when it occurs within a React application. When such an exception occurs, it's likely that the exception needs to be logged. The React plug-in for Application Insights provides an error boundary component that automatically logs the exception when it occurs.
+[React error boundaries](https://react.dev/reference/react/Component#catching-rendering-errors-with-an-error-boundary) provide a way to gracefully handle an uncaught exception when it occurs within a React application. When such an exception occurs, it's likely that the exception needs to be logged. The React plug-in for Application Insights provides an error boundary component that automatically logs the exception when it occurs.
```javascript import React from "react";
The `AppInsightsErrorBoundary` requires two props to be passed to it. They're th
#### [React Native](#tab/reactnative)
-Exception tracking is enabled by default. If you want to disable it, set `disableExceptionCollection` to `true`.
+The tracking of uncaught exceptions is enabled by default. If you want to disable the tracking of uncaught exceptions, set `disableExceptionCollection` to `true`.
```javascript import { ApplicationInsights } from '@microsoft/applicationinsights-web';
N/A
#### [React Native](#tab/reactnative)
-In addition to user agent info from the browser, which is collected by Application Insights web package, React Native also collects device information. Device information is automatically collected when you add the plug-in.
+- **React Native Plugin**: In addition to user agent info from the browser, which is collected by Application Insights web package, React Native also collects device information. Device information is automatically collected when you add the plug-in.
+- **React Native Manual Device Plugin**: Depending on how you configured the plugin when you added the extension to your code, this plugin either:
+ - Doesn't collect device information
+ - Uses your own device info collection class
#### [Angular](#tab/angular)
N/A
#### [React](#tab/react)
-#### Track router history
-
-| Name | Type | Required? | Default | Description |
-||--|--|||
-| history | object | Optional | null | Track router history. For more information, see the [React router package documentation](https://reactrouter.com/en/main).<br><br>To track router history, most users can use the `enableAutoRouteTracking` field in the [JavaScript SDK configuration](./javascript-sdk-configuration.md#sdk-configuration). This field collects the same data for page views as the `history` object.<br><br>Use the `history` object when you're using a router implementation that doesn't update the browser URL, which is what the configuration listens to. You shouldn't enable both the `enableAutoRouteTracking` field and `history` object, because you'll get multiple page view events. |
-
-The following code example shows how to enable the `enableAutoRouteTracking` field.
-
-```javascript
-var reactPlugin = new ReactPlugin();
-var appInsights = new ApplicationInsights({
- config: {
- connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- enableAutoRouteTracking: true,
- extensions: [reactPlugin]
- }
-});
-appInsights.loadAppInsights();
-```
- #### Track components usage A feature that's unique to the React plug-in is that you're able to instrument specific components and track them individually.
If events are getting "blocked" because the `Promise` returned via `getUniqueId`
#### [Angular](#tab/angular)
-#### Track router history
-
-| Name | Type | Required? | Default | Description |
-||--|--|||
-| router | object | Optional | null | Angular router for enabling Application Insights PageView tracking. |
-
-The following code example shows how to enable tracking of router history.
-
-```javascript
-import { Component } from '@angular/core';
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
-import { Router } from '@angular/router';
---
-@Component({
- selector: 'app-root',
- templateUrl: './app.component.html',
- styleUrls: ['./app.component.css']
-})
-export class AppComponent {
- constructor(
- private router: Router
- ){
- var angularPlugin = new AngularPlugin();
- const appInsights = new ApplicationInsights({ config: {
- connectionString: 'YOUR_CONNECTION_STRING',
- extensions: [angularPlugin],
- extensionConfig: {
- [angularPlugin.identifier]: { router: this.router }
- }
- } });
- appInsights.loadAppInsights();
- }
-}
-```
+N/A
azure-monitor Manage Logs Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-logs-tables.md
Last updated 11/09/2022
# Manage tables in a Log Analytics workspace
-A Log Analytics workspace lets you collect logs from Azure and non-Azure resources into one space for data analysis, use by other services, such as [Sentinel](../../../articles/sentinel/overview.md), and to trigger alerts and actions, for example, using [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md). The Log Analytics workspace consists of tables, which you can configure to manage your data model and log-related costs. This article explains the table configuration options in Azure Monitor Logs and how to set table properties based on your data analysis and cost management needs.
+A Log Analytics workspace lets you collect logs from Azure and non-Azure resources into one space for data analysis, for use by other services such as [Sentinel](../../../articles/sentinel/overview.md), and to trigger alerts and actions, for example, by using [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md). The Log Analytics workspace consists of tables, which you can configure to manage your data model and log-related costs. This article explains the table configuration options in Azure Monitor Logs and how to set table properties based on your data analysis and cost management needs.
+
+## Permissions required
+
+You must have `microsoft.operationalinsights/workspaces/tables/write` permissions to the Log Analytics workspaces you manage, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example.
## Table properties
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
For limits specific to Media Services v2 (legacy), see [Media Services v2 (legac
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise stated. [!INCLUDE [application-gateway-limits](../../../includes/application-gateway-limits.md)]
+### Application Gateway for Containers limits
++ ### Azure Bastion limits [!INCLUDE [Azure Bastion limits](../../../includes/bastion-limits.md)]
There are limits, per subscription, for deploying resources using Compute Galler
* [Understand Azure limits and increases](https://azure.microsoft.com/blog/azure-limits-quotas-increase-requests/) * [Virtual machine and cloud service sizes for Azure](../../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) * [Sizes for Azure Cloud Services](../../cloud-services/cloud-services-sizes-specs.md)
-* [Naming rules and restrictions for Azure resources](resource-name-rules.md)
+* [Naming rules and restrictions for Azure resources](resource-name-rules.md)
backup Backup Azure Security Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-security-feature.md
The security features mentioned in this article provide defense mechanisms again
| Operation | Error details | Resolution | | | | | | Policy change |The backup policy couldn't be modified. Error: The current operation failed due to an internal service error [0x29834]. Please retry the operation after sometime. If the issue persists, please contact Microsoft support. |**Cause:**<br/>This error appears when security settings are enabled, you try to reduce retention range below the minimum values specified above and you're on an unsupported version (supported versions are specified in first note of this article). <br/>**Recommended Action:**<br/> In this case, you should set retention period above the minimum retention period specified (seven days for daily, four weeks for weekly, three weeks for monthly or one year for yearly) to proceed with policy-related updates. Optionally, a preferred approach would be to update the backup agent, Azure Backup Server and/or DPM UR to leverage all the security updates. |
-| Change Passphrase |Security PIN entered is incorrect. (ID: 100130) Provide the correct Security PIN to complete this operation. |**Cause:**<br/> This error comes when you enter invalid or expired Security PIN while performing critical operation (like change passphrase). <br/>**Recommended Action:**<br/> To complete the operation, you must enter valid Security PIN. To get the PIN, sign in to Azure portal and navigate to Recovery Services vault > Settings > Properties > Generate Security PIN. Use this PIN to change passphrase. |
+| Change Passphrase |Security PIN entered is incorrect. (ID: 100130) Provide the correct Security PIN to complete this operation. |**Cause:**<br/> This error comes when you enter invalid or expired Security PIN while performing critical operation (like change passphrase). <br/>**Recommended Action:**<br/> To complete the operation, you must enter valid Security PIN. To get the PIN, sign in to the Azure portal and navigate to Recovery Services vault > Settings > Properties > Generate Security PIN. Use this PIN to change passphrase. |
| Change Passphrase |Operation failed. ID: 120002 |**Cause:**<br/>This error appears when security settings are enabled, you try to change the passphrase and you're on an unsupported version (valid versions specified in first note of this article).<br/>**Recommended Action:**<br/> To change the passphrase, you must first update the backup agent to minimum version 2.0.9052, Azure Backup Server to minimum update 1, and/or DPM to minimum DPM 2012 R2 UR12 or DPM 2016 UR2 (download links below), then enter a valid Security PIN. To get the PIN, sign in to the Azure portal and navigate to Recovery Services vault > Settings > Properties > Generate Security PIN. Use this PIN to change passphrase. | ## Immutability support (preview)
The following table lists the disallowed operations for MARS when immutability i
- [Get started with Azure Recovery Services vault](backup-azure-vms-first-look-arm.md) to enable these features. - [Download the latest Azure Recovery Services agent](https://aka.ms/azurebackup_agent) to help protect Windows computers and guard your backup data against attacks.-
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
After you've created one or more user-assigned managed identities, you can creat
To create a Batch pool with a user-assigned managed identity through the Azure portal:
-1. [Sign in to the Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search bar, enter and select **Batch accounts**. 1. On the **Batch accounts** page, select the Batch account where you want to create a Batch pool. 1. In the menu for the Batch account, under **Features**, select **Pools**.
cdn Cdn Add To Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-add-to-web-app.md
To create the web app that you work with, follow the [static HTML quickstart](..
## Sign in to the Azure portal
-Open a browser and navigate to the [Azure portal](https://portal.azure.com).
+Open a browser and sign in to the [Azure portal](https://portal.azure.com).
### Dynamic site acceleration optimization If you want to optimize your CDN endpoint for dynamic site acceleration (DSA), you should use the [CDN portal](cdn-create-new-endpoint.md) to create your profile and endpoint. With [DSA optimization](cdn-dynamic-site-acceleration.md), the performance of web pages with dynamic content is measurably improved. For instructions about how to optimize a CDN endpoint for DSA from the CDN portal, see [CDN endpoint configuration to accelerate delivery of dynamic files](cdn-dynamic-site-acceleration.md#cdn-endpoint-configuration-to-accelerate-delivery-of-dynamic-files).
What you learned:
Learn how to optimize CDN performance in the following articles: > [!div class="nextstepaction"]
-> [Tutorial: Add a custom domain to your Azure CDN endpoint](cdn-map-content-to-custom-domain.md)
+> [Tutorial: Add a custom domain to your Azure CDN endpoint](cdn-map-content-to-custom-domain.md)
cdn Monitoring And Access Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/monitoring-and-access-log.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Configuration - Azure portal
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
For more information, see [Push Notifications](../notifications.md).
## Build intelligent, AI powered chat experiences
-You can use [Azure Cognitive APIs](../../../ai-services/index.yml) with the Chat SDK to build use cases like:
+You can use [Azure AI APIs](../../../ai-services/index.yml) with the Chat SDK to build use cases like:
- Enable users to chat with each other in different languages. - Help a support agent prioritize tickets by detecting a negative sentiment of an incoming message from a customer. - Analyze the incoming messages for key detection and entity recognition, and prompt relevant info to the user in your app based on the message content.
-One way to achieve this is by having your trusted service act as a participant of a chat thread. Let's say you want to enable language translation. This service is responsible for listening to the messages exchanged by other participants [1], calling Cognitive APIs to translate content to desired language[2,3] and sending the translated result as a message in the chat thread[4].
+One way to achieve this is by having your trusted service act as a participant of a chat thread. Let's say you want to enable language translation. This service is responsible for listening to the messages exchanged by other participants [1], calling AI APIs to translate content to the desired language [2,3], and sending the translated result as a message in the chat thread [4].
-This way, the message history contains both original and translated messages. In the client application, you can add logic to show the original or translated message. See [this quickstart](../../../ai-services/translator/quickstart-text-rest-api.md) to understand how to use Cognitive APIs to translate text to different languages.
+This way, the message history contains both original and translated messages. In the client application, you can add logic to show the original or translated message. See [this quickstart](../../../ai-services/translator/quickstart-text-rest-api.md) to understand how to use AI APIs to translate text to different languages.
## Next steps
container-instances Container Instances Best Practices And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-best-practices-and-considerations.md
+
+ Title: Best practices and considerations
+description: Best practices and considerations for customers to account for in their Container Instances workloads.
+++++ Last updated : 07/22/2023++
+# Best practices and considerations for Azure Container Instances
+
+Azure Container Instances (ACI) lets you package, deploy, and manage cloud applications without having to manage the underlying infrastructure. Common scenarios that run on ACI include burst workloads, task automation, and build jobs. You use ACI by defining the resources each container group needs, including vCPU and memory. ACI is a great solution for any scenario that can operate in an isolated container, and it provides fast startup times, hypervisor-level security, custom container sizes, and more. The information below helps you determine whether Azure Container Instances is best for your scenario.
+
+## What to consider
+
+There are default limits that may require quota increases. For more details: [Resource availability & quota limits for ACI - Azure Container Instances | Microsoft Learn](./container-instances-resource-and-quota-limits.md)
+
+Container images can't be larger than 15 GB; any images above this size may cause unexpected behavior: [How large can my container image be?](./container-instances-faq.yml)
+
+If your container image is larger than 15 GB, you can [mount an Azure Fileshare](container-instances-volume-azure-files.md) to store the image.
+
+If a container group restarts, the container group's IP may change. We advise against using a hard coded IP address in your scenario. If you need a static public IP address, use Application Gateway: [Static IP address for container group - Azure Container Instances | Microsoft Learn](./container-instances-application-gateway.md).
+
+There are ports that are reserved for service functionality. We advise you not to use these ports, because their use will lead to unexpected behavior: [Does the ACI service reserve ports for service functionality?](./container-instances-faq.yml).
+
+Your container groups may restart due to platform maintenance events. These maintenance events are done to ensure the continuous improvement of the underlying infrastructure: [Container had an isolated restart without explicit user input](./container-instances-faq.yml).
+
+ACI doesn't allow [privileged container operations](./container-instances-faq.yml). We advise you not to depend on using the root directory for your scenario.
+
+## Best practices
+
+We advise running container groups in multiple regions so your workloads can continue to run if there's an issue in one region.
+
+We advise against using a hard coded IP address in your scenario since a container group's IP address isn't guaranteed. To mitigate connectivity issues, we recommend configuring a gateway. If your container is behind a public IP address and you need a static public IP address, use [Application Gateway](./container-instances-application-gateway.md). If your container is behind a virtual network and you need a static IP address, we recommend using [NAT Gateway](./container-instances-nat-gateway.md).
+
+## Other Azure Container options
+
+### Azure Container Apps
+Azure Container Apps enables you to build serverless microservices based on containers. Azure Container Apps doesn't provide direct access to the underlying Kubernetes APIs. If you require access to the Kubernetes APIs and control plane, you should use Azure Kubernetes Service. However, if you would like to build Kubernetes-style applications and don't require direct access to all the native Kubernetes APIs and cluster management, Container Apps provides a fully managed experience based on best-practices. For these reasons, many teams may prefer to start building container microservices with Azure Container Apps.
+
+### Azure App Service
+Azure App Service provides fully managed hosting for web applications including websites and web APIs. These web applications may be deployed using code or containers. Azure App Service is optimized for web applications. Azure App Service is integrated with other Azure services including Azure Container Apps or Azure Functions. If you plan to build web apps, Azure App Service is an ideal option.
+
+### Azure Container Instances
+Azure Container Instances (ACI) provides a single pod of Hyper-V isolated containers on demand. It can be thought of as a lower-level "building block" option compared to Container Apps. Concepts like scale, load balancing, and certificates aren't provided with ACI containers. For example, to scale to five container instances, you create five distinct container instances. Azure Container Apps provide many application-specific concepts on top of containers, including certificates, revisions, scale, and environments. Users often interact with Azure Container Instances through other services. For example, Azure Kubernetes Service can layer orchestration and scale on top of ACI through virtual nodes. If you need a less "opinionated" building block that doesn't align with the scenarios Azure Container Apps is optimizing for, Azure Container Instances is an ideal option.
+
+### Azure Kubernetes Service
+Azure Kubernetes Service (AKS) provides a fully managed Kubernetes option in Azure. It supports direct access to the Kubernetes API and runs any Kubernetes workload. The full cluster resides in your subscription, with the cluster configurations and operations within your control and responsibility. For teams looking for a fully managed version of Kubernetes in Azure, Azure Kubernetes Service is an ideal option.
+
+### Azure Functions
+Azure Functions is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but is optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of their functions on events and bind to other data sources. If you plan to build FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container-based compute platforms and allowing teams to reuse code as environment requirements change.
+
+### Azure Spring Apps
+Azure Spring Apps is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+
+### Azure Red Hat OpenShift
+Azure Red Hat OpenShift is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
+
+## Next steps
+
+Learn how to deploy a multi-container container group with an Azure Resource Manager template:
+
+> [!div class="nextstepaction"]
+> [Deploy a container group][resource-manager template]
container-registry Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/zone-redundancy.md
In the command output, note the `zoneRedundancy` property for the replica. When
## Create a zone-redundant registry - portal
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Create a resource** > **Containers** > **Container Registry**. 1. In the **Basics** tab, select or create a resource group, and enter a unique registry name. 1. In **Location**, select a region that supports zone redundancy for Azure Container Registry, such as *East US*.
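
If you prefer the CLI, a minimal sketch of the equivalent steps follows; the registry and resource group names are placeholders, and zone redundancy requires the Premium SKU:

```azurecli
# Create a zone-redundant registry in a region that supports availability zones
az acr create --resource-group myResourceGroup --name myzrregistry --sku Premium --location eastus --zone-redundancy Enabled

# Confirm the setting on the registry's home region
az acr show --name myzrregistry --query zoneRedundancy
```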
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Restore-AzCosmosDBAccount `
``` ### To restore a continuous account that is configured with managed identity using CLI
-To restore Customer Managed Key (CMK) continuous account please refer to the steps provided [here](./how-to-setup-customer-managed-keys.md)
+To restore a Customer Managed Key (CMK) continuous account, see the steps provided [here](./how-to-setup-customer-managed-keys.md).
### <a id="get-the-restore-details-powershell"></a>Get the restore details from the restored account
The simplest way to trigger a restore is by issuing the restore command with nam
#### Create a new Azure Cosmos DB account by restoring from an existing account + ```azurecli-interactive az cosmosdb restore \
If `--enable-public-network` is not set, restored account is accessible from pub
++ #### Create a new Azure Cosmos DB account by restoring only selected databases and containers from an existing database account ```azurecli-interactive
az cosmosdb gremlin restorable-resource list \
``` ``` [ {
- "databaseName": "db1",
- "graphNames": [
- "graph1",
- "graph3",
- "graph2"
- ]
+  "databaseName": "db1",
+  "graphNames": [
+    "graph1",
+    "graph3",
+    "graph2"
+  ]
} ] ```
az cosmosdb table restorable-table list \
``` ``` [ {
- "id": "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/7e4d666a-c6ba-4e1f-a4b9-e92017c5e8df/restorableTables/59781d91-682b-4cc2-93a3-c25d03fab159",
- "name": "59781d91-682b-4cc2-93a3-c25d03fab159",
- "resource": {
- "eventTimestamp": "2022-02-09T17:09:54Z",
- "operationType": "Create",
- "ownerId": "table1",
- "ownerResourceId": "tOdDAKYiBhQ=",
- "rid": "9pvDGwAAAA=="
- },
- "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableTables"
+  "id": "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/7e4d666a-c6ba-4e1f-a4b9-e92017c5e8df/restorableTables/59781d91-682b-4cc2-93a3-c25d03fab159",
+  "name": "59781d91-682b-4cc2-93a3-c25d03fab159",
+  "resource": {
+    "eventTimestamp": "2022-02-09T17:09:54Z",
+    "operationType": "Create",
+    "ownerId": "table1",
+    "ownerResourceId": "tOdDAKYiBhQ=",
+    "rid": "9pvDGwAAAA=="
+  },
+  "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableTables"
},
- {"id": "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744/providers/Microsoft.DocumentDB/locations/eastus2euap/restorableDatabaseAccounts/7e4d666a-c6ba-4e1f-a4b9-e92017c5e8df/restorableTables/2c9f35eb-a14c-4ab5-a7e0-6326c4f6b785",
- "name": "2c9f35eb-a14c-4ab5-a7e0-6326c4f6b785",
- "resource": {
- "eventTimestamp": "2022-02-09T20:47:53Z",
- "operationType": "Create",
- "ownerId": "table3",
- "ownerResourceId": "tOdDALBwexw=",
- "rid": "01DtkgAAAA=="
- },
- "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableTables"
+{
+  "id": "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744/providers/Microsoft.DocumentDB/locations/eastus2euap/restorableDatabaseAccounts/7e4d666a-c6ba-4e1f-a4b9-e92017c5e8df/restorableTables/2c9f35eb-a14c-4ab5-a7e0-6326c4f6b785",
+  "name": "2c9f35eb-a14c-4ab5-a7e0-6326c4f6b785",
+  "resource": {
+    "eventTimestamp": "2022-02-09T20:47:53Z",
+    "operationType": "Create",
+    "ownerId": "table3",
+    "ownerResourceId": "tOdDALBwexw=",
+    "rid": "01DtkgAAAA=="
+  },
+  "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableTables"
}, ] ```
az cosmosdb table restorable-resource list \
``` { "tableNames": [
- "table1",
- "table3",
- "table2"
+  "table1",
+  "table3",
+  "table2"
] } ```
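
For reference, the point-in-time restore itself is issued with `az cosmosdb restore`; a minimal sketch with placeholder account names, resource group, and timestamp:

```azurecli-interactive
az cosmosdb restore \
    --resource-group MyResourceGroup \
    --account-name MySourceAccount \
    --target-database-account-name MyRestoredAccount \
    --restore-timestamp 2023-07-01T13:00:00Z \
    --location "West US"
```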
az deployment group create -g <ResourceGroup> --template-file <RestoreTemplateFi
* [How to migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md). * [Continuous backup mode resource model.](continuous-backup-restore-resource-model.md) * [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.+
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
The following Azure permissions, or scopes, are supported per subscription for b
## Sign in to Azure -- Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+- Sign in to the [Azure portal](https://portal.azure.com).
## Create a budget in the Azure portal
cost-management-billing Capabilities Ingestion Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-ingestion-normalization.md
This article helps you understand the data ingestion and normalization capabilit
## Definition
-_Data ingestion and normalization refers to the process of collecting, transforming, and organizing data from various sources into a single, easily accessible repository._
+**Data ingestion and normalization refers to the process of collecting, transforming, and organizing data from various sources into a single, easily accessible repository.**
Gather cost, utilization, performance, and other business data from cloud providers, vendors, and on-premises systems. Gathering the data can include:
When you first start managing cost in the cloud, you use the native tools availa
- Determine the level of granularity required and how often the data needs to be refreshed. Daily cost data can be a challenge to manage for a large account. Consider monthly aggregates to reduce costs and increase query performance and reliability if that meets your reporting needs. - Consider using a third-party FinOps platform. - Review the available [third-party solutions in the Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/searchQuery/cost).
+ - If you decide to build your own solution, consider starting with [FinOps hubs](https://aka.ms/finops/hubs), part of the open source FinOps toolkit provided by Microsoft.
+ - FinOps hubs will accelerate your development and help you focus on building the features you need rather than infrastructure.
- Select the [cost details solution](../automate/usage-details-best-practices.md) that is right for you. We recommend scheduled exports, which push cost data to a storage account on a daily or monthly basis. - If you use daily exports, notice that data is pushed into a new file each day. Ensure that you only select the latest day when reporting on costs. - Determine if you need a data integration or workflow technology to process data.
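
A scheduled export can also be created from the command line. The following is only a sketch, assuming the `costmanagement` CLI extension is installed and using placeholder subscription, resource group, and storage names:

```azurecli
# Export month-to-date actual cost to a storage container on a daily schedule
az costmanagement export create \
    --name DailyCostExport \
    --scope "subscriptions/00000000-0000-0000-0000-000000000000" \
    --storage-account-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/finops-rg/providers/Microsoft.Storage/storageAccounts/finopsdata" \
    --storage-container cost-exports \
    --storage-directory daily \
    --timeframe MonthToDate \
    --type ActualCost \
    --recurrence Daily \
    --recurrence-period from="2023-08-01T00:00:00Z" to="2024-08-01T00:00:00Z" \
    --schedule-status Active
```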
At this point, you have a data pipeline and are ingesting data into a central da
- Normalize data to a standard schema to support aligning and blending data from multiple sources. - For cost data, we recommend using the [FinOps Open Cost & Usage Specification (FOCUS) schema](https://finops.org/focus).
+ - [FinOps hubs](https://aka.ms/finops/hubs) includes a Power BI report that normalizes data to the FOCUS schema, which can be a good starting point.
+ - For an example of the FOCUS schema with Azure data, see the [FOCUS sample report](https://github.com/flanakin/cost-management-powerbi#FOCUS).
- Complement cloud cost data with organizational hierarchies and budgets. - Consider labeling or tagging requirements to map cloud costs to organizational hierarchies. - Enrich cloud resource and solution data with internal CMDB or ITAM data.
This capability is a part of the FinOps Framework by the FinOps Foundation, a no
## Next steps - Read about [Cost allocation](capabilities-allocation.md) to learn how to allocate costs to business units and applications.-- Read about [Data analysis and showback](capabilities-analysis-showback.md) to learn how to analyze and report on costs.
+- Read about [Data analysis and showback](capabilities-analysis-showback.md) to learn how to analyze and report on costs.
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
The following reservations aren't eligible for refunds:
## How to exchange or refund an existing reservation
-You can exchange your reservation from [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade).
+You can exchange your reservation from the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade).
1. Select the reservations that you want to refund and select **Exchange**. [![Example image showing reservations to return](./media/exchange-and-refund-azure-reservations/exchange-refund-return.png)](./media/exchange-and-refund-azure-reservations/exchange-refund-return.png#lightbox)
cost-management-billing Review Customer Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-customer-agreement-bill.md
It must be more than 30 days from the day that you subscribed to Azure. Azure bi
## Sign in to Azure -- Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+- Sign in to the [Azure portal](https://portal.azure.com).
## Check access to a Microsoft Customer Agreement
cost-management-billing Review Individual Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-individual-bill.md
It must be more than 30 days from the day that you subscribed to Azure. Azure bi
## Sign in to Azure -- Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+- Sign in to the [Azure portal](https://portal.azure.com).
## Compare billed charges with your usage file
cost-management-billing Review Partner Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-partner-agreement-bill.md
It must be more than 30 days from the day that you subscribed to Azure. Azure bi
## Sign in to Azure -- Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+- Sign in to the [Azure portal](https://portal.azure.com).
## Check access to a Microsoft Customer Agreement
data-factory Deactivate Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/deactivate-activity.md
First, you may deactivate a single activity from its **General** tab.
:::image type="content" source="./media/deactivate-activity/deactivate-03-setup-single.png" alt-text="Deactive one activity at a time":::
-Alternatively, you can deactive multiple activities with right click.
+Alternatively, you can deactivate multiple activities with a right-click.
- Press down _Ctrl_ key to multi-select. Using your mouse, left click on all activities you want to deactivate - Right click to bring up the drop down menu-- Select _Deactivate_ to deactive them all-- To fine tune the settings fro _Mark activity as_, go to **General** tab of the activity, and make appropriate changes
+- Select _Deactivate_ to deactivate them all
+- To fine-tune the settings for _Mark activity as_, go to the **General** tab of the activity and make the appropriate changes
:::image type="content" source="./media/deactivate-activity/deactivate-04-setup-multiple.png" alt-text="Deactive multiple activities all at once":::
+In both cases, you need to deploy the changes for the deactivation to take effect in pipeline runs.
+ To reactivate the activities, choose _Active_ for the _Activity State_, and they revert back to their previous behaviors, as expected. ## Behaviors
Deactivation is a powerful tool for pipeline developers. It allows developers to
### Known limitations
-An inactive activity never actually runs. This means the activity won't have an output or an error field. Any references to these fields throw errors downstream.
+An inactive activity never actually runs. This means the activity won't have an error field or its typical output fields. Any references to missing fields may throw errors downstream.
## Next steps
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-visually.md
After you create the user properties, you can monitor them in the monitoring lis
- `Filter` - Activity will behave as before. - `Until` Activity will evaluate the expression and will loop until the condition is satisfied. Inner activities may still be skipped based on the rerun rules. - `Foreach` Activity will always loop on the items it receives. Inner activities may still be skipped based on the rerun rules.-- `If and switch` - Conditions will always be evaluated. Inner activities may still be skipped based on the rerun rules.
+- `If and switch` - Conditions will always be evaluated. All inner activities will be evaluated. Inner activities may still be skipped based on the rerun rules, but activities such as Execute Pipeline will rerun.
- `Execute pipeline activity` - The child pipeline will be triggered, but all activities in the child pipeline may still be skipped based on the rerun rules.
databox-online Azure Stack Edge Create Iot Edge Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-create-iot-edge-module.md
Before you begin, make sure you have:
An Azure container registry is a private Docker registry in Azure where you can store and manage your private Docker container images. The two popular Docker registry services available in the cloud are Azure Container Registry and Docker Hub. This article uses the Container Registry.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. From a browser, sign in to the [Azure portal](https://portal.azure.com).
2. Select **Create a resource > Containers > Container Registry**. Click **Create**. 3. Provide:
databox-online Azure Stack Edge Gpu Create Iot Edge Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-iot-edge-module.md
Before you begin, make sure you have:
An Azure container registry is a private Docker registry in Azure where you can store and manage your private Docker container images. The two popular Docker registry services available in the cloud are Azure Container Registry and Docker Hub. This article uses the Container Registry.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. From a browser, sign in to the [Azure portal](https://portal.azure.com).
2. Select **Create a resource > Containers > Container Registry**. Click **Create**. 3. Provide:
databox Data Box Heavy Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-quickstart-portal.md
Before you begin, make sure that:
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Order
databox Data Box Quickstart Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-quickstart-export.md
Before you begin:
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Order
databox Data Box Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-quickstart-portal.md
Before you begin, make sure that you've:
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Order
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
The optional Defender CSPM plan, provides advanced posture management capabiliti
### Plan pricing
-> [!NOTE]
-> The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1 2023. Billing will apply for Servers, Database, and Storage resources. Billable workloads will be VMs, Storage accounts, OSS DBs, SQL PaaS, & SQL servers on machines.ΓÇï
- Microsoft Defender CSPM protects across all your multicloud workloads, but billing only applies for Servers, Database, and Storage accounts at $5/billable resource/month. The underlying compute services for AKS are regarded as servers for billing purposes.
+> [!NOTE]
+>
+> - The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1, 2023. Billing will apply for Servers, Database, and Storage resources. Billable workloads will be VMs, Storage accounts, OSS DBs, SQL PaaS, & SQL servers on machines.
+>
+> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscriptions that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+ ## Plan availability Learn more about [Defender CSPM pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
The following table summarizes each plan and their cloud availability.
## Next steps Learn about Defender for Cloud's [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).+
defender-for-cloud Configure Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-email-notifications.md
description: Learn how to fine-tune the Microsoft Defender for Cloud security al
Previously updated : 11/09/2021 Last updated : 07/23/2023 # Quickstart: Configure email notifications for security alerts
To avoid alert fatigue, Defender for Cloud limits the volume of outgoing mails.
- approximately **two emails per day** for **medium-severity** alerts - approximately **one email per day** for **low-severity** alerts ## Availability
To avoid alert fatigue, Defender for Cloud limits the volume of outgoing mails.
|-|:-| |Release state:|General availability (GA)| |Pricing:|Email notifications are free; for security alerts, enable the enhanced security plans ([plan pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/)) |
-|Required roles and permissions:|**Security Admin**<br>**Subscription Owner** |
+|Required roles and permissions:|**Security Admin**<br>**Subscription Owner**<br>**Contributor** |
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)| ## Customize the security alerts email notifications via the portal<a name="email"></a>
You can send email notifications to individuals or to all users with specific Az
## Customize the alerts email notifications through the API
-You can also manage your email notifications through the supplied REST API. For full details see the [SecurityContacts API documentation](/rest/api/defenderforcloud/security-contacts).
+You can also manage your email notifications through the supplied REST API. For full details, see the [SecurityContacts API documentation](/rest/api/defenderforcloud/security-contacts).
This is an example request body for the PUT request when creating a security contact configuration:
URI: `https://management.azure.com/subscriptions/<SubscriptionId>/providers/Micr
} ```
-## See also
+## Next steps
To learn more about security alerts, see the following pages: -- [Security alerts - a reference guide](alerts-reference.md)--Learn about the security alerts you might see in Microsoft Defender for Cloud's Threat Protection module-- [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md)--Learn how to manage and respond to security alerts-- [Workflow automation](workflow-automation.md)--Automate responses to alerts with custom notification logic
+- [Security alerts - a reference guide](alerts-reference.md) - Learn about the security alerts you might see in Microsoft Defender for Cloud's Threat Protection module.
+- [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md) - Learn how to manage and respond to security alerts.
+- [Workflow automation](workflow-automation.md) - Automate responses to alerts with custom notification logic.
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Microsoft Defender for Cloud is a cloud-native application protection platform (
![Diagram that shows the core functionality of Microsoft Defender for Cloud.](media/defender-for-cloud-introduction/defender-for-cloud-pillars.png) > [!NOTE]
-> For pricing information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+> For Defender for Cloud pricing information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Secure cloud applications
Defender for Cloud helps you to incorporate good security practices early during
Today's applications require security awareness at the code, infrastructure, and runtime levels to make sure that deployed applications are hardened against attacks.
-| Capability | What problem does it solve? | Get started | Defender plan and pricing |
-| - | | -- | - |
-| [Code pipeline insights](defender-for-devops-introduction.md) | Empowers security teams with the ability to protect applications and resources from code to cloud across multi-pipeline environments, including GitHub and Azure DevOps. Findings from Defender for DevOps, such as IaC misconfigurations and exposed secrets, can then be correlated with other contextual cloud security insights to prioritize remediation in code. | Connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) repositories to Defender for Cloud | [Defender for DevOps](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
+| Capability | What problem does it solve? | Get started | Defender plan |
+| | | | - |
+| [Code pipeline insights](defender-for-devops-introduction.md) | Empowers security teams with the ability to protect applications and resources from code to cloud across multi-pipeline environments, including GitHub and Azure DevOps. Findings from Defender for DevOps, such as IaC misconfigurations and exposed secrets, can then be correlated with other contextual cloud security insights to prioritize remediation in code. | Connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) repositories to Defender for Cloud | Defender for DevOps |
## Improve your security posture
The security of your cloud and on-premises resources depends on proper configura
Defender for Cloud includes Foundational CSPM capabilities for free. You can also enable advanced CSPM capabilities by enabling the Defender CSPM plan. --
-| Capability | What problem does it solve? | Get started | Defender plan and pricing |
+| Capability | What problem does it solve? | Get started | Defender plan |
|--|--|--|--| | [Centralized policy management](security-policy-concept.md) | Define the security conditions that you want to maintain across your environment. The policy translates to recommendations that identify resource configurations that violate your security policy. The [Microsoft cloud security benchmark](concept-regulatory-compliance.md) is a built-in standard that applies security principles with detailed technical implementation guidance for Azure and other cloud providers (such as AWS and GCP). | [Customize a security policy](custom-security-policies.md) | Foundational CSPM (Free) | | [Secure score]( secure-score-security-controls.md) | Summarize your security posture based on the security recommendations. As you remediate recommendations, your secure score improves. | [Track your secure score](secure-score-access-and-track.md) | Foundational CSPM (Free) |
Proactive security principles require that you implement security practices that
When your environment is threatened, security alerts right away indicate the nature and severity of the threat so you can plan your response. After you identify a threat in your environment, you need to quickly respond to limit the risk to your resources.
-| Capability | What problem does it solve? | Get started | Defender plan and pricing |
+| Capability | What problem does it solve? | Get started | Defender plan |
| - | | -- | - |
-| Protect cloud servers | Provide server protections through Microsoft Defender for Endpoint or extended protection with just-in-time network access, file integrity monitoring, vulnerability assessment, and more. | [Secure your multicloud and on-premises servers](defender-for-servers-introduction.md) | [Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| Identify threats to your storage resources | Detect unusual and potentially harmful attempts to access or exploit your storage accounts using advanced threat detection capabilities and Microsoft Threat Intelligence data to provide contextual security alerts. | [Protect your cloud storage resources](defender-for-storage-introduction.md) | [Defender for Storage](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| Protect cloud databases | Protect your entire database estate with attack detection and threat response for the most popular database types in Azure to protect the database engines and data types, according to their attack surface and security risks. | [Deploy specialized protections for cloud and on-premises databases](quickstart-enable-database-protections.md) | - [Defender for Azure SQL Databases](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br>- [Defender for SQL servers on machines](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br>- [Defender for Open-source relational databases](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br>- [Defender for Azure Cosmos DB](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| Protect containers | Secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications with environment hardening, vulnerability assessments, and run-time protection. | [Find security risks in your containers](defender-for-containers-introduction.md) | [Defender for Containers](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| [Infrastructure service insights](asset-inventory.md) | Diagnose weaknesses in your application infrastructure that can leave your environment susceptible to attack. | - [Identify attacks targeting applications running over App Service](defender-for-app-service-introduction.md)</br>- [Detect attempts to exploit Key Vault accounts](defender-for-key-vault-introduction.md)</br>- [Get alerted on suspicious Resource Manager operations](defender-for-resource-manager-introduction.md)</br>- [Expose anomalous DNS activities](defender-for-dns-introduction.md) | - [Defender for App Service](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br></br>- [Defender for Key Vault](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br></br>- [Defender for Resource Manager](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</br></br>- [Defender for DNS](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
-| [Security alerts](alerts-overview.md) | Get informed of real-time events that threaten the security of your environment. Alerts are categorized and assigned severity levels to indicate proper responses. | [Manage security alerts]( managing-and-responding-alerts.md) | [Any workload protection Defender plan](#protect-cloud-workloads) |
-| [Security incidents](alerts-overview.md#what-are-security-incidents) | Correlate alerts to identify attack patterns and integrate with Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), and IT Service Management (ITSM) solutions to respond to threats and limit the risk to your resources. | [Export alerts to SIEM, SOAR, or ITSM systems](export-to-siem.md) | [Any workload protection Defender plan](#protect-cloud-workloads) |
+| Protect cloud servers | Provide server protections through Microsoft Defender for Endpoint or extended protection with just-in-time network access, file integrity monitoring, vulnerability assessment, and more. | [Secure your multicloud and on-premises servers](defender-for-servers-introduction.md) | Defender for Servers |
+| Identify threats to your storage resources | Detect unusual and potentially harmful attempts to access or exploit your storage accounts using advanced threat detection capabilities and Microsoft Threat Intelligence data to provide contextual security alerts. | [Protect your cloud storage resources](defender-for-storage-introduction.md) | Defender for Storage |
+| Protect cloud databases | Protect your entire database estate with attack detection and threat response for the most popular database types in Azure to protect the database engines and data types, according to their attack surface and security risks. | [Deploy specialized protections for cloud and on-premises databases](quickstart-enable-database-protections.md) | - Defender for Azure SQL Databases</br>- Defender for SQL servers on machines</br>- Defender for Open-source relational databases</br>- Defender for Azure Cosmos DB |
+| Protect containers | Secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications with environment hardening, vulnerability assessments, and run-time protection. | [Find security risks in your containers](defender-for-containers-introduction.md) | Defender for Containers |
+| [Infrastructure service insights](asset-inventory.md) | Diagnose weaknesses in your application infrastructure that can leave your environment susceptible to attack. | - [Identify attacks targeting applications running over App Service](defender-for-app-service-introduction.md)</br>- [Detect attempts to exploit Key Vault accounts](defender-for-key-vault-introduction.md)</br>- [Get alerted on suspicious Resource Manager operations](defender-for-resource-manager-introduction.md)</br>- [Expose anomalous DNS activities](defender-for-dns-introduction.md) | - Defender for App Service</br></br>- Defender for Key Vault</br></br>- Defender for Resource Manager</br></br>- Defender for DNS |
+| [Security alerts](alerts-overview.md) | Get informed of real-time events that threaten the security of your environment. Alerts are categorized and assigned severity levels to indicate proper responses. | [Manage security alerts]( managing-and-responding-alerts.md) | Any workload protection Defender plan |
+| [Security incidents](alerts-overview.md#what-are-security-incidents) | Correlate alerts to identify attack patterns and integrate with Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), and IT Service Management (ITSM) solutions to respond to threats and limit the risk to your resources. | [Export alerts to SIEM, SOAR, or ITSM systems](export-to-siem.md) | Any workload protection Defender plan |
## Learn More
defender-for-cloud Enable Agentless Scanning Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-agentless-scanning-vms.md
+
+ Title: Enable agentless scanning for VMs
+description: Find installed software and software vulnerabilities on your Azure machines and AWS machines without installing an agent.
++++ Last updated : 06/29/2023++
+# Enable agentless scanning for VMs
+
+Agentless scanning provides visibility into installed software and software vulnerabilities on your workloads to extend vulnerability assessment coverage to server workloads without a vulnerability assessment agent installed.
+
+Learn more about [agentless scanning](concept-agentless-data-collection.md).
+
+Agentless vulnerability assessment uses the Microsoft Defender Vulnerability Management engine to assess vulnerabilities in the software installed on your VMs, without requiring Defender for Endpoint to be installed. Vulnerability assessment shows software inventory and vulnerability results in the same format as the agent-based assessments.
+
+## Compatibility with agent-based vulnerability assessment solutions
+
+Defender for Cloud already supports different agent-based vulnerability scans, including [Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md) (MDVM), [BYOL](deploy-vulnerability-assessment-byol-vm.md) and [Qualys](deploy-vulnerability-assessment-vm.md). Agentless scanning extends the visibility of Defender for Cloud to reach more devices.
+
+When you enable agentless vulnerability assessment:
+
+- If you have **no existing integrated vulnerability** assessment solutions enabled on any of your VMs on your subscription, Defender for Cloud automatically enables MDVM by default.
+
+- If you select **Microsoft Defender Vulnerability Management** as part of an [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md), Defender for Cloud shows a unified and consolidated view that optimizes coverage and freshness.
+
+ - Machines covered by just one of the sources (Defender Vulnerability Management or agentless) show the results from that source.
+ - Machines covered by both sources show the agent-based results only for increased freshness.
+
+- If you select **Vulnerability assessment with Qualys or BYOL integrations** - Defender for Cloud shows the agent-based results by default. Results from the agentless scan are shown for machines that don't have an agent installed or from machines that aren't reporting findings correctly.
+
+ If you want to change the default behavior so that Defender for Cloud always displays results from MDVM (regardless of a third-party agent solution), select the [Microsoft Defender Vulnerability Management](auto-deploy-vulnerability-assessment.md#automatically-enable-a-vulnerability-assessment-solution) setting in the vulnerability assessment solution.
+
+## Enabling agentless scanning for machines
+
+When you enable [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Defender for Servers P2](defender-for-servers-introduction.md), agentless scanning is turned on by default.
+
+If you have Defender for Servers P2 already enabled and agentless scanning is turned off, you need to turn on agentless scanning manually.
+
+### Agentless vulnerability assessment on Azure
+
+**To enable agentless vulnerability assessment on Azure**:
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the relevant subscription.
+1. For either the [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or Defender for Servers P2 plan, select **Settings**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/defender-plan-settings-azure.png" alt-text="Screenshot of link for the settings of the Defender plans for Azure accounts." lightbox="media/enable-vulnerability-assessment-agentless/defender-plan-settings-azure.png":::
+
+    The agentless scanning settings are shared by both Defender Cloud Security Posture Management (CSPM) and Defender for Servers P2. When you enable agentless scanning on either plan, the setting is enabled for both plans.
+
+1. In the settings pane, turn on **Agentless scanning for machines**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/turn-on-agentles-scanning-azure.png" alt-text="Screenshot of settings and monitoring screen to turn on agentless scanning." lightbox="media/enable-vulnerability-assessment-agentless/turn-on-agentles-scanning-azure.png":::
+
+1. Select **Save**.
+
+### Agentless vulnerability assessment on AWS
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the relevant account.
+1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/defender-plan-settings-aws.png" alt-text="Screenshot of link for the settings of the Defender plans for AWS accounts." lightbox="media/enable-vulnerability-assessment-agentless/defender-plan-settings-aws.png":::
+
+ When you enable agentless scanning on either plan, the setting applies to both plans.
+
+1. In the settings pane, turn on **Agentless scanning for machines**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scan-on-aws.png" alt-text="Screenshot of the agentless scanning status for AWS accounts." lightbox="media/enable-vulnerability-assessment-agentless/agentless-scan-on-aws.png":::
+
+1. Select **Save and Next: Configure Access**.
+
+1. Download the CloudFormation template.
+
+1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen. If you're onboarding a management account, you need to run the CloudFormation template both as Stack and as StackSet. Connectors will be created for the member accounts up to 24 hours after the onboarding.
+
+1. Select **Next: Review and generate**.
+
+1. Select **Update**.
+
+After you enable agentless scanning, software inventory and vulnerability information are updated automatically in Defender for Cloud.
+
+## Exclude machines from scanning
+
+Agentless scanning applies to all of the eligible machines in the subscription. To prevent specific machines from being scanned, you can exclude machines from agentless scanning based on your pre-existing environment tags. When Defender for Cloud performs the continuous discovery for machines, excluded machines are skipped.
+
+To configure machines for exclusion:
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the relevant subscription or multicloud connector.
+1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**.
+1. For agentless scanning, select **Edit configuration**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scanning-edit-configuration.png" alt-text="Screenshot of the link to edit the agentless scanning configuration." lightbox="media/enable-vulnerability-assessment-agentless/agentless-scanning-edit-configuration.png":::
+
+1. Enter the tag name and value that apply to the machines that you want to exempt. You can enter multiple `tag:value` pairs. One way to apply such a tag from the CLI is sketched after these steps.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scanning-exclude-tags.png" alt-text="Screenshot of the tag and value fields for excluding machines from agentless scanning.":::
+
+1. Select **Save** to apply the changes.
+
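+A minimal sketch of applying a matching tag from the command line; the resource names and the tag name `SecurityScanExclude` are arbitrary examples, not required values:
+
+```azurecli
+# Tag a VM so it matches the exclusion tag:value pair configured in the plan settings
+az vm update --resource-group myResourceGroup --name myVM --set tags.SecurityScanExclude=true
+```
+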
+## Next steps
+
+In this article, you learned about how to scan your machines for software vulnerabilities without installing an agent.
+
+Learn more about:
+
+- [Vulnerability assessment with Microsoft Defender for Endpoint](deploy-vulnerability-assessment-defender-vulnerability-management.md)
+- [Vulnerability assessment with Qualys](deploy-vulnerability-assessment-vm.md)
+- [Vulnerability assessment with BYOL solutions](deploy-vulnerability-assessment-byol-vm.md)
defender-for-cloud How To Enable Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-enable-agentless-containers.md
Title: How-to enable Agentless Container posture in Microsoft Defender CSPM
description: Learn how to onboard Agentless Containers Previously updated : 06/01/2023 Last updated : 06/13/2023 # Onboard Agentless Container posture in Defender CSPM
You can customize your vulnerability assessment experience by exempting manageme
## Next Steps - Learn more about [Trusted Access](/azure/aks/trusted-access-feature). -- Learn how to [view and remediate vulnerability assessment findings for registry images and running images](view-and-remediate-vulnerability-assessment-findings.md).
+- Learn how to [view and remediate vulnerability assessment findings for registry images](view-and-remediate-vulnerability-assessment-findings.md).
+- Learn how to [Test the Attack Path and Security Explorer using a vulnerable container image](how-to-test-attack-path-and-security-explorer-with-vulnerable-container-image.md).
- Learn how to [create an exemption](exempt-resource.md) for a resource or subscription. - Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md).
defender-for-cloud How To Test Attack Path And Security Explorer With Vulnerable Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-test-attack-path-and-security-explorer-with-vulnerable-container-image.md
+
+ Title: How-to test the attack path and cloud security explorer using a vulnerable container image in Microsoft Defender for Cloud
+description: Learn how to test the attack path and security explorer using a vulnerable container image
++ Last updated : 07/17/2023++
+# Testing the Attack Path and Security Explorer using a vulnerable container image
+
+## Observing potential threats in the attack path experience
+
+Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers may use to breach your environment and reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations for how best to remediate issues that break the attack path and prevent a successful breach.
+
+Explore and investigate [attack paths](how-to-manage-attack-path.md) by sorting them based on name, environment, path count, and risk categories. Explore cloud security graph Insights on the resource. Examples of Insight types are:
+
+- Pod exposed to the internet
+- Privileged container
+- Pod uses host network
+- Container image is vulnerable to remote code execution
+
+## Testing the attack path and security explorer using a mock vulnerable container image
+
+If there are no entries in the list of attack paths, you can still test this feature by using a mock container image. Use the following steps to set up the test:
+
+**Requirement:** An instance of Azure Container Registry (ACR) in the tested scope.
+
+1. Import a mock vulnerable image to your Azure Container Registry:
+
+ 1. Run the following command in Cloud Shell:
+
+ ```
+ az acr import --name $MYACR --source DCSPMtesting.azurecr.io/mdc-mock-0001 --image mdc-mock-0001
+ ```
+
+ 1. If your AKS isn't attached to your ACR, use the following Cloud Shell command line to point your AKS instance to pull images from the selected ACR:
+
+ ```
+    az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>
+    ```
+
+1. Authenticate your Cloud Shell session to work with the cluster:
+
+ ```
+    az aks get-credentials --subscription <cluster-suid> --resource-group <your-rg> --name <your-cluster-name>
+    ```
+
+1. Verify success by doing the following steps:
+
+ - Look for an entry with **mdc-dcspm-demo** as namespace
+    - In the **Workloads** > **Deployments** tab, verify "pod" created 3/3 and **dcspmcharts-ingress-nginx-controller** 1/1.
+    - In **Services and ingresses**, look for the services **service**, **dcspmcharts-ingress-nginx-controller**, and **dcspmcharts-ingress-nginx-controller-admission**. In the **Ingresses** tab, verify that one **ingress** is created with an IP address and the nginx class. You can also spot-check from the command line, as sketched below.
+
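+    A kubectl-based check, assuming your kubeconfig already points at the test cluster:
+
+    ```
+    kubectl get deployments,pods -n mdc-dcspm-demo
+    kubectl get services,ingress -n mdc-dcspm-demo
+    ```
+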
+1. Deploy the mock vulnerable image to expose the vulnerable container to the internet by running the following command:
+
+ ```
+ helm install dcspmcharts oci://dcspmtesting.azurecr.io/dcspmcharts --version 1.0.0 --namespace mdc-dcspm-demo --create-namespace --set registry=<your-registry>
+    ```
+
+> [!NOTE]
+> After completing the above flow, it can take up to 24 hours to see results in the cloud security explorer and attack path.
+
+## Investigate internet exposed Kubernetes pods
+
+You can build queries in one of the following ways:
+
+- [Find the security issue under attack paths](#find-the-security-issue-under-attack-paths)
+- [Explore risks with built-in cloud security explorer templates](#explore-risks-with-cloud-security-explorer-templates)
+- [Create custom queries with cloud security explorer](#create-custom-queries-with-cloud-security-explorer)
+
+### Find the security issue under attack paths
+
+1. Go to **Recommendations** in the Defender for Cloud menu.
+1. Select the **Attack Path** link to open the attack paths view.
+
+ :::image type="content" source="media/how-to-test-attack-path/attack-path.png" alt-text="Screenshot of showing where to select Attack Path." lightbox="media/how-to-test-attack-path/attack-path.png":::
+
+1. Locate the entry that details this security issue under "Internet exposed Kubernetes pod is running a container with high severity vulnerabilities."
+
+ :::image type="content" source="media/how-to-test-attack-path/attack-path-kubernetes-pods-vulnerabilities.png" alt-text="Screenshot showing the security issue details." lightbox="media/how-to-test-attack-path/attack-path-kubernetes-pods-vulnerabilities.png":::
+
+### Explore risks with cloud security explorer templates
+
+1. From the Defender for Cloud overview page, open the cloud security explorer.
+
+1. Some out-of-the-box templates for Kubernetes appear. Select one of the templates:
+
+ - **Azure Kubernetes pods running images with high severity vulnerabilities**
+ - **Kubernetes namespaces contain vulnerable pods**
+
+ :::image type="content" source="media/how-to-test-attack-path/select-template.png" alt-text="Screenshot showing where to select templates." lightbox="media/how-to-test-attack-path/select-template.png":::
+
+1. Select **Open query**; the template builds the query in the upper portion of the screen. Select **Search** to view the results.
+
+ :::image type="content" source="media/how-to-test-attack-path/query-builder-search.png" alt-text="Screenshot that shows the query built and where to select search." lightbox="media/how-to-test-attack-path/query-builder-search.png":::
+
+### Create custom queries with cloud security explorer
+
+You can also create your own custom queries. The following example shows a search for pods running container images that are vulnerable to remote code execution.
++
+The results are listed below the query.
++
+## Next steps
+
+ - Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-cloud Secret Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secret-scanning.md
In addition to detecting SSH private keys, the agentless scanner verifies whethe
- [Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md) - [Defender CSPM](concept-cloud-security-posture-management.md)
-> [!NOTE]
-> If both plans are not enabled, you will only have limited access to the features available from Defender for Server's agentless secret scanning capabilities. Check out [which features are available with each plan](#feature-capability).
- - [Enable agentless scanning for machines](enable-vulnerability-assessment-agentless.md#enabling-agentless-scanning-for-machines). For requirements for agentless scanning, see [Learn about agentless scanning](concept-agentless-data-collection.md#availability).
-## Feature capability
-
-You must enable [Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) and [Defender CSPM](concept-cloud-security-posture-management.md) to gain access to all of the agentless secret scanning capabilities.
-
-If you only enable one of the two plans, you gain only part of the available features of the agentless secret scanning capabilities. The following table shows which plans enable which features:
-
-| Plan Feature | Defender for servers plan 2 | Defender CSPM |
-|--|--|--|
-| [Attack path](#remediate-secrets-with-attack-path) | No | Yes |
-| [Cloud security explorer](#remediate-secrets-with-cloud-security-explorer) | Yes | Yes |
-| [Recommendations](#remediate-secrets-with-recommendations) | Yes | Yes |
-| [Asset Inventory](#remediate-secrets-from-your-asset-inventory) - Secrets | Yes | No |
- ## Remediate secrets with attack path Attack path analysis is a graph-based algorithm that scans your [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph). These scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that break the attack path and prevent successful breach.
defender-for-cloud Sql Azure Vulnerability Assessment Rules Changelog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-rules-changelog.md
This article details the changes made to the SQL vulnerability assessment service rules. Rules that are updated, removed, or added will be outlined below. For an updated list of SQL vulnerability assessment rules, see [SQL vulnerability assessment rules](sql-azure-vulnerability-assessment-rules.md).
+## July 2023
+
+|Rule ID |Rule Title |Change details |
+||||
+|VA2129 |Changes to signed modules should be authorized |Logic change |
+ ## June 2022 |Rule ID |Rule Title |Change details |
deployment-environments Concept Environments Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-key-concepts.md
Learn about the key concepts and components of Azure Deployment Environments. Th
## Dev centers
-A dev center is a collection of projects that require similar settings. Dev centers enable development infrastructure (dev infra) managers to:
+A dev center is a collection of projects that require similar settings. Dev centers enable platform engineers to:
- Use catalogs to manage infrastructure as code (IaC) templates that are available to the projects. - Use environment types to configure the types of environments that development teams can create.
A dev center is a collection of projects that require similar settings. Dev cent
A project is the point of access for the development team. When you associate a project with a dev center, all the settings for the dev center are automatically applied to the project.
-Each project can be associated with only one dev center. Dev infra admins can configure environments for a project by specifying which environment types are appropriate for the development team.
+Each project can be associated with only one dev center. Platform engineers can configure environments for a project by specifying which environment types are appropriate for the development team.
## Environments
deployment-environments How To Configure Project Environment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-environment-types.md
Project environment types are a subset of the [environment types configured for
In Azure Deployment Environments, [environment types](concept-environments-key-concepts.md#project-environment-types) that you add to the project will be available to developers when they deploy environments. Environment types determine the subscription and identity that are used for those deployments.
-Project environment types enable development infrastructure teams to:
+Project environment types enable platform engineering teams to:
- Configure the target subscription in which Azure resources will be created per environment type and per project.
deployment-environments How To Create Configure Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-configure-dev-center.md
Last updated 04/28/2023
This quickstart shows you how to create and configure a dev center in Azure Deployment Environments.
-An enterprise development infrastructure team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
+A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
## Prerequisites
deployment-environments How To Create Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-configure-projects.md
Last updated 04/28/2023
This quickstart shows you how to create a project in Azure Deployment Environments. Then, you associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
-An enterprise development infrastructure team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
+A platform engineering team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
## Prerequisites
deployment-environments How To Install Devcenter Cli Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-install-devcenter-cli-extension.md
Last updated 04/25/2023
-Customer intent: As a dev infra admin, I want to install the devcenter extension so that I can create Deployment Environments resources from the command line.
+#Customer intent: As a platform engineer, I want to install the devcenter extension so that I can create Deployment Environments resources from the command line.
# Azure Deployment Environments Azure CLI extension
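
Installing the extension itself is a single command; a quick sketch:

```azurecli
# Add (or upgrade) the devcenter extension, then confirm it's available
az extension add --name devcenter --upgrade
az extension show --name devcenter --query version
```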
deployment-environments How To Manage Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-manage-environments.md
Last updated 04/25/2023
# Manage your deployment environment
-In Azure Deployment Environments, a development infrastructure admin gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the pre-configured environment types. The permissions that the creator of the environment and the rest of team have to access the environment's resources are defined in the specific environment type.
+In Azure Deployment Environments, a platform engineer gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the pre-configured environment types. The permissions that the creator of the environment and the rest of team have to access the environment's resources are defined in the specific environment type.
As a developer, you can create and manage your environments from the developer portal or by using the Azure CLI.
deployment-environments Overview What Is Azure Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md
A deployment environment is a preconfigured collection of Azure resources deploy
:::image type="content" source="./media/overview-what-is-azure-deployment-environments/azure-deployment-environments-scenarios-sml.png" lightbox="./media/overview-what-is-azure-deployment-environments/azure-deployment-environments-scenarios.png" alt-text="Diagram that shows the Azure Deployment Environments scenario flow.":::
-With Azure Deployment Environments, your development infrastructure (dev infra) admin can enforce enterprise security policies and provide a curated set of predefined infrastructure as code (IaC) templates.
+With Azure Deployment Environments, your platform engineer can enforce enterprise security policies and provide a curated set of predefined infrastructure as code (IaC) templates.
>[!NOTE] > Azure Deployment Environments currently supports only Azure Resource Manager (ARM) templates.
Developers have the following self-service experience when working with [environ
- Create platform as a service (PaaS) and infrastructure as a service (IaaS) environments quickly and easily by following a few simple steps. - Deploy environments right from where they work.
-### Dev infra scenarios
+### Platform engineering scenarios
-Azure Deployment Environments helps your dev infra admin apply the right set of policies and settings on various types of environments, control the resource configuration that developers can create, and centrally track environments across projects by doing the following tasks:
+Azure Deployment Environments helps your platform engineer apply the right set of policies and settings on various types of environments, control the resource configuration that developers can create, and centrally track environments across projects by doing the following tasks:
- Provide a project-based, curated set of reusable IaC templates. - Define specific Azure deployment configurations per project and per environment type.
Azure Deployment Environments provides the following benefits to creating, confi
Capture and share IaC templates in source control within your team or organization, to easily create on-demand environments. Promote collaboration through inner-sourcing of templates from source control repositories. - **Compliance and governance**:
-Dev infra teams can curate environment templates to enforce enterprise security policies and map projects to Azure subscriptions, identities, and permissions by environment types.
+Platform engineering teams can curate environment templates to enforce enterprise security policies and map projects to Azure subscriptions, identities, and permissions by environment types.
- **Project-based configurations**: Create and organize environment templates by the types of applications that development teams are working on, rather than using an unorganized list of templates or a traditional IaC setup.
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Last updated 04/25/2023
This quickstart shows you how to create and configure a dev center in Azure Deployment Environments.
-An enterprise development infrastructure team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
+A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
+
+The following diagram shows the steps you perform in this quickstart to configure a dev center for Azure Deployment Environments in the Azure portal.
++
+First, you create a dev center to organize your deployment environments resources. Next, you create a key vault to store the GitHub personal access token (PAT) that's used to grant Azure access to your GitHub repository. You then attach an identity to the dev center and give that identity access to the key vault. After that, you add a catalog that stores your IaC templates to the dev center. Finally, you create environment types to define the types of environments that development teams can create.
++
+The following diagram shows the steps you perform in the [Create and configure a project quickstart](quickstart-create-and-configure-projects.md) to configure a project associated with a dev center for Deployment Environments in the Azure portal.
++
+You need to perform the steps in both quickstarts before you can create a deployment environment.
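As a rough illustration of the first step (creating the dev center) outside the portal, here's a hedged Azure CLI sketch. The resource group, dev center name, and location are placeholders, and the `az devcenter admin devcenter create` command shape is an assumption to confirm against the devcenter extension's help output.

```
# Create a resource group and a dev center to hold Deployment Environments resources.
# All names and the location below are illustrative placeholders.
az group create --name rg-ade-demo --location eastus

az devcenter admin devcenter create \
    --name contoso-devcenter \
    --resource-group rg-ade-demo \
    --location eastus
```

The key vault, managed identity, catalog, and environment types described above still need to be configured afterward, in the portal or with further CLI commands.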
## Prerequisites
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
Last updated 04/25/2023
This quickstart shows you how to create a project in Azure Deployment Environments. Then, you associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
-An enterprise development infrastructure team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
+A platform engineering team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
+
+The following diagram shows the steps you perform in the [Create and configure a dev center for Azure Deployment Environments](quickstart-create-and-configure-devcenter.md) quickstart to configure a dev center for Azure Deployment Environments in the Azure portal. You must perform these steps before you can create a project.
+
+
+The following diagram shows the steps you perform in this quickstart to configure a project associated with a dev center for Deployment Environments in the Azure portal.
++
+First, you create a project. Then, you assign the dev center's managed identity the Owner role on the subscription. Next, you configure the project by creating a project environment type. Finally, you give the development team access to the project by assigning the [Deployment Environments User](how-to-configure-deployment-environments-user.md) role to the project.
+
+You need to perform the steps in both quickstarts before you can create a deployment environment.
+
+For more information on how to create an environment, see [Quickstart: Create and access Azure Deployment Environments by using the developer portal](quickstart-create-access-environments.md).
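For comparison with the portal flow above, a hedged CLI sketch of creating a project and associating it with the dev center follows. The `--dev-center-id` parameter and all resource names are assumptions; check them against `az devcenter admin project create --help`.

```
# Look up the dev center's resource ID, then create a project associated with it.
# Resource names are illustrative placeholders.
devCenterId=$(az devcenter admin devcenter show \
    --name contoso-devcenter \
    --resource-group rg-ade-demo \
    --query id --output tsv)

az devcenter admin project create \
    --name contoso-project \
    --resource-group rg-ade-demo \
    --location eastus \
    --dev-center-id "$devCenterId"
```

Role assignments and project environment types would still be configured as described in the quickstart.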
## Prerequisites
deployment-environments Tutorial Deploy Environments In Cicd Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/tutorial-deploy-environments-in-cicd-github.md
az role assignment create \
### 1.6 Create project environment types
-At the project level, dev infra admins specify which environment types are appropriate for the development team.
+At the project level, platform engineers specify which environment types are appropriate for the development team.
Create a new project environment type for each of the environment types you created on the dev center.
dev-box Concept Dev Box Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md
Last updated 04/25/2023
-#Customer intent: As a developer, I want to understand Dev Box concepts and terminology so that I can set up a Dev Box environment.
+#Customer intent: As a platform engineer, I want to understand Dev Box concepts and terminology so that I can set up a Dev Box environment.
# Key concepts for Microsoft Dev Box
As a dev box user, you have control over your own dev boxes. You can create more
## Dev center
-A dev center is a collection of projects that require similar settings. Dev centers enable dev infrastructure managers to:
+A dev center is a collection of projects that require similar settings. Dev centers enable platform engineers to:
- Manage the images and SKUs available to the projects by using dev box definitions. - Configure the networks that the development teams consume by using network connections.
A dev box definition specifies a source image and size, including compute size a
## Network connection
-IT administrators and dev infrastructure managers configure the network that's used for dev box creation in accordance with their organizational policies. Network connections store configuration information, like Active Directory join type and virtual network, that dev boxes use to connect to network resources.
+IT administrators and platform engineers configure the network that's used for dev box creation in accordance with their organizational policies. Network connections store configuration information, like Active Directory join type and virtual network, that dev boxes use to connect to network resources.
When you're creating a network connection, you must choose the Active Directory join type:
dev-box How To Configure Network Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md
Last updated 04/25/2023
-#Customer intent: As a dev infrastructure manager, I want to be able to manage network connections so that I can enable dev boxes to connect to my existing networks and deploy them in the desired region.
+#Customer intent: As a platform engineer, I want to be able to manage network connections so that I can enable dev boxes to connect to my existing networks and deploy them in the desired region.
# Connect dev boxes to resources by configuring network connections
dev-box How To Get Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-get-help.md
In the following table, select your role to see how to get help for urgent issue
|My role is: |My issue is urgent: |My issue isn't urgent: | ||-||
-|Administrator </br>(IT admin, Dev infra admin) |[Support for administrators](#support-for-administrators) |[Learn about the Microsoft Developer Community](#discover-the-microsoft-developer-community-for-dev-box)|
+|Administrator </br>(IT admin, platform engineer) |[Support for administrators](#support-for-administrators) |[Learn about the Microsoft Developer Community](#discover-the-microsoft-developer-community-for-dev-box)|
|Dev team lead </br>(Project Admin) |[Support for dev team leads](#support-for-dev-team-leads) |[Search Microsoft Developer Community](https://developercommunity.microsoft.com/devbox) | |Developer </br>(Dev Box User)|[Support for developers](#support-for-developers) |[Report to Microsoft Developer Community](https://developercommunity.microsoft.com/devbox/report) | ## Support for administrators
-Administrators include IT admins, Dev infrastructure admins, and anyone who has administrative access to all your Dev Box resources.
+Administrators include IT admins, platform engineers, and anyone who has administrative access to all your Dev Box resources.
#### 1. Internal troubleshooting
-Always use your internal troubleshooting processes before contacting support. As a dev infrastructure admin, you have access to all Dev Box resources through the Azure portal and through the Azure CLI.
+Always use your internal troubleshooting processes before contacting support. As a platform engineer, you have access to all Dev Box resources through the Azure portal and through the Azure CLI.
#### 2. Contact support If you can't resolve the issue, open a support request to contact Azure support:
As a DevCenter Project Admin, you can:
- View the dev box definitions attached to the dev center. - Create, view, update, and delete dev box pools in the project.
-#### 2. Contact your dev infrastructure admin
-If you can't resolve the issue, escalate it to your dev infrastructure admin.
+#### 2. Contact your platform engineers
+If you can't resolve the issue, escalate it to your platform engineering team.
## Support for developers Developers are assigned the Dev Box User role, which enables you to create and manage your own dev boxes through the developer portal. You don't usually have permissions to manage Dev Box resources in the Azure portal; your dev team lead manages those resources.
dev-box How To Install Dev Box Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-install-dev-box-cli.md
Last updated 04/25/2023
-Customer intent: As a dev infra admin, I want to install the Dev Box CLI extension so that I can create Dev Box resources from the command line.
+#Customer intent: As a platform engineer, I want to install the Dev Box CLI extension so that I can create Dev Box resources from the command line.
# Configure Microsoft Dev Box from the command-line with the Azure CLI
dev-box How To Manage Dev Box Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md
Last updated 04/25/2023
-#Customer intent: As a dev infrastructure manager, I want to be able to manage dev box definitions so that I can provide appropriate dev boxes to my users.
+#Customer intent: As a platform engineer, I want to be able to manage dev box definitions so that I can provide appropriate dev boxes to my users.
# Manage a dev box definition
dev-box How To Manage Dev Box Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md
Last updated 04/25/2023
-#Customer intent: As a dev infrastructure manager, I want to be able to manage dev box pools so that I can provide appropriate dev boxes to my users.
+#Customer intent: As a platform engineer, I want to be able to manage dev box pools so that I can provide appropriate dev boxes to my users.
# Manage a dev box pool
dev-box How To Manage Dev Box Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md
Last updated 04/25/2023
+#Customer intent: As a platform engineer, I want to be able to manage dev box projects so that I can provide appropriate dev boxes to my users.
-<!-- Intent: As a dev infrastructure manager, I want to be able to manage dev box projects so that I can provide appropriate dev boxes to my users. -->
- # Manage a dev box project A project is the point of access to Microsoft Dev Box for the development team members. A project contains dev box pools, which specify the dev box definitions and network connections used when dev boxes are created. Dev managers can configure the project with dev box pools that specify dev box definitions appropriate for their team's workloads. Dev box users create dev boxes from the dev box pools they have access to through their project memberships.
dev-box How To Manage Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md
Last updated 04/25/2023
-#Customer intent: As a dev infrastructure manager, I want to be able to manage dev centers so that I can manage my Microsoft Dev Box implementation.
+#Customer intent: As a platform engineer, I want to be able to manage dev centers so that I can manage my Microsoft Dev Box implementation.
# Manage a Microsoft Dev Box dev center
dev-box Overview What Is Microsoft Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md
adobe-target: true
Microsoft Dev Box gives you self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations called dev boxes. You can set up dev boxes with tools, source code, and prebuilt binaries that are specific to a project, so developers can immediately start work. If you're a developer, you can use dev boxes in your day-to-day workflows.
-The Dev Box service was designed with three organizational roles in mind: dev infrastructure (infra) admins, developer team leads, and developers.
+The Dev Box service was designed with three organizational roles in mind: platform engineers, developer team leads, and developers.
:::image type="content" source="media/overview-what-is-microsoft-dev-box/dev-box-roles.png" alt-text="Diagram that shows roles and responsibilities for dev boxes." border="false":::
-Dev infra admins and IT admins work together to provide developer infrastructure and tools to the developer teams. Dev infra admins set and manage security settings, network configurations, and organizational policies to ensure that dev boxes can access resources securely.
+Platform engineers and IT admins work together to provide developer infrastructure and tools to the developer teams. Platform engineers set and manage security settings, network configurations, and organizational policies to ensure that dev boxes can access resources securely.
Developer team leads are experienced developers who have in-depth knowledge of their projects. They can be assigned the DevCenter Project Admin role and assist with creating and managing the developer experience. Project admins create and manage pools of dev boxes.
Microsoft Dev Box bridges the gap between development teams and IT, by bringing
## Scenarios for Microsoft Dev Box Organizations can use Microsoft Dev Box in a range of scenarios.
-### Dev infra scenarios
+### Platform engineering scenarios
-Dev Box helps dev infra teams provide the appropriate dev boxes for each user's workload. Dev infra admins can:
+Dev Box helps platform engineering teams provide the appropriate dev boxes for each user's workload. Platform engineers can:
- Create dev box pools, add appropriate dev box definitions, and assign access for only dev box users who are working on those specific projects. - Control costs by using auto-stop schedules.
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
Last updated 04/25/2023
# Quickstart: Configure Microsoft Dev Box
-This quickstart describes how to set up Microsoft Dev Box to enable development teams to self-serve their dev boxes. The setup process involves two distinct phases. In the first phase, dev infra admins configure the necessary Microsoft Dev Box resources through the Azure portal. After this phase is complete, users can proceed to the next phase, creating and managing their dev boxes through the developer portal. This quickstart shows you how to complete the first phase.
+This quickstart describes how to set up Microsoft Dev Box to enable development teams to self-serve their dev boxes. The setup process involves two distinct phases. In the first phase, platform engineers configure the necessary Microsoft Dev Box resources through the Azure portal. After this phase is complete, users can proceed to the next phase, creating and managing their dev boxes through the developer portal. This quickstart shows you how to complete the first phase.
The following graphic shows the steps required to configure Microsoft Dev Box in the Azure portal.
dms Quickstart Create Data Migration Service Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-hybrid-portal.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Sign in to the Azure portal
-Open your web browser, navigate to the [Microsoft Azure portal](https://portal.azure.com/), and then enter your credentials to sign in to the portal.
+From a browser, sign in to the [Azure portal](https://portal.azure.com).
The default view is your service dashboard.
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-portal.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Sign in to the Azure portal
-Open your web browser, navigate to the [Microsoft Azure portal](https://portal.azure.com/), and then enter your credentials to sign in to the portal. The default view is your service dashboard.
+From a web browser, sign in to the [Azure portal](https://portal.azure.com). The default view is your service dashboard.
> [!NOTE] > You can create up to 10 instances of DMS per subscription per region. If you require a greater number of instances, please create a support ticket.
expressroute Expressroute Howto Add Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-ipv6-portal.md
This article describes how to add IPv6 support to connect via ExpressRoute to yo
## Sign in to the Azure portal
-From a browser, go to the [Azure portal](https://portal.azure.com), and then sign in with your Azure account.
+From a web browser, sign in to the [Azure portal](https://portal.azure.com).
## Add IPv6 Private Peering to your ExpressRoute circuit
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
This quickstart shows you how to create an ExpressRoute circuit using the Azure
### Sign in to the Azure portal
-From a browser, navigate to the [Azure portal](https://portal.azure.com) and sign in with your Azure account.
+From a browser, sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
### Create a new ExpressRoute circuit
expressroute Expressroute Howto Reset Peering Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-reset-peering-portal.md
Resetting your ExpressRoute peerings might be helpful in the following scenarios
## Sign in to the Azure portal
-From a browser, go to the [Azure portal](https://portal.azure.com), and then sign in with your Azure account.
+From a browser, sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
## Reset a peering
firewall-manager Manage Web Application Firewall Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/manage-web-application-firewall-policies.md
You can centrally create and associate Web Application Firewall (WAF) policies f
## Associate a WAF policy
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the Azure portal search bar, type **Firewall Manager** and press **Enter**. 3. On the Azure Firewall Manager page, select **Application Delivery Platforms**. :::image type="content" source="media/manage-web-application-firewall-policies/application-delivery-platforms.png" alt-text="Screenshot of Firewall Manager application delivery platforms.":::
firewall-manager Secure Hybrid Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-hybrid-network.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a Firewall Policy
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the Azure portal search bar, type **Firewall Manager** and press **Enter**. 3. On the Azure Firewall Manager page, select **View Azure firewall policies**.
firewall Deploy Firewall Basic Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-firewall-basic-portal-policy.md
If you don't have an Azure subscription, create a [free account](https://azure.m
The resource group contains all the resources for the how-to.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Create**. 4. For **Subscription**, select your subscription. 1. For **Resource group name**, enter *Test-FW-RG*.
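If you prefer to script this step instead of using the portal, a hedged Azure CLI equivalent is shown below; the article only names the resource group, so the region here is an assumption.

```
# Create the resource group used by this how-to. Replace eastus with the region
# where you intend to deploy the firewall.
az group create --name Test-FW-RG --location eastus
```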
firewall Tutorial Firewall Deploy Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal-policy.md
First, create a resource group to contain the resources needed to deploy the fir
The resource group contains all the resources for the tutorial.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page, then select **Add**. Enter or select the following values: | Setting | Value |
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
First, create a resource group to contain the resources needed to deploy the fir
The resource group contains all the resources used in this procedure.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Create**. 4. For **Subscription**, select your subscription. 1. For **Resource group** name, type **Test-FW-RG**.
firewall Tutorial Firewall Dnat Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-dnat-policy.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a resource group
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the Azure portal home page, select **Resource groups**, then select **Add**. 4. For **Subscription**, select your subscription. 1. For **Resource group name**, type **RG-DNAT-Test**.
You can keep your firewall resources for the next tutorial, or if no longer need
## Next steps > [!div class="nextstepaction"]
-> [Deploy and configure Azure Firewall Premium](premium-deploy.md)
+> [Deploy and configure Azure Firewall Premium](premium-deploy.md)
firewall Tutorial Firewall Dnat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-dnat.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a resource group
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the Azure portal home page, select **Resource groups**, then select **Create**. 4. For **Subscription**, select your subscription. 1. For **Resource group**, type **RG-DNAT-Test**.
firewall Tutorial Hybrid Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-portal-policy.md
If you don't have an Azure subscription, create a [free account](https://azure.m
First, create the resource group to contain the resources for this tutorial:
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the Azure portal home page, select **Resource groups** > **Create**. 3. For **Subscription**, select your subscription. 1. For **Resource group name**, type **FW-Hybrid-Test**.
firewall Tutorial Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
First, create the resource group to contain the resources:
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the Azure portal home page, select **Resource groups** > **Add**. 3. For **Subscription**, select your subscription. 1. For **Resource group name**, type **FW-Hybrid-Test**.
firewall Tutorial Protect Firewall Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-protect-firewall-ddos.md
First, create a resource group to contain the resources needed to deploy the fir
The resource group contains all the resources for the tutorial.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page, then select **Add**. Enter or select the following values: | Setting | Value |
healthcare-apis Concepts Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-machine-learning.md
Title: The MedTech service and Azure Machine Learning Service - Azure Health Data Services
+ Title: MedTech service and Azure Machine Learning Service - Azure Health Data Services
description: Learn how to use the MedTech service and the Azure Machine Learning Service Previously updated : 06/19/2023 Last updated : 07/21/2023
-# The MedTech service and Azure Machine Learning Service
+# MedTech service and Azure Machine Learning Service
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
healthcare-apis Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-power-bi.md
Previously updated : 06/19/2023 Last updated : 07/21/2023
-# The MedTech service and Microsoft Power BI
+# MedTech service and Microsoft Power BI
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
healthcare-apis Concepts Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-teams.md
Title: The MedTech service and Teams notifications - Azure Health Data Services
+ Title: MedTech service and Teams notifications - Azure Health Data Services
description: Learn how to use the MedTech service and Teams notifications Previously updated : 06/19/2023 Last updated : 07/21/2023
-# The MedTech service and Microsoft Teams notifications
+# MedTech service and Microsoft Teams notifications
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
healthcare-apis Troubleshoot Errors Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-deployment.md
Previously updated : 04/28/2023 Last updated : 07/21/2023
This article provides troubleshooting steps and fixes for MedTech service deploy
> [!TIP] > Having access to MedTech service metrics and logs is essential for troubleshooting and assessing the overall health and performance of your MedTech service. Check out these MedTech service articles to learn how to enable, configure, and use these MedTech service monitoring features: >
-> [How to use the MedTech service monitoring tab](how-to-use-monitoring-tab.md)
+> [How to use the MedTech service monitoring and health checks tabs](how-to-use-monitoring-and-health-checks-tabs.md)
> > [How to configure the MedTech service metrics](how-to-configure-metrics.md) >
internet-peering Peering Registered Prefix Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/peering-registered-prefix-requirements.md
+
+ Title: Peering registered prefix requirements
+
+description: Learn about the technical requirements to register your prefixes for Azure Peering Service.
++++ Last updated : 07/23/2023++
+# Peering registered prefix requirements
+
+Ensure the prerequisites in this document are met before you activate your prefixes for Azure Peering Service.
+
+## Technical requirements
+
+For a registered prefix to be validated after creation, the following checks must pass:
+
+* The prefix can't be in a private range
+* The origin ASN must be registered in a major routing registry
+* The prefix must be announced from all peering sessions
+* Routes must be advertised with the Peering Service community string **8075:8007**
+* AS paths in your routes can't exceed a path length of 3, and can't contain private ASNs or AS prepending
+
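Prefix registration can also be scripted instead of done in the portal. The sketch below is hedged: the `peering` Azure CLI extension and the `registered-prefix` parameter names are assumptions to confirm against `az peering registered-prefix create --help`, and the resource names and prefix are placeholders.

```
# Install the peering extension and register a prefix under an existing peering.
az extension add --name peering

az peering registered-prefix create \
    --resource-group rg-peering \
    --peering-name contoso-peering \
    --registered-prefix-name contoso-prefix-1 \
    --prefix 203.0.113.0/24

# Show the registered prefix; inspect the output for its validation state
# and any error message reported by the checks above.
az peering registered-prefix show \
    --resource-group rg-peering \
    --peering-name contoso-peering \
    --registered-prefix-name contoso-prefix-1
```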
+## Troubleshooting
+
+The validation state of a peering registered prefix can be seen in the Azure portal.
++
+Prefixes can only be registered when all validation steps have passed. The following sections list possible validation errors, with troubleshooting steps to resolve them.
+
+### Prefix not received from IP
+
+To validate a prefix during registration, the prefix must be advertised on every session belonging to the prefix's parent peering. This message indicates that Microsoft isn't receiving prefix advertisement from one or more of the sessions. All IPs listed in this message are missing advertisement. Contact your networking team, and confirm that the prefixes are being advertised on all sessions.
+
+If you're advertising your prefix on all sessions and you're still seeing this validation failure message, contact peeringservice@microsoft.com with your Azure subscription and prefix to get help.
+
+### Advertisement missing MAPS community
+
+A requirement for registered prefix validation is that prefixes are advertised with the Peering Service community string **8075:8007**. This message indicates that Microsoft is receiving the prefix advertisement, but the Peering Service community string is missing. Add the Peering Service community string to the BGP community attribute when advertising routes for Peering Service prefixes. After that, the community requirement is satisfied and validation continues.
+
+If you have any issues or questions, contact peeringservice@microsoft.com with your Azure subscription and prefix to get help.
+
+### AS path isn't correct
+
+For registered prefix validation, the AS path of the advertised routes must satisfy technical requirements. Namely, the AS path of the routes must not exceed a path length of 3, and the AS path can't contain any private ASNs. This message indicates that Microsoft is receiving routes for the prefix, but the AS path violates one of those two requirements. Adjust the prefix advertisement on all sessions, and ensure that the AS path length doesn't exceed 3, and there are no private ASNs in the path. After that, the AS path requirements will be satisfied and validation will continue.
+
+### Internal server error
+
+Contact peeringservice@microsoft.com with your Azure subscription and prefix to get help.
+
+## Next steps
+
+* [Internet peering for Peering Service walkthrough](walkthrough-peering-service-all.md)
+* [Internet peering for voice services walkthrough](walkthrough-communications-services-partner.md)
+* [Peering Service customer walkthrough](../peering-service/customer-walkthrough.md)
internet-peering Walkthrough Communications Services Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md
Title: Internet peering for Communications Services walkthrough
-description: Learn about Internet peering for Communications Services, its requirements, the steps to establish direct interconnect, and how to register and activate a prefix.
-
+ Title: Internet peering for Peering Service Voice Services walkthrough
+description: Learn about Internet peering for Peering Service Voice Services, its requirements, the steps to establish direct interconnect, and how to register and activate a prefix.
+ Previously updated : 06/15/2023-- Last updated : 07/23/2023
-# Internet peering for Communications Services walkthrough
+# Internet peering for Peering Service Voice Services walkthrough
-In this article, you learn steps to establish a Direct interconnect between a Communications Services Provider and Microsoft.
+In this article, you learn steps to establish a Peering Service interconnect between a voice services provider and Microsoft.
-**Communications Services Providers** are the organizations that offer communication services (messaging, conferencing, and other communications services.) and want to integrate their communications services infrastructure (SBC, SIP gateways, and other infrastructure device) with Azure Communication Services and Microsoft Teams.
+**Voice Services Providers** are the organizations that offer communication services (messaging, conferencing, and other communications services) and want to integrate their communications services infrastructure (SBC, SIP gateways, and other infrastructure devices) with Azure Communication Services and Microsoft Teams.
-Internet peering supports Communications Services Providers to establish direct interconnect with Microsoft at any of its edge sites (POP locations). The list of all the public edges sites is available in [PeeringDB](https://www.peeringdb.com/net/694).
+Internet peering enables voice services providers to establish direct interconnect with Microsoft at any of its edge sites (POP locations). The list of all the public edge sites is available in [PeeringDB](https://www.peeringdb.com/net/694).
Internet peering provides highly reliable and QoS (Quality of Service) enabled interconnect for Communications Services to ensure high quality and performance centric services.
-## Technical Requirements
+The following flowchart summarizes the process to onboard to Peering Service Voice Services:
+
-To establish direct interconnect for Communication Services, follow these requirements:
+## Technical requirements
+
+To establish direct interconnect for Peering Service Voice Services, follow these requirements:
- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.-- The peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.-- The Peer MUST have geo redundancy in place to ensure failover in the event of site failures in region/metro.-- The Peer MUST has the BGP sessions as Active-Active to ensure high availability and faster convergence and shouldn't be provisioned as Primary and Backup.-- The Peer MUST maintain a 1:1 ratio for Peer peering routers to peering circuits and no rate limiting is applied.-- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's communications service endpoints (for example, SBC). -- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet. -- The Peer MUST run BGP over Bidirectional Forwarding Detection (BFD) to facilitate sub second route convergence.-- All communications infrastructure prefixes are registered in Azure portal and advertised with community string 8075:8007.-- The Peer MUST NOT terminate peering on a device running a stateful firewall.
+- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints (for example, SBC).
+- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet.
+- The Peer MUST run BGP over Bidirectional Forwarding Detection (BFD) to facilitate subsecond route convergence.
+- The Peer MUST NOT terminate the peering on a device running a stateful firewall.
+- The Peer CANNOT have two local connections configured on the same router, as diversity is required.
+- The Peer CANNOT apply rate limiting to their connection.
+- The Peer CANNOT configure a local redundant connection as a backup connection. Backup connections must be in a different location than primary connections.
+- Primary, backup, and redundant sessions all must have the same bandwidth.
+- It's recommended to create Peering Service peerings in multiple locations so geo-redundancy can be achieved.
+- All infrastructure prefixes are registered in the Azure portal and advertised with the community string 8075:8007.
- Microsoft configures all the interconnect links as LAG (link bundles) by default, so the Peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links.
-## Establish Direct Interconnect with Microsoft for Communications Services
+## Establish Direct Interconnect with Microsoft for Peering Service Voice Services
+
+To establish a direct interconnect with Microsoft using Internet peering, follow these steps:
+
+### 1. Associate your public ASN with your Azure subscription
-To establish a direct interconnect with Microsoft using Internet peering, follow the following steps:
+See [Associate peer ASN to Azure subscription using the Azure portal](./howto-subscription-association-portal.md) to learn how to associate your public ASN with your Azure subscription. If the ASN has already been associated, proceed to the next step.
-1. **Associate Peer public ASN to the Azure Subscription:** [Associate peer ASN to Azure subscription using the Azure portal](./howto-subscription-association-portal.md). If the Peer has already associated a public ASN to Azure subscription, go to the next step.
+### 2. Create a Peering Service Voice Services peering
-2. **Create Direct peering connection for Peering Service:** [Create a Direct peering using the portal](./howto-direct-portal.md), and make sure you meet high-availability.requirement. In the **Configuration** tab of **Create a Peering**, select the following options:
+1. To create a peering resource for Peering Service Voice Services, search for **Peerings** in the Azure portal.
- | Setting | Value |
- | | |
- | Peering type | Select **Direct**. |
- | Microsoft network | Select **8075 (with Voice)**. |
- | SKU | Select **Premium Free**. |
+ :::image type="content" source="./media/walkthrough-communications-services-partner/internet-peering-portal-search.png" alt-text="Screenshot shows how to search for Peering resources in the Azure portal.":::
- In **Direct Peering Connection**, select following options:
+1. Select **+ Create**.
- | Setting | Value |
- | | |
- | Session Address provider | Select **Microsoft**. |
- | Use for Peering Services | Select **Enabled**. |
+ :::image type="content" source="./media/walkthrough-communications-services-partner/create-peering.png" alt-text="Screenshot shows how to create a Peering resource in the Azure portal.":::
+
+1. In the **Basics** tab, enter or select your Azure subscription, resource group, and the name and ASN of the peering:
+
+ :::image type="content" source="./media/walkthrough-communications-services-partner/create-peering-basics.png" alt-text="Screenshot of the Basics tab of creating a peering in the Azure portal.":::
> [!NOTE]
- > When activating Peering Service, ignore the following message: *Do not enable unless you have contacted peering@microsoft.com about becoming a MAPS provider.*
+ > The details you select CAN'T be changed after the peering is created. Confirm they're correct before creating the peering.
+
+1. In the **Configuration** tab, you MUST choose the following required configurations:
+
+ - Peering type: **Direct**.
+ - Microsoft network: **AS8075 (with Voice)**.
+ - SKU: **Premium Free**.
+
+1. Select your **Metro**, then select **Create new** to add a connection to your peering.
+
+ :::image type="content" source="./media/walkthrough-communications-services-partner/create-peering-configuration.png" alt-text="Screenshot of the Configuration tab of creating a peering in the Azure portal.":::
+
+1. In **Direct Peering Connection**, enter or select your peering facility details then select **Save**.
+
+ :::image type="content" source="./media/walkthrough-communications-services-partner/direct-peering-connection.png" alt-text="Screenshot of creating a direct peering connection.":::
+
+ Peering connections configured for Peering Service Voice Services MUST have **Microsoft** as the Session Address Provider, and **Use for Peering Service** enabled. These options are chosen for you automatically. Microsoft must be the IP provider for Peering Service Voice Services; you can't provide your own IPs.
+
+1. In **Create a peering**, select **Create new** again to add a second connection to your peering. The peering must have at least two connections; local redundancy is a requirement for Peering Service, and a peering with two sessions satisfies it.
+
+ :::image type="content" source="./media/walkthrough-communications-services-partner/two-connections-peering.png" alt-text="Screenshot of the Configuration tab after two connections.":::
+
+1. Select **Review + create**. Review the summary and select **Create** after the validation passes.
+
+Allow time for the resource to finish deploying. When deployment is successful, your peering is created and provisioning begins.
+
+## Configure optimized routing for your prefixes
+
+To get optimized routing for your prefixes with your Peering Service Voice Services interconnects, follow these steps:
-1. **Register your prefixes for Optimized Routing:** For optimized routing for your Communication services infrastructure prefixes, register all your prefixes with your peering interconnects.
+### 1. Register your prefixes
- Ensure that the registered prefixes are announced over the direct interconnects established in that location. If the same prefix is announced in multiple peering locations, it's sufficient to register them with just one of the peerings in order to retrieve the unique prefix keys after validation.
+For optimized routing for voice services infrastructure prefixes, you must register them.
+
+> [!NOTE]
+> The connection state of your peering connections must be **Active** before registering any prefixes.
+
+Ensure that the registered prefixes are announced over the direct interconnects established with your peering. If the same prefix is announced in multiple peering locations, you do NOT have to register the prefix in every location; a prefix can only be registered with a single peering. The unique prefix key you receive after validation is used for the prefix in all locations, not just the location of the peering it was registered under.
+
+1. To begin registration, go to your peering in the Azure portal and select **Registered prefixes**.
+
+ :::image type="content" source="./media/walkthrough-communications-services-partner/registered-asn.png" alt-text="Screenshot shows how to go to Registered ASNs from the Peering Overview page in the Azure portal.":::
+
+1. Select **Add registered prefix**.
+
+ :::image type="content" source="./media/walkthrough-communications-services-partner/add-registered-prefix.png" alt-text="Screenshot of Registered prefix page in the Azure portal.":::
> [!NOTE]
- > The Connection State of your peering connections must be **Active** before registering any prefixes.
+ > If the **Add registered prefix** button is disabled, your peering doesn't have at least one **Active** connection. Wait until at least one connection is **Active** before registering your prefix.
+
+1. Configure your prefix by giving it a name and the IPv4 prefix string, and then select **Save**.
-## Register the prefix
+ :::image type="content" source="./media/walkthrough-communications-services-partner/register-prefix-configure.png" alt-text="Screenshot of registering a prefix in the Azure portal.":::
-1. If you're an Operator Connect Partner, you would be able to see the "Register Prefix" tab on the left panel of your peering resource page.
+After prefix creation, you can see the generated Peering Service prefix key when viewing the Registered ASN resource:
- :::image type="content" source="./media/walkthrough-communications-services-partner/registered-prefixes-under-direct-peering.png" alt-text="Screenshot of registered prefixes tab under a peering enabled for Peering Service." :::
-2. Register prefixes to access the activation keys.
+After you create a registered prefix, it will be queued for validation. The validation state of the prefix can be found in the Registered Prefixes page:
- :::image type="content" source="./media/walkthrough-communications-services-partner/registered-prefixes-blade.png" alt-text="Screenshot of registered prefixes blade with a list of prefixes and keys." :::
- :::image type="content" source="./media/walkthrough-communications-services-partner/registered-prefix-example.png" alt-text="Screenshot showing a sample prefix being registered." :::
+For a registered prefix to become validated, the following checks must pass:
- :::image type="content" source="./media/walkthrough-communications-services-partner/prefix-after-registration.png" alt-text="Screenshot of registered prefixes blade showing a new prefix added." :::
+* The prefix can't be in a private range
+* The origin ASN must be registered in a major routing registry
+* All connections in the parent peering must advertise routes for the prefix
+* Routes must be advertised with the Peering Service community string 8075:8007
+* AS paths in your routes can't exceed a path length of 3, and can't contain private ASNs or AS prepending
-## Activate the prefix
+For more information on registered prefix requirements and how to troubleshoot validation errors, see [Peering Registered Prefix Requirements](./peering-registered-prefix-requirements.md).
-In the previous section, you registered the prefix and generated the prefix key. The prefix registration DOES NOT activate the prefix for optimized routing (and doesn't accept </24 prefixes). Prefix activation, alignment to the right OC partner, and appropriate interconnect location are requirements for optimized routing (to ensure cold potato routing).
+### 2. Activate your prefixes
-In this section, you activate the prefix:
+In the previous section, you registered prefixes and generated prefix keys. Prefix registration DOES NOT activate the prefix for optimized routing (and doesn't accept </24 prefixes). Prefix activation and appropriate interconnect location are requirements for optimized routing (to ensure cold potato routing).
-1. In the search box at the top of the portal, enter *peering service*. Select **Peering Services** in the search results.
+1. To begin activating your prefixes, in the search box at the top of the portal, enter *peering service*. Select **Peering Services** in the search results.
:::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-portal-search.png" alt-text="Screenshot shows how to search for Peering Service in the Azure portal.":::
-1. Select **+ Create** to create a new Peering Service connection.
+1. Select **Create** to create a new Peering Service connection.
:::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-list.png" alt-text="Screenshot shows the list of existing Peering Service connections in the Azure portal.":::
In this section, you activate the prefix:
:::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-basics.png" alt-text="Screenshot shows the Basics tab of creating a Peering Service connection in the Azure portal.":::
-1. In the **Configuration** tab, provide details on the location, provider and primary and backup interconnect locations. If the backup location is set to **None**, the traffic will fail over to the internet.
+1. In the **Configuration** tab, choose your country, state/province, your provider name, the primary peering location, and optionally the backup peering location.
+
+ > [!CAUTION]
+ > If you choose **None** as the provider backup peering location when creating a Peering Service, you will not have geo-redundancy.
- > [!NOTE]
- > - If you're an Operator Connect partner, your organization is available as a **Provider**.
- > - The prefix key should be the same as the one obtained in the [Register the prefix](#register-the-prefix) step.
+1. In the **Prefixes** section, create prefixes corresponding to the prefixes you registered in the previous step. Enter the name of the prefix, the prefix string, and the prefix key you obtained when you registered the prefix. You don't have to create all of your Peering Service prefixes when creating a Peering Service; you can add them later.
+
+ > [!NOTE]
+ > Ensure that the prefix key you enter when creating a peering service prefix matches the prefix key generated when you registered that prefix.
- :::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-configuration.png" alt-text="Screenshot shows the Configuration tab of creating a Peering Service connection in the Azure portal.":::
+ :::image type="content" source="./media/walkthrough-communications-services-partner/configure-peering-service-prefix.png" alt-text="Screenshot of th the Configuration tab of creating a Peering Service connection in the Azure portal.":::
1. Select **Review + create**. 1. Review the settings, and then select **Create**.
+After you create a Peering Service prefix, it will be queued for validation. The validation state of the prefix can be found in the Peering Service Prefixes page.
++
+For a peering service prefix to become validated, the following checks MUST pass:
+
+* The prefix can't be in a private range
+* The origin ASN must be registered in a major routing registry
+* The prefix must be registered, and the prefix key in the peering service prefix must match the prefix key of the corresponding registered prefix
+* All primary and backup sessions (if configured) must advertise routes for the prefix
+* Routes must be advertised with the Peering Service community string 8075:8007
+* AS paths in your routes can't exceed a path length of 3, and can't contain private ASNs or AS prepending
+
+For more information on Peering Service prefix requirements and how to troubleshoot validation errors, see [Peering Service Prefix Requirements](../peering-service/peering-service-prefix-requirements.md).
+
+After a prefix passes validation, activation for that prefix is complete.
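The activation steps above use the portal, but the Peering Service prefix (with its prefix key) can also be added from the command line. This is only a hedged sketch: the `az peering-service prefix create` command and its parameter names are assumptions to verify against the peering CLI extension's help, and all names and values are placeholders.

```
# Add a prefix to an existing Peering Service, supplying the prefix key that was
# generated when the prefix was registered. All names and values are placeholders.
az peering-service prefix create \
    --resource-group rg-peering \
    --peering-service-name contoso-peering-service \
    --prefix-name contoso-prefix-1 \
    --prefix 203.0.113.0/24 \
    --peering-service-prefix-key "<prefix-key-from-registration>"
```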
+ ## Frequently asked questions (FAQ): **Q.** When will my BGP peer come up?
In this section, you activate the prefix:
**A.** Our automated process allocates addresses and sends the information via email after the port is configured on our side.
-**Q.** I have smaller subnets (</24) for my Communications services. Can I get the smaller subnets also routed?
+**Q.** I have smaller subnets (</24) for my communications services. Can I get the smaller subnets also routed?
**A.** Yes, Microsoft Azure Peering service supports smaller prefix routing also. Ensure that you're registering the smaller prefixes for routing and the same are announced over the interconnects. **Q.** What Microsoft routes will we receive over these interconnects?
-**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects. This ensures not only Communications but other cloud services are accessible from the same interconnect.
+**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects to ensure that not only voice but also other cloud services are accessible from the same interconnect.
+
+**Q.** My peering registered prefix has failed validation. How should I proceed?
+
+**A.** Review the [Peering Registered Prefix Requirements](./peering-registered-prefix-requirements.md) and follow the troubleshooting steps described.
+
+**Q.** My peering service prefix has failed validation. How should I proceed?
+
+**A.** Review the [Peering Service Prefix Requirements](../peering-service/peering-service-prefix-requirements.md) and follow the troubleshooting steps described.
**Q.** Are there any AS path constraints?
internet-peering Walkthrough Exchange Route Server Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-exchange-route-server-partner.md
+
+ Title: Internet peering for Azure Peering Service Exchange with Route Server partner walkthrough
+description: Learn about Internet peering for Peering Service Exchange with Route Server, its requirements, the steps to establish direct interconnect, and how to register your ASN.
++++ Last updated : 07/23/2023++
+# Internet peering for Peering Service Exchange with Route Server partner walkthrough
+
+In this article, you learn how to establish an Exchange with Route Server interconnect enabled for Azure Peering Service with Microsoft.
+
+Internet peering enables internet exchange partners (IXPs) to establish a direct interconnect with Microsoft at any of its edge sites (POP locations). The list of all the public edge sites is available in [PeeringDB](https://www.peeringdb.com/net/694).
+
+Internet peering provides a highly reliable and QoS (Quality of Service) enabled interconnect for IXPs to ensure high quality and performance-centric services.
+
+The following flowchart summarizes the process to onboard to Peering Service Exchange with Route Server:
++
+## Technical requirements
+
+To establish a Peering Service Exchange with Route Server peering, meet the following requirements:
+
+- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.
+- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints (for example, SBC).
+- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet.
+- The Peer MUST NOT terminate the peering on a device running a stateful firewall.
+- The Peer CANNOT have two local connections configured on the same router, as diversity is required.
+- The Peer CANNOT apply rate limiting to their connection.
+- The Peer CANNOT configure a local redundant connection as a backup connection. Backup connections must be in a different location than primary connections.
+- Primary, backup, and redundant sessions all must have the same bandwidth.
+- It's recommended to create Peering Service peerings in multiple locations so geo-redundancy can be achieved.
+- All origin ASNs are registered in the Azure portal.
+- Microsoft configures all the interconnect links as LAG (link bundles) by default, so the Peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links.
+
+## Establish Direct Interconnect with Microsoft for Peering Service Exchange with Route Server
+
+To establish a Peering Service Exchange with Route Server interconnect with Microsoft, follow these steps:
+
+### 1. Associate your public ASN with your Azure subscription.
+
+See [Associate peer ASN to Azure subscription using the Azure portal](howto-subscription-association-portal.md) to learn how to associate your public ASN with your Azure subscription. If the ASN has already been associated, proceed to the next step.
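
If you want to check this from a terminal, the Azure CLI `peering` extension can list the peer ASNs already associated with your subscription. This is a minimal sketch under the assumption that the extension is installed; the output columns may differ by extension version.

```azurecli
# Minimal sketch (assumption: the 'peering' Azure CLI extension is installed).
az extension add --name peering

# List the peer ASNs associated with the current subscription and their validation
# state, to confirm whether your public ASN still needs to be associated.
az peering asn list --output table
```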
+
+### 2. Create a Peering Service Exchange with Route Server peering.
+
+1. To create a Peering Service Exchange with Route Server peering resource, search for **Peerings** in the Azure portal.
+
+ :::image type="content" source="./media/walkthrough-exchange-route-server-partner/internet-peering-portal-search.png" alt-text="Screenshot shows how to search for Peering resources in the Azure portal.":::
+
+1. Select **+ Create**.
+
+ :::image type="content" source="./media/walkthrough-exchange-route-server-partner/create-peering.png" alt-text="Screenshot shows how to create a Peering resource in the Azure portal.":::
+
+1. In the **Basics** tab, enter or select your Azure subscription, resource group, name, and ASN of the peering:
+
+ :::image type="content" source="./media/walkthrough-exchange-route-server-partner/create-peering-basics.png" alt-text="Screenshot of the Basics tab of creating a peering in the Azure portal.":::
+
+ > [!NOTE]
+ > The details you select CAN'T be changed after the peering is created. Confirm they're correct before creating the peering.
+
+1. In the **Configuration** tab, you MUST choose the following required configurations to create a peering for Peering Service Exchange with Route Server:
+
+ - Peering type: **Direct**.
+ - Microsoft network: **AS8075 (with exchange route server)**.
+ - SKU: **Premium Free**.
+
+1. Select your **Metro**, then select **Create new** to add a connection to your peering.
+
+ :::image type="content" source="./media/walkthrough-exchange-route-server-partner/create-peering-configuration.png" alt-text="Screenshot of the Configuration tab of creating a peering in the Azure portal.":::
+
+1. In **Direct Peering Connection**, enter or select your peering facility details then select **Save**.
+
+ :::image type="content" source="./media/walkthrough-exchange-route-server-partner/direct-peering-connection.png" alt-text="Screenshot of creating a direct peering connection.":::
+
+ Peering connections for Peering Service Exchange with Route Server MUST have **Peer** as the Session Address Provider, and **Use for Peering Service** enabled. These options are chosen for you automatically.
+
+ Before finalizing your Peering, make sure the peering has at least one connection.
+
+ > [!NOTE]
+ > Exchange with Route Server peerings are configured with a BGP mesh. Provide one peer IP address and one Microsoft IP address, and the BGP mesh is handled automatically. Because of this, the number of connections displayed in the peering doesn't equal the number of sessions configured.
+
+1. Select **Review + create**. Review the summary and select **Create** after the validation passes.
+
+ Allow time for the resource to finish deploying. When deployment is successful, your peering is created and provisioning begins.
+
+## Configure optimized routing for your prefixes
+
+To get optimized routing for your prefixes with your Peering Service Exchange with Route Server interconnects, follow these steps:
+
+### 1. Register a customer ASN
+
+Before prefixes can be optimized for a customer, their ASN must be registered.
+
+> [!NOTE]
+> The Connection State of your peering connections must be **Active** before registering an ASN.
+
+1. Open your Peering Service Exchange with Route Server peering in the Azure portal, and select **Registered ASNs**.
+
+ :::image type="content" source="./media/walkthrough-exchange-route-server-partner/registered-asn.png" alt-text="Screenshot shows how to go to Registered ASNs from the Peering Overview page in the Azure portal.":::
+
+1. Create a Registered ASN by entering the customer's Autonomous System Number (ASN) and a name.
+
+ :::image type="content" source="./media/walkthrough-exchange-route-server-partner/register-new-asn.png" alt-text="Screenshot shows how to register an ASN in the Azure portal.":::
+
+ After your ASN is registered, a unique Peering Service prefix key used for activation is generated.
+
+> [!NOTE]
+> Each customer ASN only needs to be registered with a single peering. The same prefix key can be used for all prefixes activated by that customer, regardless of location, so there's no need to register the ASN under more than one peering. Duplicate registration of an ASN isn't allowed.
+
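For partners who automate onboarding, a registered ASN can also be created with the Azure CLI `peering` extension. The sketch below is illustrative only: the resource names are placeholders, 64496 is a documentation ASN standing in for the customer's ASN, the exact flags are assumptions that may vary by extension version, and the property name used to read the prefix key is also an assumption.

```azurecli
# Minimal sketch (assumptions: the 'peering' CLI extension is installed, names are
# placeholders, and 64496 is a documentation ASN standing in for the customer's ASN).
az peering registered-asn create \
  --resource-group "myResourceGroup" \
  --peering-name "myRouteServerPeering" \
  --registered-asn-name "contoso-customer" \
  --asn 64496

# Read back the registered ASN to retrieve the generated Peering Service prefix key
# (property name is an assumption; inspect the full JSON output if it differs).
az peering registered-asn show \
  --resource-group "myResourceGroup" \
  --peering-name "myRouteServerPeering" \
  --registered-asn-name "contoso-customer" \
  --query "peeringServicePrefixKey"
```
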
+### 2. Provide peering service prefix keys to customers for activation
+
+When your customers onboard to Peering Service, customers must follow the steps in [Peering Service customer walkthrough](../peering-service/customer-walkthrough.md) to activate prefixes using the Peering Service prefix key obtained during ASN registration. Provide this key to customers before they activate, as this key is used for all prefixes during activation.
+
+## Frequently asked questions (FAQ):
+
+**Q.** When will my BGP peer come up?
+
+**A.** After the LAG comes up, our automated process configures BGP on the Microsoft side. The Peer must configure BGP on their side.
+
+**Q.** When will peering IP addresses be allocated and displayed in the Azure portal?
+
+**A.** Our automated process allocates addresses and sends the information via email after the port is configured on our side.
+
+**Q.** I have smaller subnets (</24) for my voice services. Can I get the smaller subnets also routed?
+
+**A.** Yes, Peering Service supports smaller prefix routing also. Ensure that you're activating the smaller prefixes for routing and that they're announced over the interconnects.
+
+**Q.** What Microsoft routes will we receive over these interconnects?
+
+**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects to ensure not only voice but other cloud services are accessible from the same interconnect.
+
+**Q.** Are there any AS path constraints?
+
+**A.** Yes, a private ASN can't be in the AS path. For activated prefixes smaller than /24, the AS path must be less than four.
+
+**Q.** I need to set the prefix limit, how many routes Microsoft would be announcing?
+
+**A.** Microsoft announces roughly 280 prefixes on the internet, and this number may increase by 10-15% in the future. A safe limit of 400-500 is a good value to set as the "Max prefix count".
+
+**Q.** Will Microsoft re-advertise the Peer prefixes to the Internet?
+
+**A.** No.
+
+**Q.** Is there a fee for this service?
+
+**A.** No. However, the Peer is expected to carry the site cross-connect costs.
+
+**Q.** What is the minimum link speed for an interconnect?
+
+**A.** 10 Gbps.
+
+**Q.** Is the Peer bound to an SLA?
+
+**A.** Yes. Once utilization reaches 40%, a 45-60 day LAG augmentation process must begin.
+
+**Q.** How long does it take to complete the onboarding process?
+
+**A.** The time varies depending on the number and location of sites, and on whether the Peer is migrating existing private peerings or establishing new cabling. The carrier should plan for 3+ weeks.
+
+**Q.** How is progress communicated outside of the portal status?
+
+**A.** Automated emails are sent at varying milestones.
+
+**Q.** Can we use APIs for onboarding?
+
+**A.** Currently there's no API support, and configuration must be performed via the web portal.
internet-peering Walkthrough Peering Service All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-peering-service-all.md
Title: Peering Service partner walkthrough-
-description: Get started with Peering Service partner.
-
+ Title: Internet peering for Azure Peering Service partner walkthrough
+description: Learn about Internet peering for Azure Peering Service, its requirements, the steps to establish direct interconnect, and how to register a prefix.
+ Previously updated : 02/23/2023-- Last updated : 07/23/2023
-# Peering Service partner walkthrough
+# Internet peering for Azure Peering Service partner walkthrough
+
+In this article, you learn how to establish a Direct interconnect enabled for Azure Peering Service.
+
+Internet peering enables Peering Service providers to establish a direct interconnect with Microsoft at any of its edge sites (POP locations). The list of all the public edge sites is available in [PeeringDB](https://www.peeringdb.com/net/694).
+
+Internet peering provides a highly reliable and QoS (Quality of Service) enabled interconnect for Peering Services to ensure high quality and performance-centric services.
+
+The following flowchart summarizes the process to onboard to Peering Service:
++
+## Technical requirements
+
+To establish direct interconnect for Peering Service, meet the following requirements:
+
+- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.
+- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints.
+- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet.
+- The Peer MUST NOT terminate the peering on a device running a stateful firewall.
+- The Peer CANNOT have two local connections configured on the same router, as diversity is required.
+- The Peer CANNOT apply rate limiting to their connection.
+- The Peer CANNOT configure a local redundant connection as a backup connection. Backup connections must be in a different location than primary connections.
+- It's recommended to create Peering Service peerings in multiple locations so geo-redundancy can be achieved.
+- Primary, backup, and redundant sessions all must have the same bandwidth.
+- All infrastructure prefixes are registered in the Azure portal and advertised with community string 8075:8007.
+- Microsoft configures all the interconnect links as LAG (link bundles) by default, so the Peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links.
+
+## Establish Direct Interconnect for Peering Service
+
+To establish a Peering Service interconnect with Microsoft, follow these steps:
+
+### 1. Associate your public ASN with your Azure subscription
+
+See [Associate peer ASN to Azure subscription using the Azure portal](./howto-subscription-association-portal.md) to learn how to associate your public ASN with your Azure subscription. If the ASN has already been associated, proceed to the next step.
+
+### 2. Create a Peering Service peering
+
+1. To create a Peering Service peering resource, search for **Peerings** in the Azure portal.
+
+ :::image type="content" source="./media/walkthrough-peering-service-all/internet-peering-portal-search.png" alt-text="Screenshot shows how to search for Peering resources in the Azure portal.":::
+
+1. Select **+ Create**.
+
+ :::image type="content" source="./media/walkthrough-peering-service-all/create-peering.png" alt-text="Screenshot shows how to create a Peering resource in the Azure portal.":::
+
+1. In the **Basics** tab, enter or select your Azure subscription, resource group, name, and ASN of the peering:
+
+ :::image type="content" source="./media/walkthrough-peering-service-all/create-peering-basics.png" alt-text="Screenshot of the Basics tab of creating a peering in the Azure portal.":::
+
+ > [!NOTE]
+ > These details CANNOT be changed after the peering is created. Confirm they're correct before creating the peering.
+
+1. In the **Configuration** tab, you MUST choose the following required configurations:
+
+ - Peering type: **Direct**.
+ - Microsoft network: **AS8075**.
+ - SKU: **Premium Free**.
+
+1. Select your **Metro**, then select **Create new** to add a connection to your peering.
+
+ :::image type="content" source="./media/walkthrough-peering-service-all/create-peering-configuration.png" alt-text="Screenshot of the Configuration tab of creating a peering in the Azure portal.":::
+
+1. In **Direct Peering Connection**, enter or select your peering facility details then select **Save**.
+
+ :::image type="content" source="./media/walkthrough-peering-service-all/direct-peering-connection.png" alt-text="Screenshot of creating a direct peering connection.":::
+
+ Peering Service peerings MUST have **Use for Peering Service** enabled.
+
+Before finalizing your Peering, make sure the peering has at least two connections. Local redundancy is a requirement for Peering Service, and creating a Peering with two sessions achieves this requirement.
+
+1. Select **Review + create**. Review the summary and select **Create** after the validation passes.
+
+Allow time for the resource to finish deploying. When deployment is successful, your peering is created and provisioning begins.
+
+## Configure optimized routing for your prefixes
+
+To get optimized routing for your prefixes with your Peering Service interconnects, follow these steps:
+
+### 1. Register your prefixes
+
+For optimized routing for infrastructure prefixes, you must register them.
+
+> [!NOTE]
+> The connection state of your peering connections must be **Active** before registering any prefixes.
+
+Ensure that the registered prefixes are announced over the direct interconnects established with your peering. If the same prefix is announced in multiple peering locations, you do NOT have to register the prefix in every single location. A prefix can only be registered with a single peering. The unique prefix key you receive after validation is used for the prefix even in locations other than the peering it was registered under.
+
+1. To begin registration, go to your peering in the Azure portal and select **Registered prefixes**.
+
+ :::image type="content" source="./media/walkthrough-peering-service-all/registered-asn.png" alt-text="Screenshot shows how to go to Registered ASNs from the Peering Overview page in the Azure portal.":::
+
+1. Select **Add registered prefix**.
+
+ :::image type="content" source="./media/walkthrough-peering-service-all/add-registered-prefix.png" alt-text="Screenshot of Registered prefix page in the Azure portal.":::
+
+ > [!NOTE]
+ > If the **Add registered prefix** button is disabled, then your peering doesn't have at least one **Active** connection. Wait until at least one connection is **Active** before registering your prefix.
+
+1. Configure your prefix by giving it a name and the IPv4 prefix string, and then select **Save**.
+
+ :::image type="content" source="./media/walkthrough-peering-service-all/register-prefix-configure.png" alt-text="Screenshot of registering a prefix in the Azure portal.":::
+
+After prefix creation, you can see the generated peering service prefix key when viewing the Registered ASN resource:
++
+After you create a registered prefix, it's queued for validation. You can check the validation state of the prefix on the Registered Prefixes page:
++
+For a registered prefix to become validated, the following checks must pass:
+
+* The prefix can't be in a private range
+* The origin ASN must be registered in a major routing registry
+* All connections in the parent peering must advertise routes for the prefix
+* Routes must be advertised with the MAPS community string 8075:8007
+* AS paths in your routes can't exceed a path length of 3, and can't contain private ASNs or AS prepending
+
+For more information on registered prefix requirements and how to troubleshoot validation errors, see [Peering Registered Prefix Requirements](./peering-registered-prefix-requirements.md).
+
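If you register prefixes as part of an automated workflow, the Azure CLI `peering` extension offers equivalent operations. Treat the following as a hedged sketch: the resource names and the documentation prefix 192.0.2.0/24 are placeholders, and the exact command names and flags are assumptions that can differ between extension versions.

```azurecli
# Minimal sketch (assumptions: the 'peering' CLI extension is installed and the
# names and prefix below are placeholders for your own infrastructure prefix).
az peering registered-prefix create \
  --resource-group "myResourceGroup" \
  --peering-name "myPeering" \
  --registered-prefix-name "myInfrastructurePrefix" \
  --prefix "192.0.2.0/24"

# Check the validation state and the generated Peering Service prefix key.
az peering registered-prefix show \
  --resource-group "myResourceGroup" \
  --peering-name "myPeering" \
  --registered-prefix-name "myInfrastructurePrefix"
```
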
+### 2. Provide peering service prefix keys to customers for activation
+
+When your customers onboard to Peering Service, customers must follow the steps in [Peering Service customer walkthrough](../peering-service/customer-walkthrough.md) to activate prefixes using the Peering Service prefix key obtained during prefix registration. Provide this key to customers before they activate, as this key is used for all prefixes during activation.
+
+## Frequently asked questions (FAQ):
+
+**Q.** When will my BGP peer come up?
+
+**A.** After the LAG comes up, our automated process configures BGP on the Microsoft side. The Peer must configure BGP on their side.
+
+**Q.** When will peering IP addresses be allocated and displayed in the Azure portal?
+
+**A.** Our automated process allocates addresses and sends the information via email after the port is configured on our side.
+
+**Q.** I have smaller subnets (</24) for my services. Can I get the smaller subnets also routed?
+
+**A.** Yes, Microsoft Azure Peering Service supports smaller prefix routing also. Ensure that you're registering the smaller prefixes for routing and that they're announced over the interconnects.
+
+**Q.** What Microsoft routes will we receive over these interconnects?
+
+**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects to ensure that other cloud services are accessible from the same interconnect.
+
+**Q.** My peering registered prefix has failed validation. How should I proceed?
+
+**A.** Review the [Peering Registered Prefix Requirements](./peering-registered-prefix-requirements.md) and follow the troubleshooting steps described.
+
+**Q.** Are there any AS path constraints?
+
+**A.** Yes, a private ASN can't be in the AS path. For registered prefixes smaller than /24, the AS path must be less than four.
+
+**Q.** I need to set the prefix limit, how many routes Microsoft would be announcing?
+
+**A.** Microsoft announces roughly 280 prefixes on the internet, and this number may increase by 10-15% in the future. A safe limit of 400-500 is a good value to set as the "Max prefix count".
+
+**Q.** Will Microsoft re-advertise the Peer prefixes to the Internet?
+
+**A.** No.
+
+**Q.** Is there a fee for this service?
+
+**A.** No. However, the Peer is expected to carry the site cross-connect costs.
+
+**Q.** What is the minimum link speed for an interconnect?
-This article explains the steps a provider needs to follow to enable a Direct peering for Peering Service.
+**A.** 10 Gbps.
-## Create Direct peering connection for Peering Service
+**Q.** Is the Peer bound to an SLA?
-Service Providers can expand their geographical reach by creating a new Direct peering that support Peering Service as follows:
+**A.** Yes. Once utilization reaches 40%, a 45-60 day LAG augmentation process must begin.
-1. Become a Peering Service partner.
-1. Follow the instructions to [Create or modify a Direct peering](howto-direct-portal.md). Ensure it meets high-availability requirement.
-1. Follow the steps to [Enable Peering Service on a Direct peering using the portal](howto-peering-service-portal.md).
+**Q.** How long does it take to complete the onboarding process?
-## Use legacy Direct peering connection for Peering Service
+**A.** The time varies depending on the number and location of sites, and on whether the Peer is migrating existing private peerings or establishing new cabling. The carrier should plan for 3+ weeks.
-If you have a legacy Direct peering that you want to use to support Peering Service:
+**Q.** How is progress communicated outside of the portal status?
-1. Become a Peering Service partner.
-1. Follow the instructions to [Convert a legacy Direct peering to Azure resource](howto-legacy-direct-portal.md). If necessary, order more circuits to meet high-availability requirement.
-1. Follow the steps to [Enable Peering Service on a Direct peering](howto-peering-service-portal.md).
+**A.** Automated emails are sent at varying milestones.
-## Next steps
+**Q.** Can we use APIs for onboarding?
-* Learn about Microsoft's [peering policy](policy.md).
-* To learn how to set up Direct peering with Microsoft, see [Direct peering walkthrough](walkthrough-direct-all.md).
-* To learn how to set up Exchange peering with Microsoft, see [Exchange peering walkthrough](walkthrough-exchange-all.md).
+**A.** Currently there's no API support, and configuration must be performed via the web portal.
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
In this section, you'll configure sample code to use the [Advanced Message Queui
## Confirm your device provisioning registration
-1. Go to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the left-hand menu or on the portal page, select **All resources**.
kinect-dk Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/support.md
For quick and reliable answers on your technical product questions from Microsof
Azure subscribers can create and manage support requests in the Azure portal. One-on-one development support for Body Tracking, Sensor SDK, Speech device SDK, or Azure Cognitive Services is available for Azure subscribers with an [Azure Support Plan](https://azure.microsoft.com/support/plans/) associated with their subscription.
- - Have an [Azure Support Plan](https://azure.microsoft.com/support/plans/) associated with your Azure subscription?ΓÇ»[Sign in to Azure portal](https://portal.azure.com/) to submit an incident.
+ - Have an [Azure Support Plan](https://azure.microsoft.com/support/plans/) associated with your Azure subscription? Sign in to the [Azure portal](https://portal.azure.com) to submit an incident.
- Need an Azure Subscription? [Azure subscription options](https://azure.microsoft.com/pricing/purchase-options/) will provide more information about different options. - Need a Support plan? [Select support plan](https://azure.microsoft.com/support/plans/)
load-balancer Quickstart Basic Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md
Get started with Azure Load Balancer by using the Azure portal to create an inte
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create the virtual network
load-balancer Quickstart Basic Public Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-portal.md
Get started with Azure Load Balancer by using the Azure portal to create a basic
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create the virtual network
To learn more about Azure Load Balancer, continue to:
> [!div class="nextstepaction"] > [What is Azure Load Balancer?](../load-balancer-overview.md)-
load-balancer Configure Vm Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-portal.md
In this article, you'll learn how to configure a Virtual Machine Scale Set with
## Sign in to the Azure portal
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
Get started with Azure Load Balancer by using the Azure portal to create an inte
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create the virtual network
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
Get started with Azure Load Balancer by using the Azure portal to create a publi
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create the virtual network
load-balancer Tutorial Deploy Cross Region Load Balancer Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-deploy-cross-region-load-balancer-template.md
To find more templates that are related to Azure Load Balancer, see [Azure Quick
## Deploy the template
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Enter and select **Deploy a custom template** in the search bar 1. In the **Custom deployment** page, enter **load-balancer-cross-region** in the **Quickstart template** textbox and select **quickstarts/microsoft.network/load-balancer-cross-region**.
load-balancer Tutorial Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-portal.md
In this tutorial, you learn how to:
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create virtual network
load-balancer Tutorial Load Balancer Standard Public Zonal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-standard-public-zonal-portal.md
For more information about availability zones and a standard load balancer, see
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create the virtual network
When no longer needed, delete the resource group, load balancer, and all related
Advance to the next article to learn how to load balance VMs across availability zones: > [!div class="nextstepaction"]
-> [Load balance VMs across availability zones](./quickstart-load-balancer-standard-public-portal.md)
+> [Load balance VMs across availability zones](./quickstart-load-balancer-standard-public-portal.md)
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
Learn [where and how to deploy your model to a compute target](./v1/how-to-deplo
## Azure Machine Learning compute (managed)
-A managed compute resource is created and managed by Azure Machine Learning. This compute is optimized for machine learning workloads. Azure Machine Learning compute clusters and [compute instances](concept-compute-instance.md) are the only managed computes.
+A managed compute resource is created and managed by Azure Machine Learning. This compute is optimized for machine learning workloads. Azure Machine Learning compute clusters, [serverless compute (preview)](how-to-use-serverless-compute.md), and [compute instances](concept-compute-instance.md) are the only managed computes.
-You can create Azure Machine Learning compute instances or compute clusters from:
+There is no need to create serverless compute. You can create Azure Machine Learning compute instances or compute clusters from:
* [Azure Machine Learning studio](how-to-create-attach-compute-studio.md). * The Python SDK and the Azure CLI:
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md
from azure.ai.ml.entities import Workspace
ml_client.workspaces.begin_delete(name=ws.name, delete_dependent_resources=True) ```
-If you create Azure Kubernetes Service (AKS) in your workspace, or if you attach any compute resources to your workspace you must delete them separately in [Azure portal](https://portal.azure.com).
+If you create Azure Kubernetes Service (AKS) in your workspace, or if you attach any compute resources to your workspace you must delete them separately in the [Azure portal](https://portal.azure.com).
### Using Azure Prepayment credit with Azure Machine Learning
machine-learning How To Use Openai Models In Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-openai-models-in-azure-ml.md
You might receive any of the following errors when you try to deploy an Azure Op
- **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure Open AI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). - **Model Not Deployable**
- - **Fix**: This usually happens while trying to deploy a GPT-4 model. Due to high demand you need to [apply for access to use GPT-4 models](https://learn.microsoft.com/azure/cognitive-services/openai/concepts/models#gpt-4-models).
+ - **Fix**: This usually happens while trying to deploy a GPT-4 model. Due to high demand you need to [apply for access to use GPT-4 models](/azure/ai-services/openai/concepts/models#gpt-4-models).
- **Resource Create Failed** - **Fix**: We tried to automatically create the Azure OpenAI resource but the operation failed. Try again on a new workspace.
machine-learning Get Started Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/get-started-prompt-flow.md
Copy following sample input data, paste to the input box, and select **Test**, t
```json {
- "url": "https://learn.microsoft.com/en-us/azure/cognitive-services/openai/"
+ "url": "https://learn.microsoft.com/en-us/azure/ai-services/openai/"
} ```
mariadb Howto Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-server-parameters.md
Last updated 06/24/2022
Azure Database for MariaDB supports configuration of some server parameters. This article describes how to configure these parameters by using the Azure portal. Not all server parameters can be adjusted. >[!Note]
-> Server parameters can be updated globally at the server-level, use the [Azure CLI](./howto-configure-server-parameters-cli.md), [PowerShell](./howto-configure-server-parameters-using-powershell.md), or [Azure portal](./howto-server-parameters.md).
+> Server parameters can be updated globally at the server-level, use the [Azure CLI](./howto-configure-server-parameters-cli.md), [PowerShell](./howto-configure-server-parameters-using-powershell.md), or the [Azure portal](./howto-server-parameters.md).
## Configure server parameters
mariadb Quickstart Create Mariadb Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-portal.md
If you don't have an Azure subscription, create a [free Azure account](https://a
## Sign in to the Azure portal
-In your web browser, go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+In your web browser, sign in to the [Azure portal](https://portal.azure.com). Enter your credentials to sign in to the portal. The default view is your service dashboard.
## Create an Azure Database for MariaDB server
mariadb Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-using-portal.md
If you don't have an Azure subscription, create a [free Azure account](https://a
## Sign in to the Azure portal
-In your browser, go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+In your browser, sign in to the [Azure portal](https://portal.azure.com). Enter your credentials to sign in to the portal. The default view is your service dashboard.
## Create an Azure Database for MariaDB server
You can now connect to the server by using the mysql command-line tool or MySQL
Get values for **Server name** (fully qualified) and **Server admin login name** for your Azure Database for MariaDB server from the Azure portal. You use the fully qualified server name to connect to your server by using the mysql command-line tool.
-1. In the [Azure portal](https://portal.azure.com/), in the left menu, select **All resources**. Enter the server name and search for your Azure Database for MariaDB server. Select the server name to view the server details.
+1. In the [Azure portal](https://portal.azure.com), in the left menu, select **All resources**. Enter the server name and search for your Azure Database for MariaDB server. Select the server name to view the server details.
2. On the **Overview** page, make a note of the values for **Server name** and **Server admin login name**. You can also select the **copy** button next to each field to copy the value to the clipboard.
mysql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-connect-server-vnet.md
Azure Database for MySQL - Flexible Server is a managed service that runs, manag
## Sign in to the Azure portal
-Go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+Sign in to the [Azure portal](https://portal.azure.com). Enter your credentials to sign in to the portal. The default view is your service dashboard.
## Create an Azure Database for MySQL - Flexible Server
mysql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-portal.md
Azure Database for MySQL - Flexible Server is a managed service that you can use
If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. ## Sign in to the Azure portal
-Go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+Sign in to the [Azure portal](https://portal.azure.com). Enter your credentials to sign in to the portal. The default view is your service dashboard.
## Create an Azure Database for MySQL flexible server
mysql Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-portal.md
If you don't have an Azure subscription, create a [free Azure account](https://a
## Sign in to the Azure portal
-Open your favorite web browser, and visit the [Microsoft Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+Open your favorite web browser, and sign in to the [Azure portal](https://portal.azure.com). Enter your credentials to sign in to the portal. The default view is your service dashboard.
## Create an Azure Database for MySQL server
You can now connect to the server using mysql command-line tool or MySQL Workben
Get the fully qualified **Server name** and **Server admin login name** for your Azure Database for MySQL server from the Azure portal. You use the fully qualified server name to connect to your server using mysql command-line tool.
-1. In [Azure portal](https://portal.azure.com/), click **All resources** from the left-hand menu, type the name, and search for your Azure Database for MySQL server. Select the server name to view the details.
+1. In the [Azure portal](https://portal.azure.com), click **All resources** from the left-hand menu, type the name, and search for your Azure Database for MySQL server. Select the server name to view the details.
2. From the **Overview** page, note down **Server Name** and **Server admin login name**. You may click the copy button next to each field to copy to the clipboard. :::image type="content" source="./media/tutorial-design-database-using-portal/2-server-properties.png" alt-text="4-2 server properties":::
In this tutorial, you use the Azure portal to learned how to:
> * Restore data > [!div class="nextstepaction"]
-> [How to connect applications to Azure Database for MySQL](./how-to-connection-string.md)
+> [How to connect applications to Azure Database for MySQL](./how-to-connection-string.md)
peering-service About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/about.md
Title: Azure Peering Service overview description: Learn about Azure Peering Service.- + Previously updated : 01/19/2023-- Last updated : 07/23/2023 # Azure Peering Service overview
With Peering Service, customers can select a well-connected partner service prov
:::image type="content" source="./media/about/peering-service-what.png" alt-text="Diagram showing distributed connectivity to Microsoft cloud.":::
-Customers can also opt for Peering Service telemetry such as user latency measures to the Microsoft network, BGP route monitoring, and alerts against leaks and hijacks by registering the Peering Service connection in the Azure portal.
+Customers can also opt for Peering Service telemetry such as user latency measures to the Microsoft network and BGP route monitoring by registering the Peering Service connection in the Azure portal.
To use Peering Service, customers aren't required to register with Microsoft. The only requirement is to contact a [Peering Service partner](location-partners.md) to get the service. To opt in for Peering Service telemetry, customers must register for it in the Azure portal.
Enterprises looking for internet-first access to the cloud or considering SD-WAN
- Ability to select the preferred service provider to connect to the Microsoft cloud. - Traffic insights such as latency reporting and prefix monitoring. - Optimum network hops (AS hops) from the Microsoft cloud.-- Route analytics and statistics: Events for BGP route anomalies (leak or hijack detection) and suboptimal routing.
+- Route analytics and statistics: Events for BGP route anomalies and suboptimal routing.
### Robust, reliable peering
The following routing technique is preferred:
### Monitoring platform
- Service monitoring is offered to analyze customer traffic and routing, and it provides the following capabilities:
+Service monitoring is offered to analyze user traffic and routing. The following metrics are available in the Azure portal to track the performance and availability of your Peering Service connection:
-- **Internet BGP route anomalies detection**
-
- This service is used to detect and alert for any route anomaly events like route hijacks to the customer prefixes.
+- **Ingress and egress traffic rates**
-- **Customer latency**
+- **BGP session availability**
- This service monitors the routing performance between the customer's location and Microsoft.
-
- Routing performance is measured by validating the round-trip time taken from the client to reach the Microsoft Edge PoP. Customers can view the latency reports for different geographic locations.
+- **Packet drops**
- Monitoring captures the events if there's any service degradation.
+- **Flap events**
- :::image type="content" source="./media/about/peering-service-latency-report.png" alt-text="Diagram showing monitoring platform for Peering Service.":::
+- **Latency**
+
+- **Prefix events**
-### Traffic protection
+ :::image type="content" source="./media/about/peering-service-latency-report.png" alt-text="Diagram showing monitoring platform for Peering Service.":::
+
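If you'd rather pull these metrics programmatically than view them in the portal, Azure Monitor can enumerate the metric definitions exposed by a Peering Service resource and then query them. The sketch below assumes a Peering Service named myPeeringService in myResourceGroup, both placeholders; the metric name you query must come from the definitions output, since exact metric names aren't listed here.

```azurecli
# Minimal sketch (assumptions: the resource name and group are placeholders, and the
# metric name passed to 'az monitor metrics list' must be taken from the definitions output).
id=$(az resource show \
  --resource-group "myResourceGroup" \
  --name "myPeeringService" \
  --resource-type "Microsoft.Peering/peeringServices" \
  --query id --output tsv)

# Discover which metrics the Peering Service resource exposes.
az monitor metrics list-definitions --resource "$id" --output table

# Query one of those metrics at a 5-minute grain (replace <MetricName> accordingly).
az monitor metrics list --resource "$id" --metric "<MetricName>" --interval PT5M --output table
```
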
+### Onboarding a Peering Service connection
-Routing happens only via a preferred path that's defined when the customer is registered with Peering Service.
+To onboard a Peering Service connection:
-Microsoft guarantees to route the traffic via preferred paths even if malicious activity is detected.
+- Work with Internet Service provider (ISP) or Internet Exchange (IX) Partner to obtain a Peering Service to connect your network with the Microsoft network.
-BGP route anomalies are reported in the Azure portal, if any.
+- Ensure the [connectivity provider](location-partners.md) is partnered with Microsoft for Peering Service.
## Next steps
BGP route anomalies are reported in the Azure portal, if any.
- To learn about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md). - To find a service provider partner, see [Peering Service partners and locations](location-partners.md). - To register Peering Service, see [Create, change, or delete a Peering Service connection using the Azure portal](azure-portal.md).
+- To establish a Direct interconnect for Microsoft Azure Peering Service, see [Internet peering for Microsoft Azure Peering Services walkthrough](../../articles/internet-peering/walkthrough-direct-all.md)
+- To establish a Direct interconnect for Communications Services, see [Internet peering for Communications Services walkthrough](../../articles/internet-peering/walkthrough-communications-services-partner.md)
+- To establish a Direct interconnect for Peering Service Exchange with Route Server, see [Internet peering for Exchange Route Server walkthrough](../../articles/internet-peering/walkthrough-exchange-route-server-partner.md)
peering-service Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/azure-portal.md
Title: Create, change, or delete a Peering Service connection - Azure portal description: Learn how to create, change, or delete a Peering Service connection using the Azure portal.- + Previously updated : 06/05/2023-- Last updated : 07/23/2023 # Create, change, or delete a Peering Service connection using the Azure portal
Sign in to the [Azure portal](https://portal.azure.com).
1. Select the **provider backup peering location** as the next closest to your network location. A peering service will be active via the backup peering location only in the event of failure of primary peering service location for disaster recovery. If **None** is selected, internet is the default failover route in the event of primary peering service location failure.
-1. Under the **Prefixes** section, select **Create new prefix**. In **Name**, enter a name for the prefix resource. Enter the prefixes that are associated with the service provider in **Prefix**. In **Prefix key**, enter the prefix key that was given to you by your provider (ISP or IXP). This key allows Microsoft to validate the prefix and provider who have allocated your IP prefix.
+1. Under the **Prefixes** section, select **Create new prefix**. In **Name**, enter a name for the prefix resource. Enter the prefixes that are associated with the service provider in **Prefix**. In **Prefix key**, enter the prefix key that was given to you by your provider (ISP or IXP). This key allows Microsoft to validate the prefix and the provider who allocated your IP prefix. If your provider is a Route Server partner, you can create all of your prefixes with the same Peering Service prefix key.
:::image type="content" source="./media/azure-portal/peering-service-configuration.png" alt-text="Screenshot of the Configuration tab of Create a peering service connection in Azure portal.":::
Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="./media/azure-portal/peering-service-create.png" alt-text="Screenshot of the Review + create tab of Create a peering service connection in Azure portal.":::
-1. After you create a Peering Service connection, additional validation is performed on the included prefixes. You can review the validation status under the **Prefixes** section of your Peering Service. If the validation fails, one of the following error messages is displayed:
+1. After you create a Peering Service connection, additional validation is performed on the included prefixes. You can review the validation status under the **Prefixes** section of your Peering Service.
+
+ :::image type="content" source="./media/azure-portal/peering-service-prefix-validation.png" alt-text="Screenshot shows the validation status of the prefixes." lightbox="./media/azure-portal/peering-service-prefix-validation.png":::
+
+If the validation fails, one of the following error messages is displayed:
- Invalid Peering Service prefix, the prefix should be valid format, only IPv4 prefix is supported currently. - Prefix wasn't received from Peering Service provider, contact Peering Service provider.
- - Prefix announcement doesn't have a valid BGP community, contact Peering Service provider.
+ - Prefix announcement doesn't have a valid BGP community, contact Peering Service provider.
+ - Prefix overlaps with an existing prefix, contact Peering Service provider
- Prefix received with longer AS path(>3), contact Peering Service provider. - Prefix received with private AS in the path, contact Peering Service provider.
+Review the [Technical requirements for Peering Service prefixes](../internet-peering/peering-registered-prefix-requirements.md) for more help with resolving Peering Service prefix validation failures.
+ ## Add or remove a prefix 1. In the search box at the top of the portal, enter *Peering Service*. Select **Peering Services** in the search results.
Sign in to the [Azure portal](https://portal.azure.com).
1. Select the ellipsis (**...**) next to the listed prefix, and select **Delete**. > [!NOTE]
-> You can't modify an existing prefix.
+> You can't modify an existing prefix. If you want to change the prefix, you must delete the resource and then re-create it.
## Delete a Peering Service connection
Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="./media/azure-portal/peering-service-delete.png" alt-text="Screenshot of deleting a Peering Service in Azure portal.":::
+## Modify the primary or backup peering location
+
+To change the primary or backup peering location of your Peering Service, reach out to peeringservice@microsoft.com to request the change. Give the resource ID of the Peering Service to modify, and the new primary and backup locations you'd like configured.
+ ## Next steps - To learn more about Peering Service connections, see [Peering Service connection](connection.md).-- To learn more about Peering Service connection telemetry, see [Access Peering Service connection telemetry](connection-telemetry.md).-
+- To learn more about Peering Service connection telemetry, see [Access Peering Service connection telemetry](connection-telemetry.md).
peering-service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/connection.md
Title: Azure Peering Service connection description: Learn about Microsoft Azure Peering Service connection.- -- Previously updated : 05/31/2023 -++ Last updated : 07/23/2023 # Peering Service connection
A connection typically refers to a logical information set, identifying a Peerin
- Connectivity partner - Connectivity partner Primary service location - Connectivity partner Backup service location-- Customer's physical location - IP prefixes Customer can establish a single connection or multiple connections as per the requirement. A connection is also used as a unit of telemetry collection. For instance, to opt for telemetry alerts, customer must define the connection that will be monitored. > [!NOTE]
-> When you sign up for Peering Service, we analyze your Windows and Microsoft 365 telemetry in order to provide you with latency measurements for your selected prefixes. Currently telemetry data is supported for /24 or bigger size prefixes only.
+> When you sign up for Peering Service, we analyze your Windows and Microsoft 365 telemetry in order to provide you with latency measurements for your selected prefixes.
> For more information about connection telemetry, see [Access Peering Service connection telemetry](connection-telemetry.md). ## How to create a peering service connection?
-**Scenario** - Let's say a branch office is spread across different geographic locations as shown in the figure below. Here, the customer is required to provide a logical name, Service Provider(SP) name, customer's physical location, and IP prefixes that are (owned by the customer or allocated by the Service Provider) associated with a single connection. The primary and backup service locations with partner help defining the preferred service location for customer. This process must be repeated to create Peering Service for other locations.
+**Scenario** - Let's say a branch office is spread across different geographic locations as shown in the figure. Here, the customer is required to provide a logical name, Service Provider (SP) name, the customer's physical location, and the IP prefixes (owned by the customer or allocated by the Service Provider) that are associated with a single connection. The primary and backup service locations, chosen with the partner, help define the preferred service location for the customer. This process must be repeated to create Peering Service for other locations.
:::image type="content" source="./media/connection/peering-service-connections.png" alt-text="Diagram shows geo redundant connections.":::
peering-service Customer Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/customer-walkthrough.md
+
+ Title: Azure Peering Service customer walkthrough
+description: Learn about Azure Peering Service and how to onboard.
++++ Last updated : 07/23/2023++
+# Azure Peering Service customer walkthrough
+
+This section explains the steps to optimize your prefixes with an Internet Service Provider (ISP) or Internet Exchange Provider (IXP) who is a Peering Service partner.
+
+The complete list of Peering Service providers can be found in [Peering Service partners](location-partners.md).
+
+## Activate the prefix
+
+If you received a Peering Service prefix key from your Peering Service provider, you can activate your prefixes for optimized routing with Peering Service. Prefix activation, alignment to the right OC partner, and appropriate interconnect location are requirements for optimized routing (to ensure cold potato routing).
+
+To activate the prefix, follow these steps:
+
+1. In the search box at the top of the portal, enter *peering service*. Select **Peering Services** in the search results.
+
+ :::image type="content" source="./media/customer-walkthrough/peering-service-portal-search.png" alt-text="Screenshot shows how to search for Peering Service in the Azure portal.":::
+
+1. Select **+ Create** to create a new Peering Service connection.
+
+ :::image type="content" source="./media/customer-walkthrough/peering-service-list.png" alt-text="Screenshot shows the list of existing Peering Service connections in the Azure portal.":::
+
+1. In the **Basics** tab, enter or select your subscription, resource group, and Peering Service connection name.
+
+ :::image type="content" source="./media/customer-walkthrough/peering-service-basics.png" alt-text="Screenshot shows the Basics tab of creating a Peering Service connection in the Azure portal.":::
+
+1. In the **Configuration** tab, provide details on the location, provider and primary and backup interconnect locations. If the backup location is set to **None**, the traffic fails over to the internet.
+
+ > [!NOTE]
+ > - The prefix key should be the same as the one obtained from your Peering Service provider.
+
+ :::image type="content" source="./media/customer-walkthrough/peering-service-configuration.png" alt-text="Screenshot shows the Configuration tab of creating a Peering Service connection in the Azure portal.":::
+
+1. Select **Review + create**.
+
+1. Review the settings, and then select **Create**.
+
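If you automate onboarding, the same activation can be sketched with the Azure CLI `peering` extension. Everything below is an assumption-laden example: the resource names, location, provider name, peering service location, prefix, and prefix key are placeholders, and the exact flags may differ across extension versions, so use the portal flow above as the authoritative reference.

```azurecli
# Minimal sketch (assumptions: the 'peering' CLI extension is installed and every
# value below is a placeholder to replace with details from your provider).
az extension add --name peering

# Create the Peering Service connection for your location and provider.
az peering service create \
  --resource-group "myResourceGroup" \
  --peering-service-name "myPeeringService" \
  --location "eastus" \
  --peering-service-location "Virginia" \
  --peering-service-provider "Contoso"

# Activate a prefix using the Peering Service prefix key obtained from your provider.
az peering service prefix create \
  --resource-group "myResourceGroup" \
  --peering-service-name "myPeeringService" \
  --prefix-name "myPrefix" \
  --prefix "192.0.2.0/24" \
  --peering-service-prefix-key "00000000-0000-0000-0000-000000000000"
```
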
+## FAQs:
+
+**Q.** Will Microsoft re-advertise my prefixes to the Internet?
+
+**A.** No.
+
+**Q.** My Peering Service prefix has failed validation. How should I proceed?
+
+**A.** Review the [Peering Service Prefix Requirements](./peering-service-prefix-requirements.md) and follow the troubleshooting steps described.
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
Title: Azure Peering Service locations and partners description: Learn about the available locations and partners globally for the Azure Peering Service.- --- Previously updated : 06/15/2023 -++ Last updated : 07/23/2023+ # Peering Service partners
The following table provides information on the Peering Service connectivity par
> [!NOTE]
->For more information about enlisting with the Peering Service Partner program, reach out to peeringservice@microsoft.com.
+> For more information about enlisting with the Peering Service Partner program, reach out to peeringservice@microsoft.com.
## Next steps - To learn about Peering Service, see [Peering Service overview](about.md).-- To learn about onboarding a Peering Service connection, see [Onboarding Peering Service](onboarding-model.md). - To learn about Peering Service connection, see [Peering Service connection](connection.md). - To learn about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).
peering-service Onboarding Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/onboarding-model.md
Title: Azure Peering Service onboarding model description: Get started to onboard Azure Peering Service.- + - Previously updated : 05/18/2020-- Last updated : 07/23/2023 # Onboarding Peering Service model
-Onboarding process of Peering Service is comprised of two models as listed below:
+The onboarding process of Peering Service is composed of two models:
- Onboarding Peering Service connection - Onboarding Peering Service connection telemetry
-Action plans for the above listed models are described as below:
+Action plans for the above listed models are described as follows:
| **Step** | **Action**| **What you get**|
-|--||||
-|1|Customer to provision the connectivity from a connectivity partner (no interaction with Microsoft).ΓÇï |An Internet provider who is well connected to Microsoft and meets the technical requirements for performant and reliable connectivity to Microsoft. ΓÇï |
-|2 (Optional)|Customer registers locations into the Azure portal.​ A location is defined by: ISP/IXP Name​, Physical location of the customer site (state level), IP Prefix given to the location by the Service Provider or the enterprise​. ​|Telemetry​: Internet Route monitoring​, traffic prioritization from Microsoft to the user’s closest edge POP location​. |
+|--|--|--|
+| 1 | Customer to provision the connectivity from a connectivity partner (no interaction with Microsoft). | An Internet provider who is well connected to Microsoft and meets the technical requirements for performant and reliable connectivity to Microsoft. |
+| 2 (Optional) | Customer registers locations into the Azure portal. A location is defined by: ISP/IXP Name, Physical location of the customer site (state level), IP Prefix given to the location by the Service Provider or the enterprise. | Telemetry: Internet Route monitoring, traffic prioritization from Microsoft to the user's closest edge POP location. |
## Onboarding Peering Service connection
-To onboard Peering Service connection, do the following:
+To onboard a Peering Service connection:
-Work with Internet Service provider (ISP) or Internet Exchange (IX) Partner to obtain Peering Service to connect your network with the Microsoft network.
+- Work with an Internet Service Provider (ISP) or Internet Exchange (IX) partner to obtain Peering Service and connect your network with the Microsoft network.
-Ensure the [connectivity providers](location-partners.md) are partnered with Microsoft for Peering Service.
+- Ensure the [connectivity providers](location-partners.md) are partnered with Microsoft for Peering Service.
## Onboarding Peering Service connection telemetry
-Customers can opt for its telemetry such as BGP route analytics to monitor networking latency and performance when accessing the Microsoft network. This can be achieved by registering the connection into the Azure portal.
+Customers can opt in to Peering Service telemetry, such as BGP route analytics, to monitor networking latency and performance when accessing the Microsoft network, by registering the connection in the Azure portal.
-To onboard Peering Service connection telemetry, customer must register the Peering Service connection into the Azure portal. Refer to the [Register Peering Service - Azure portal](azure-portal.md) to learn the procedure.
+To onboard Peering Service connection telemetry, customers must register the Peering Service connection in the Azure portal. See [Manage a Peering Service connection using the Azure portal](azure-portal.md) for the procedure.
Following that, you can measure telemetry as described in [Measure connection telemetry](measure-connection-telemetry.md).

## Next steps
-To learn step by step process on how to register Peering Service connection, see [Register Peering Service - Azure portal](azure-portal.md).
+To learn the step-by-step process for registering a Peering Service connection, see [Manage a Peering Service connection using the Azure portal](azure-portal.md).
-To learn about measure connection telemetry, see [Measure connection telemetry](measure-connection-telemetry.md).
+To learn about connection telemetry, see [Connection telemetry](connection-telemetry.md).
peering-service Peering Service Prefix Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/peering-service-prefix-requirements.md
+
+ Title: Azure Peering Service prefix requirements
+description: Learn the technical requirements to optimize your prefixes using Azure Peering Service.
+ Last updated : 07/23/2023
+# Azure Peering Service prefix requirements
+
+Ensure the prerequisites in this document are met before you activate your prefixes for Peering Service.
+
+## Technical requirements
+
+For a registered prefix to be validated after creation, the following checks must pass:
+
+* The prefix can't be in a private range
+* The origin ASN must be registered in a major routing registry
+* The prefix key specified in the Peering Service prefix must match the prefix key received during registration
+* The prefix must be announced from all primary and backup peering sessions
+* Routes must be advertised with the Peering Service community string 8075:8007
+* AS paths in your routes can't exceed a path length of 3, and can't contain private ASNs or AS prepending
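Two of these checks, the private-range check and the prefix-key match, can be approximated locally before you register a prefix. The following Python sketch is illustrative only (it isn't the service's actual validation logic); it uses the standard `ipaddress` module, and the function and variable names are hypothetical:

```python
import ipaddress

def precheck_prefix(prefix: str, supplied_key: str, provider_key: str) -> list[str]:
    """Rough local approximation of two of the Peering Service prefix checks."""
    problems = []
    try:
        network = ipaddress.ip_network(prefix, strict=True)
    except ValueError:
        return [f"'{prefix}' isn't a valid IPv4 prefix (for example, 203.0.113.0/24)."]
    if network.version != 4:
        problems.append(f"{network} isn't an IPv4 prefix.")
    if network.is_private:
        problems.append(f"{network} is in a private range and can't be used.")
    if supplied_key != provider_key:
        problems.append("The prefix key doesn't match the key received from your provider.")
    return problems

# Example: a private-range prefix fails the precheck even when the key matches.
print(precheck_prefix("10.0.0.0/24", supplied_key="abc123", provider_key="abc123"))
```

The remaining checks (origin ASN registration, route advertisement, community string, and AS path) can only be verified by the service once it receives your routes.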
+
+## Troubleshooting
+
+The validation state of a Peering Service prefix can be seen in the Azure portal.
+Prefixes can only be activated when all validation steps have passed. The following sections describe possible validation errors and the troubleshooting steps to resolve them.
+
+### Provider has fewer than two sessions in its primary location
+
+Peering Service requires the primary peering location to have local redundancy. This requirement is achieved by having two Peering Service sessions configured on two different routers. If you're seeing this validation failure message, you have chosen a primary peering location that has fewer than two Peering Service sessions. If provisioning is still in progress for your second Peering Service connection, allow time for provisioning to complete. After that, the primary redundancy requirement will be satisfied and validation will continue.
+
+If you're a Peering Service customer, contact your Peering Service provider about this issue. If you're a Peering Service partner, contact peeringservice@microsoft.com with your Azure subscription and prefix so we can assist you.
+
+### Provider has no sessions in its backup location
+
+Peering Service requires the backup peering location, if you've chosen one, to have one Peering Service session. If you're seeing this validation failure message, you have chosen a backup peering location that doesn't have a Peering Service session. If provisioning is still in progress for your Peering Service connection in the backup location, allow time for provisioning to complete. After that, the backup requirement will be satisfied and validation will continue.
+
+If you're a Peering Service customer, contact your Peering Service provider about this issue. If you're a Peering Service partner, contact peeringservice@microsoft.com with your Azure subscription and prefix so we can assist you.
+
+### Peering service prefix is invalid
+
+If you're seeing this validation failure message, the prefix string that you've given isn't a valid IPv4 prefix. Delete and re-create this Peering Service prefix with a valid IPv4 prefix.
+
+### Not receiving prefix advertisement from IP for prefix
+
+Peering Service requires the provider to advertise routes for their peering service prefix. If you're seeing this validation failure message, it means the provider isn't advertising routes for the prefix that's being validated. Refer to this document and review the [Peering Service technical requirements](../internet-peering/walkthrough-peering-service-all.md#technical-requirements) regarding route advertisement. Contact your networking team and confirm that they're advertising routes for the prefix being validated. Also confirm the advertisement adheres to Peering Service requirements, such as advertising using the Peering Service community string 8075:8007, and that the AS path of the route doesn't contain private ASNs. Use the IP address in the message to identify the Peering Service connection that isn't advertising the prefix. All Peering Service connections must advertise routes.
+
+If you're a Peering Service customer, contact your Peering Service provider about this issue. If you're a Peering Service partner and you're advertising routes for your prefix but still seeing this validation failure, contact peeringservice@microsoft.com with your Azure subscription and prefix so we can assist you.
+
+### Received route for prefix should have the Peering Service community 8075:8007
+
+For Peering Service, prefix routes must be advertised with the Peering Service community string 8075:8007. This validation error message indicates that Microsoft is receiving routes, but they aren't being advertised with the Peering Service community string 8075:8007. Add the Peering Service community string to the community when advertising routes for Peering Service prefixes. After that, the community requirement for Peering Service will be satisfied and validation will continue.
+
+If you're a Peering Service customer, contact your Peering Service provider about this issue.
+
+### AS path length for prefix should be <=3 for prefix
+
+For Peering Service, prefix routes can't exceed an AS path length of 3. This message indicates that Microsoft is receiving routes, but the AS path of the received routes is greater than 3. Advertise routes with a new AS path that doesn't exceed a path length of 3. The AS path length requirement for Peering Service will be satisfied and validation will continue.
+
+If you're a Peering Service customer, contact your Peering Service provider about this issue.
+
+### AS path for prefix shouldn't have any private ASN
+
+For Peering Service, prefix routes can't be advertised with an AS path that contains private ASNs. This message indicates that Microsoft is receiving routes, but the AS path of the received routes contains a private ASN. Advertise routes with a new AS path that doesn't contain a private ASN. The private AS requirement for Peering Service will be satisfied and validation will continue.
+
+If you're a Peering Service customer, contact your Peering Service provider about this issue.
+
+### Peering service provider not found
+
+If you're a Peering Service customer, contact your Peering Service provider about this issue. If you're a Peering Service partner, contact peeringservice@microsoft.com with your Azure subscription and prefix so we can assist you.
+
+### Internal server error
+
+If you're a Peering Service customer, contact your Peering Service provider about this issue. Contact peeringservice@microsoft.com with your Azure subscription and prefix so we can assist you.
+
+## Frequently asked questions (FAQ)
+
+**Q.** I'm advertising a prefix from a different origin ASN than the ASN of my peering. Can this work with Peering Service?
+
+**A.** To make this work with Peering Service, you must create a peer ASN in the same subscription as the Peering Service resource and give it the same name as the peer ASN associated with the peering. For more information, see [Associate peer ASN to Azure subscription using the Azure portal](../internet-peering/howto-subscription-association-portal.md).
+
+## Next steps
+
+[Peering Service customer walkthrough](customer-walkthrough.md)
postgresql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-connect-server-vnet.md
If you don't have an Azure subscription, create a [free Azure account](https://a
## Sign in to the Azure portal
-Go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+Sign in to the [Azure portal](https://portal.azure.com) with your credentials. The default view is your service dashboard.
## Create an Azure Database for PostgreSQL flexible server
sap Manual Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/manual-deployment.md
Before you begin, [check you're in the correct Azure subscription](#check-azure-
Verify that you're using the appropriate Azure subscription:
-1. [Sign in to the Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. [Open Azure Cloud Shell](https://shell.azure.com/).
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
Select an area for resources about how to integrate SAP and Azure in that space.
### Azure OpenAI service
-For more information about integration with [Azure OpenAI service](/azure/cognitive-services/openai/overview), see the following Azure documentation:
+For more information about integration with [Azure OpenAI service](/azure/ai-services/openai/overview), see the following Azure documentation:
- [Microsoft AI SDK for SAP](https://microsoft.github.io/aisdkforsapabap/docs/intro) - [ABAP SDK for Azure](https://github.com/microsoft/ABAP-SDK-for-Azure)
search Cognitive Search Attach Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-attach-cognitive-services.md
If you leave the property unspecified, your search service attempts to use the f
### [**Azure portal**](#tab/portal)
-1. [Sign in to Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Create an [Azure AI multi-service resource](../ai-services/multi-service-resource.md?pivots=azportal) in the [same region](#same-region-requirement) as your search service.
Enrichments are a billable feature. If you no longer need to call Azure AI servi
### [**Azure portal**](#tab/portal-remove)
-1. [Sign in to Azure portal](https://portal.azure.com) and open the search service **Overview** page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the search service **Overview** page.
1. Under **Skillsets**, select the skillset containing the key you want to remove.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
A Debug Session works with all generally available [indexer data sources](search
## Create a debug session
-1. [Sign in to Azure portal](https://portal.azure.com) and find your search service.
+1. Sign in to the [Azure portal](https://portal.azure.com) and find your search service.
1. In the **Overview** page of your search service, select the **Debug Sessions** tab.
search Cognitive Search Tutorial Blob Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob-dotnet.md
If possible, create both in the same region and resource group for proximity and
### Start with Azure Storage
-1. [Sign in to the Azure portal](https://portal.azure.com/) and click **+ Create Resource**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and click **+ Create Resource**.
1. Search for *storage account* and select Microsoft's Storage Account offering.
You can use the Free tier to complete this walkthrough.
To interact with your Azure Cognitive Search service you will need the service URL and an access key.
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL were `https://mydemo.search.windows.net`, your service name would be `mydemo`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL were `https://mydemo.search.windows.net`, your service name would be `mydemo`.
1. In **Settings** > **Keys**, get an admin key for full rights on the service. You can copy either the primary or secondary key.
search Cognitive Search Tutorial Blob Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob-python.md
If possible, create both in the same region and resource group for proximity and
### Start with Azure Storage
-1. [Sign in to the Azure portal](https://portal.azure.com/) and select **+ Create Resource**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **+ Create Resource**.
1. Search for *storage account* and select Microsoft's Storage Account offering.
You can use the Free tier to complete this tutorial.
To send requests to your Azure Cognitive Search service, you'll need the service URL and an access key.
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL is `https://mydemo.search.windows.net`, your service name would be `mydemo`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL is `https://mydemo.search.windows.net`, your service name would be `mydemo`.
2. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
search Cognitive Search Tutorial Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob.md
If possible, create both in the same region and resource group for proximity and
### Start with Azure Storage
-1. [Sign in to the Azure portal](https://portal.azure.com/) and select **+ Create Resource**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **+ Create Resource**.
1. Search for *storage account* and select Microsoft's Storage Account offering.
You can use the Free tier to complete this walkthrough.
To interact with your Azure Cognitive Search service you'll need the service URL and an access key.
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL were `https://mydemo.search.windows.net`, your service name would be `mydemo`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL were `https://mydemo.search.windows.net`, your service name would be `mydemo`.
1. In **Settings** > **Keys**, get an admin key for full rights on the service. You can copy either the primary or secondary key.
search Cognitive Search Tutorial Debug Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-debug-sessions.md
This section creates the sample data set in Azure Blob Storage so that the index
REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
search Search Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-powershell.md
The following services and tools are required for this quickstart.
REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
2. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
search Search Get Started Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rest.md
If you don't have an Azure subscription, create a [free account](https://azure.m
REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
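With the URL and key in hand, you can confirm they work before writing any other requests. The following Python sketch isn't part of this article's scripts; the endpoint, key, and API version are placeholders to replace with your own values, and it simply lists the indexes on the service:

```python
import requests

# Placeholders: substitute your own search service endpoint and admin key.
endpoint = "https://mydemo.search.windows.net"
admin_key = "<your-admin-api-key>"
api_version = "2020-06-30"  # a generally available REST API version

# Every REST request carries the api-key header and an api-version parameter.
response = requests.get(
    f"{endpoint}/indexes",
    params={"api-version": api_version, "$select": "name"},
    headers={"api-key": admin_key, "Content-Type": "application/json"},
)
response.raise_for_status()
for index in response.json()["value"]:
    print(index["name"])
```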
search Search Get Started Semantic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-semantic.md
This quickstart walks you through the query modifications that invoke semantic s
+ An API key and search endpoint:
- [Sign in to the Azure portal](https://portal.azure.com/) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
+ Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
In **Overview**, copy the URL and save it to Notepad for a later step. An example endpoint might look like `https://mydemo.search.windows.net`.
search Search Get Started Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-text.md
This quickstart has [steps](#create-load-and-query-an-index) for the following S
+ An API key and search endpoint:
- [Sign in to the Azure portal](https://portal.azure.com/) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
+ Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
In **Overview**, copy the URL and save it to Notepad for a later step. An example endpoint might look like `https://mydemo.search.windows.net`.
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
During development, plan on frequent rebuilds. Because physical structures are c
Index design through the portal enforces requirements and schema rules for specific data types, such as disallowing full text search capabilities on numeric fields.
-1. [Sign in to Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search service Overview page, choose either option for creating a search index:
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
In this step, configure your search service to recognize an **authorization** he
### [**Azure portal**](#tab/config-svc-portal)
-1. [Sign in to Azure portal](https://portal.azure.com) and open the search service page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the search service page.
1. Select **Keys** in the left navigation pane.
search Search Howto Complex Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-complex-data-types.md
Other Azure SDKs provide samples in [Python](https://github.com/Azure/azure-sdk-
### [**Azure portal**](#tab/portal)
-1. [Sign in to Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the search service **Overview** page, select the **Indexes** tab.
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-create-indexers.md
When you're ready to create an indexer on a remote search service, you'll need a
### [**Azure portal**](#tab/portal)
-1. [Sign in to Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the search service Overview page, choose from two options:
search Search Howto Index Changed Deleted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-changed-deleted-blobs.md
In Cognitive Search, set a native blob soft deletion detection policy on the dat
### [**Azure portal**](#tab/portal)
-1. [Sign in to Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the Cognitive Search service Overview page, go to **New Data Source**, a visual editor for specifying a data source definition.
search Search Howto Index Encrypted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-encrypted-blobs.md
You should have an Azure Function app that contains the decryption logic and an
### Get an admin api-key and URL for Azure Cognitive Search
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL were `https://mydemo.search.windows.net`, your service name would be `mydemo`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL were `https://mydemo.search.windows.net`, your service name would be `mydemo`.
2. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
If your Azure Active Directory organization has [Conditional Access enabled](../
The SharePoint indexer will use this Azure Active Directory (Azure AD) application for authentication.
-1. [Sign in to Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for or navigate to **Azure Active Directory**, then select **App registrations**.
search Search Howto Large Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-large-index.md
The number of indexing jobs that can run simultaneously varies for text-based an
If your data source is an [Azure Blob Storage container](../storage/blobs/storage-blobs-introduction.md#containers) or [Azure Data Lake Storage Gen 2](../storage/blobs/storage-blobs-introduction.md#about-azure-data-lake-storage-gen2), enumerating a large number of blobs can take a long time (even hours) before the operation completes. During that time, the indexer's documents-succeeded count doesn't increase, and it might look like the indexer isn't making progress when it is. If you want document processing to go faster for a large number of blobs, consider partitioning your data into multiple containers and creating parallel indexers that all point to a single index.
-1. [Sign in to Azure portal](https://portal.azure.com) and check the number of search units used by your search service. Select **Settings** > **Scale** to view the number at the top of the page. The number of indexers that will run in parallel is approximately equal to the number of search units.
+1. Sign in to the [Azure portal](https://portal.azure.com) and check the number of search units used by your search service. Select **Settings** > **Scale** to view the number at the top of the page. The number of indexers that will run in parallel is approximately equal to the number of search units.
1. Partition source data among multiple containers or multiple virtual folders inside the same container.
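As a rough sketch of that parallel-indexer pattern, the following snippet assumes the `azure-search-documents` Python SDK and uses hypothetical data source, index, and indexer names; it creates one indexer per partitioned data source, all writing to the same index:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import SearchIndexer

# Placeholders: substitute your service endpoint, admin key, and object names.
client = SearchIndexerClient(
    endpoint="https://mydemo.search.windows.net",
    credential=AzureKeyCredential("<your-admin-api-key>"),
)

# One indexer per partitioned data source, all targeting the same index.
partitions = ["blobs-partition-1", "blobs-partition-2"]
for i, data_source_name in enumerate(partitions, start=1):
    indexer = SearchIndexer(
        name=f"parallel-indexer-{i}",
        data_source_name=data_source_name,
        target_index_name="my-index",
    )
    client.create_or_update_indexer(indexer)
```

The number of indexers that actually run at the same time is still bounded by the search units on your service, as noted above.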
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
A system-assigned managed identity is unique to your search service and bound to
### [**Azure portal**](#tab/portal-sys)
-1. [Sign in to Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
+1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
1. Under **Settings**, select **Identity**.
A user-assigned managed identity is a resource on Azure. It's useful if you need
### [**Azure portal**](#tab/portal-user)
-1. [Sign in to Azure portal](https://portal.azure.com/)
+1. Sign in to the [Azure portal](https://portal.azure.com)
1. Select **+ Create a resource**.
A managed identity must be paired with an Azure role that determines permissions
The following steps are for Azure Storage. If your resource is Azure Cosmos DB or Azure SQL, the steps are similar.
-1. [Sign in to Azure portal](https://portal.azure.com) and [find your Azure resource](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) to which the search service must have access.
+1. Sign in to the [Azure portal](https://portal.azure.com) and [find your Azure resource](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) to which the search service must have access.
1. In Azure Storage, select **Access control (AIM)** on the left navigation pane.
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-run-reset-indexers.md
Once you reset an indexer, you can't undo the action.
### [**Azure portal**](#tab/portal)
-1. [Sign in to Azure portal](https://portal.azure.com) and open the search service page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the search service page.
1. On the **Overview** page, select the **Indexers** tab. 1. Select an indexer. 1. Select the **Reset** command, and then select **Yes** to confirm the action.
search Search Howto Schedule Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-schedule-indexers.md
Schedules are specified in an indexer definition. To set up a schedule, you can
### [**Azure portal**](#tab/portal)
-1. [Sign in to Azure portal](https://portal.azure.com) and open the search service page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the search service page.
1. On the **Overview** page, select the **Indexers** tab. 1. Select an indexer. 1. Select **Settings**.
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
When you complete the steps in this section, you have a shared private link that
### [**Azure portal**](#tab/portal-create)
-1. [Sign in to Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
+1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
1. Under **Settings** on the left navigation pane, select **Networking**.
search Search Indexer Howto Access Trusted Service Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-trusted-service-exception.md
In Azure Cognitive Search, indexers that access Azure blobs can use the [trusted
## Check service identity
-1. [Sign in to Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
+1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
1. On the **Identity** page, make sure that a [system assigned identity is enabled](search-howto-managed-identities-data-sources.md). Remember that user-assigned managed identities, currently in preview, won't work for a trusted service connection.
In Azure Cognitive Search, indexers that access Azure blobs can use the [trusted
## Check network settings
-1. [Sign in to Azure portal](https://portal.azure.com) and [find your storage account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
+1. Sign in to the [Azure portal](https://portal.azure.com) and [find your storage account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
1. In the left navigation pane under **Security + networking**, select **Networking**.
search Search Indexer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-tutorial.md
In this step, create an external data source on Azure SQL Database that an index
If you have an existing Azure SQL Database resource, you can add the hotels table to it, starting at step 4.
-1. [Sign in to the Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Find or create a **SQL Database**. You can use defaults and the lowest level pricing tier. One advantage to creating a server is that you can specify an administrator user name and password, necessary for creating and loading tables in a later step.
The next component is Azure Cognitive Search, which you can [create in the porta
API calls require the service URL and an access key. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
Your code runs locally in Visual Studio, connecting to your search service on Az
Use Azure portal to verify object creation, and then use **Search explorer** to query the index.
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, open each list in turn to verify the object is created. **Indexes**, **Indexers**, and **Data Sources** will have "hotels", "azure-sql-indexer", and "azure-sql", respectively.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, open each list in turn to verify the object is created. **Indexes**, **Indexers**, and **Data Sources** will have "hotels", "azure-sql-indexer", and "azure-sql", respectively.
:::image type="content" source="media/search-indexer-tutorial/tiles-portal.png" alt-text="Screenshot of the indexer and data source tiles in the Azure portal search service page." border="true":::
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-manage-encryption-keys.md
You can set both properties using the portal, PowerShell, or Azure CLI commands.
### [**Azure portal**](#tab/portal-pp)
-1. [Sign in to Azure portal](https://portal.azure.com) and open your key vault overview page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open your key vault overview page.
1. On the **Overview** page under **Essentials**, enable **Soft-delete** and **Purge protection**.
You can set both properties using the portal, PowerShell, or Azure CLI commands.
Skip key generation if you already have a key in Azure Key Vault that you want to use, but collect the key identifier. You'll need this information when creating an encrypted object.
-1. [Sign in to Azure portal](https://portal.azure.com) and open your key vault overview page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open your key vault overview page.
1. Select **Keys** on the left, and then select **+ Generate/Import**.
Conditions that will prevent you from adopting this approach include:
> > The REST API version 2021-04-30-Preview and [Management REST API 2021-04-01-Preview](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) provide this feature.
-1. [Sign into Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **+ Create a new resource**.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
In this step, configure your search service to recognize an **authorization** he
### [**Azure portal**](#tab/config-svc-portal)
-1. [Sign in to Azure portal](https://portal.azure.com) and open the search service page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the search service page.
1. Select **Keys** in the left navigation pane.
You must be an **Owner** or have [Microsoft.Authorization/roleAssignments/write]
Role assignments in the portal are service-wide. If you want to [grant permissions to a single index](#rbac-single-index), use PowerShell or the Azure CLI instead.
-1. Open the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to your search service.
Make sure that you [register your client application with Azure Active Directory
### [**Azure portal**](#tab/test-portal)
-1. Open the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to your search service.
search Search Semi Structured Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-semi-structured-data.md
If possible, create both in the same region and resource group for proximity and
### Start with Azure Storage
-1. [Sign in to the Azure portal](https://portal.azure.com/) and click **+ Create Resource**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and click **+ Create Resource**.
1. Search for *storage account* and select Microsoft's Storage Account offering.
As with Azure Blob Storage, take a moment to collect the access key. Further on,
REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
You can add or update a semantic configuration at any time without rebuilding yo
### [**Azure portal**](#tab/portal)
-1. [Sign in to Azure portal](https://portal.azure.com) and navigate to a search service that has [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
1. Open an index.
Your next step is adding parameters to the query request. To be successful, your
[Search explorer](search-explorer.md) has been updated to include options for semantic queries. To configure semantic ranking in the portal, follow the steps below:
-1. Open the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search).
1. Select **Search explorer** at the top of the overview page.
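Outside of Search explorer, the same kind of semantic query can be sent over REST. The following Python sketch reflects the preview REST API parameters available at the time of writing; the endpoint, key, index name, semantic configuration name, and field names are placeholders:

```python
import requests

endpoint = "https://mydemo.search.windows.net"   # placeholder
index_name = "hotels-sample-index"               # placeholder
query_key = "<your-query-api-key>"               # placeholder

body = {
    "search": "walking distance to live music",
    "queryType": "semantic",
    "queryLanguage": "en-us",
    "semanticConfiguration": "my-semantic-config",  # the configuration defined on the index
    "top": 5,
}

response = requests.post(
    f"{endpoint}/indexes/{index_name}/docs/search",
    params={"api-version": "2021-04-30-Preview"},   # a preview version that accepts semantic parameters
    headers={"api-key": query_key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
for doc in response.json()["value"]:
    print(doc.get("@search.rerankerScore"), doc.get("HotelName"))
```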
search Tutorial Multiple Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-multiple-data-sources.md
The third component is Azure Cognitive Search, which you can [create in the port
To authenticate to your search service, you'll need the service URL and an access key.
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
search Tutorial Optimize Indexing Push Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-optimize-indexing-push-api.md
To complete this tutorial, you'll need an Azure Cognitive Search service, which
API calls require the service URL and an access key. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
search Vector Search How To Chunk Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-chunk-documents.md
This article describes several approaches for chunking large documents so that y
## Why is chunking important?
-The models used to generate embedding vectors have maximum limits on the text fragments provided as input. For example, the maximum length of input text for the [Azure OpenAI](/azure/cognitive-services/openai/how-to/embeddings) embedding models is 8,191 tokens. Given that each token is around 4 characters of text for common OpenAI models, this maximum limit is equivalent to around 6000 words of text. If you're using these models to generate embeddings, it's critical that the input text stays under the limit. Partitioning your content into chunks ensures that your data can be processed by the Large Language Models (LLM) used for indexing and queries.
+The models used to generate embedding vectors have maximum limits on the text fragments provided as input. For example, the maximum length of input text for the [Azure OpenAI](/azure/ai-services/openai/how-to/embeddings) embedding models is 8,191 tokens. Given that each token is around 4 characters of text for common OpenAI models, this maximum limit is equivalent to around 6000 words of text. If you're using these models to generate embeddings, it's critical that the input text stays under the limit. Partitioning your content into chunks ensures that your data can be processed by the Large Language Models (LLM) used for indexing and queries.
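For example, a fixed-size splitter keeps each fragment comfortably under that limit. The sketch below uses LangChain's `RecursiveCharacterTextSplitter`, which the sample described later in this article is also built on; the chunk size and overlap values are illustrative, not recommendations:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Illustrative sizes: ~1,000 characters (roughly 250 tokens) per chunk with a 10% overlap,
# well under the 8,191-token input limit of text-embedding-ada-002.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

long_document = "You can both ski in winter and swim in summer in the alps. " * 100  # stand-in text
chunks = splitter.split_text(long_document)

print(f"{len(chunks)} chunks; the first chunk has {len(chunks[0])} characters")
```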
## How chunking fits into the workflow
mountains. /n You can both ski in winter and swim in summer.
## Try it out: Chunking and vector embedding generation sample
-A [fixed-sized chunking and embedding generation sample](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Vector/EmbeddingGenerator/README.md) demonstrates both chunking and vector embedding generation using [Azure OpenAI](/azure/cognitive-services/openai/) embedding models. This sample uses a [Cognitive Search custom skill](cognitive-search-custom-skill-web-api.md) in the [Power Skills repo](https://github.com/Azure-Samples/azure-search-power-skills/tree/main#readme) to wrap the chunking step.
+A [fixed-sized chunking and embedding generation sample](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Vector/EmbeddingGenerator/README.md) demonstrates both chunking and vector embedding generation using [Azure OpenAI](/azure/ai-services/openai/) embedding models. This sample uses a [Cognitive Search custom skill](cognitive-search-custom-skill-web-api.md) in the [Power Skills repo](https://github.com/Azure-Samples/azure-search-power-skills/tree/main#readme) to wrap the chunking step.
This sample is built on LangChain, Azure OpenAI, and Azure Cognitive Search. ## See also
-+ [Understanding embeddings in Azure OpenAI Service](/azure/cognitive-services/openai/concepts/understand-embeddings)
-+ [Learn how to generate embeddings](/azure/cognitive-services/openai/how-to/embeddings?tabs=console)
-+ [Tutorial: Explore Azure OpenAI Service embeddings and document search](/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line)
++ [Understanding embeddings in Azure OpenAI Service](/azure/ai-services/openai/concepts/understand-embeddings)
++ [Learn how to generate embeddings](/azure/ai-services/openai/how-to/embeddings?tabs=console)
++ [Tutorial: Explore Azure OpenAI Service embeddings and document search](/azure/ai-services/openai/tutorials/embeddings?tabs=command-line)
search Vector Search How To Generate Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-generate-embeddings.md
Last updated 07/10/2023
> [!IMPORTANT]
> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
-Cognitive Search doesn't host vectorization models, so one of your challenges is creating embeddings for query inputs and outputs. You can use any embedding model, but this article assumes Azure OpenAI embeddings models. Demos in the [sample repository](https://github.com/Azure/cognitive-search-vector-pr/tree/main) tap the [similarity embedding models](/azure/cognitive-services/openai/concepts/models#embeddings-models) of Azure OpenAI.
+Cognitive Search doesn't host vectorization models, so one of your challenges is creating embeddings for query inputs and outputs. You can use any embedding model, but this article assumes Azure OpenAI embeddings models. Demos in the [sample repository](https://github.com/Azure/cognitive-search-vector-pr/tree/main) tap the [similarity embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) of Azure OpenAI.
Dimension attributes have a minimum of 2 and a maximum of 2048 dimensions per vector field.
Dimension attributes have a minimum of 2 and a maximum of 2048 dimensions per ve
+ For example, you can use **text-embedding-ada-002** to generate text embeddings and [Image Retrieval REST API](/rest/api/computervision/2023-02-01-preview/image-retrieval/vectorize-image) for image embeddings.
- + To avoid [rate limiting](/azure/cognitive-services/openai/quotas-limits), you can implement retry logic in your workload. For the Python demo, we used [tenacity](https://pypi.org/project/tenacity/).
+ + To avoid [rate limiting](/azure/ai-services/openai/quotas-limits), you can implement retry logic in your workload. For the Python demo, we used [tenacity](https://pypi.org/project/tenacity/).
+ Query outputs are any matching documents found in a search index. Your search index must have been previously loaded with documents having one or more vector fields with embeddings. Whatever model you used for indexing, use the same model for queries.
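The rate-limiting note above mentions retry logic with `tenacity`. Here's a minimal sketch of that pattern, assuming the pre-1.0 `openai` Python package configured for Azure OpenAI; the endpoint, key, and deployment name are placeholders rather than values from this article:

```python
import openai
from tenacity import retry, stop_after_attempt, wait_random_exponential

# Placeholders for your Azure OpenAI resource and embedding model deployment.
openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com"
openai.api_version = "2023-05-15"
openai.api_key = "<your-azure-openai-key>"

@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
def generate_embedding(text: str) -> list[float]:
    # Retried with exponential backoff if the call is throttled or fails transiently.
    response = openai.Embedding.create(input=text, engine="<your-embedding-deployment>")
    return response["data"][0]["embedding"]

vector = generate_embedding("Alpine ski holidays with summer swimming")
print(len(vector))  # text-embedding-ada-002 embeddings have 1,536 dimensions
```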
Dimension attributes have a minimum of 2 and a maximum of 2048 dimensions per ve
If you want resources in the same region, start with:
-1. [A region for the similarity embedding model](/azure/cognitive-services/openai/concepts/models#embeddings-models-1), currently in Europe and the United States.
+1. [A region for the similarity embedding model](/azure/ai-services/openai/concepts/models#embeddings-models-1), currently in Europe and the United States.
1. [A region for Cognitive Search](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-search).
print(embeddings)
+ **Identify use cases:** Evaluate the specific use cases where embedding model integration for vector search features can add value to your search solution. This can include matching image content with text content, cross-lingual searches, or finding similar documents. + **Optimize cost and performance**: Vector search can be resource-intensive and is subject to maximum limits, so consider only vectorizing the fields that contain semantic meaning.
-+ **Choose the right embedding model:** Select an appropriate model for your specific use case, such as word embeddings for text-based searches or image embeddings for visual searches. Consider using pretrained models like **text-embedding-ada-002** from OpenAI or **Image Retrieval** REST API from [Azure AI Computer Vision](/azure/cognitive-services/computer-vision/how-to/image-retrieval).
++ **Choose the right embedding model:** Select an appropriate model for your specific use case, such as word embeddings for text-based searches or image embeddings for visual searches. Consider using pretrained models like **text-embedding-ada-002** from OpenAI or **Image Retrieval** REST API from [Azure AI Computer Vision](/azure/ai-services/computer-vision/how-to/image-retrieval). + **Normalize Vector lengths**: Ensure that the vector lengths are normalized before storing them in the search index to improve the accuracy and performance of similarity search. Most pretrained models already are normalized but not all. + **Fine-tune the model**: If needed, fine-tune the selected model on your domain-specific data to improve its performance and relevance to your search application. + **Test and iterate**: Continuously test and refine your embedding model integration to achieve the desired search performance and user satisfaction. ## Next steps
-+ [Understanding embeddings in Azure OpenAI Service](/azure/cognitive-services/openai/concepts/understand-embeddings)
-+ [Learn how to generate embeddings](/azure/cognitive-services/openai/how-to/embeddings?tabs=console)
-+ [Tutorial: Explore Azure OpenAI Service embeddings and document search](/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line)
++ [Understanding embeddings in Azure OpenAI Service](/azure/ai-services/openai/concepts/understand-embeddings)
++ [Learn how to generate embeddings](/azure/ai-services/openai/how-to/embeddings?tabs=console)
++ [Tutorial: Explore Azure OpenAI Service embeddings and document search](/azure/ai-services/openai/tutorials/embeddings?tabs=command-line)
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Scenarios for vector search include:
You can use other Azure services to provide embeddings and data storage.
-+ Azure OpenAI provides embedding models. Demos and samples target the [text-embedding-ada-002](/azure/cognitive-services/openai/concepts/models#embeddings-models) and other models. We recommend Azure OpenAI for generating embeddings for text.
++ Azure OpenAI provides embedding models. Demos and samples target the [text-embedding-ada-002](/azure/ai-services/openai/concepts/models#embeddings-models) and other models. We recommend Azure OpenAI for generating embeddings for text.
-+ [Image Retrieval Vectorize Image API(Preview)](/azure/cognitive-services/computer-vision/how-to/image-retrieval#call-the-vectorize-image-api) supports vectorization of image content. We recommend this API for generating embeddings for images.
++ [Image Retrieval Vectorize Image API (Preview)](/azure/ai-services/computer-vision/how-to/image-retrieval#call-the-vectorize-image-api) supports vectorization of image content. We recommend this API for generating embeddings for images.
+ Azure Cognitive Search can automatically index vector data from two data sources: [Azure blob indexers](search-howto-indexing-azure-blob-storage.md) and [Azure Cosmos DB for NoSQL indexers](search-howto-index-cosmosdb.md). For more information, see [Add vector fields to a search index](vector-search-how-to-create-index.md).
Vector search is a method of information retrieval that aims to overcome the lim
### Embeddings and vectorization
-*Embeddings* are a specific type of vector representation created by machine learning models that capture the semantic meaning of text, or representations of other content such as images. Natural language machine learning models are trained on large amounts of data to identify patterns and relationships between words. During training, they learn to represent any input as a vector of real numbers in an intermediary step called the *encoder*. After training is complete, these language models can be modified so the intermediary vector representation becomes the model's output. The resulting embeddings are high-dimensional vectors, where words with similar meanings are closer together in the vector space, as explained in [this Azure OpenAI Service article](/azure/cognitive-services/openai/concepts/understand-embeddings).
+*Embeddings* are a specific type of vector representation created by machine learning models that capture the semantic meaning of text, or representations of other content such as images. Natural language machine learning models are trained on large amounts of data to identify patterns and relationships between words. During training, they learn to represent any input as a vector of real numbers in an intermediary step called the *encoder*. After training is complete, these language models can be modified so the intermediary vector representation becomes the model's output. The resulting embeddings are high-dimensional vectors, where words with similar meanings are closer together in the vector space, as explained in [this Azure OpenAI Service article](/azure/ai-services/openai/concepts/understand-embeddings).
The effectiveness of vector search in retrieving relevant information depends on the effectiveness of the embedding model in distilling the meaning of documents and queries into the resulting vector. The best models are well-trained on the types of data they're representing. You can evaluate existing models such as Azure OpenAI text-embedding-ada-002, bring your own model that's trained directly on the problem space, or fine-tune a general-purpose model. Azure Cognitive Search doesn't impose constraints on which model you choose, so pick the best one for your data.
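As a concrete illustration of generating an embedding, the following minimal sketch calls the Azure OpenAI embeddings REST endpoint from Python. The endpoint, deployment name, and API key are placeholders; substitute the values from your own Azure OpenAI resource, and assume the deployment hosts an embedding model such as text-embedding-ada-002.

```python
# Minimal sketch: generate an embedding with the Azure OpenAI embeddings REST API.
# The endpoint, deployment name, and API key are placeholders; substitute the
# values from your own Azure OpenAI resource and embedding deployment.
import requests

endpoint = "https://<your-resource>.openai.azure.com"
deployment = "<your-embedding-deployment>"   # for example, a text-embedding-ada-002 deployment
api_version = "2023-05-15"
api_key = "<your-api-key>"

url = f"{endpoint}/openai/deployments/{deployment}/embeddings?api-version={api_version}"
response = requests.post(
    url,
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={"input": "Azure Cognitive Search supports vector search."},
)
response.raise_for_status()

embedding = response.json()["data"][0]["embedding"]
print(len(embedding))   # text-embedding-ada-002 returns 1,536-dimensional vectors
```

The returned list of floats is what you would store in a vector field of the search index.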
search Vector Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md
If a query request is about dogs, the model maps the query into a vector that ex
Commonly used similarity metrics include `cosine`, `euclidean` (also known as `l2 norm`), and `dotProduct`, which are summarized here:
-+ `cosine` calculates the angle between two vectors. Cosine is the similarity metric used by [Azure OpenAI embedding models](/azure/cognitive-services/openai/concepts/understand-embeddings#cosine-similarity).
++ `cosine` calculates the angle between two vectors. Cosine is the similarity metric used by [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/understand-embeddings#cosine-similarity). + `euclidean` calculates the Euclidean distance between two vectors, which is the l2-norm of the difference of the two vectors.
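To make the three metrics concrete, here's a purely illustrative Python example that computes each of them for two short vectors; real embeddings have far more dimensions, but the arithmetic is the same.

```python
# Purely illustrative: compute dotProduct, euclidean (l2 norm of the difference),
# and cosine similarity for two short vectors.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.5])

dot_product = float(np.dot(a, b))
euclidean = float(np.linalg.norm(a - b))
cosine = dot_product / float(np.linalg.norm(a) * np.linalg.norm(b))

print(f"dotProduct={dot_product:.3f}  euclidean={euclidean:.3f}  cosine={cosine:.4f}")
```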
sentinel Map Data Fields To Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/map-data-fields-to-entities.md
The procedure detailed below is part of the analytics rule creation wizard. It's
1. Select an **identifier** for the entity. Identifiers are attributes of an entity that can sufficiently identify it. Choose one from the **Identifier** drop-down list, and then choose a data field from the **Value** drop-down list that will correspond to the identifier. With some exceptions, the **Value** list is populated by the data fields in the table defined as the subject of the rule query.
- You can define **up to three identifiers** for a given entity. Some identifiers are required, others are optional. You must choose at least one required identifier. If you don't, a warning message will instruct you which identifiers are required. For best results - for maximum unique identification - you should use **strong identifiers** whenever possible, and using multiple strong identifiers will enable greater correlation between data sources. See the full list of available [entities and identifiers](entities-reference.md).
+ You can define **up to three identifiers** for a given entity mapping. Some identifiers are required, others are optional. You must choose at least one required identifier. If you don't, a warning message will instruct you which identifiers are required. For best results&mdash;for maximum unique identification&mdash;you should use **strong identifiers** whenever possible, and using multiple strong identifiers will enable greater correlation between data sources. See the full list of available [entities and identifiers](entities-reference.md).
:::image type="content" source="media/map-data-fields-to-entities/map-entities.png" alt-text="Map fields to entities":::
-1. Click **Add new entity** to map more entities. You can map **up to five entities** in a single analytics rule. You can also map more than one of the same type. For example, you can map two **IP** entities, one from a *source IP address* field and one from a *destination IP address* field. This way you can track them both.
+1. Select **Add new entity** to map more entities. You can define **up to ten entity mappings** in a single analytics rule. You can also map more than one of the same type. For example, you can map two **IP** entities, one from a *source IP address* field and one from a *destination IP address* field. This way you can track them both.
If you change your mind, or if you made a mistake, you can remove an entity mapping by clicking the trash can icon next to the entity drop-down list. 1. When you have finished mapping entities, click the **Review and create** tab. Once the rule validation is successful, click **Save**. > [!NOTE]
-> - **Each mapped entity can identify *up to ten entities***.
-> - If an alert contains more than ten items that correspond to a single entity mapping, only the first ten will be recognized as entities and be able to be analyzed as such.
-> - This limitation applies to actual mappings, not to entity types. So if you have three different mapped entities for IP addresses (say, source, destination, and gateway), each of those mappings can accommodate ten entities.
+> - ***Up to 500 entities collectively* can be identified in a single alert, divided equally across all entity mappings defined in the rule**.
+> - For example, if two entity mappings are defined in the rule, each mapping can identify up to 250 entities; if five mappings are defined, each one can identify up to 100 entities, and so on.
+> - Multiple mappings of a single entity type (say, source IP and destination IP) each count separately.
+> - If an alert contains items in excess of this limit, those excess items will not be recognized and extracted as entities.
>
-> - **The size limit for an entire alert is *64 KB***.
-> - Alerts that grow larger than 64 KB will be truncated. As entities are identified, they are added to the alert one by one until the alert size reaches 64 KB, and any remaining entities are dropped from the alert.
+> - **The size limit for the entire *entities* area of an alert (the *Entities* field) is *64 KB***.
+> - *Entities* fields that grow larger than 64 KB will be truncated. As entities are identified, they are added to the alert one by one until the field size reaches 64 KB, and any remaining entities are dropped from the alert.
## Notes on the new version
In this document, you learned how to map data fields to entities in Microsoft Se
- Get the complete picture on [scheduled query analytics rules](detect-threats-custom.md). - Learn more about [entities in Microsoft Sentinel](entities.md).+++
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
See these [important announcements](#announcements) about recent changes to feat
## July 2023
+- [Higher limits for entities in alerts and entity mappings in analytics rules](#higher-limits-for-entities-in-alerts-and-entity-mappings-in-analytics-rules)
- Announcement: [Changes to Microsoft Defender for Office 365 connector alerts that apply when disconnecting and reconnecting](#changes-to-microsoft-defender-for-office-365-connector-alerts-that-apply-when-disconnecting-and-reconnecting) - [Content Hub generally available and centralization changes released](#content-hub-generally-available-and-centralization-changes-released) - [Deploy incident response playbooks for SAP](#deploy-incident-response-playbooks-for-sap)
See these [important announcements](#announcements) about recent changes to feat
- [Simplified pricing tiers](#simplified-pricing-tiers) in [Announcements](#announcements) section below - [Monitor and optimize the execution of your scheduled analytics rules (Preview)](#monitor-and-optimize-the-execution-of-your-scheduled-analytics-rules-preview)
+### Higher limits for entities in alerts and entity mappings in analytics rules
+
+The following limits on entities in alerts and entity mappings in analytics rules have been raised:
+- You can now define **up to ten entity mappings** in an analytics rule (up from five).
+- A single alert can now contain **up to 500 identified entities** in total, divided equally among the entity mappings defined in the rule.
+- The *Entities* field in the alert has a **size limit of 64 KB**. (This size limit previously applied to the entire alert record.)
+
+Learn more about entity mapping, and see a full description of these limits, in [Map data fields to entities in Microsoft Sentinel](map-data-fields-to-entities.md).
+
+Learn about other [service limits in Microsoft Sentinel](sentinel-service-limits.md).
+ ### Content Hub generally available and centralization changes released Content hub is now generally available (GA)! The [content hub centralization changes announced in February](#out-of-the-box-content-centralization-changes) have also been released. For more information on these changes and their impact, including more details about the tool provided to reinstate **IN USE** gallery templates, see [Out-of-the-box (OOTB) content centralization changes](sentinel-content-centralize.md).
service-fabric How To Managed Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-connect.md
Last updated 07/11/2022
Once you have deployed a cluster via [Portal, Azure Resource Managed template](quickstart-managed-cluster-template.md), or [PowerShell](tutorial-managed-cluster-deploy.md) there are various ways to connect to and view your managed cluster.
+Connecting to a Service Fabric Explorer (SFX) endpoint on a managed cluster results in the certificate error 'NET::ERR_CERT_AUTHORITY_INVALID', regardless of the certificate used or the cluster configuration. This behavior is by design: the cluster nodes use the managed 'cluster' certificate when binding the FabricGateway (19000) and FabricHttpGateway (19080) TCP ports.
+
+![Screenshot of Service Fabric Explorer certificate error.](media/how-to-managed-cluster-connect/sfx-your-connection-isnt-private.png)
+ ## Use the Azure portal To navigate to your managed cluster resource:
service-fabric Quickstart Managed Cluster Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-managed-cluster-template.md
Once the deployment completes, find the Service Fabric Explorer value in the out
> [!NOTE] > You can find the output of the deployment in Azure Portal under the resource group deployments tab.
+Connecting to a Service Fabric Explorer (SFX) endpoint on a managed cluster results in the certificate error 'NET::ERR_CERT_AUTHORITY_INVALID', regardless of the certificate used or the cluster configuration. This behavior is by design: the cluster nodes use the managed 'cluster' certificate when binding the FabricGateway (19000) and FabricHttpGateway (19080) TCP ports.
+
+![Screenshot of Service Fabric Explorer certificate error.](media/how-to-managed-cluster-connect/sfx-your-connection-isnt-private.png)
+ ## Clean up resources When no longer needed, delete the resource group for your Service Fabric managed cluster. To delete the resource group through the portal:
service-fabric Service Fabric Tutorial Standalone Azure Create Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-standalone-azure-create-infrastructure.md
In part one of the series, you learn how to:
## Prerequisites
-To complete this tutorial, you need an Azure subscription. If you don't already have an account, go to the [Azure portal](https://portal.azure.com) to create one.
+To complete this tutorial, you need an Azure subscription. If you don't already have an account, create an account using the [Azure portal](https://portal.azure.com).
## Create Azure Virtual Machine instances
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
The following articles help you get started using the Standard consumption and d
* [Provision an Azure Spring Standard consumption and dedicated plan service instance](quickstart-provision-standard-consumption-service-instance.md) * [Create an Azure Spring Apps Standard consumption and dedicated plan instance in an Azure Container Apps environment with a virtual network](quickstart-provision-standard-consumption-app-environment-with-virtual-network.md) * [Access applications using Azure Spring Apps Standard consumption and dedicated plan in a virtual network](quickstart-access-standard-consumption-within-virtual-network.md)
-* [Deploy an event-driven application to Azure Spring Apps with the Standard consumption and dedicated plan](quickstart-deploy-event-driven-app-standard-consumption.md)
+* [Deploy an event-driven application to Azure Spring Apps](quickstart-deploy-event-driven-app.md)
* [Set up autoscale for applications in Azure Spring Apps Standard consumption and dedicated plan](quickstart-apps-autoscale-standard-consumption.md) * [Map a custom domain to Azure Spring Apps with the Standard consumption and dedicated plan](quickstart-standard-consumption-custom-domain.md) * [Analyze logs and metrics in the Azure Spring Apps Standard consumption and dedicated plan](quickstart-analyze-logs-and-metrics-standard-consumption.md)
spring-apps Quickstart Deploy Event Driven App Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-event-driven-app-standard-consumption.md
- Title: Quickstart - Deploy event-driven application to Azure Spring Apps
-description: Learn how to deploy an event-driven application to Azure Spring Apps.
--- Previously updated : 06/21/2023--
-zone_pivot_groups: spring-apps-plan-selection
--
-# Quickstart: Deploy an event-driven application to Azure Spring Apps
-
-> [!NOTE]
-> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-This article explains how to deploy a Spring Boot event-driven application to Azure Spring Apps.
-
-The sample project is an event-driven application that subscribes to a [Service Bus queue](../service-bus-messaging/service-bus-queues-topics-subscriptions.md#queues) named `lower-case`, and then handles the message and sends another message to another queue named `upper-case`. To make the app simple, message processing just converts the message to uppercase. The following diagram depicts this process:
--
-## Prerequisites
--- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.---- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise tier in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).---- [Azure CLI](/cli/azure/install-azure-cli). Version 2.45.0 or greater. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`-- [Git](https://git-scm.com/downloads).-- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-
-## Clone and build the sample project
-
-Use the following steps to prepare the sample locally.
-
-1. The sample project is ready on GitHub. Clone sample project by using the following command:
-
- ```bash
- git clone https://github.com/Azure-Samples/ASA-Samples-Event-Driven-Application.git
- ```
-
-1. Build the sample project by using the following commands:
-
- ```bash
- cd ASA-Samples-Event-Driven-Application
- ./mvnw clean package -DskipTests
- ```
-
-## Prepare the cloud environment
-
-The main resources you need to run this sample is an Azure Spring Apps instance and an Azure Service Bus instance. Use the following steps to create these resources.
--
-1. Use the following commands to create variables for the names of your resources and for other settings as needed. Resource names in Azure must be unique.
-
- ```azurecli
- export RESOURCE_GROUP=<event-driven-app-resource-group-name>
- export LOCATION=<desired-region>
- export SERVICE_BUS_NAME_SPACE=<event-driven-app-service-bus-namespace>
- export AZURE_CONTAINER_APPS_ENVIRONMENT=<Azure-Container-Apps-environment-name>
- export AZURE_SPRING_APPS_INSTANCE=<Azure-Spring-Apps-instance-name>
- export APP_NAME=<event-driven-app-name>
- ```
---
-1. Use the following commands to create variables for the names of your resources and for other settings as needed. Resource names in Azure must be unique.
-
- ```azurecli
- export RESOURCE_GROUP=<event-driven-app-resource-group-name>
- export LOCATION=<desired-region>
- export SERVICE_BUS_NAME_SPACE=<event-driven-app-service-bus-namespace>
- export AZURE_SPRING_APPS_INSTANCE=<Azure-Spring-Apps-instance-name>
- export APP_NAME=<event-driven-app-name>
- ```
--
-2. Use the following command to sign in to Azure:
-
- ```azurecli
- az login
- ```
-
-1. Use the following command to set the default location:
-
- ```azurecli
- az configure --defaults location=${LOCATION}
- ```
-
-1. Use the following command to list all available subscriptions, then determine the ID for the subscription you want to use.
-
- ```azurecli
- az account list --output table
- ```
-
-1. Use the following command to set your default subscription:
-
- ```azurecli
- az account set --subscription <subscription-ID>
- ```
-
-1. Use the following command to create a resource group:
-
- ```azurecli
- az group create --resource-group ${RESOURCE_GROUP}
- ```
-
-1. Use the following command to set the newly created resource group as the default resource group.
-
- ```azurecli
- az configure --defaults group=${RESOURCE_GROUP}
- ```
-
-## Create a Service Bus instance
-
-Use the following command to create a Service Bus namespace:
-
-```azurecli
-az servicebus namespace create --name ${SERVICE_BUS_NAME_SPACE}
-```
-
-## Create queues in your Service Bus instance
-
-Use the following commands to create two queues named `lower-case` and `upper-case`:
-
-```azurecli
-az servicebus queue create \
- --namespace-name ${SERVICE_BUS_NAME_SPACE} \
- --name lower-case
-az servicebus queue create \
- --namespace-name ${SERVICE_BUS_NAME_SPACE} \
- --name upper-case
-```
--
-## Create an Azure Container Apps environment
-
-The Azure Container Apps environment creates a secure boundary around a group of applications. Apps deployed to the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace.
-
-Use the following steps to create the environment:
-
-1. Use the following command to install the Azure Container Apps extension for the Azure CLI:
-
- ```azurecli
- az extension add --name containerapp --upgrade
- ```
-
-1. Use the following command to register the `Microsoft.App` namespace:
-
- ```azurecli
- az provider register --namespace Microsoft.App
- ```
-
-1. If you haven't previously used the Azure Monitor Log Analytics workspace, register the `Microsoft.OperationalInsights` provider by using the following command:
-
- ```azurecli
- az provider register --namespace Microsoft.OperationalInsights
- ```
-
-1. Use the following command to create the environment:
-
- ```azurecli
- az containerapp env create --name ${AZURE_CONTAINER_APPS_ENVIRONMENT} --enable-workload-profiles
- ```
--
-## Create the Azure Spring Apps instance
-
-An Azure Spring Apps service instance hosts the Spring event-driven app. Use the following steps to create the service instance and then create an app inside the instance.
-
-1. Use the following command to install the Azure CLI extension designed for Azure Spring Apps:
-
- ```azurecli
- az extension remove --name spring && \
- az extension add --name spring
- ```
--
-2. Use the following command to register the `Microsoft.AppPlatform` provider for Azure Spring Apps:
-
- ```azurecli
- az provider register --namespace Microsoft.AppPlatform
- ```
-
-1. Get the Azure Container Apps environment resource ID by using the following command:
-
- ```azurecli
- export MANAGED_ENV_RESOURCE_ID=$(az containerapp env show \
- --name ${AZURE_CONTAINER_APPS_ENVIRONMENT} \
- --query id \
- --output tsv)
- ```
-
-1. Use the following command to create your Azure Spring Apps instance, specifying the resource ID of the Azure Container Apps environment you created.
-
- ```azurecli
- az spring create \
- --name ${AZURE_SPRING_APPS_INSTANCE} \
- --managed-environment ${MANAGED_ENV_RESOURCE_ID} \
- --sku standardGen2
- ```
---
-2. Use the following command to create your Azure Spring Apps instance:
-
- ```azurecli
- az spring create --name ${AZURE_SPRING_APPS_INSTANCE}
- ```
---
-2. Use the following command to create your Azure Spring Apps instance:
-
- ```azurecli
- az spring create \
- --name ${AZURE_SPRING_APPS_INSTANCE} \
- --sku Enterprise
- ```
--
-## Create an app in your Azure Spring Apps instance
--
-The following sections show you how to create an app in either the standard consumption or dedicated workload profiles.
-
-> [!IMPORTANT]
-> The Consumption workload profile has a pay-as-you-go billing model, with no starting cost. You're billed for the dedicated workload profile based on the provisioned resources. For more information, see [Workload profiles in Consumption + Dedicated plan structure environments in Azure Container Apps (preview)](../container-apps/workload-profiles-overview.md) and [Azure Spring Apps pricing](https://azure.microsoft.com/pricing/details/spring-apps/).
-
-### Create an app with the consumption workload profile
-
-Use the following command to create an app in the Azure Spring Apps instance:
-
-```azurecli
-az spring app create \
- --service ${AZURE_SPRING_APPS_INSTANCE} \
- --name ${APP_NAME} \
- --cpu 1 \
- --memory 2 \
- --min-replicas 2 \
- --max-replicas 2 \
- --runtime-version Java_17 \
- --assign-endpoint true
-```
-
-### (Optional) Create an app with the dedicated workload profile
-
->[!NOTE]
-> This step is optional. Use this step only if you wish to create apps in the dedicated workload profile.
-
-Dedicated workload profiles support running apps with customized hardware and increased cost predictability. Use the following command to create a dedicated workload profile:
-
-```azurecli
-az containerapp env workload-profile set \
- --name ${AZURE_CONTAINER_APPS_ENVIRONMENT} \
- --workload-profile-name my-wlp \
- --workload-profile-type D4 \
- --min-nodes 1 \
- --max-nodes 2
-```
-
-Then, use the following command to create an app with the dedicated workload profile:
-
-```azurecli
-az spring app create \
- --service ${AZURE_SPRING_APPS_INSTANCE} \
- --name ${APP_NAME} \
- --cpu 1 \
- --memory 2Gi \
- --min-replicas 2 \
- --max-replicas 2 \
- --runtime-version Java_17 \
- --assign-endpoint true \
- --workload-profile my-wlp
-```
---
-Create an app in the Azure Spring Apps instance by using the following command:
---
-```azurecli
-az spring app create \
- --service ${AZURE_SPRING_APPS_INSTANCE} \
- --name ${APP_NAME} \
- --runtime-version Java_17 \
- --assign-endpoint true
-```
---
-```azurecli
-az spring app create \
- --service ${AZURE_SPRING_APPS_INSTANCE} \
- --name ${APP_NAME} \
- --assign-endpoint true
-```
--
-## Bind the Service Bus to Azure Spring Apps and deploy the app
-
-You've now created both the Service Bus and the app in Azure Spring Apps, but the app can't connect to the Service Bus. Use the following steps to enable the app to connect to the Service Bus, and then deploy the app.
-
-1. Get the Service Bus's connection string by using the following command:
-
- ```azurecli
- export SERVICE_BUS_CONNECTION_STRING=$(az servicebus namespace authorization-rule keys list \
- --namespace-name ${SERVICE_BUS_NAME_SPACE} \
- --name RootManageSharedAccessKey \
- --query primaryConnectionString \
- --output tsv)
- ```
-
-1. Use the following command to provide the connection string to the app through an environment variable.
-
- ```azurecli
- az spring app update \
- --service ${AZURE_SPRING_APPS_INSTANCE} \
- --name ${APP_NAME} \
- --env SERVICE_BUS_CONNECTION_STRING=${SERVICE_BUS_CONNECTION_STRING}
- ```
-
-1. Now the cloud environment is ready. Deploy the app by using the following command.
-
- ```azurecli
- az spring app deploy \
- --service ${AZURE_SPRING_APPS_INSTANCE} \
- --name ${APP_NAME} \
- --artifact-path target/simple-event-driven-app-0.0.1-SNAPSHOT.jar
- ```
-
-## Validate the event-driven app
-
-Use the following steps to confirm that the event-driven app works correctly. You can validate the app by sending a message to the `lower-case` queue, then confirming that there's a message in the `upper-case` queue.
-
-1. Send a message to `lower-case` queue with Service Bus Explorer. For more information, see the [Send a message to a queue or topic](../service-bus-messaging/explorer.md#send-a-message-to-a-queue-or-topic) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
-
-1. Confirm that there's a new message sent to the `upper-case` queue. For more information, see the [Peek a message](../service-bus-messaging/explorer.md#peek-a-message) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
-
-## Clean up resources
-
-Be sure to delete the resources you created in this article when you no longer need them. To delete the resources, just delete the resource group that contains them. You can delete the resource group using the Azure portal. Alternatively, to delete the resource group by using Azure CLI, use the following commands:
-
-```azurecli
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
-```
-
-## Next steps
--
-To learn how to use more Azure Spring capabilities, advance to the quickstart series that deploys a sample application to Azure Spring Apps:
-
-> [!div class="nextstepaction"]
-> [Introduction to the sample app](./quickstart-sample-app-introduction.md)
---
-To learn how to set up autoscale for applications in Azure Spring Apps Standard consumption plan, advance to this next quickstart:
-
-> [!div class="nextstepaction"]
-> [Set up autoscale for applications in Azure Spring Apps Standard consumption and dedicated plan](./quickstart-apps-autoscale-standard-consumption.md)
--
-For more information, see the following articles:
--- [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).-- [Spring on Azure](/azure/developer/java/spring/)-- [Spring Cloud Azure](/azure/developer/java/spring-framework/)
spring-apps Quickstart Deploy Event Driven App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-event-driven-app.md
+
+ Title: Quickstart - Deploy event-driven application to Azure Spring Apps
+description: Learn how to deploy an event-driven application to Azure Spring Apps.
+++ Last updated : 07/19/2023++
+zone_pivot_groups: spring-apps-plan-selection
++
+# Quickstart: Deploy an event-driven application to Azure Spring Apps
+
+> [!NOTE]
+> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+This article explains how to deploy a Spring Boot event-driven application to Azure Spring Apps.
+
+The sample project is an event-driven application that subscribes to a [Service Bus queue](../service-bus-messaging/service-bus-queues-topics-subscriptions.md#queues) named `lower-case`, handles each message, and then sends a new message to another queue named `upper-case`. To keep the app simple, message processing just converts the message to uppercase. The following diagram depicts this process:
+++++
+## 1. Prerequisites
++
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
+++
+### [Azure portal](#tab/Azure-portal)
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+
+### [Azure Developer CLI](#tab/Azure-Developer-CLI)
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure Developer CLI (AZD)](https://aka.ms/azd-install), version 1.0.2 or higher.
+++++
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
+++++++++++
+## 5. Validate the app
+
+Use the following steps to confirm that the event-driven app works correctly. You can validate the app by sending a message to the `lower-case` queue and then confirming that a message arrives in the `upper-case` queue. A programmatic alternative using the Service Bus SDK is sketched at the end of this section.
+
+1. Send a message to the `lower-case` queue with Service Bus Explorer. For more information, see the [Send a message to a queue or topic](../service-bus-messaging/explorer.md#send-a-message-to-a-queue-or-topic) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
+
+1. Confirm that there's a new message sent to the `upper-case` queue. For more information, see the [Peek a message](../service-bus-messaging/explorer.md#peek-a-message) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
++
+3. Use the following command to check the app's logs and investigate any deployment issues:
+
+ ```azurecli
+ az spring app logs \
+ --service ${AZURE_SPRING_APPS_INSTANCE} \
+ --name ${APP_NAME}
+ ```
+++
+3. From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs.
+
+ :::image type="content" source="media/quickstart-deploy-event-driven-app/logs.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps Logs page." lightbox="media/quickstart-deploy-event-driven-app/logs.png":::
+++
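If you prefer to validate the queues from code rather than through Service Bus Explorer, the following hedged sketch sends a test message to `lower-case` and peeks `upper-case` with the `azure-servicebus` Python package. It assumes you've exported the namespace connection string locally as the `SERVICE_BUS_CONNECTION_STRING` environment variable.

```python
# Hedged sketch (not part of the official quickstart steps): send a test message to
# the lower-case queue and peek the upper-case queue with the azure-servicebus package.
# Assumes the namespace connection string is exported as SERVICE_BUS_CONNECTION_STRING.
import os
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = os.environ["SERVICE_BUS_CONNECTION_STRING"]

with ServiceBusClient.from_connection_string(conn_str) as client:
    # Send a lowercase test message for the app to pick up.
    with client.get_queue_sender("lower-case") as sender:
        sender.send_messages(ServiceBusMessage("hello"))

    # Peek the destination queue; expect "HELLO" once the app has processed the message.
    with client.get_queue_receiver("upper-case") as receiver:
        for message in receiver.peek_messages(max_message_count=5):
            print(str(message))
```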
+## 7. Next steps
+
+> [!div class="nextstepaction"]
+> [Structured application log for Azure Spring Apps](./structured-app-log.md)
+
+> [!div class="nextstepaction"]
+> [Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md)
+
+> [!div class="nextstepaction"]
+> [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
+
+> [!div class="nextstepaction"]
+> [Set up Azure Spring Apps CI/CD with Azure DevOps](./how-to-cicd.md)
+
+> [!div class="nextstepaction"]
+> [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md)
+
+> [!div class="nextstepaction"]
+> [Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md)
++
+> [!div class="nextstepaction"]
+> [Run microservice apps (Pet Clinic)](./quickstart-sample-app-introduction.md)
+++
+> [!div class="nextstepaction"]
+> [Run polyglot apps on Enterprise plan (ACME Fitness Store)](./quickstart-sample-app-acme-fitness-store-introduction.md)
++
+For more information, see the following articles:
+
+- [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+- [Spring on Azure](/azure/developer/java/spring/)
+- [Spring Cloud Azure](/azure/developer/java/spring-framework/)
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-portal.md
Here is what the condition looks like in code:
## Step 6: Test the condition
-1. In a new window, open the [Azure portal](https://portal.azure.com).
+1. In a new window, sign in to the [Azure portal](https://portal.azure.com).
1. Sign in as the user you created earlier.
storage Storage Blob Static Website Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website-host.md
After you install Visual Studio Code, install the Azure Storage preview extensio
## Sign in to the Azure portal
-Sign in to the [Azure portal](https://portal.azure.com/) to get started.
+Sign in to the [Azure portal](https://portal.azure.com) to get started.
## Configure static website hosting The first step is to configure your storage account to host a static website in the Azure portal. When you configure your account for static website hosting, Azure Storage automatically creates a container named *$web*. The *$web* container will contain the files for your static website.
-1. Open the [Azure portal](https://portal.azure.com/) in your web browser.
+1. Sign in to the [Azure portal](https://portal.azure.com) in your web browser.
1. Locate your storage account and display the account overview. 1. Select **Static website** to display the configuration page for static websites. 1. Select **Enabled** to enable static website hosting for the storage account.
stream-analytics Stream Analytics Test Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-test-query.md
# Test an Azure Stream Analytics job in the portal
-In Azure Stream Analytics, you can test your query without starting or stopping your job. You can test queries on incoming data from your streaming sources or upload sample data from a local file on Azure Portal. You can also test queries locally from your local sample data or live data in [Visual Studio](stream-analytics-live-data-local-testing.md) and [Visual Studio Code](visual-studio-code-local-run-live-input.md).
+In Azure Stream Analytics, you can test your query without starting or stopping your job. You can test queries on incoming data from your streaming sources or upload sample data from a local file in the Azure portal. You can also test queries locally against sample data or live data in [Visual Studio](stream-analytics-live-data-local-testing.md) and [Visual Studio Code](visual-studio-code-local-run-live-input.md).
## Automatically sample incoming data from input
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
description: Archive of the new features and documentation improvements for Azur
Previously updated : 04/04/2023 Last updated : 07/21/2023
The following updates are new to Azure Synapse Analytics this month.
### Machine Learning
-* The Synapse Machine Learning library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--463873803) [article](https://microsoft.github.io/SynapseML/docs/about/)
+* The Synapse Machine Learning library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--463873803) [article](https://microsoft.github.io/SynapseML/docs/Overview/)
* Getting started with state-of-the-art pre-built intelligent models [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-2023639030) [article](./machine-learning/tutorial-form-recognizer-use-mmlspark.md)
-* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/features/responsible_ai/Model%20Interpretation%20on%20Spark/)
+* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/Responsible%20AI/Interpreting%20Model%20Predictions/)
* PREDICT is now GA for Synapse Dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1594404878) [article](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md) * Simple & scalable scoring with PREDICT and MLFlow for Apache Spark for Synapse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--213049585) [article](./machine-learning/tutorial-score-model-predict-spark-pool.md) * Retail AI solutions [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--2020504048) [article](./machine-learning/quickstart-industry-ai-solutions.md)
The following updates are new to Azure Synapse Analytics this month.
### Machine Learning
-* The Synapse Machine Learning library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--463873803) [article](https://microsoft.github.io/SynapseML/docs/about/)
+* The Synapse Machine Learning library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--463873803) [article](https://microsoft.github.io/SynapseML/docs/Overview/)
* Getting started with state-of-the-art pre-built intelligent models [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-2023639030) [article](./machine-learning/tutorial-form-recognizer-use-mmlspark.md)
-* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/features/responsible_ai/Model%20Interpretation%20on%20Spark/)
+* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/Responsible%20AI/Interpreting%20Model%20Predictions/)
* PREDICT is now GA for Synapse Dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1594404878) [article](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md) * Simple & scalable scoring with PREDICT and MLFlow for Apache Spark for Synapse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--213049585) [article](./machine-learning/tutorial-score-model-predict-spark-pool.md) * Retail AI solutions [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--2020504048) [article](./machine-learning/quickstart-industry-ai-solutions.md)
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
description: Learn about the new features and documentation improvements for Azu
Previously updated : 07/05/2023 Last updated : 07/21/2023
This page is continuously updated with a recent review of what's new in [Azure S
For older updates, review past [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) posts or [previous updates in Azure Synapse Analytics](whats-new-archive.md). > [!IMPORTANT]
-> [Microsoft Fabric has been announced](https://azure.microsoft.com/blog/introducing-microsoft-fabric-data-analytics-for-the-era-of-ai/)! Learn about this exciting new preview and discover [What is Microsoft Fabric?](/fabric/get-started/microsoft-fabric-overview). Get started with [end-to-end tutorials in Microsoft Fabric](/fabric/get-started/end-to-end-tutorials).
+> [Microsoft Fabric has been announced!](https://azure.microsoft.com/blog/introducing-microsoft-fabric-data-analytics-for-the-era-of-ai/)
+> - Learn about this exciting new preview and discover [What is Microsoft Fabric?](/fabric/get-started/microsoft-fabric-overview)
+> - Get started with [end-to-end tutorials in Microsoft Fabric](/fabric/get-started/end-to-end-tutorials).
+> - See [What's new in Microsoft Fabric?](/fabric/get-started/whats-new)
## Features currently in preview
This section summarizes recent new features and improvements to machine learning
| March 2023 | **Using OpenAI GPT in Synapse Analytics** | Microsoft offers Azure OpenAI as an Azure Cognitive Service, and you can [access Azure OpenAI's GPT models from within Synapse Spark](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/using-openai-gpt-in-synapse-analytics/ba-p/3751815). | | November 2022 | **R Support (preview)** | Azure Synapse Analytics [now provides built-in R support for Apache Spark](./spark/apache-spark-r-language.md), currently in preview. For an example, [install an R library from CRAN and CRAN snapshots](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_16). | | August 2022 | **SynapseML v.0.10.0** | New [release of SynapseML v0.10.0](https://github.com/microsoft/SynapseML/releases/tag/v0.10.0) (previously MMLSpark), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. Learn more about the [latest additions to SynapseML](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/exciting-new-release-of-synapseml/ba-p/3589606) and get started with [SynapseML](https://aka.ms/spark).|
-| August 2022 | **.NET support** | SynapseML v0.10 [adds full support for .NET languages](https://devblogs.microsoft.com/dotnet/announcing-synapseml-for-dotnet/) like C# and F#. For a .NET SynapseML example, see [.NET Example with LightGBMClassifier](https://microsoft.github.io/SynapseML/docs/getting_started/dotnet_example/).|
-| August 2022 | **Azure Open AI Service support** | SynapseML now allows users to tap into 175-Billion parameter language models (GPT-3) from OpenAI that can generate and complete text and code near human parity. For more information, see [Azure OpenAI for Big Data](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20OpenAI/).|
-| August 2022 | **MLflow platform support** | SynapseML models now integrate with [MLflow](https://microsoft.github.io/SynapseML/docs/mlflow/introduction/) with full support for saving, loading, deployment, and [autologging](https://microsoft.github.io/SynapseML/docs/mlflow/autologging/).|
+| August 2022 | **.NET support** | SynapseML v0.10 [adds full support for .NET languages](https://devblogs.microsoft.com/dotnet/announcing-synapseml-for-dotnet/) like C# and F#. For a .NET SynapseML example, see [.NET Example with LightGBMClassifier](https://microsoft.github.io/SynapseML/docs/Reference/Quickstart%20-%20LightGBM%20in%20Dotnet/).|
+| August 2022 | **Azure Open AI Service support** | SynapseML now allows users to tap into 175-Billion parameter language models (GPT-3) from OpenAI that can generate and complete text and code near human parity. For more information, see [Azure OpenAI for Big Data](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/OpenAI/).|
+| August 2022 | **MLflow platform support** | SynapseML models now integrate with [MLflow](https://microsoft.github.io/SynapseML/docs/Use%20with%20MLFlow/Overview/) with full support for saving, loading, deployment, and [autologging](https://microsoft.github.io/SynapseML/docs/Use%20with%20MLFlow/Autologging/).|
| August 2022 | **SynapseML in Binder** | We know that Spark can be intimidating for first users but fear not because with the technology Binder, you can [explore and experiment with SynapseML in Binder](https://mybinder.org/v2/gh/microsoft/SynapseML/93d7ccf?labpath=notebooks%2Ffeatures) with zero setup, install, infrastructure, or Azure account required.| | June 2022 | **Distributed Deep Neural Network Training (preview)** | The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in preview. The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 also now includes support for the most common deep learning libraries like TensorFlow and PyTorch. To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). |
traffic-manager Tutorial Traffic Manager Improve Website Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/tutorial-traffic-manager-improve-website-response.md
In order to see the Traffic Manager in action, this tutorial requires that you d
### Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
### Create websites
traffic-manager Tutorial Traffic Manager Subnet Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/tutorial-traffic-manager-subnet-routing.md
The test VMs are used to illustrate how Traffic Manager routes user traffic to t
### Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
### Create websites
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
To assign the *Desktop Virtualization Power On Off Contributor* role with the Az
Now that you've assigned the *Desktop Virtualization Power On Off Contributor* role to the service principal on your subscriptions, you can create a scaling plan. To create a scaling plan:
-1. Open the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
Once you're done, go to the **Review + create** tab and select **Create** to dep
To edit an existing scaling plan:
-1. Open the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
virtual-desktop Sandbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/sandbox.md
To publish Windows Sandbox to your host pool using PowerShell:
1. Connect to Azure using one of the following methods: - Open a PowerShell prompt on your local device. Run the `Connect-AzAccount` cmdlet to sign in to your Azure account. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
- - Sign in to [the Azure portal](https://portal.azure.com/) and open [Azure Cloud Shell](../cloud-shell/overview.md) with PowerShell as the shell type.
+ - Sign in to the [Azure portal](https://portal.azure.com) and open [Azure Cloud Shell](../cloud-shell/overview.md) with PowerShell as the shell type.
1. Run the following cmdlet to get a list of all the Azure tenants your account has access to:
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
# Monitoring Virtual WAN data reference
-This article provides a reference of log and metric data collected to analyze the performance and availability of Virtual WAN. See [Monitoring Virtual WAN](monitor-virtual-wan.md) for details on collecting and analyzing monitoring data for Virtual WAN.
+This article provides a reference of log and metric data collected to analyze the performance and availability of Virtual WAN. See [Monitoring Virtual WAN](monitor-virtual-wan.md) for instructions and additional context on monitoring data for Virtual WAN.
## <a name="metrics"></a>Metrics
-Metrics in Azure Monitor are numerical values that describe some aspect of a system at a particular time. Metrics are collected every minute, and are useful for alerting because they can be sampled frequently. An alert can be fired quickly with relatively simple logic.
- ### <a name="hub-router-metrics"></a>Virtual hub router metrics The following metric is available for virtual hub router within a virtual hub:
The following metrics are available for Azure ExpressRoute gateways:
| **Count of routes advertised to peer**| Count of Routes Advertised to Peer by ExpressRoute gateway. | | **Count of routes learned from peer**| Count of Routes Learned from Peer by ExpressRoute gateway.| | **Frequency of routes changed** | Frequency of Route changes in ExpressRoute gateway.|
-| **Number of VMs in Virtual Network**| Number of VMs that use this ExpressRoute gateway.|
-
-### <a name="metrics-steps"></a>View gateway metrics
-
-The following steps help you locate and view metrics:
-
-1. In the portal, navigate to the virtual hub that has the gateway.
-
-1. Select **VPN (Site to site)** to locate a site-to-site gateway, **ExpressRoute** to locate an ExpressRoute gateway, or **User VPN (Point to site)** to locate a point-to-site gateway.
-
-1. Select **Metrics**.
-
- :::image type="content" source="./media/monitor-virtual-wan-reference/view-metrics.png" alt-text="Screenshot shows a site to site VPN pane with View in Azure Monitor selected." lightbox="./media/monitor-virtual-wan-reference/view-metrics.png":::
-
-1. On the **Metrics** page, you can view the metrics that you're interested in.
-
- :::image type="content" source="./media/monitor-virtual-wan-reference/metrics-page.png" alt-text="Screenshot that shows the 'Metrics' page with the categories highlighted." lightbox="./media/monitor-virtual-wan-reference/metrics-page.png":::
## <a name="diagnostic"></a>Diagnostic logs
-The following diagnostic logs are available, unless otherwise specified.
+The following diagnostic logs are available, unless otherwise specified.
### <a name="s2s-diagnostic"></a>Site-to-site VPN gateway diagnostics
The following diagnostics are available for Virtual WAN point-to-site VPN gatewa
In Azure Virtual WAN, ExpressRoute gateway metrics can be exported as logs via a diagnostic setting.
-### <a name="view-diagnostic"></a>View diagnostic logs configuration
-
-The following steps help you create, edit, and view diagnostic settings:
-
-1. In the portal, navigate to your Virtual WAN resource, then select **Hubs** in the **Connectivity** group.
-
- :::image type="content" source="./media/monitor-virtual-wan-reference/select-hub.png" alt-text="Screenshot that shows the Hub selection in the vWAN Portal." lightbox="./media/monitor-virtual-wan-reference/select-hub.png":::
-
-1. Under the **Connectivity** group on the left, select the gateway for which you want to examine diagnostics:
-
- :::image type="content" source="./media/monitor-virtual-wan-reference/select-hub-gateway.png" alt-text="Screenshot that shows the Connectivity section for the hub." lightbox="./media/monitor-virtual-wan-reference/select-hub-gateway.png":::
-
-1. On the right part of the page, click on the **View in Azure Monitor** link to the right of **Logs**, then select an option. You can choose to send to Log Analytics, stream to an event hub, or simply archive to a storage account.
-
- :::image type="content" source="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png" alt-text="Screenshot for Select View in Azure Monitor for Logs." lightbox="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png":::
-
-1. In this page, you can create a new diagnostic setting (**+Add diagnostic setting**) or edit an existing one (**Edit setting**). You can choose to send the diagnostic logs to Log Analytics (as shown in the example below), stream to an event hub, send to a 3rd-party solution, or archive to a storage account.
-
- :::image type="content" source="./media/monitor-virtual-wan-reference/select-gateway-settings.png" alt-text="Screenshot for Select Diagnostic Log settings." lightbox="./media/monitor-virtual-wan-reference/select-gateway-settings.png":::
- ### Log Analytics sample query If you selected to send diagnostic data to a Log Analytics Workspace, then you can use SQL-like queries such as the example below to examine the data. For more information, see [Log Analytics Query Language](/services-hub/health/log-analytics-query-language).
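As a hedged alternative to running the query in the portal, the following sketch runs a similar diagnostics query with the `azure-monitor-query` Python package. The workspace ID and the log category shown are placeholders and assumptions; substitute the workspace and the categories you enabled in your own diagnostic setting.

```python
# Hedged sketch: run a diagnostics query programmatically with the azure-monitor-query
# package. The workspace ID and the log category are placeholders for your own setup.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = 'AzureDiagnostics | where Category == "GatewayDiagnosticLog" | take 10'

response = client.query_workspace(
    workspace_id="<your-log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(hours=24),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```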
For Azure Firewall, a [workbook](../firewall/firewall-workbook.md) is provided t
## <a name="activity-logs"></a>Activity logs
-**Activity log** entries are collected by default and can be viewed in the Azure portal. You can use Azure activity logs (formerly known as *operational logs* and *audit logs*) to view all operations submitted to your Azure subscription.
+[**Activity log**](../azure-monitor/essentials/activity-log.md) entries are collected by default and can be viewed in the Azure portal. You can use Azure activity logs (formerly known as *operational logs* and *audit logs*) to view all operations submitted to your Azure subscription.
+
+You can view activity logs independently or route them to Azure Monitor Logs, where you can run much more complex queries using Log Analytics.
For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
For every Azure Virtual WAN you secure and convert to a Secured Hub, an explicit
:::image type="content" source="./media/monitor-virtual-wan-reference/firewall-resources-portal.png" alt-text="Screenshot shows a Firewall resource in the vWAN hub resource group." lightbox="./media/monitor-virtual-wan-reference/firewall-resources-portal.png":::
-Diagnostics and logging configuration must be done from accessing the **Diagnostic Setting** tab:
- ## Next steps
virtual-wan Monitor Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md
Last updated 06/02/2022
# Monitoring Azure Virtual WAN
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability and performance.
This article describes the monitoring data generated by Azure Virtual WAN. Virtual WAN uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
-## Virtual WAN Insights
-
-Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "Insights".
-Virtual WAN uses Network Insights to provide users and operators with the ability to view the state and status of a Virtual WAN, presented via an autodiscovered topological map. Resource state and status overlays on the map give you a snapshot view of the overall health of the Virtual WAN. You can navigate resources on the map via one-click access to the resource configuration pages of the Virtual WAN portal. For more information, see [Azure Monitor Network Insights for Virtual WAN](azure-monitor-insights.md).
+## Prerequisites
+You have created a Virtual WAN setup. For help in deploying a Virtual WAN:
+* [Creating a site-to-site connection](virtual-wan-site-to-site-portal.md)
+* [Creating a User VPN (point-to-site) connection](virtual-wan-point-to-site-portal.md)
+* [Creating an ExpressRoute connection](virtual-wan-expressroute-portal.md)
+* [Creating an NVA in a virtual hub](how-to-nva-hub.md)
+* [Installing Azure Firewall in a Virtual hub](howto-firewall.md)
-## Monitoring data
+## Analyzing metrics
-Virtual WAN collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md).
+Metrics in Azure Monitor are numerical values that describe some aspect of a system at a particular time. Metrics are collected every minute, and are useful for alerting because they can be sampled frequently. An alert can be fired quickly with relatively simple logic.
-See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for detailed information on the metrics and logs metrics created by Virtual WAN.
+For a list of the platform metrics collected for Virtual WAN, see [Monitoring Virtual WAN data reference metrics](monitor-virtual-wan-reference.md#metrics).
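As a rough sketch of how a metric can be retrieved outside the portal, the Azure CLI command below queries a single platform metric from a site-to-site VPN gateway. The resource ID and the metric name (`TunnelAverageBandwidth`) are assumptions used for illustration; pick a metric from the data reference linked above.

```
# Query one platform metric for a Virtual WAN site-to-site VPN gateway.
# The resource ID and metric name are placeholders/examples.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/vpnGateways/<gateway-name>" \
  --metric "TunnelAverageBandwidth" \
  --interval PT5M \
  --aggregation Average \
  --output table
```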
-## Collection and routing
+### <a name="metrics-steps"></a>View metrics for Virtual WAN
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+The following steps help you locate and view metrics:
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+1. In the portal, navigate to the virtual hub.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Virtual WAN are listed in [Virtual WAN monitoring data reference](monitor-virtual-wan-reference.md).
+1. Select **VPN (Site to site)** to locate a site-to-site gateway, **ExpressRoute** to locate an ExpressRoute gateway, or **User VPN (Point to site)** to locate a point-to-site gateway.
-> [!IMPORTANT]
-> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
+1. Select **Metrics**.
-The metrics and logs you can collect are discussed in the following sections.
+ :::image type="content" source="./media/monitor-virtual-wan-reference/view-metrics.png" alt-text="Screenshot shows a site to site VPN pane with View in Azure Monitor selected." lightbox="./media/monitor-virtual-wan-reference/view-metrics.png":::
-## Analyzing metrics
+1. On the **Metrics** page, you can view the metrics that you're interested in.
-You can analyze metrics for Virtual WAN with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+ :::image type="content" source="./media/monitor-virtual-wan-reference/metrics-page.png" alt-text="Screenshot that shows the 'Metrics' page with the categories highlighted." lightbox="./media/monitor-virtual-wan-reference/metrics-page.png":::
-For a list of the platform metrics collected for Virtual WAN, see [Monitoring Virtual WAN data reference metrics](monitor-virtual-wan-reference.md#metrics).
+1. To see metrics for the virtual hub router, you can select **Metrics** from the virtual hub **Overview** blade.
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
## Analyzing logs
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+For a list of supported logs in Virtual WAN, see [Monitoring Virtual WAN data reference logs](monitor-virtual-wan-reference.md#diagnostic). All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md).
+
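Once resource logs are flowing into a Log Analytics workspace, you can query those tables from the command line as well as from the portal. The sketch below assumes the logs land in the `AzureDiagnostics` table and uses placeholder values for the workspace ID and category; it may require the `log-analytics` CLI extension.

```
# Run a Kusto query against a Log Analytics workspace from the Azure CLI.
# The workspace GUID and the category value are placeholders/assumptions.
az monitor log-analytics query \
  --workspace "<workspace-customer-id-guid>" \
  --analytics-query "AzureDiagnostics | where Category == 'TunnelDiagnosticLog' | take 10" \
  --output table
```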
+### <a name="create-diagnostic"></a>Create diagnostic setting to view logs
+
+The following steps help you create, edit, and view diagnostic settings:
+
+1. In the portal, navigate to your Virtual WAN resource, then select **Hubs** in the **Connectivity** group.
+
+ :::image type="content" source="./media/monitor-virtual-wan-reference/select-hub.png" alt-text="Screenshot that shows the Hub selection in the vWAN Portal." lightbox="./media/monitor-virtual-wan-reference/select-hub.png":::
+
+1. Under the **Connectivity** group on the left, select the gateway for which you want to examine diagnostics:
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md).
+ :::image type="content" source="./media/monitor-virtual-wan-reference/select-hub-gateway.png" alt-text="Screenshot that shows the Connectivity section for the hub." lightbox="./media/monitor-virtual-wan-reference/select-hub-gateway.png":::
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+1. On the right side of the page, select the **View in Azure Monitor** link next to **Logs**.
-For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md).
+ :::image type="content" source="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png" alt-text="Screenshot for Select View in Azure Monitor for Logs." lightbox="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png":::
-To analyze logs, go to your Virtual WAN gateway (User VPN or site-to-site VPN). In the **Essentials** section of the page, select **Logs -> View in Azure Monitor**.
+1. On this page, you can create a new diagnostic setting (**+Add diagnostic setting**) or edit an existing one (**Edit setting**). You can choose to send the diagnostic logs to Log Analytics (as shown in the example below), stream them to an event hub, send them to a third-party solution, or archive them to a storage account. For a scripted alternative, see the Azure CLI sketch after this procedure.
+
+ :::image type="content" source="./media/monitor-virtual-wan-reference/select-gateway-settings.png" alt-text="Screenshot for Select Diagnostic Log settings." lightbox="./media/monitor-virtual-wan-reference/select-gateway-settings.png":::
+1. After you select **Save**, logs should start to appear in this Log Analytics workspace within a few hours.
+1. To monitor a **secured hub (with Azure Firewall)**, configure diagnostics and logging from the **Diagnostic Setting** tab:
+
+ :::image type="content" source="./media/monitor-virtual-wan-reference/firewall-diagnostic-settings.png" alt-text="Screenshot shows Firewall diagnostic settings." lightbox="./media/monitor-virtual-wan-reference/firewall-diagnostic-settings.png" :::
+
+> [!IMPORTANT]
+> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
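If you manage your environment with scripts rather than the portal, a diagnostic setting like the one created above can also be defined with the Azure CLI. This is a minimal sketch under assumed names: the gateway resource ID, the workspace resource ID, and the log category are placeholders; use the categories listed in the Virtual WAN data reference.

```
# Create a diagnostic setting that routes gateway resource logs to a Log Analytics workspace.
# All IDs and the category name below are placeholders/examples.
az monitor diagnostic-settings create \
  --name "vwan-gateway-diagnostics" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/vpnGateways/<gateway-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category": "TunnelDiagnosticLog", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```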
## Alerts
Azure Monitor alerts proactively notify you when important conditions are found
To create a metric alert, see [Tutorial: Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md).
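As a hedged illustration of what that tutorial covers, the Azure CLI command below creates a simple metric alert rule. The scope, metric name, threshold, and action group are assumptions chosen for the example only.

```
# Create a metric alert on a Virtual WAN site-to-site VPN gateway.
# The scope, metric name, threshold, and action group are placeholders/examples.
az monitor metrics alert create \
  --name "s2s-tunnel-bandwidth-low" \
  --resource-group my-vwan-rg \
  --scopes "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/vpnGateways/<gateway-name>" \
  --condition "avg TunnelAverageBandwidth < 1000000" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/actionGroups/<action-group-name>"
```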
+## Virtual WAN Insights
+
+Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "Insights".
+
+Virtual WAN uses Network Insights to provide users and operators with the ability to view the state and status of a Virtual WAN, presented via an autodiscovered topological map. Resource state and status overlays on the map give you a snapshot view of the overall health of the Virtual WAN. You can navigate resources on the map via one-click access to the resource configuration pages of the Virtual WAN portal. For more information, see [Azure Monitor Network Insights for Virtual WAN](azure-monitor-insights.md).
+ ## Next steps
-* See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for a reference of the metrics, logs, and other important values created by Virtual WAN.
+* See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for a data reference of the metrics, logs, and other important values created by Virtual WAN.
* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+* See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for additional details on **Azure Monitor Metrics**.
+* See [All resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md) for a list of all supported metrics.
+* See [Create diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for more information and troubleshooting guidance on creating diagnostic settings via the Azure portal, CLI, or PowerShell.
web-application-firewall Application Gateway Web Application Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-web-application-firewall-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create an application gateway