Updates from: 10/03/2024 01:08:00
Service Microsoft Docs article Related commit history on GitHub Change details
api-center Build Register Apis Vscode Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/build-register-apis-vscode-extension.md
+
+ Title: Build and register APIs - Azure API Center - VS Code extension
+description: API developers can use the Azure API Center extension for Visual Studio Code to build and register APIs in their organization's API center.
+++ Last updated : 09/23/2024++
+# Customer intent: As an API developer, I want to use my Visual Studio Code environment to build, discover, explore, and consume APIs in my organization's API center.
++
+# Build and register APIs with the Azure API Center extension for Visual Studio Code
+
+To build, discover, explore, and consume APIs in your [API center](overview.md), you can use the Azure API Center extension in your Visual Studio Code development environment. The extension provides the following features for API developers:
+
+* **Build APIs** - Make APIs you're building discoverable to others by registering them in your API center directly or using CI/CD pipelines in GitHub or Azure DevOps. Shift-left API design conformance checks into Visual Studio Code with integrated linting support. Ensure that new API versions don't break API consumers with breaking change detection.
+
+* **Discover APIs** - Browse the APIs in your API center, and view their details and documentation.
+
+* **Explore APIs** - Use Swagger UI or REST client to explore API requests and responses.
+
+* **Consume APIs** - Generate API SDK clients for your favorite language including JavaScript, TypeScript, .NET, Python, and Java, using the Microsoft Kiota engine that generates SDKs for Microsoft Graph, GitHub, and more.
+
+> [!VIDEO https://www.youtube.com/embed/62X0NALedCc]
+
+## Prerequisites
+
+* One or more API centers in your Azure subscription. If you haven't created one already, see [Quickstart: Create your API center](set-up-api-center.md).
+
+ Currently, you need to be assigned the Contributor role or higher permissions to manage API centers with the extension.
+
+* [Visual Studio Code](https://code.visualstudio.com/)
+
+* [Azure API Center extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=apidev.azure-api-center)
+
+ > [!NOTE]
+ > Where noted, certain features are available only in the extension's pre-release version. [!INCLUDE [vscode-extension-prerelease-features](includes/vscode-extension-prerelease-features.md)]
+
+The following Visual Studio Code extensions are optional and needed only for certain scenarios as indicated:
+
+* [REST client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) - to send HTTP requests and view the responses in Visual Studio Code directly
+* [Microsoft Kiota extension](https://marketplace.visualstudio.com/items?itemName=ms-graph.kiota) - to generate API clients
+* [Spectral extension](https://marketplace.visualstudio.com/items?itemName=stoplight.spectral) - to run shift-left API design conformance checks in Visual Studio Code
+* [Optic CLI](https://github.com/opticdev/optic) - to detect breaking changes between API specification documents
+* [GitHub Copilot](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot) - to generate OpenAPI specification files from API code
+
+## Setup
+
+1. Install the Azure API Center extension for Visual Studio Code from the [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/items?itemName=apidev.azure-api-center). Install optional extensions as needed.
+1. In Visual Studio Code, in the Activity Bar on the left, select API Center.
+1. If you're not signed in to your Azure account, select **Sign in to Azure...**, and follow the prompts to sign in.
+    Select an Azure account with the API center (or API centers) you wish to view APIs from. You can also filter on specific subscriptions if you have access to many.
+
+## Register APIs
+
+Register an API in your API center directly from Visual Studio Code, either by registering it as a one-time operation or with a CI/CD pipeline.
+
+1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Register API** and hit **Enter**.
+1. Select how you want to register your API with your API center:
+ * **Step-by-step** is best for one-time registration of APIs.
+    * **CI/CD** adds a preconfigured GitHub or Azure DevOps pipeline to your active Visual Studio Code workspace that runs as part of a CI/CD workflow on each commit to source control. Inventorying APIs with your API center using CI/CD is recommended so that API metadata, including specification and version, stays current in your API center as the API evolves over time.
+1. Complete registration steps:
+ * For **Step-by-step**, select the API center to register APIs with, and answer prompts with information including API title, type, lifecycle stage, version, and specification to complete API registration.
+ * For **CI/CD**, select either **GitHub** or **Azure DevOps**, depending on your preferred source control mechanism. A Visual Studio Code workspace must be open for the Azure API Center extension to add a pipeline to your workspace. After the file is added, complete steps documented in the CI/CD pipeline file itself to configure Azure Pipeline/GitHub Action environment variables and identity. On push to source control, the API will be registered in your API center.
+
+ Learn more about setting up a [GitHub Actions workflow](register-apis-github-actions.md) to register APIs with your API center.
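+
+    If you prefer to script registration outside Visual Studio Code, the Azure CLI `apic` extension provides an equivalent command. The following is a minimal sketch, assuming placeholder values for the resource group, API center name, and specification file; exact parameters can vary by extension version:
+
+    ```azurecli
+    az apic api register --resource-group <resource-group> \
+        --service-name <api-center-name> \
+        --api-location ./openapi.yaml
+    ```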
++
+## API design conformance
+
+To ensure design conformance with organizational standards as you build APIs, the Azure API Center extension for Visual Studio Code provides integrated support for API specification linting with Spectral.
+
+1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Set active API Style Guide** and hit **Enter**.
+2. Select one of the default rules provided, or, if your organization has a style guide already available, use **Select Local File** or **Input Remote URL** to specify the active ruleset in Visual Studio Code. Hit **Enter**.
+
+Once an active API style guide is set, opening any OpenAPI or AsyncAPI-based specification file will trigger a local linting operation in Visual Studio Code. Results are displayed both inline in the editor, as well as in the Problems window (**View** > **Problems** or **Ctrl+Shift+M**).
++
+## Breaking change detection
+
+When introducing new versions of your API, it's important to ensure that changes introduced do not break API consumers on previous versions of your API. The Azure API Center extension for Visual Studio Code makes this easy with breaking change detection for OpenAPI specification documents powered by Optic.
+
+1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Detect Breaking Change** and hit **Enter**.
+2. Select the first API specification document to compare. Valid options include API specifications found in your API center, a local file, or the active editor in Visual Studio Code.
+3. Select the second API specification document to compare. Valid options include API specifications found in your API center, a local file, or the active editor in Visual Studio Code.
+
+Visual Studio Code will open a diff view between the two API specifications. Any breaking changes are displayed both inline in the editor, as well as in the Problems window (**View** > **Problems** or **Ctrl+Shift+M**).
++
+## Generate OpenAPI specification file from API code
+
+Use the power of GitHub Copilot with the Azure API Center extension for Visual Studio Code to create an OpenAPI specification file from your API code. Right-click on the API code, select **Copilot** from the options, and select **Generate API documentation**. Copilot creates an OpenAPI specification file for the API.
+
+> [!NOTE]
+> This feature is available in the pre-release version of the API Center extension.
++
+After generating the OpenAPI specification file and checking for accuracy, you can register the API with your API center using the **Azure API Center: Register API** command.
+
+## Discover APIs
+
+Your API center resources appear in the tree view on the left-hand side. Expand an API center resource to see APIs, versions, definitions, environments, and deployments.
++
+Search for APIs within an API center by using the search icon shown in the **APIs** tree view item.
+
+> [!TIP]
+> Optionally enable a [platform API catalog](enable-platform-api-catalog-vscode-extension.md) for your API center in Visual Studio Code so that app developers in your organization can discover APIs in a centralized location. The platform API catalog is a read-only view of the API inventory.
+
+## View API documentation
+
+You can view the documentation for an API definition in your API center and try API operations. This feature is only available for OpenAPI-based APIs in your API center.
+
+1. Expand the API Center tree view to show an API definition.
+1. Right-click on the definition, and select **Open API Documentation**. A new tab appears with the Swagger UI for the API definition.
+
+ :::image type="content" source="media/build-register-apis-vscode-extension/view-api-documentation.png" alt-text="Screenshot of API documentation in Visual Studio Code." lightbox="media/build-register-apis-vscode-extension/view-api-documentation.png":::
+
+1. To try the API, select an endpoint, select **Try it out**, enter required parameters, and select **Execute**.
+
+ > [!NOTE]
+ > Depending on the API, you might need to provide authorization credentials or an API key to try the API.
+
+ > [!TIP]
+ > Using the pre-release version of the extension, you can generate API documentation in Markdown, a format that's easy to maintain and share with end users. Right-click on the definition, and select **Generate Markdown**.
+
+## Generate HTTP file
+
+You can view a `.http` file based on the API definition in your API center. If the REST Client extension is installed, you can make requests directly from the Visual Studio Code editor. This feature is only available for OpenAPI-based APIs in your API center.
+
+1. Expand the API Center tree view to show an API definition.
+1. Right-click on the definition, and select **Generate HTTP File**. A new tab appears that renders a .http document populated by the API specification.
+
+ :::image type="content" source="media/build-register-apis-vscode-extension/generate-http-file.png" alt-text="Screenshot of generating a .http file in Visual Studio Code." lightbox="media/build-register-apis-vscode-extension/generate-http-file.png":::
+
+1. To make a request, select an endpoint, and select **Send Request**.
+
+ > [!NOTE]
+ > Depending on the API, you might need to provide authorization credentials or an API key to make the request.
+
+## Generate API client
+
+Use the Microsoft Kiota extension to generate an API client for your favorite language. This feature is only available for OpenAPI-based APIs in your API center.
+
+1. Expand the API Center tree view to show an API definition.
+1. Right-click on the definition, and select **Generate API Client**. The **Kiota OpenAPI Generator** pane appears.
+1. Select the API endpoints and HTTP operations you wish to include in your SDKs.
+1. Select **Generate API client**.
+ 1. Enter configuration details about the SDK name, namespace, and output directory.
+ 1. Select the language for the generated SDK.
+
+ :::image type="content" source="media/build-register-apis-vscode-extension/generate-api-client.png" alt-text="Screenshot of Kiota OpenAPI Explorer in Visual Studio Code." lightbox="media/build-register-apis-vscode-extension/generate-api-client.png":::
+
+The client is generated.
+
+For details on using the Kiota extension, see [Microsoft Kiota extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-graph.kiota).
+
+## Export API specification
+
+You can export an API specification from a definition and then download it as a file.
+
+To export a specification in the extension's tree view:
+
+1. Expand the API Center tree view to show an API definition.
+1. Right-click on the definition, and select **Export API Specification Document**. A new tab appears that renders an API specification document.
+
+ :::image type="content" source="media/build-register-apis-vscode-extension/export-specification.png" alt-text="Screenshot of exporting API specification in Visual Studio Code." lightbox="media/build-register-apis-vscode-extension/export-specification.png":::
+
+You can also export a specification using the Command Palette:
+
+1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette.
+1. Select **Azure API Center: Export API Specification Document**.
+1. Make selections to navigate to an API definition. A new tab appears that renders an API specification document.
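+
+As a scripted alternative, the Azure CLI `apic` extension can export a definition to a file. This is a sketch, assuming placeholder IDs for the API, version, and definition in your API center:
+
+```azurecli
+az apic api definition export-specification --resource-group <resource-group> \
+    --service-name <api-center-name> --api-id <api-id> \
+    --version-id <version-id> --definition-id <definition-id> \
+    --file-name ./exported-spec.json
+```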
+
+## Related content
+
+* [Azure API Center - key concepts](key-concepts.md)
+* [Enable and view platform API catalog in Visual Studio Code](enable-platform-api-catalog-vscode-extension.md)
+
api-center Enable Platform Api Catalog Vscode Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-platform-api-catalog-vscode-extension.md
+
+ Title: Enable platform API catalog - Azure API Center - VS Code extension
+description: Enable enterprise developers to view the enterprise's platform API catalog including API definitions using the Visual Studio Code Extension for Azure API Center.
+++ Last updated : 09/27/2024++
+# Customer intent: As an API program manager, I want to enable an API catalog so that app developers in my organization can discover and consume the APIs in my organization's API center without needing to manage the API inventory itself.
++
+# Enable and view Azure API Center platform API catalog
+
+This article shows how to provide enterprise developers access to the Azure API Center platform API catalog (preview) in the Visual Studio Code extension for [Azure API Center](overview.md). Using the platform API catalog, developers can discover APIs in your Azure API center, view API definitions, and optionally generate API clients when they don't have access to manage the API center itself or add APIs to the inventory. Access to the platform API catalog is managed using Microsoft Entra ID and Azure role-based access control.
+
+## Prerequisites
+
+### For API center administrators
+
+* An API center in your Azure subscription. If you haven't created one already, see [Quickstart: Create your API center](set-up-api-center.md).
+
+* Permissions to create an app registration in a Microsoft Entra tenant associated with your Azure subscription, and permissions to grant access to data in your API center.
+
+### For app developers
+
+* [Azure API Center extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=apidev.azure-api-center)
+
+ > [!IMPORTANT]
+ > Currently, access to the platform API catalog is available only in the extension's pre-release version. [!INCLUDE [vscode-extension-prerelease-features](includes/vscode-extension-prerelease-features.md)]
+
+The following Visual Studio Code extension is optional:
+
+* [Microsoft Kiota extension](https://marketplace.visualstudio.com/items?itemName=ms-graph.kiota) - to generate API clients
+
+## Steps for API center administrators to enable access to catalog
+
+The following sections provide steps for API center administrators to enable enterprise developers to access the platform API catalog.
+
+### Create Microsoft Entra app registration
+
+First, configure an app registration in your Microsoft Entra ID tenant. The app registration enables the Visual Studio Code extension for Azure API Center to access the platform API catalog on behalf of a signed-in user.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to **Microsoft Entra ID** > **App registrations**.
+1. Select **+ New registration**.
+1. On the **Register an application** page, set the values as follows:
+
+ * Set **Name** to a meaningful name such as *platform-api-catalog*
+ * Under **Supported account types**, select **Accounts in this organizational directory (Single tenant)**.
+    * In **Redirect URI**, select **Single-page application (SPA)** and set the URI to the runtime URL of your API center, in the format `https://<service name>.data.<region>.azure-apicenter.ms`. Example: `https://contoso-apic.data.eastus.azure-apicenter.ms`.
+ * Select **Register**.
+
+ > [!TIP]
+ > You can use the same app registration for access to more API centers. In **Redirect URI**, continue to add redirect URIs for other API centers that you want to appear in the platform API catalog.
+1. On the **Overview** page, copy the **Application (client) ID** and the **Directory (tenant) ID**. You set these values later when you connect to the API center from the Visual Studio Code extension.
+1. In the left menu, under **Manage**, select **Authentication** > **+ Add a platform**.
+1. On the **Configure platforms** page, select **Mobile and desktop applications**.
+1. On the **Configure Desktop + devices** page, enter the following redirect URI and select **Configure**:
+
+ `https://vscode.dev/redirect`
+
+1. In the left menu, under **Manage**, select **API permissions** > **+ Add a permission**.
+1. On the **Request API permissions** page, do the following:
+ 1. Select the **APIs my organization uses** tab.
+ 1. Search for and select **Azure API Center**. You can also search for and select application ID `c3ca1a77-7a87-4dba-b8f8-eea115ae4573`.
+    1. On the **Select permissions** page, select **user_impersonation**.
+ 1. Select **Add permissions**.
+
+ The Azure API Center permissions appear under **Configured permissions**.
+
+ :::image type="content" source="media/enable-platform-api-catalog-vscode-extension/configure-app-permissions.png" alt-text="Screenshot of required permissions in Microsoft Entra ID app registration in the portal." :::
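+
+    If you script app registrations, you can grant the same delegated permission with the Azure CLI. This is a sketch under the assumption that you first look up the `user_impersonation` scope ID from the Azure API Center service principal; all other values are placeholders:
+
+    ```azurecli
+    # Look up the user_impersonation scope ID exposed by Azure API Center
+    az ad sp show --id c3ca1a77-7a87-4dba-b8f8-eea115ae4573 \
+        --query "oauth2PermissionScopes[?value=='user_impersonation'].id" --output tsv
+
+    # Add the delegated permission to your app registration
+    az ad app permission add --id <app-registration-client-id> \
+        --api c3ca1a77-7a87-4dba-b8f8-eea115ae4573 \
+        --api-permissions <user-impersonation-scope-id>=Scope
+    ```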
+
+### Enable sign-in to platform API catalog by Microsoft Entra users and groups
+
+Enterprise developers must sign in with a Microsoft account to see the platform API catalog for your API center. If needed, [add or invite developers](/entra/external-id/b2b-quickstart-add-guest-users-portal) to your Microsoft Entra tenant.
+
+Then, to enable sign-in, assign the **Azure API Center Data Reader** role to users or groups in your tenant, scoped to your API center.
+
+> [!IMPORTANT]
+> By default, you and other administrators of the API center don't have access to APIs in the API Center extension's platform API catalog. Be sure to assign the **Azure API Center Data Reader** role to yourself and other administrators.
+
+For detailed prerequisites and steps to assign a role to users and groups, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). Brief steps follow:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API center.
+1. In the left menu, select **Access control (IAM)** > **+ Add role assignment**.
+1. In the **Add role assignment** pane, set the values as follows:
+ * On the **Role** page, search for and select **Azure API Center Data Reader**. Select **Next**.
+    * On the **Members** page, in **Assign access to**, select **User, group, or service principal** > **+ Select members**.
+ * On the **Select members** page, search for and select the users or groups to assign the role to. Click **Select** and then **Next**.
+ * Review the role assignment, and select **Review + assign**.
+1. Repeat the preceding steps to enable sign-in to the platform API catalog for more API centers.
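+
+You can also script the role assignment with the Azure CLI. A minimal sketch, assuming placeholder values for the assignee and the API center's resource ID:
+
+```azurecli
+az role assignment create --assignee <user-or-group-object-id> \
+    --role "Azure API Center Data Reader" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiCenter/services/<api-center-name>"
+```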
+
+> [!NOTE]
+> To streamline access configuration for new users, we recommend that you assign the role to a Microsoft Entra group and configure a dynamic group membership rule. To learn more, see [Create or update a dynamic group in Microsoft Entra ID](/entra/identity/users/groups-create-rule).
+
+## Steps for enterprise developers to access the platform API catalog
+
+Developers can follow these steps to connect and sign in to view a platform API catalog using the Visual Studio Code extension. Settings to connect to the API center need to be provided by the API center administrator.
+
+### Connect to an API center
+
+1. Install the pre-release version of the [Azure API Center extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=apidev.azure-api-center) for Visual Studio Code.
+
+1. In Visual Studio Code, in the Activity Bar on the left, select API Center.
+
+ :::image type="content" source="media/enable-platform-api-catalog-vscode-extension/api-center-activity-bar.png" alt-text="Screenshot of the API Center icon in the Activity Bar.":::
+
+1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Connect to an API Center** and hit **Enter**.
+1. Answer the prompts to input the following information:
+ 1. The runtime URL of your API center, in the format `<service name>.data.<region>.azure-apicenter.ms` (don't prefix with `https://`). Example: `contoso-apic.data.eastus.azure-apicenter.ms`. This runtime URL appears on the **Overview** page of the API center in the Azure portal.
+ 1. The application (client) ID from the app registration configured by the administrator in the previous section.
+ 1. The directory (tenant) ID from the app registration configured by the administrator in the previous section.
+
+ > [!TIP]
+ > An API center administrator needs to provide these connection details to developers, or provide a direct link in the following format:
+ > `vscode://apidev.azure-api-center?clientId=<Client ID>&tenantId=<tenant ID>&runtimeUrl=<service-name>.data.<region>.azure-apicenter.ms`
+
+ After you connect to the API center, the name of the API center appears in the API Center platform API catalog.
+
+1. To view the APIs in the API center, under the API center name, select **Sign in to Azure**. Sign-in is allowed with a Microsoft account that is assigned the **Azure API Center Data Reader** role in the API center.
+
+ :::image type="content" source="media/enable-platform-api-catalog-vscode-extension/api-center-pane-initial.png" alt-text="Screenshot of API Center platform API catalog in VS Code extension." :::
+
+1. After signing in, select **APIs** to list the APIs in the API center. Expand an API to explore its versions and definitions.
+
+ :::image type="content" source="media/enable-platform-api-catalog-vscode-extension/api-center-pane-apis.png" alt-text="Screenshot of API Center platform API catalog with APIs in VS Code extension." :::
+
+1. Repeat the preceding steps to connect to more API centers, if access is configured.
+
+### Discover and consume APIs in the catalog
+
+The platform API catalog helps enterprise developers discover API details and start API client development. Developers can access the following features by right-clicking on an API definition in the platform API catalog:
+
+* **Export API specification document** - Export an API specification from a definition and then download it as a file
+* **Generate API client** - Use the Microsoft Kiota extension to generate an API client for their favorite language
+* **Generate Markdown** - Generate API documentation in Markdown format
+* **OpenAPI documentation** - View the documentation for an API definition and try operations in a Swagger UI (only available for OpenAPI definitions)
++
+## Troubleshooting
+
+### Error: Cannot read properties of undefined (reading 'nextLink')
+
+Under certain conditions, a user might encounter the following error message after signing in to the API Center platform API catalog and expanding the APIs list for an API center:
+
+`Error: Cannot read properties of undefined (reading 'nextLink')`
+
+Check that the user is assigned the **Azure API Center Data Reader** role in the API center. If necessary, reassign the role to the user. Then, refresh the API Center platform API catalog in the Visual Studio Code extension.
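+
+To confirm the assignment, you can list the user's role assignments scoped to the API center, for example with the Azure CLI (placeholders are your values):
+
+```azurecli
+az role assignment list --assignee <user-object-id-or-upn> \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiCenter/services/<api-center-name>" \
+    --query "[].roleDefinitionName"
+```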
+
+### Unable to sign in to Azure
+
+If users who have been assigned the **Azure API Center Data Reader** role can't complete the sign-in flow after selecting **Sign in to Azure** in the platform API catalog, there might be a problem with the configuration of the connection.
+
+Check the settings in the app registration you configured in Microsoft Entra ID. Confirm the values of the application (client) ID and the directory (tenant) ID in the app registration and the runtime URL of the API center. Then, set up the connection to the API center again.
+
+### Unable to select Azure API Center permissions in Microsoft Entra ID app registration
+
+If you're unable to request API permissions to Azure API Center in your Microsoft Entra app registration for the API Center portal, check that you are searching for **Azure API Center** (or application ID `c3ca1a77-7a87-4dba-b8f8-eea115ae4573`).
+
+If the app isn't present, there might be a problem with the registration of the **Microsoft.ApiCenter** resource provider in your subscription. You might need to re-register the resource provider. To do this, run the following command in the Azure CLI:
+
+```azurecli
+az provider register --namespace Microsoft.ApiCenter
+```
+
+After re-registering the resource provider, try again to request API permissions.
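+
+Registration can take a few minutes to complete. To check its state before retrying, query the resource provider:
+
+```azurecli
+az provider show --namespace Microsoft.ApiCenter --query registrationState --output tsv
+```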
++
+## Related content
+
+* [Build and register APIs with the Azure API Center extension for Visual Studio Code](build-register-apis-vscode-extension.md)
+* [Best practices for Azure RBAC](../role-based-access-control/best-practices.md)
+* [Register a resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider)
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
To delete individual API versions and definitions, use [az apic api version dele
* See the [Azure CLI reference for Azure API Center](/cli/azure/apic) for a complete command list, including commands to manage [environments](/cli/azure/apic/environment), [deployments](/cli/azure/apic/api/deployment), [metadata schemas](/cli/azure/apic/metadata), and [services](/cli/azure/apic). * [Import APIs to your API center from API Management](import-api-management-apis.md)
-* [Use the Visual Studio extension for API Center](use-vscode-extension.md) to build and register APIs from Visual Studio Code.
+* [Use the Visual Studio extension for API Center](build-register-apis-vscode-extension.md) to build and register APIs from Visual Studio Code.
* [Register APIs in your API center using GitHub Actions](register-apis-github-actions.md)
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
With an API center, stakeholders throughout your organization - including API pr
Create and use an API center for the following:
-* **API inventory management** - Register all of your organization's APIs for inclusion in a centralized inventory.
+* **API inventory management** - API developers and API program managers can register all of your organization's APIs for inclusion in a centralized inventory using the Azure portal, the Azure CLI, or developer tooling including the Azure API Center extension for Visual Studio Code and CI/CD pipelines.
* **Real-world API representation** - Add real-world information about each API including versions and definitions such as OpenAPI definitions. List API deployments and associate them with runtime environments, for example, representing Azure API Management or other API management solutions.
-* **API governance** - Organize and filter APIs and related resources using built-in and custom metadata, to help with API governance and discovery by API consumers. Set up [linting and analysis](enable-api-analysis-linting.md) to enforce API definition quality. Integrate with tools such as Dev Proxy to ensure that apps don't use unregistered [shadow APIs](discover-shadow-apis-dev-proxy.md) or APIs that don't meet organizational standards.
+* **API governance** - Organize and filter APIs and related resources using built-in and custom metadata, to help with API governance and discovery by API consumers. Set up [linting and analysis](enable-managed-api-analysis-linting.md) to enforce API definition quality. API developers can shift-left API design conformance checks into Visual Studio Code with integrated linting support and breaking change detection. Integrate with tools such as Dev Proxy to ensure that apps don't use unregistered [shadow APIs](discover-shadow-apis-dev-proxy.md) or APIs that don't meet organizational standards.
-* **API discovery and reuse** - Enable developers and API program managers to discover APIs via the Azure portal, an API Center portal, and developer tools including a [Visual Studio Code extension](use-vscode-extension.md).
+* **API discovery and reuse** - Enable enterprise developers and API program managers to discover APIs via an API Center portal or an [API platform catalog](enable-platform-api-catalog-vscode-extension.md) that's accessed using the Azure API Center Visual Studio Code extension.
For more about the entities you can manage and the capabilities in Azure API Center, see [Key concepts](key-concepts.md).
api-center Register Apis Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis-github-actions.md
To configure the workflow file:
1. Add this workflow file in the `/.github/workflows/` path in your GitHub repository. > [!TIP]
-> Using the [Visual Studio Code extension](use-vscode-extension.md) for Azure API Center, you can generate a starting workflow file by running an extension command. In the Command Palette, select **Azure API Center: Register APIs**. Select **CI/CD** > **GitHub**. You can then modify the file for your scenario.
+> Using the [Visual Studio Code extension](build-register-apis-vscode-extension.md) for Azure API Center, you can generate a starting workflow file by running an extension command. In the Command Palette, select **Azure API Center: Register APIs**. Select **CI/CD** > **GitHub**. You can then modify the file for your scenario.
```yml name: Register API Definition to Azure API Center
api-center Register Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis.md
In this tutorial, you learned how to use the portal to:
> * Register one or more APIs > * Add an API version with an API definition
-As you build out your API inventory, take advantage of automated tools to register APIs, such as the [Azure API Center extension for Visual Studio Code](use-vscode-extension.md) and the [Azure CLI](manage-apis-azure-cli.md).
+As you build out your API inventory, take advantage of automated tools to register APIs, such as the [Azure API Center extension for Visual Studio Code](build-register-apis-vscode-extension.md) and the [Azure CLI](manage-apis-azure-cli.md).
## Next steps
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md
Here are some common swap errors:
- Local cache initialization might fail when the app content exceeds the local disk quota specified for the local cache. For more information, see [Local cache overview](overview-local-cache.md).
+- During a site update operation, the following error may occur: "_The slot cannot be changed because its configuration settings have been prepared for swap_". This can occur if phase 1 of a [swap with preview (multi-phase swap)](#swap-with-preview-multi-phase-swap) has been completed but phase 2 hasn't yet been performed, or if a swap has failed. There are two ways to resolve the issue:
+
+ 1. Cancel the swap operation, which resets the site back to the old state
+ 1. Complete the swap operation, which updates the site to the desired new state
+
+ Refer to [swap with preview (multi-phase swap)](#swap-with-preview-multi-phase-swap) to learn how to cancel or complete the swap operation.
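+
+ For example, with the Azure CLI you can complete or cancel a pending multi-phase swap. This sketch assumes a slot named `staging`:
+
+ ```azurecli
+ # Complete the pending swap (phase 2), updating the site to the new state
+ az webapp deployment slot swap --resource-group <resource-group> --name <app-name> \
+     --slot staging --action swap
+
+ # Or cancel the pending swap, resetting the site to the old state
+ az webapp deployment slot swap --resource-group <resource-group> --name <app-name> \
+     --slot staging --action reset
+ ```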
+ - During [custom warm-up](#Warm-up), the HTTP requests are made internally (without going through the external URL). They can fail with certain URL rewrite rules in *Web.config*. For example, rules for redirecting domain names or enforcing HTTPS can prevent warm-up requests from reaching the app code. To work around this issue, modify your rewrite rules by adding the following two conditions: ```xml
application-gateway Classic To Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/classic-to-resource-manager.md
Title: Application Gateway classic to Resource Manager
-description: Learn about moving Application Gateway resources from the classic deployment model to the Resource Manager deployment model.
+ Title: Azure Application Gateway classic to Resource Manager
+description: Learn about moving Azure Application Gateway resources from the classic deployment model to the Resource Manager deployment model.
Previously updated : 06/27/2024 Last updated : 10/02/2024
-# Application Gateway classic to Resource Manager migration
+# Application gateway classic to Resource Manager migration
Resource Manager enables deploying complex applications through templates, configures virtual machines by using VM extensions, and incorporates access management and tagging. Azure Resource Manager includes scalable, parallel deployment for virtual machines into availability sets. The new deployment model also provides lifecycle management of compute, network, and storage independently. You can read more about Azure Resource Manager [features and benefits](../azure-resource-manager/management/overview.md).
-Application Gateway resources will **not** be migrated automatically as part of VNet migration from classic to Resource Manager.
-As part of VNet migration process as documented at [IaaS resources migration page](/azure/virtual-machines/migration-classic-resource-manager-ps), if you have an Application Gateway resource present on the VNet that you're trying to migrate to Resource Manager deployment model, the automatic migration wouldn't be successful.
+Application gateway resources are **not** migrated automatically as part of VNet migration from classic to Resource Manager.
+As part of VNet migration process as documented at [IaaS resources migration page](/azure/virtual-machines/migration-classic-resource-manager-ps), if you have an application gateway resource present on the VNet that you're trying to migrate to Resource Manager deployment model, the automatic migration wouldn't be successful.
-In order to migrate your Application Gateway resource to Resource Manager deployment model, you'll have to remove the Application Resource from the VNet before beginning migration and then recreate the Application Gateway resource once migration is complete.
+To migrate your application gateway resource to Resource Manager deployment model, you'll have to remove the application gateway resource from the VNet before beginning migration and then recreate it once migration is complete.
-## Creating a new Application Gateway resource
+## Creating a new application gateway resource
-For more information on how to set up an Application Gateway resource after VNet migration, you can refer:
+For more information on how to set up an application gateway resource after VNet migration, see:
* [Deployment via portal](quick-create-portal.md) * [Deployment via PowerShell](quick-create-powershell.md)
Azure Resource Manager is the latest control plane of Azure responsible for crea
### Where can I find more information regarding classic to Azure Resource Manager migration?
-Please refer to [Frequently asked questions about classic to Azure Resource Manager migration](/azure/virtual-machines/migration-classic-resource-manager-faq)
+Refer to [Frequently asked questions about classic to Azure Resource Manager migration](/azure/virtual-machines/migration-classic-resource-manager-faq)
+
+### How can I clean up my classic application gateway deployment?
+
+Step 1: Install the old PowerShell version for managing legacy resources.
+
+[Installing the Azure PowerShell Service Management module](/powershell/azure/servicemanagement/install-azure-ps)
+
+> [!NOTE]
+> The cmdlets referenced in this documentation are for managing legacy Azure resources that use Azure Service Manager (ASM) APIs. This legacy PowerShell module isn't recommended for creating new resources since ASM is scheduled for retirement.
+
+Step 2: Run the following command to remove the application gateway.
+ [Remove-AzureApplicationGateway](/powershell/module/servicemanagement/azure/remove-azureapplicationgateway)
+
+ ```powershell
+ # Sign in to your account and select the proper subscription
+ Add-AzureAccount
+ Get-AzureSubscription
+ Select-AzureSubscription -SubscriptionId <SubscriptionId> -Default
+
+ # Get the list of application gateways in the subscription
+ Get-AzureApplicationGateway
+
+ # Remove the desired application gateway
+ Remove-AzureApplicationGateway -Name <NameofGateway>
+ ```
### How do I report an issue? Post your issues and questions about migration to our [Microsoft Q&A page](/answers/topics/azure-virtual-network.html). We recommend posting all your questions on this forum. If you have a support contract, you're welcome to log a support ticket as well. ## Next steps
-To get started see: [platform-supported migration of IaaS resources from classic to Resource Manager](/azure/virtual-machines/migration-classic-resource-manager-ps)
+To get started, see: [platform-supported migration of IaaS resources from classic to Resource Manager](/azure/virtual-machines/migration-classic-resource-manager-ps)
-For any concerns around migration, you can contact Azure Support. Learn more about [Azure support here](https://azure.microsoft.com/support/options/).
+For any concerns around migration, you can contact Azure Support. Learn more about [Azure support here](https://azure.microsoft.com/support/options/).
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
Previously updated : 09/06/2024 Last updated : 10/02/2024
The following table displays a comparison between Basic and Standard_v2.
| :: | : | :: | :: | | Reliability | SLA | 99.9 | 99.95 | | Functionality - basic | HTTP/HTTP2/HTTPS<br>Websocket<br>Public/Private IP<br>Cookie Affinity<br>Path-based affinity<br>Wildcard<br>Multisite<br>KeyVault<br>Zone<br>Header rewrite | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;|
-| Functionality - advanced | AKS (via AGIC)<br>URL rewrite<br>mTLS<br>Private Link<br>Private-only<sup>1</sup><br>TCP/TLS Proxy | | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; |
+| Functionality - advanced | AKS (via AGIC)<br>URL rewrite<br>mTLS<br>Private Link<br>Private-only (preview)<br>TCP/TLS Proxy (preview) | | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; |
| Scale | Max. connections per second<br>Number of listeners<br>Number of backend pools<br>Number of backend servers per pool<br>Number of rules | 200<sup>1</sup><br>5<br>5<br>5<br>5 | 62500<sup>1</sup><br>100<br>100<br>1200<br>400 | | Capacity Unit | Connections per second per compute unit<br>Throughput<br>Persistent new connections | 10<br>2.22 Mbps<br>2500 | 50<br>2.22 Mbps<br>2500 |
azure-app-configuration Create Snapshot Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/create-snapshot-devops-pipeline.md
+
+ Title: Create snapshots in App Configuration with Azure Pipelines
+description: Learn to use Azure Pipelines to create a snapshot in an App Configuration Store
++++ Last updated : 09/09/2024+++
+# Create snapshots in App Configuration with Azure Pipelines
+
+The Azure App Configuration snapshot task is designed to create snapshots in Azure App Configuration.
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- App Configuration store - [create one for free](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store)
+- Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881)
+- [Azure Pipelines agent version 2.144.0](https://github.com/microsoft/azure-pipelines-agent/releases/tag/v2.144.0) or later and [Node version 16](https://nodejs.org/en/blog/release/v16.16.0/) or later for running the task on self-hosted agents.
+
+## Create a service connection
++
+## Add role assignment
+
+Assign the proper App Configuration role to the credentials used within the task so that the task can access the App Configuration store.
+
+1. Go to your target App Configuration store.
+1. In the left menu, select **Access control (IAM)**.
+1. In the right pane, select **Add role assignment**.
+
+ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-button.png" alt-text="Screenshot shows the Add role assignments button.":::
+1. For the **Role**, select **App Configuration Data Owner**. This role allows the task to read from and write to the App Configuration store.
+1. Select the service principal associated with the service connection that you created in the previous section.
+
+ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-data-owner.png" alt-text="Screenshot shows the Add role assignment dialog.":::
+1. Select **Review + assign**.
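+
+If you prefer to script this step, a minimal Azure CLI sketch follows, assuming placeholder values for the service principal and the App Configuration store:
+
+```azurecli
+az role assignment create --assignee <service-connection-client-id> \
+    --role "App Configuration Data Owner" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AppConfiguration/configurationStores/<store-name>"
+```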
+
+## Use in builds
+
+In this section, learn how to use the Azure App Configuration snapshot task in an Azure DevOps build pipeline.
+
+1. Navigate to the build pipeline page by selecting **Pipelines** > **Pipelines**. For more information about build pipelines, go to [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?tabs=tfs-2018-2).
+ - If you're creating a new build pipeline, on the last step of the process, on the **Review** tab, select **Show assistant** on the right side of the pipeline.
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the Show assistant button for a new pipeline.](./media/new-pipeline-show-assistant.png)
+ - If you're using an existing build pipeline, click the **Edit** button at the top-right.
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the Edit button for an existing pipeline.](./media/existing-pipeline-show-assistant.png)
+1. Search for the **Azure App Configuration snapshot** Task.
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the Add Task dialog with Azure App Configuration snapshot in search box.](./media/add-azure-app-configuration-snapshot-task.png)
+1. Configure the necessary parameters for the task to create a snapshot in an App Configuration store. Explanations of the parameters are available in the **Parameters** section below and in tooltips next to each parameter.
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the app configuration snapshot task parameters.](./media/azure-app-configuration-snapshot-parameters.png)
+1. Save and queue a build. The build log will display any failures that occurred during the execution of the task.
+
+## Use in releases
+
+In this section, learn how to use the Azure App Configuration snapshot task in an Azure DevOps release pipeline.
+
+1. Navigate to the release pipeline page by selecting **Pipelines** > **Releases**. For more information about release pipelines, go to [Create your first pipeline](/azure/devops/pipelines/release).
+1. Choose an existing release pipeline. If you don't have one, select **+ New** to create a new one.
+1. Select the **Edit** button in the top-right corner to edit the release pipeline.
+1. From the **Tasks** dropdown, choose the **Stage** to which you want to add the task. More information about stages can be found in [Add stages, dependencies, & conditions](/azure/devops/pipelines/release/environments).
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the selected stage in the Tasks dropdown.](./media/pipeline-stage-tasks.png)
+1. Click **+** next to the job to which you want to add a new task.
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the plus button next to the job.](./media/add-task-to-job.png)
+1. In the **Add tasks** dialog, type **Azure App Configuration snapshot** into the search box and select it.
+1. Configure the necessary parameters within the task to create a snapshot within your App Configuration store. Explanations of the parameters are available in the **Parameters** section below, and in tooltips next to each parameter.
+1. Save and queue a release. The release log will display any failures encountered during the execution of the task.
+
+## Parameters
+
+The following parameters are used by the App Configuration snapshot task:
+
+- **Azure subscription**: A drop-down containing your available Azure service connections. To update and refresh your list of available Azure service connections, press the **Refresh Azure subscription** button to the right of the textbox.
+
+- **App Configuration Endpoint**: A drop-down that loads your available configuration store endpoints under the selected subscription. To update and refresh your list of available configuration store endpoints, press the **Refresh App Configuration Endpoint** button to the right of the textbox.
+
+- **Snapshot Name**: Specify the name for the snapshot.
+
+- **Composition Type**: The default value is **Key**.
+ - **Key**: The filters are applied in order for this composition type. Each key-value in the snapshot is uniquely identified by the key only. If there are multiple key-values with the same key and multiple labels, only one key-value will be retained based on the last applicable filter.
+
+ - **Key-Label**: Filters will be applied and every key-value in the resulting snapshot will be uniquely identified by the key and label together.
+
+- **Filters**: Represents the key and label filters used to build an App Configuration snapshot. Filters should be in a valid JSON format, for example, `[{"key":"abc*", "label":"1.0.0"}]`. At least one filter must be specified, and a maximum of three filters can be specified.
+
+- **Retention period**: The default value is 30 days. Refers to the number of days the snapshot will be retained after it's archived. Archived snapshots can be recovered during the retention period.
+
+- **Tags**: A JSON object in the format of `{"tag1":"val1", "tag2":"val2"}`, which defines tags that are added to each snapshot created in your App Configuration store.
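+
+For comparison, a snapshot with similar settings can also be created from the Azure CLI. This is a sketch, assuming the `az appconfig snapshot` command group available in recent CLI versions; the store name, snapshot name, and filters are placeholders:
+
+```azurecli
+az appconfig snapshot create --name <store-name> \
+    --snapshot-name <snapshot-name> \
+    --filters '{\"key\":\"abc*\",\"label\":\"1.0.0\"}' \
+    --composition-type key
+```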
+
+## Troubleshooting
+
+If an unexpected error occurs, debug logs can be enabled by setting the pipeline variable `system.debug` to `true`.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Pull settings from App Configuration with Azure pipelines](./pull-key-value-devops-pipeline.md)
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
Previously updated : 09/11/2024 Last updated : 10/01/2024
Excessive requests to App Configuration can result in throttling or overage char
* Watch a single *sentinel key*, rather than watching individual keys. Refresh all configuration only if the sentinel key changes. See [Use dynamic configuration in an ASP.NET Core app](enable-dynamic-configuration-aspnet-core.md) for an example.
+* Use the [App Configuration Kubernetes Provider](./quickstart-azure-kubernetes-service.md) if you run multiple workloads in a Kubernetes cluster, each pulling data from App Configuration individually. The Kubernetes provider retrieves data from App Configuration and makes it available as Kubernetes ConfigMaps and Secrets. This way, your workloads can access the data via ConfigMaps and Secrets without needing to pull data from App Configuration separately.
+ * [Enable geo-replication](./howto-geo-replication.md) of your App Configuration store and spread your requests across multiple replicas. For example, use a different replica from each geographic region for a globally deployed application. Each App Configuration replica has its separate request quota. This setup gives you a model for scalability and enhanced resiliency against transient and regional outages. ## Importing configuration data into App Configuration
If your application is deployed in multiple regions, we recommend that you [enab
Applications often rely on configuration to start, making Azure App Configuration's high availability critical. For improved resiliency, applications should leverage App Configuration's reliability features and consider taking the following measures based on your specific requirements. * **Provision in regions with Azure availability zone support.** Availability zones allow applications to be resilient to data center outages. App Configuration offers zone redundancy for all customers without any extra charges. Creating your App Configuration store in regions with support for availability zones is recommended. You can find [a list of regions](./faq.yml#how-does-app-configuration-ensure-high-data-availability) where App Configuration has enabled availability zone support.
-* **[Enable geo-replication](./howto-geo-replication.md) and allow your application to failover among replicas.** This setup gives you a model for scalability and enhanced resiliency against transient failures and regional outages. See [Resiliency and Disaster Recovery](./concept-disaster-recovery.md) for more information.
+* **[Enable geo-replication](./howto-geo-replication.md) and allow your application to failover or distribute load among replicas.** This setup gives you a model for scalability and enhanced resiliency against transient failures and regional outages. See [Resiliency and Disaster Recovery](./concept-disaster-recovery.md) for more information.
* **Deploy configuration with [safe deployment practices](/azure/well-architected/operational-excellence/safe-deployments).** Incorrect or accidental configuration changes can frequently cause application downtime. You should avoid making configuration changes that impact the production directly from, for example, the Azure portal whenever possible. In safe deployment practices (SDP), you use a progressive exposure deployment model to minimize the potential blast radius of deployment-caused issues. If you adopt SDP, you can build and test a [configuration snapshot](./howto-create-snapshots.md) before deploying it to production. During the deployment, you can update instances of your application to progressively pick up the new snapshot. If issues are detected, you can roll back the change by redeploying the last-known-good (LKG) snapshot. The snapshot is immutable, guaranteeing consistency throughout all deployments. You can utilize snapshots along with dynamic configuration. Use a snapshot for your foundational configuration and dynamic configuration for emergency configuration overrides and feature flags. * **Include configuration with your application.** If you want to ensure that your application always has access to a copy of the configuration, or if you prefer to avoid a runtime dependency on App Configuration altogether, you can pull the configuration from App Configuration during build or release time and include it with your application. To learn more, check out examples of integrating App Configuration with your [CI/CD pipeline](./integrate-ci-cd-pipeline.md) or [Kubernetes deployment](./integrate-kubernetes-deployment-helm.md). * **Use App Configuration providers.** Applications play a critical part in achieving high resiliency because they can account for issues arising during their runtime, such as networking problems, and respond to failures more quickly. The App Configuration providers offer a range of built-in resiliency features, including automatic replica discovery, replica failover, startup retries with customizable timeouts, configuration caching, and adaptive strategies for reliable configuration refresh. It's highly recommended that you use App Configuration providers to benefit from these features. If that's not an option, you should consider implementing similar features in your custom solution to achieve the highest level of resiliency.
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
Using Microsoft Entra is the secure way to connect your cache. We recommend that
When you disable access key authentication for a cache, all existing client connections are terminated, whether they use access keys or Microsoft Entra authentication. Follow the recommended Redis client best practices to implement proper retry mechanisms for reconnecting Microsoft Entra-based connections, if any.
-Before you disable access keys:
+### Before you disable access keys
-- Microsoft Entra authorization must be enabled.
+- Ensure that Microsoft Entra authentication is enabled and you have at least one Redis User configured.
+- Ensure that all applications connecting to your cache instance switch to using Microsoft Entra authentication.
+- Ensure that the metrics _Connected Clients_ and _Connected Clients Using Microsoft Entra Token_ have the same values. If the values differ, some connections were still created using access keys rather than Microsoft Entra tokens.
+- Consider disabling access keys during the scheduled maintenance window for your cache instance.
- Disabling access keys is only available for Basic, Standard, and Premium tier caches. - For geo-replicated caches, you must:
azure-cache-for-redis Cache How To Premium Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
Title: Configure a virtual network - Premium-tier Azure Cache for Redis instance
description: Learn how to create and manage virtual network support for your Premium-tier Azure Cache for Redis instance -- Last updated 08/29/2023
There are network connectivity requirements for Azure Cache for Redis that might
>When you connect to an Azure Cache for Redis instance that's hosted in a virtual network, your cache clients must be in the same virtual network or in a virtual network with virtual network peering enabled within the same Azure region. Global virtual network peering isn't currently supported. This requirement applies to any test applications or diagnostic pinging tools. Regardless of where the client application is hosted, NSGs or other network layers must be configured such that the client's network traffic is allowed to reach the Azure Cache for Redis instance. >
-After the port requirements are configured as described in the previous section, you can verify that your cache is working by following these steps:
After the port requirements are configured as described in the previous section, a reboot is necessary in most cases to ensure that the changes take effect. Otherwise, you might experience some connectivity issues. You can verify that your cache is working by following these steps:
- [Reboot](cache-administration.md#reboot) all of the cache nodes. The cache won't be able to restart successfully if all of the required cache dependencies can't be reached, as documented in [Inbound port requirements](cache-how-to-premium-vnet.md#inbound-port-requirements) and [Outbound port requirements](cache-how-to-premium-vnet.md#outbound-port-requirements).
azure-functions Flex Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md
Keep these other considerations in mind when using Flex Consumption plan during
+ **Scale**: The lowest maximum scale in preview is `40`. The highest currently supported value is `1000`. + **Managed dependencies**: [Managed dependencies in PowerShell](functions-reference-powershell.md#dependency-management) aren't supported by Flex Consumption. You must instead [define your own custom modules](functions-reference-powershell.md#custom-modules). + **Diagnostic settings**: Diagnostic settings are not currently supported.++ **Certificates**: Loading certificates with the WEBSITE_LOAD_CERTIFICATES app setting is currently not supported. ## Related articles
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
Microsoft Azure cloud environments meet demanding US government compliance requi
- [DoD IL4](/azure/compliance/offerings/offering-dod-il4) PA issued by DISA
- [DoD IL5](/azure/compliance/offerings/offering-dod-il5) PA issued by DISA
-For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+For current Azure Government regions and available services, see [Products available by region](https://go.microsoft.com/fwlink/?linkid=2274941&clcid=0x409).
> [!NOTE] >
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (latest)
-### [3.4.0] (CDN: September 30, 2024, npm: TBA)
+### [3.4.0] (CDN: September 30, 2024, npm: October 2)
#### New features

- Add support for PMTiles.
Stay up to date on Azure Maps:
> [!div class="nextstepaction"]
> [Azure Maps Blog]
+[3.4.0]: https://www.npmjs.com/package/azure-maps-control/v/3.4.0
[3.3.0]: https://www.npmjs.com/package/azure-maps-control/v/3.3.0
[3.2.1]: https://www.npmjs.com/package/azure-maps-control/v/3.2.1
[3.2.0]: https://www.npmjs.com/package/azure-maps-control/v/3.2.0
azure-vmware Vmware Cloud Foundations License Portability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vmware-cloud-foundations-license-portability.md
+
+ Title: Use VMware Cloud Foundations (VCF) license portability on Azure VMware Solution
++
+description: Bring your own VMware Cloud Foundations (VCF) License on Azure VMware Solution
++ Last updated : 9/30/2024++
+# Use VMware Cloud Foundations (VCF) license portability on Azure VMware Solution
+
+This article discusses how to modernize your VMware workloads by bringing your VMware Cloud Foundations (VCF) entitlements to Azure VMware Solution and taking advantage of significant cost savings. With Azure VMware Solution, you access both the physical infrastructure and the licensing entitlements for the entire VMware software-defined datacenter (SDDC) stack, including vSphere, ESXi, NSX networking, NSX Firewall, and HCX. With the new VCF license portability option, you can apply your on-premises VCF entitlements, purchased from Broadcom, directly to the Azure VMware Solution infrastructure. This flexibility means you can seamlessly integrate your VMware assets into a fully managed Azure environment, maximizing efficiency and cutting costs.
+
+## What's changing?
+
+Private Clouds on the VCF license portability offering must have a prepurchased Firewall add-on from Broadcom, along with the VCF subscription, to use the vDefend Firewall on Azure VMware Solution. Before using the vDefend Firewall software on Azure VMware Solution, ensure that you register your Firewall add-on with Microsoft. For detailed instructions on how to register your VCF license, see "Register your VCF license with Azure VMware Solution" later in this article.
+
+>[!IMPORTANT]
+>
+>VCF portable licenses are applied at the host level and must cover all the physical cores on a host. For example, if each host in Azure VMware Solution has 36 cores and you intend to have a Private Cloud with 3 nodes, the VCF portable license must cover 108 (3*36) cores.
+>In the current version, if you want to use your own license on an Azure subscription for Azure VMware Solution workloads, all the nodes (cores) in that subscription, including multiple Private Clouds, need to be purchased through Broadcom and covered under your Broadcom VCF license portability contract. That is, you must bring VCF license entitlements that cover cores for all the nodes deployed within your Azure subscription.
+
+## Purchasing VCF license portability offering on Azure VMware Solution
+
+We offer three flexible commitments and pricing options for using your own VCF license on Azure VMware Solution. You can choose from pay-as-you-go, 1-year Reserved Instance (RI), and 3-year RI options.
+
+To take advantage of Reserved Instance (RI) pricing for the VCF license portability offering, purchase an RI under the product name **VCF BYOL**. For example, if your private cloud uses AV36P nodes, you must [purchase the Reserved Instance](/azure/azure-vmware/reserved-instance?toc=%2Fazure%2Fcost-management-billing%2Freservations%2Ftoc.json#buy-a-reservation) for the product name **AV36P VCF BYOL**. To use pay-as-you-go pricing for the VCF license portability offering, you only need to register your VCF license.
++
+## Request host quota with VCF license portability
+
+
+To request quota for the VCF license portability offering, provide the following additional information in the **Description** of the support ticket:
+
+- Region Name
+- Number of hosts
+- Host SKU type
+- Add the following statement as is, replacing "N" with the number of VCF BYOL cores you purchased from Broadcom for license portability to Azure VMware Solution:
+**"I acknowledge that I have procured portable VCF license from Broadcom for "N" cores to use with Azure VMware Solutions."**
+- Any other details, including Availability Zone requirements for integrating with other Azure services; for example, Azure NetApp Files, Azure Blob Storage
++
+>[!NOTE]
+>
+>VCF portable license is applied at the host level and must cover all the physical cores on a host.
+>Hence, quota is approved only for the maximum number of nodes that the VCF portable license covers. For example, if you purchased 1,000 cores for portability and are requesting AV36P nodes, you can get a maximum quota of 27 nodes approved for your subscription.
+>
+>That is, at 36 physical CPU cores per AV36P node, 27 nodes = 27\*36 = 972 cores, and 28 nodes = 28\*36 = 1,008 cores. If you purchased 1,000 cores for portability, you can use at most 27 AV36P nodes under your portable VCF license.
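As a quick sanity check, the approved node count is the integer division of entitled cores by cores per host. A shell sketch of the arithmetic from the note above:

```bash
# 1000 entitled cores / 36 cores per AV36P host = 27 fully covered nodes.
purchased_cores=1000
cores_per_node=36
echo $(( purchased_cores / cores_per_node ))   # prints 27; 28 nodes would need 1008 cores
```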
+
+## Register your VCF license with Azure VMware Solution
+
+To get your quota request approved, you must first register the VCF portable license details with Microsoft. Quota will be approved only after the entitlements are provided. Expect to receive a response in 1 to 2 business days.
+
+### How to register the VCF license keys
+
+- Email your VCF license entitlements (and VMware vDefend Firewall license entitlements if you want to enable the vDefend Firewall on Azure VMware Solution) to the following email address: registeravsvcfbyol@microsoft.com.
+
+- VCF entitlement sample:
+
+>[!NOTE]
+>
+>The "Qty" represents the number of cores eligible for VCF license portability. Your quota request shouldn't surpass the number of nodes equivalent to your entitled cores from Broadcom. If your quota request exceeds the entitled cores, quota is granted only for the number of nodes that are fully covered by the entitled cores.
+
+- VCF with VMware vDefend entitlement sample:
+
+Sample Email to register portable VCF entitlements:
+
+The VMware vDefend Firewall add-on CPU cores required on Azure VMware Solution depend on the planned feature usage:
+- For NSX Distributed Firewall: the same core count as the VCF core count.
+- For NSX Gateway Firewall: 64 cores (with default NSX Edges).
+- For both NSX Distributed and Gateway Firewall: the combined core count of both.
+++
+>[!NOTE]
+>
+> The VCF license entitlements submitted to Microsoft are securely retained for reporting purposes. You can request the permanent deletion of this data from Microsoft's systems at any time. Once the automated validation process is in place, your data will be automatically deleted from all Microsoft systems, which may take up to 120 days. Additionally, all your VCF entitlement data is permanently deleted within 120 days of migrating to an Azure VMware Solution-owned VCF solution.
+
+## Creating/scaling a Private Cloud
+
+You can create your Azure VMware Solution Private Cloud the same way as today, regardless of your licensing method, that is, whether you bring your own VCF portable license or use the Azure VMware Solution-owned VCF license. [Learn more](/azure/azure-vmware/plan-private-cloud-deployment). Your licensing decision is a cost-optimization choice and doesn't affect your deployment workflow.
+
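The deployment path looks the same from the CLI as well. A minimal sketch using the `vmware` CLI extension, with placeholder names and an example management network block (all values are assumptions):

```azurecli-interactive
# Install the Azure VMware Solution CLI extension, then create a Private Cloud.
az extension add --name vmware

az vmware private-cloud create \
  --resource-group <resource-group> \
  --name <private-cloud-name> \
  --location <region> \
  --sku AV36P \
  --cluster-size 3 \
  --network-block 10.0.0.0/22
```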
+For example, suppose you want to deploy 10 nodes of the AV36P node type.
+
+**Scenario 1:**
+"I want to purchase my VCF subscription from Broadcom and use the license portability offering on Azure VMware Solution."
+
+1. Create a quota request for AV36P nodes. Declare your own VCF portable license intent and the number of cores you're entitled to for portability.
+2. Register your VCF entitlements via email to Microsoft.
+3. Optional: To use Reserved Instance pricing, purchase an AV36P VCF BYOL Reserved Instance. You can skip this step to use pay-as-you-go pricing for VCF license portability.
+4. Create your Private Cloud with AV36P nodes.
+
+**Scenario 2:**
+"I want to let Azure VMware Solution manage my license for all my Azure VMware Solution private cloud."
+
+1. Create a quota request for the AV36P node type.
+2. Optional: Purchase an AV36P Reserved Instance.
+3. Create your Private Cloud with AV36P nodes.
+
+## Moving between the two VCF licensing methods
+
+If you're currently managing your own VCF licensing for Azure VMware Solution and wish to transition to Azure VMware Solution-owned licensing, you can easily make the switch without any changes to your Private Cloud.
+
+**Steps:**
+
+1. Create a support request to inform us of your intent to convert.
+2. Exchange RIs: If you have any active RIs with VCF BYOL, exchange them for non-VCF BYOL RIs. For instance, you can [exchange your AV36P VCF BYOL RI for an AV36P RI](/azure/cost-management-billing/reservations/exchange-and-refund-azure-reservations).
+
+If you're an existing Azure VMware Solution customer using Azure VMware Solution-owned licensing deployments and wish to transition to the license portability (VCF BYOL) offering, you can also easily make the switch without any changes to your Private Cloud deployments by registering your VCF entitlements with Microsoft.
+
+>[!NOTE]
+>
+>You need to purchase the VCF entitlements from Broadcom for all cores that match your current Azure VMware Solution deployment. For instance, if your Azure subscription has a Private Cloud with 100 AV36P nodes, you must purchase a VCF subscription for at least 3,600 cores from Broadcom to convert to the VCF BYOL offering.
backup Azure Kubernetes Service Cluster Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md
This article describes how to configure and back up Azure Kubernetes Service (AK
You can use Azure Backup to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) by using the Backup extension, which must be installed in the cluster. The Backup vault communicates with the cluster via the Backup extension to perform backup and restore operations.
->[!Note]
->Vaulted backup and Cross Region Restore for AKS using Azure Backup are currently in preview.
-
-## Before you start
-- Currently, AKS backup supports only Azure Disk Storage-based persistent volumes (enabled by CSI driver). The backups are stored in an operational datastore only (backup data is stored in your tenant and isn't moved to a vault). The Backup vault and AKS cluster must be in the same region.
-- AKS backup uses a blob container and a resource group to store the backups. The blob container holds the AKS cluster resources. Persistent volume snapshots are stored in the resource group. The AKS cluster and the storage locations must be in the same region. Learn [how to create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
-- Currently, AKS backup supports once-a-day backup. It also supports more frequent backups (in 4-hour, 8-hour, and 12-hour intervals) per day. This solution allows you to retain your data for restore for up to 360 days. Learn how to [create a backup policy](#create-a-backup-policy).
-- You must [install the Backup extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) to configure backup and restore operations for an AKS cluster. Learn more [about the Backup extension](azure-kubernetes-service-cluster-backup-concept.md#backup-extension).
+> [!NOTE]
+> Vaulted backup and Cross Region Restore for AKS using Azure Backup are currently in preview.
-- Ensure that `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and `Microsoft.ContainerService` are registered for your subscription before you initiate backup configuration and restore operations.
+## Before you begin
-- Ensure that you perform [all the prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before you initiate a backup or restore operation for AKS backup.
+- Currently, AKS Backup supports only Azure Disk Storage-based persistent volumes enabled by CSI driver. The backups are stored in an operational datastore only (backup data is stored in your tenant and isn't moved to a vault). The Backup vault and AKS cluster must be in the same region.
+- AKS Backup uses a blob container and a resource group to store the backups. The blob container holds the AKS cluster resources. Persistent volume snapshots are stored in the resource group. The AKS cluster and the storage locations must be in the same region. Learn [how to create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
+- Currently, AKS Backup supports once-a-day backups. It also supports more frequent backups (in 4-hour, 8-hour, and 12-hour intervals) per day. This solution allows you to retain your data for restore for up to 360 days. Learn how to [create a backup policy](#create-a-backup-policy).
+- You need to [install the Backup extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) to configure backup and restore operations for an AKS cluster. Learn more [about the Backup extension](azure-kubernetes-service-cluster-backup-concept.md#backup-extension).
+- Make sure you have `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and `Microsoft.ContainerService` registered for your subscription before you initiate backup configuration and restore operations (a registration sketch follows this list).
+- Make sure you complete [all prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before you initiate a backup or restore operation for AKS Backup.
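For the resource provider registration called out in this list, a CLI sketch:

```azurecli-interactive
# Register the resource providers that AKS backup depends on, then confirm the state.
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.DataProtection
az provider register --namespace Microsoft.ContainerService

# Registration is asynchronous; wait until this reports "Registered".
az provider show --namespace Microsoft.DataProtection --query registrationState --output tsv
```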
For more information on supported scenarios, limitations, and availability, see the [support matrix](azure-kubernetes-service-cluster-backup-support-matrix.md).

## Create a Backup vault
-A Backup vault is a management entity that stores recovery points treated over time. A Backup vault also provides an interface to do the backup operations. Operations include taking on-demand backups, doing restores, and creating backup policies. AKS backup requires the Backup vault and the AKS cluster to be in the same region. Learn [how to create a Backup vault](create-manage-backup-vault.md#create-a-backup-vault).
-
->[!Note]
->A Backup vault is a new resource that's used to back up newly supported datasources. A Backup vault is different from a Recovery Services vault.
+A Backup vault is a management entity that stores recovery points treated over time. A Backup vault also provides an interface to do the backup operations. Operations include taking on-demand backups, doing restores, and creating backup policies. AKS Backup requires the Backup vault and the AKS cluster to be in the same region. Learn [how to create a Backup vault](create-manage-backup-vault.md#create-a-backup-vault).
-If you want to use Azure Backup to protect your AKS clusters from any regional outage:
-
-1. Set the **Backup Storage Redundancy** parameter as **Globally-Redundant** during vault creation. Once the redundancy for a vault is set, you can't disable.
+> [!NOTE]
+> A Backup vault is a new resource that's used to back up newly supported datasources. A Backup vault is different from a Recovery Services vault.
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/enable-backup-storage-redundancy-parameter.png" alt-text="Screenshot shows how to enable the Backup Storage Redundance parameter.":::
+If you want to use Azure Backup to protect your AKS clusters from any regional outage, you can enable Cross Region Restore. To enable Cross Region Restore, you need to:
-2. Set the **Cross Region Restore** parameter under **Vault Properties** as **Enabled**. Once this parameter is enabled, you can't disable it.
+1. Set the **Backup Storage Redundancy** parameter as **Geo-Redundant** during vault creation. Once the redundancy for a vault is set, you can't disable it.
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/enable-cross-region-restore-parameter.png" alt-text="Screenshot shows how to enable the Cross Region Restore parameter.":::
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/enable-backup-storage-redundancy.png" alt-text="Screenshot shows how to enable the Backup Storage Redundance parameter.":::
-3. Create a Backup Instance using a Backup Policy with retention duration set for Vault-standard datastore. Every recovery point stored in this datastore will be in the secondary region.
+1. Set the **Cross Region Restore** parameter under **Vault Properties** as **Enabled**. Once this parameter is enabled, you can't disable it.
- >[!Note]
- >Vault-standard datastore is currently in preview.
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/enable-cross-region-restore.png" alt-text="Screenshot shows how to enable the Cross Region Restore parameter.":::
-## Create a backup policy
+1. Create a Backup Instance using a Backup Policy with retention duration set for Vault-standard datastore. Every recovery point stored in this datastore will be in the secondary region.
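If you prefer scripting, the vault setup above can be done with the `dataprotection` CLI extension. A minimal sketch, assuming placeholder names; the `--storage-settings` syntax follows the extension's documented form, and Cross Region Restore is still enabled from **Vault Properties** as in the steps above:

```azurecli-interactive
# Create a Backup vault with geo-redundant storage (required for Cross Region Restore).
az extension add --name dataprotection

az dataprotection backup-vault create \
  --resource-group <resource-group> \
  --vault-name <vault-name> \
  --location <region> \
  --type SystemAssigned \
  --storage-settings datastore-type="VaultStore" type="GeoRedundant"
```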
-Before you configure backups, you need to create a backup policy that defines the frequency of backups and the retention duration of backups.
+## Create a Backup policy
-You can also create a backup policy when you configure the backup.
+Before you configure backups, you need to create a Backup policy that defines the frequency of backups and the retention duration of backups.
To create a backup policy:
-1. Go to **Backup center** and select **Policy** to create a new backup policy.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/create-backup-policy.png" alt-text="Screenshot that shows how to start creating a backup policy.":::
-
- Alternatively, go to **Backup center** > **Backup policies** > **Add**.
-
-1. For **Datasource type**, select **Kubernetes Service** and continue.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/select-datasource-type.png" alt-text="Screenshot that shows selecting the datasource type.":::
-
-1. Enter a name for the backup policy (for example, *Default Policy*) and select the Backup vault (the new Backup vault you created) where the backup policy needs to be created.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/enter-backup-policy-name.png" alt-text="Screenshot that shows providing the backup policy name.":::
-
-1. On the **Schedule + retention** tab, define the *frequency of backups* and *how long they need to be retained* in Operational and Vault Tier (also called *datastore*).
-
- **Backup Frequency**: Select the *backup frequency* (hourly or daily), and then choose the *retention duration* for the backups.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/backup-frequency.png" alt-text="Screenshot that shows selection of backup frequency.":::
+1. Go to the Backup vault that you created, and select **Manage** > **Backup policies** > **Add**.
+1. Enter a name for the backup policy.
+1. For **Datasource type**, select **Kubernetes Services**.
+1. On the **Schedule + retention** tab, define the *backup schedule*.
- **Retention Setting**: A new backup policy has two retention rules.
+ - **Backup Frequency**: Select the *backup frequency* (hourly or daily), and then choose the *retention duration* for the backups.
+ - **Retention Setting**: A new backup policy has the *Default* rule defined by default. You can edit this rule but can't delete it. The default rule defines the retention duration for all the operational tier backups taken.
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/retention-period.png" alt-text="Screenshot that shows selection of retention period.":::
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/retention-rules.png" alt-text="Screenshot that shows the retention settings." lightbox="./media/azure-kubernetes-service-cluster-backup/retention-rules.png":::
- You can also create additional retention rules to store backups for a longer duration that are taken daily or weekly.
+ You can also create extra retention rules to store backups for a longer duration that are taken daily or weekly.
+ > [!NOTE]
+ >
+ > - In addition to the first successful backup of the day, you can define retention rules for the first successful backup of the week, month, and year. In terms of priority, the order is year, month, week, and day.
+ > - You can copy backups in the secondary region (Azure Paired region) stored in the *Vault Tier*, which you can use to restore AKS clusters to a secondary region when the primary region is unavailable. To opt for this feature, use a *Geo-redundant vault* with *Cross Region Restore* enabled.
- - **Default**: This rule defines the default retention duration for all the operational tier backups taken. You can only edit this rule and can't delete it.
-
- - **First successful backup taken every day**: In addition to the default rule, every first successful backup of the day can be retained in the Operational datastore and Vault-standard store. You can edit and delete this rule (if you want to retain backups in Operational datastore).
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/retention-configuration-for-vault-operational-tiers.png" alt-text="Screenshot that shows the retention configuration for Vault Tier and Operational Tier.":::
--
- You can also define similar rules for the *First successful backup taken every week, month, and year*.
-
- >[!Note]
- >- In addition to first successful backup of the day, you can define the retention rules for first successful backup of the week, month, and year. In terms of priority, the order is year, month, week, and day.
- >- The Vault-standard datastore is currently in preview. If you don't want to use the feature, edit the retention rule and clear the checkbox next to the **Vault-standard datastore**.
- >- The backups stored in the Vault Tier can also copied in the secondary region (Azure Paired region) that you can use to restore AKS clusters to a secondary region when the primary region is unavailable. To opt for this feature, use a *Geo-redundant vault* with *Cross Region Restore* enabled.
---
-1. When the backup frequency and retention settings are configured, select **Next**.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/review-create-policy.png" alt-text="Screenshot that shows the completion of a backup policy creation.":::
-
-1. On the **Review + create** tab, review the information, and then select **Create**.
+2. When the backup frequency and retention settings are configured, select **Next**.
+3. On the **Review + create** tab, review the information, and then select **Create**.
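If you'd rather script policy creation, the `dataprotection` CLI extension can start from the default AKS template. A sketch with placeholder names; the datasource type string is an assumption based on the extension's conventions:

```azurecli-interactive
# Fetch the default AKS backup policy template, then create the policy from it.
az dataprotection backup-policy get-default-policy-template \
  --datasource-type AzureKubernetesService > akspolicy.json

az dataprotection backup-policy create \
  --resource-group <resource-group> \
  --vault-name <vault-name> \
  --backup-policy-name <policy-name> \
  --policy akspolicy.json
```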
-## Configure backups
+## Install Backup extension and configure backup
-You can use AKS backup to back up an entire cluster or specific cluster resources that are deployed in the cluster. You can also protect a cluster multiple times per the deployed application's schedule and retention requirements or security requirements.
+You can use AKS Backup to back up an entire cluster or specific cluster resources that are deployed in the cluster. You can also protect a cluster multiple times per the deployed application's schedule and retention requirements or security requirements.
> [!NOTE] > To set up multiple backup instances for the same AKS cluster:
You can use AKS backup to back up an entire cluster or specific cluster resource
> - Configure backup in the same Backup vault but using a different backup policy. > - Configure backup in a different Backup vault.
-To configure backups for AKS cluster:
-
-1. In the Azure portal, go to the AKS cluster that you want to back up.
+### Install the Backup extension
-1. In the resource menu, select **Backup**, and then select **Configure Backup**.
+To configure backups for an AKS cluster:
+1. In the Azure portal, go to the AKS cluster that you want to back up.
+1. From the service menu, under **Settings**, select **Backup**.
1. To prepare the AKS cluster for backup or restore, select **Install Extension** to install the Backup extension in the cluster.
1. Provide a storage account and blob container as input. Your AKS cluster backups are stored in this blob container. The storage account must be in the same region and subscription as the cluster.
- Select **Next**.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/add-storage-details-for-backup.png" alt-text="Screenshot that shows how to add storage and blob details for backup.":::
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/add-storage-details.png" alt-text="Screenshot that shows how to add storage and blob details for backup.":::
-1. Review the extension installation details, and then select **Create**.
+1. Select **Next**. Review the extension installation details, and then select **Create**.
The extension installation begins.
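The portal installation above also has a CLI equivalent. A sketch, assuming placeholder names; the extension type and `--configuration-settings` keys follow the Backup extension documentation:

```azurecli-interactive
# Install the Backup extension on the AKS cluster, pointing it at the storage account
# and blob container that will hold the cluster resource backups.
az k8s-extension create \
  --name azure-aks-backup \
  --extension-type Microsoft.DataProtection.Kubernetes \
  --scope cluster \
  --cluster-type managedClusters \
  --cluster-name <aks-cluster-name> \
  --resource-group <aks-resource-group> \
  --release-train stable \
  --configuration-settings blobContainer=<container-name> storageAccount=<storage-account> \
      storageAccountResourceGroup=<storage-rg> storageAccountSubscriptionId=<subscription-id>
```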
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/install-extension.png" alt-text="Screenshot that shows how to review and install the Backup extension.":::
-
-1. When the Backup extension is installed successfully, select **Configure Backup** to begin configuring backups for your AKS cluster.
-
- You can also perform this action in Backup center.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/configure-backup.png" alt-text="Screenshot that shows the selection of Configure Backup.":::
-
-1. Select the Backup vault.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/select-vault.png" alt-text="Screenshot that shows how to choose a vault.":::
-
- The Backup vault should have Trusted Access enabled for the AKS cluster to be backed up. To enable Trusted Access, select **Grant Permission**. If it's already enabled, select **Next**.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/grant-permission.png" alt-text="Screenshot that shows how to proceed to the next step after granting permission.":::
-
- > [!NOTE]
- > - If the AKS cluster doesn't have the Backup extension installed, you can perform the installation during configuring backup for the cluster.
-
-1. Select the backup policy, which defines the schedule for backups and their retention period. Then select **Next**.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/select-backup-policy.png" alt-text="Screenshot that shows how to choose a backup policy.":::
+### Configure backup
+1. When the Backup extension is installed successfully, select **Configure backup**.
+1. Select the Backup vault that you created earlier. The Backup vault should have Trusted Access enabled for the AKS cluster to be backed up. To enable Trusted Access, select **Grant Permission**. If it's already enabled, select **Next**.
+1. On the **Backup policy** tab, select the backup policy, which defines the schedule for backups and their retention period, and then select **Next**.
1. On the **Datasources** tab, select **Add/Edit** to define the backup instance configuration.
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/define-backup-instance-configuration.png" alt-text="Screenshot that shows how to define the Backup Instance Configuration.":::
-
-1. In the **Select Resources to Backup** pane, define the cluster resources that you want to back up.
+1. On the **Select Resources to Backup** pane, define the cluster resources that you want to back up.
Learn more about [backup configurations](azure-kubernetes-service-backup-overview.md).
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/define-cluster-resources-for-backup.png" alt-text="Screenshot that shows how to define the cluster resources for backup.":::
-
-1. For **Snapshot resource group**, select the resource group to use to store the persistent volume (Azure Disk Storage) snapshots. Then select **Validate**.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validate-snapshot-resource-group-selection.png" alt-text="Screenshot that shows how to validate the snapshot resource group.":::
-
-1. When validation is finished, if required roles aren't assigned to the vault in the snapshot resource group, an error appears:
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validation-error-on-permissions-not-assigned.png" alt-text="Screenshot that shows a validation error when required permissions aren't assigned.":::
+1. For **Snapshot resource group**, select the resource group to use to store the persistent volume (Azure Disk Storage) snapshots, and then select **Validate**.
-1. To resolve the error, under **Datasource name**, select the datasource, and then select **Assign missing roles**.
+ When validation is finished, if required roles aren't assigned to the vault in the snapshot resource group, an error appears:
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/start-role-assignment.png" alt-text="Screenshot that shows how to start assigning roles.":::
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validation-error-permissions-not-assigned.png" alt-text="Screenshot that shows a validation error when required permissions aren't assigned.":::
- The following screenshot shows the list of roles that you can select:
+ To resolve the error, under **Datasource name**, select the checkbox for the datasource, and then select **Assign missing roles**. (An equivalent CLI sketch follows these steps.)
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/select-missing-roles.png" alt-text="Screenshot that shows how to select missing roles.":::
-
-1. When role assignment is finished, select **Next**.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/proceed-for-backup.png" alt-text="Screenshot that shows how to proceed to the backup configuration.":::
-
-1. Select **Configure backup**.
-
-1. When the configuration is finished, select **Next**.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/finish-backup-configuration.png" alt-text="Screenshot that shows how to finish backup configuration.":::
-
- The backup instance is created when backup configuration is finished.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/list-of-backup-instances.png" alt-text="Screenshot that shows the list of created backup instances.":::
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/backup-instance-details.png" alt-text="Screenshot that shows the backup instance details.":::
+1. When the role assignment completes, select **Next** > **Configure backup**.
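Both **Grant Permission** (Trusted Access) and **Assign missing roles** have CLI equivalents. A sketch with placeholder names; the Trusted Access role string and the **Disk Snapshot Contributor** role are assumptions based on the AKS backup documentation:

```azurecli-interactive
# Enable Trusted Access between the Backup vault and the AKS cluster.
az aks trustedaccess rolebinding create \
  --resource-group <aks-resource-group> \
  --cluster-name <aks-cluster-name> \
  --name backup-binding \
  --roles Microsoft.DataProtection/backupVaults/backup-operator \
  --source-resource-id $(az dataprotection backup-vault show \
      --resource-group <vault-rg> --vault-name <vault-name> --query id --output tsv)

# Grant the vault's managed identity snapshot rights on the snapshot resource group.
az role assignment create \
  --assignee $(az dataprotection backup-vault show \
      --resource-group <vault-rg> --vault-name <vault-name> --query identity.principalId --output tsv) \
  --role "Disk Snapshot Contributor" \
  --scope $(az group show --name <snapshot-rg> --query id --output tsv)
```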
### Backup configurations

Azure Backup for AKS allows you to define the application boundary within the AKS cluster that you want to back up. You can use the filters that are available within backup configurations to choose the resources to back up and also to run custom hooks. The defined backup configuration is referenced by the value for **Backup Instance Name**. The following filters are available to define your application boundary:
-1. **Select Namespaces to backup**, you can either select **All** to back up all existing and future namespaces in the cluster, or you can select **Choose from list** to select specific namespaces for backup. The following namespaces are skipped from Backup Configuration and not cofigured for backups: kube-system, kube-node-lease, kube-public.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/backup-instance-name.png" alt-text="Screenshot that shows how to select namespaces to include in the backup." lightbox="./media/azure-kubernetes-service-cluster-backup/backup-instance-name.png":::
+1. Select **Select Namespaces to backup**. You can either select **All** to back up all existing and future namespaces in the cluster, or you can select specific namespaces for backup.
-2. Expand **Additional Resource Settings** to see filters that you can use to choose cluster resources to back up. You can choose to back up resources based on the following categories:
+ The following namespaces are skipped from Backup configurations: `kube-system`, `kube-node-lease`, and `kube-public`.
- - **Labels**: You can filter AKS resources by using [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) that you assign to types of resources. Enter labels in the form of key/value pairs. Combine multiple labels by using `AND` logic.
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/backup-namespace.png" alt-text="Screenshot that shows how to select namespaces to include in the backup." lightbox="./media/azure-kubernetes-service-cluster-backup/backup-namespace.png":::
- For example, if you enter the labels `env=prod;tier!=web`, the process selects resources that have a label with the `env` key and the `prod` value, and a label with the `tier` key for which the value isn't `web`.
+1. Expand **Additional Resource Settings** to see filters that you can use to choose cluster resources to back up. You can choose to back up resources based on the following categories:
+ - **Labels**: You can filter AKS resources by using [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) that you assign to types of resources. Enter labels in the form of key/value pairs. You can combine multiple labels using `AND` logic. For example, if you enter the labels `env=prod;tier!=web`, the process selects resources that have a label with the `env` key and the `prod` value, and a label with the `tier` key for which the value isn't `web`.
  - **API groups**: You can also include resources by providing the AKS API group and kind. For example, you can choose AKS resources like Deployments for backup. You can access the list of Kubernetes defined API Groups [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.30/).
- - **Other options**: You can enable or disable backup for cluster-scoped resources, persistent volumes, and secrets. By default, cluster-scoped resources and persistent volumes are enabled
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/cluster-scope-resources.png" alt-text="Screenshot that shows the Additional Resource Settings pane." lightbox="./media/azure-kubernetes-service-cluster-backup/cluster-scope-resources.png":::
-
- > [!NOTE]
- > All these resource settings are combined and applied via `AND` logic.
+ - **Other options**: You can enable or disable backup for cluster-scoped resources, persistent volumes, and secrets. Cluster-scoped resources and persistent volumes are enabled by default.
> [!NOTE]
> You should add the labels to every YAML file that is deployed and needs to be backed up. This includes namespace-scoped resources like persistent volume claims, and cluster-scoped resources like persistent volumes.
-## Use hooks during AKS backup
+## Use hooks during AKS Backup
This section describes how to use a backup hook to create an application-consistent snapshot of the AKS cluster with MySQL deployed (a persistent volume that contains the MySQL instance).
To enable a backup hook as part of the backup configuration flow to back up MySQ
```
-1. When the deployment is finished, you can [configure backup for the AKS cluster](#configure-backups).
+1. When the deployment is finished, you can [configure backups for the AKS cluster](#configure-backup).
> [!NOTE]
- >
> As part of a backup configuration, you must provide the custom resource name and the namespace that the resource is deployed in as input.
- >
- > :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/custom-resource-name-and-namespace.png" alt-text="Screenshot that shows how to add the namespace for the backup configuration." lightbox="./media/azure-kubernetes-service-cluster-backup/custom-resource-name-and-namespace.png":::
- >
## Next steps
backup Quick Backup Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-aks.md
The Backup vault communicates with the cluster via the Backup extension to compl
1. When validation is finished, if required roles aren't assigned to the vault in the snapshot resource group, an error appears.
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validation-error-on-permissions-not-assigned.png" alt-text="Screenshot that shows a validation error." lightbox="./media/azure-kubernetes-service-cluster-backup/validation-error-on-permissions-not-assigned.png":::
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validation-error-permissions-not-assigned.png" alt-text="Screenshot that shows a validation error." lightbox="./media/azure-kubernetes-service-cluster-backup/validation-error-permissions-not-assigned.png":::
1. To resolve the error, under **Datasource name**, select the datasource, and then select **Assign missing roles**.
backup Tutorial Configure Backup Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-configure-backup-aks.md
The Backup vault communicates with the cluster via the Backup extension to compl
1. When validation is finished, if required roles aren't assigned to the vault in the snapshot resource group, an error appears.
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validation-error-on-permissions-not-assigned.png" alt-text="Screenshot that shows a validation error." lightbox="./media/azure-kubernetes-service-cluster-backup/validation-error-on-permissions-not-assigned.png":::
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validation-error-permissions-not-assigned.png" alt-text="Screenshot that shows a validation error." lightbox="./media/azure-kubernetes-service-cluster-backup/validation-error-permissions-not-assigned.png":::
1. To resolve the error, under **Datasource name**, select the datasource, and then select **Assign missing roles**.
backup Tutorial Restore Aks Backups Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-aks-backups-across-regions.md
Azure Backup allows you to store AKS cluster backups in both **Operational Tier
For backups to be available in the Secondary region (Azure Paired Region), [create a Backup vault](create-manage-backup-vault.md#create-backup-vault) with **Storage Redundancy** set to **Geo-Redundant** and Cross Region Restore enabled.

## Configure Vault Tier backup (preview)
To set the retention policy in a backup policy, follow these steps:
**Retention Setting**: A new backup policy has two retention rules.
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/retention-period.png" alt-text="Screenshot that shows selection of retention period.":::
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/retention-rules.png" alt-text="Screenshot that shows selection of retention period." lightbox="./media/azure-kubernetes-service-cluster-backup/retention-rules.png":::
You can also create additional retention rules to store backups for a longer duration that are taken daily or weekly.

   - **Default**: This rule defines the default retention duration for all the operational tier backups taken. You can only edit this rule and can't delete it.
   - **First successful backup taken every day**: In addition to the default rule, every first successful backup of the day can be retained in the Operational datastore and Vault-standard store. You can edit and delete this rule (if you want to retain backups in Operational datastore).
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/retention-configuration-for-vault-operational-tiers.png" alt-text="Screenshot that shows the retention configuration for Vault Tier and Operational Tier.":::
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/retention-configuration-for-vault-operational-tiers.png" alt-text="Screenshot that shows the retention configuration for Vault Tier and Operational Tier." lightbox="./media/azure-kubernetes-service-cluster-backup/retention-configuration-for-vault-operational-tiers.png":::
-With the new backup policy, you can [configure protection for the AKS cluster](azure-kubernetes-service-cluster-backup.md#configure-backups) and store in both Operational Tier (as snapshot) and Vault Tier (as blobs). Once the configuration is complete, the backups stored in the vault are available in the Secondary Region (an [Azure paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions)) for restore that can be used when during regional outage.
+With the new backup policy, you can [configure protection for the AKS cluster](azure-kubernetes-service-cluster-backup.md#configure-backup) and store backups in both the Operational Tier (as snapshots) and the Vault Tier (as blobs). Once the configuration is complete, the backups stored in the vault are available in the Secondary Region (an [Azure paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions)) for restore during a regional outage.
## Restore in secondary region (preview)
Follow these steps:
:::image type="content" source="./media/azure-kubernetes-service-cluster-restore/start-kubernetes-cluster-restore.png" alt-text="Screenshot shows how to start the restore process.":::
-2. On the next page, select **Select backup instance**, and then select the *instance* that you want to restore.
+1. On the next page, select **Select backup instance**, and then select the *instance* that you want to restore.
   If a disaster occurs and there's an outage in the Primary Region, select **Secondary Region**. You can then choose recovery points available in the [Azure Paired Region](../reliability/cross-region-replication-azure.md#azure-paired-regions).

   :::image type="content" source="./media/azure-kubernetes-service-cluster-restore/select-backup-instance-for-restore.png" alt-text="Screenshot shows selection of backup instance for restore.":::

   :::image type="content" source="./media/azure-kubernetes-service-cluster-restore/choose-instances-for-restore.png" alt-text="Screenshot shows choosing instances for restore.":::
-
+ :::image type="content" source="./media/tutorial-restore-aks-backups-across-regions/restore-to-secondary-region.png" alt-text="Screenshot shows the selection of the secondary region.":::
-3. Click **Select restore point** to select the *restore point* you want to restore.
+1. Click **Select restore point** to select the *restore point* you want to restore.
   If the restore point is available in both the Vault and Operational datastores, select the one you want to restore from.
Follow these steps:
:::image type="content" source="./media/tutorial-restore-aks-backups-across-regions/choose-restore-points-for-kubernetes.png" alt-text="Screenshot shows selection of a restore point."::: -
-4. In the **Restore parameters** section, click **Select Kubernetes Service** and select the *AKS cluster* to which you want to restore the backup to.
+1. In the **Restore parameters** section, click **Select Kubernetes Service** and select the *AKS cluster* to which you want to restore the backup to.
:::image type="content" source="./media/azure-kubernetes-service-cluster-restore/parameter-selection.png" alt-text="Screenshot shows how to initiate parameter selection.":::
Follow these steps:
:::image type="content" source="./media/azure-kubernetes-service-cluster-restore/set-for-restore-after-parameter-selection.png" alt-text="Screenshot shows the Restore page with the selection of Kubernetes parameter."::: -
-6. The backups stored in the Vault need to be moved to a Staging Location before being restored to the AKS Cluster. Provide a *snapshot resource group* and *storage account* as a Staging Location.
+1. The backups stored in the Vault need to be moved to a Staging Location before being restored to the AKS Cluster. Provide a *snapshot resource group* and *storage account* as a Staging Location.
   :::image type="content" source="./media/azure-kubernetes-service-cluster-restore/restore-parameters.png" alt-text="Screenshot shows the parameters to add for restore from Vault-standard storage.":::

   :::image type="content" source="./media/azure-kubernetes-service-cluster-restore/restore-parameter-storage.png" alt-text="Screenshot shows the storage parameter to add for restore from Vault-standard storage.":::
->[!Note]
->Currently, resources created in the staging location can't belong within a Private Endpoint. Ensure that you enable _public access_ on the storage account provided as a staging location.
+ > [!NOTE]
+ > Currently, resources created in the staging location can't belong within a Private Endpoint. Ensure that you enable _public access_ on the storage account provided as a staging location.
-7. Select **Validate** to run validation on the cluster selections for restore.
+1. Select **Validate** to run validation on the cluster selections for restore.
:::image type="content" source="./media/azure-kubernetes-service-cluster-restore/validate-restore-parameters.png" alt-text="Screenshot shows the validation of restore parameters."::: -
-8. Once the validation is successful, select **Restore** to trigger the restore operation.
+1. Once the validation is successful, select **Restore** to trigger the restore operation.
:::image type="content" source="./media/tutorial-restore-aks-backups-across-regions/trigger-restore.png" alt-text="Screenshot shows how to start the restore operation.":::
-9. You can track this restore operation by the **Backup Job** named as **CrossRegionRestore**.
+1. You can track this restore operation by the **Backup Job** named **CrossRegionRestore**.
## Next steps
communication-services Send Email With Inline Attachments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-advanced/send-email-with-inline-attachments.md
+
+ Title: Quickstart - Send email with inline attachments using Azure Communication Service
+
+description: Learn how to send an email message with inline attachments using Azure Communication Services.
++++ Last updated : 04/07/2023+++
+zone_pivot_groups: acs-js-csharp-java-python
++
+# Quickstart: Send email with inline attachments
+
+In this quickstart, you learn how to send email with inline attachments using our Email SDKs.
+++++
+## Troubleshooting
+
+### Email Delivery
+
+To troubleshoot issues related to email delivery, you can [get status of the email delivery](../handle-email-events.md) to capture delivery details.
+
+> [!IMPORTANT]
+> The success result returned by polling for the status of the send operation only validates that the email was successfully sent out for delivery. To get additional information about the status of the delivery on the recipient end, see [how to handle email events](../handle-email-events.md).
+
+### Email Throttling
+
+If you see that your application is hanging, it could be due to email sending being throttled. You can [handle this through logging or by implementing a custom policy](../send-email-advanced/throw-exception-when-tier-limit-reached.md).
+
+> [!NOTE]
+> This sandbox setup is to help developers start building the application. You can gradually request to increase the sending volume once the application is ready to go live. Submit a support request to raise your desired sending limit if you require sending a volume of messages exceeding the rate limits.
+
+## Clean up Azure Communication Service resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+In this quickstart, you learned how to send email with inline attachments using Azure Communication Services.
+
+You may also want to:
+
+ - Learn how to [manually poll for email status](./manually-poll-for-email-status.md)
+ - Learn more about [sending email to multiple recipients](./send-email-to-multiple-recipients.md)
+ - Familiarize yourself with [email client library](../../../concepts/email/sdk-features.md)
confidential-computing Quick Create Confidential Vm Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm.md
To create and deploy your confidential VM using an ARM template through the Azur
az group create -n $resourceGroup -l $region
```
-1. Deploy your VM to Azure using an ARM template with a custom parameter file. For TDX deployments here is an example template: https://aka.ms/TDXtemplate.
+1. Deploy your VM to Azure using an ARM template with a custom parameter file and [template file](https://github.com/Azure/confidential-computing-cvm/tree/main/cvm_deployment/templates).
```azurecli-interactive
az deployment group create `
  -g $resourceGroup `
  -n $deployName `
- -u "https://aka.ms/CVMTemplate" `
+ -u "<json-template-file-path>" `
  -p "<json-parameter-file-path>" `
  -p vmLocation=$region `
     vmName=$vmName
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
how to collect the network trace, understand how to use it, and [analyze the Mic
:::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/find-conversation.png" alt-text="Screenshot of the TCP conversation.":::
- 1. Get the conversation between the client and the Data Factory server below by removing the filter.
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/get-conversation.png" alt-text="Screenshot of conversation details.":::
+ 1. Get the conversation between the client and the Data Factory server by removing the filter.
- An analysis of the Netmon trace you've collected shows that the Time to Live (TTL) total is 64. According to the values mentioned in the [IP Time to Live (TTL) and Hop Limit Basics](https://packetpushers.net/ip-time-to-live-and-hop-limit-basics/) article, extracted in the following list, you can see that it's the Linux system that resets the packet and causes the disconnection.
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
Event Hubs retains events for a configured retention time that applies across al
If you need to archive events beyond the allowed retention period, you can have them automatically stored in Azure Storage or Azure Data Lake by turning on the [Event Hubs Capture feature](event-hubs-capture-overview.md). If you need to search or analyze such deep archives, you can easily import them into [Azure Synapse](store-captured-data-data-warehouse.md) or other similar stores and analytics platforms.
-The reason for Event Hubs' limit on data retention based on time is to prevent large volumes of historic customer data getting trapped in a deep store that is only indexed by a timestamp and only allows for sequential access. The architectural philosophy here's that historic data needs richer indexing and more direct access than the real-time eventing interface that Event Hubs or Kafka provide. Event stream engines aren't well suited to play the role of data lakes or long-term archives for event sourcing.
+The reason for Event Hubs' time-based limit on data retention is to prevent large volumes of historic customer data from getting trapped in a deep store that is only indexed by a timestamp and only allows for sequential access. The architectural philosophy here is that historic data needs richer indexing and more direct access than the real-time eventing interface that Event Hubs or Kafka provide. Event streaming engines aren't well suited to play the role of data lakes or long-term archives for event sourcing.
> [!NOTE] > Event Hubs is a real-time event stream engine and is not designed to be used instead of a database and/or as a permanent store for infinitely held event streams.
Event Hubs uses *Shared Access Signatures*, which are available at the namespace
## Event consumers
-Any entity that reads event data from an event hub is an *event consumer*. All Event Hubs consumers connect via the AMQP 1.0 session and events are delivered through the session as they become available. The client doesn't need to poll for data availability.
+Any entity that reads event data from an event hub is an *event consumer*. Consumers or receivers use AMQP or Apache Kafka to receive events from an event hub. Event Hubs supports only the pull model for consumers to receive events from it. Even when you use event handlers to handle events from an event hub, the event processor internally uses the pull model to receive events from the event hub.
### Consumer groups
Azure Event Hubs enables you to define resource access policies such as throttli
For more information, see [Resource governance for client applications with application groups](resource-governance-overview.md).

## Apache Kafka support
-[The protocol support for **Apache Kafka** clients](azure-event-hubs-kafka-overview.md) (versions >=1.0) provides endpoints that enable existing Kafka applications to use Event Hubs. Most existing Kafka applications can simply be reconfigured to point to an s namespace instead of a Kafka cluster bootstrap server.
+[The protocol support for **Apache Kafka** clients](azure-event-hubs-kafka-overview.md) (versions >=1.0) provides endpoints that enable existing Kafka applications to use Event Hubs. Most existing Kafka applications can be reconfigured to point to an Event Hubs namespace instead of a Kafka cluster bootstrap server.
From the perspective of cost, operational effort, and reliability, Azure Event Hubs is a great alternative to deploying and operating your own Kafka and Zookeeper clusters and to Kafka-as-a-Service offerings not native to Azure. In addition to getting the same core functionality as of the Apache Kafka broker, you also get access to Azure Event Hubs features like automatic batching and archiving via [Event Hubs Capture](event-hubs-capture-overview.md), automatic scaling and balancing, disaster recovery, cost-neutral availability zone support, flexible and secure network integration, and multi-protocol support including the firewall-friendly AMQP-over-WebSockets protocol.
+## Protocols
+Producers or senders can use the Advanced Message Queuing Protocol (AMQP), Kafka, or HTTPS protocols to send events to an event hub.
+
+Consumers or receivers use AMQP or Kafka to receive events from an event hub. Event Hubs supports only the pull model for consumers to receive events from it. Even when you use event handlers to handle events from an event hub, the event processor internally uses the pull model to receive events from the event hub.
+
+### AMQP
+You can use the **AMQP 1.0** protocol to send events to and receive events from Azure Event Hubs. AMQP provides reliable, performant, and secure communication for both sending and receiving events. It's suitable for high-performance, real-time streaming and is supported by most Azure Event Hubs SDKs.
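+As a sketch (assuming placeholder connection details), sending a batch over AMQP with the `azure-eventhub` Python SDK looks like this:
+
+```python
+from azure.eventhub import EventData, EventHubProducerClient
+
+producer = EventHubProducerClient.from_connection_string(
+    "<NAMESPACE CONNECTION STRING>",   # placeholder
+    eventhub_name="<EVENT HUB NAME>",  # placeholder
+)
+with producer:
+    batch = producer.create_batch()     # the batch respects the max message size
+    batch.add(EventData("First event"))
+    batch.add(EventData("Second event"))
+    producer.send_batch(batch)          # one AMQP transfer delivers the batch
+```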
+
+### HTTPS/REST API
+Over HTTPS, you can only send events to Event Hubs by using HTTP POST requests; Event Hubs doesn't support receiving events over HTTPS. HTTPS is suitable for lightweight clients where a direct TCP connection isn't feasible.
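+The following is a hedged sketch of a single event sent with an HTTP POST and a SharedAccessSignature token; the namespace, event hub name, key name, and key are placeholder assumptions.
+
+```python
+import base64, hashlib, hmac, time, urllib.parse
+import requests
+
+NAMESPACE, HUB = "<namespace>", "<event hub name>"   # placeholders
+KEY_NAME, KEY = "<key name>", "<key>"                # placeholders
+uri = f"https://{NAMESPACE}.servicebus.windows.net/{HUB}"
+
+# Build a SharedAccessSignature token for the event hub URI.
+sr = urllib.parse.quote_plus(uri)
+expiry = int(time.time()) + 3600
+sig = base64.b64encode(hmac.new(KEY.encode(), f"{sr}\n{expiry}".encode(),
+                                hashlib.sha256).digest())
+token = (f"SharedAccessSignature sr={sr}&sig={urllib.parse.quote_plus(sig)}"
+         f"&se={expiry}&skn={KEY_NAME}")
+
+resp = requests.post(f"{uri}/messages", data=b'{"message": "hello"}',
+                     headers={"Authorization": token})
+resp.raise_for_status()   # expect 201 Created on success
+```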
+
+### Apache Kafka
+Azure Event Hubs has a built-in Kafka endpoint that supports Kafka producers and consumers. Applications that are built using Kafka can use Kafka protocol (version 1.0 or later) to send and receive events from Event Hubs without any code changes.
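+Reconfiguring an existing Kafka application is typically a matter of swapping the bootstrap server and SASL settings. A minimal sketch with the `confluent-kafka` Python package follows; the namespace, connection string, and event hub name are placeholders.
+
+```python
+from confluent_kafka import Producer
+
+producer = Producer({
+    "bootstrap.servers": "<namespace>.servicebus.windows.net:9093",  # Kafka endpoint
+    "security.protocol": "SASL_SSL",
+    "sasl.mechanism": "PLAIN",
+    "sasl.username": "$ConnectionString",             # literal value
+    "sasl.password": "<namespace connection string>", # placeholder
+})
+producer.produce("<event hub name>", value=b"hello from a Kafka client")
+producer.flush()
+```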
+
+Azure SDKs abstract the underlying communication protocols and provide a simplified way to send and receive events from Event Hubs in languages such as C#, Java, Python, and JavaScript.
## Next steps
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
This article helps you create a connection to link a virtual network (virtual ne
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/maximum-resiliency.png" alt-text="Diagram of a virtual network gateway connected to two different ExpressRoute circuits.":::
+ **High resiliency** - This option provides a single redundant connection from the virtual network gateway to a Metro ExpressRoute circuit. Metro circuits provide redundancy across ExpressRoute peering locations. However, unlike maximum resiliency, there's no redundancy within each peering location.
+
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/high-resiliency.png" alt-text="Diagram of a virtual network gateway connected to a single ExpressRoute circuit via two peering locations.":::
+ **Standard resiliency** - This option provides a single redundant connection from the virtual network gateway to a single ExpressRoute circuit.
 > [!NOTE]
- > Standard Resiliency does not provide protection against location wide outages. This option is suitable for non-critical and non-production workloads.
+ > Standard resiliency doesn't provide protection against location-wide outages. This option is suitable for non-critical and non-production workloads.
- :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/standard-resiliency.png" alt-text="Diagram of a virtual network gateway connected to a single ExpressRoute circuit.":::
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/standard-resiliency.png" alt-text="Diagram of a virtual network gateway connected to a single ExpressRoute circuit via one peering location.":::
6. Enter the following information for the respective resiliency type and then select **Review + create**. Then select **Create** after validation completes.
This article helps you create a connection to link a virtual network (virtual ne
> > :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/same-location-warning.png" alt-text="Screenshot of warning in the Azure portal when selecting two ExpressRoute circuits in the same peering location.":::
- **Standard resiliency**
+ **High/Standard resiliency**
- For standard resiliency, you only need to enter information for one connection.
+ For high or standard resiliency, you only need to enter information for one connection. For high resiliency, the connection must attach to a Metro circuit. For standard resiliency, the connection must attach to a regular (non-Metro) circuit.
7. After your connection has been successfully configured, your connection object will show the information for the connection.
The circuit user needs the resource ID and an authorization key from the circuit
1. In the **Settings** page, select **High Resiliency** or **Standard Resiliency**, and then select the *Virtual network gateway*. Check the **Redeem authorization** check box. Enter the *Authorization key* and the *Peer circuit URI* and give the connection a name.
 > [!NOTE]
- > - Connecting to circuits in a different subscription isn't supported for Maximum Resiliency.
+ > - Connecting to circuits in a different subscription isn't supported under Maximum Resiliency.
> - You can connect a virtual network to a Metro circuit in a different subscription when choosing High Resiliency.
> - You can connect a virtual network to a regular (non-metro) circuit in a different subscription when choosing Standard Resiliency.
> - The *Peer Circuit URI* is the Resource ID of the ExpressRoute circuit (which you can find under the Properties Setting pane of the ExpressRoute Circuit).
The circuit user needs the resource ID and an authorization key from the circuit
## Configure ExpressRoute FastPath
-[FastPath](expressroute-about-virtual-network-gateways.md) improves data path performance such as packets per second and connections per second between your on-premises network and your virtual network.
+[FastPath](expressroute-about-virtual-network-gateways.md) improves data path performance such as packets per second and connections per second between your on-premises network and your virtual network. You can enable FastPath if your virtual network gateway is Ultra Performance or ErGw3AZ.
### Configure FastPath on a new connection
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **BSNL** | &check; | &check; | Chennai<br/>Mumbai |
| **[C3ntro](https://www.c3ntro.com/)** | &check; | &check; | Miami |
| **Cello** | &check; | &check; | Sydney |
-| **CDC** | &check; | &check; | Canberra<br/>Canberra2 |
+| **[CDC](https://cdc.com/services/network-services/)** | &check; | &check; | Canberra<br/>Canberra2 |
| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** | &check; | &check; | Amsterdam2<br/>Chicago<br/>Dallas<br/>Dublin<br/>Frankfurt<br/>Hong Kong<br/>Las Vegas<br/>London<br/>London2<br/>Montreal<br/>New York<br/>Paris<br/>Phoenix<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore2<br/>Tokyo<br/>Toronto<br/>Washington DC<br/>Washington DC2 |
| **[Chief Telecom](https://www.chief.com.tw/)** |&check; |&check; | Hong Kong<br/>Taipei |
| **China Mobile International** |&check; |&check; | Hong Kong<br/>Hong Kong2<br/>Singapore<br/>Singapore2 |
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
However, in a production environment, communications with a Kubernetes cluster s
The simplest solution uses a firewall device that can control outbound traffic based on domain names. A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination, giving you fine-grained egress traffic control while also letting you provide access to the FQDNs encompassing an AKS cluster's outbound dependencies (something that NSGs can't do). Likewise, you can control ingress traffic and improve security by enabling threat intelligence-based filtering on an Azure Firewall deployed to a shared perimeter network. This filtering can provide alerts, and deny traffic to and from known malicious IP addresses and domains.
-See the following video by Abhinav Sriram for a quick overview on how this works in practice on a sample environment:
+See the following video for a quick overview of how this works in practice in a sample environment:
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE529Qc]
frontdoor Front Door Rules Engine Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine-actions.md
When **Caching** is set to **Enabled**, set the following properties:
| Query parameters | The list of query string parameter names, separated by commas. This property is only set when *Query string caching behavior* is set to *Ignore Specified Query Strings* or *Include Specified Query Strings*. |
| Compression | <ul><li>**Enabled:** Front Door dynamically compresses content at the edge, resulting in a smaller and faster response. For more information, see [File compression](front-door-caching.md#file-compression). In ARM templates, set the `isCompressionEnabled` property to `Enabled`.</li><li>**Disabled.** Front Door doesn't perform compression. In ARM templates, set the `isCompressionEnabled` property to `Disabled`.</li></ul> |
| Cache behavior | <ul><li>**Honor origin:** Front Door always honors origin response header directive. If the origin directive is missing, Front Door caches contents anywhere from 1 to 3 days. In ARM templates, set the `cacheBehavior` property to `HonorOrigin`.</li><li>**Override always:** The TTL value returned from your origin is overwritten with the value specified in the action. This behavior only gets applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `OverrideAlways`.</li><li>**Override if origin missing:** If no TTL value gets returned from your origin, the rule sets the TTL to the value specified in the action. This behavior only gets applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `OverrideIfOriginMissing`.</li></ul> |
-| Cache duration | When _Cache behavior_ is set to `Override always` or `Override if origin missing`, these fields must specify the cache duration to use. The maximum duration is 366 days. For a value of 0 seconds, the CDN caches the content, but must revalidate each request with the origin server. This property is only set when *Cache behavior* is set to *Override always* or *Override if origin missing*.<ul><li>In the Azure portal: specify the days, hours, minutes, and seconds.</li><li>In ARM templates: use the `cacheDuration` to specify the duration in the format `d.hh:mm:ss`. |
+| Cache duration | When _Cache behavior_ is set to `Override always` or `Override if origin missing`, these fields must specify the cache duration to use. The maximum duration is 366 days. This property is only set when *Cache behavior* is set to *Override always* or *Override if origin missing*.<ul><li>In the Azure portal: specify the days, hours, minutes, and seconds.</li><li>In ARM templates: use the `cacheDuration` to specify the duration in the format `d.hh:mm:ss`. |
### Examples
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
Finally, to identify the AKS cluster version that you're using, follow the linke
### Add-on versions available per each AKS cluster version
+#### 1.7.1
+Introducing CEL and VAP. Common Expression Language (CEL) is a Kubernetes-native expression language that can be used to declare validation rules of a policy. The Validating Admission Policy (VAP) feature provides in-tree policy evaluation, reduces admission request latency, and improves reliability and availability. The supported validation actions include Deny, Warn, and Audit. Custom policy authoring for CEL/VAP is allowed, and existing users won't need to convert their Rego to CEL, as both will be supported and used to enforce policies. To use CEL and VAP, users need to enroll in the feature flag `AKS-AzurePolicyK8sNativeValidation` in the `Microsoft.ContainerService` namespace. For more information, view the [Gatekeeper documentation](https://open-policy-agent.github.io/gatekeeper/website/docs/validating-admission-policy/).
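+One way to enroll in the feature flag is through the Azure Resource Manager features API. The following is a minimal sketch using the `azure-mgmt-resource` Python SDK; the subscription ID is a placeholder, and the Azure CLI (`az feature register`) is an equivalent alternative.
+
+```python
+# Hedged sketch: register the AKS-AzurePolicyK8sNativeValidation feature flag.
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.resource import FeatureClient
+
+client = FeatureClient(DefaultAzureCredential(), "<subscription-id>")  # placeholder ID
+client.features.register("Microsoft.ContainerService",
+                         "AKS-AzurePolicyK8sNativeValidation")
+
+# Registration is asynchronous; poll until the state reads "Registered".
+state = client.features.get("Microsoft.ContainerService",
+                            "AKS-AzurePolicyK8sNativeValidation").properties.state
+print(state)
+```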
+
+Security improvements.
+- Released Sep 2024
+- Kubernetes 1.27+ (VAP generation is only supported on 1.30+)
+- Gatekeeper 3.17.1
+ #### 1.7.0
Introducing expansion, a shift left feature that lets you know up front whether your workload resources (Deployments, ReplicaSets, Jobs, etc.) will produce admissible pods. Expansion shouldn't change the behavior of your policies; rather, it just shifts Gatekeeper's evaluation of pod-scoped policies to occur at workload admission time rather than pod admission time. However, to perform this evaluation it must generate and evaluate a what-if pod that is based on the pod spec defined in the workload, which might have incomplete metadata. For instance, the what-if pod won't contain the proper owner references. Because of this small risk of policy behavior changing, we're introducing expansion as disabled by default. To enable expansion for a given policy definition, set `.policyRule.then.details.source` to `All`. Built-ins will be updated soon to enable parameterization of this field. If you test your policy definition and find that the what-if pod being generated for evaluation purposes is incomplete, you can also use a mutation with source `Generated` to mutate the what-if pods. For more information on this option, view the [Gatekeeper documentation](https://open-policy-agent.github.io/gatekeeper/website/docs/expansion#mutating-example).
operator-service-manager Best Practices Onboard Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/best-practices-onboard-deploy.md
Consider the following high-availability and disaster recovery requirements:
## Troubleshooting considerations
-During installation and upgrade by default, atomic and wait options are set to `true`, and the operation timeout is set to `27 minutes`. During onboarding, we recommend that you set the atomic flag to `false` to prevent the Helm rollback upon failure. The optimal way to accomplish that is in the ARM template of the NF.
+During installation and upgrade by default, atomic and wait options are set to `true`, and the operation timeout is set to `27 minutes`. During initial onboarding, only while you're still debugging and developing artifacts, we recommend that you set the atomic flag to `false`. This setting prevents a Helm rollback upon failure and retains any logs or errors that might otherwise be lost. The optimal way to accomplish that is in the ARM template of the NF.
In the ARM template, add the following section:
The component name is defined in the NFDV:
} </pre>
+> [!IMPORTANT]
+> Make sure atomic and wait are set back to `true` after initial onboarding is complete.
+ ## Cleanup considerations
Delete operator resources in the following order to make sure no orphaned resources are left behind:
operator-service-manager Safe Upgrade Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/safe-upgrade-practices.md
When planning for an upgrade using Azure Operator Service Manager, address the f
## Upgrade procedure
Follow this process to trigger an upgrade with Azure Operator Service Manager.
-### Create new NFDV template
+### Create new NFDV resource
A new NFDV version must be in a valid SemVer format, where only higher incrementing patch and minor version updates are allowed; a lower NFDV version isn't allowed. Given a CNF deployed using NFDV 2.0.0, the new NFDV can be version 2.0.1 or 2.1.0, but not 1.0.0 or 3.0.0.
### Update new NFDV parameters
In the NFDV resource, under deployParametersMappingRuleProfile there is the prop
For the applicationEnablement property, the publisher has two options: either provide a default value or parameterize it.
#### Sample NFDV
+The NFDV is used by the publisher to set default values for applicationEnablement.
+ ```json { "location":"<location>",
For the applicationEnablement property, the publisher has two options: either pr
} ```
-### Operator changes
-Operators specify applicationEnablement as defined by the NFDV. If applicationEnablement for specific application is parameterized, then it must be passed through the deploymentValues property at runtime.
- #### Sample configuration group schema (CGS) resource
+The CGS is used by the publisher to require the roleOverrideValues variables to be provided by the operator at runtime. These roleOverrideValues can include non-default settings for applicationEnablement.
+ ```json { "type": "object",
Operators specify applicationEnablement as defined by the NFDV. If applicationEn
] } ```+
+### Operator changes
+Operators inherit default applicationEnablement values as defined by the NFDV. If applicationEnablement is parameterized in CGS, then it must be passed through the deploymentValues property at runtime.
#### Sample configuration group value (CGV) resource
+The CGV is used by the operator to set the roleOverrideValues variables at runtime. The roleOverrideValues include non-default settings for applicationEnablement.
+ ```json { "location": "<location>",
Operators specify applicationEnablement as defined by the NFDV. If applicationEn
} ```
-#### Sample NF template
+#### Sample NF ARM template
+The NF ARM template is used by the operator to submit the roleOverrideValues variables, set by CGV, to the resource provider (RP). The operator can change the applicationEnablement setting in CGV as needed and resubmit the same NF ARM template to alter behavior between iterations.
+ ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
reliability Overview Reliability Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview-reliability-guidance.md
For a more detailed overview of reliability principles in Azure, see [Reliabilit
| Product| Availability zone guide | Disaster recovery guide |
|-|-|-|
|Azure API Center | [Reliability in Azure API Center](reliability-api-center.md) | [Reliability in Azure API Center](reliability-api-center.md)|
+|Azure API for FHIR|| [Disaster recovery for Azure API for FHIR](../healthcare-apis/azure-api-for-fhir/disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
|Azure Application Gateway for Containers | [Reliability in Azure Application Gateway for Containers](reliability-app-gateway-containers.md) | [Reliability in Azure Application Gateway for Containers](reliability-app-gateway-containers.md)|
|Azure Chaos Studio | [Reliability in Azure Chaos Studio](reliability-chaos-studio.md)| [Reliability in Azure Chaos Studio](reliability-chaos-studio.md)|
|Azure Community Training|[Reliability in Community Training](reliability-community-training.md) |[Reliability in Community Training](reliability-community-training.md) |
For a more detailed overview of reliability principles in Azure, see [Reliabilit
|Azure DevOps|| [Azure DevOps Data protection - data availability](/azure/devops/organizations/security/data-protection?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json&preserve-view=true&#data-availability)|
|Azure Elastic SAN|[Availability zone support](reliability-elastic-san.md#availability-zone-support)|[Disaster recovery and business continuity](reliability-elastic-san.md#disaster-recovery-and-business-continuity)|
|Azure HDInsight on AKS |[Reliability in HDInsight on AKS](reliability-hdinsight-on-aks.md) | [Reliability in HDInsight on AKS](reliability-hdinsight-on-aks.md) |
-|Azure Health Data Services - Azure API for FHIR|| [Disaster recovery for Azure API for FHIR](../healthcare-apis/azure-api-for-fhir/disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
+|Azure Health Data Services (FHIR, DICOM, MedTech) | | [Disaster recovery for Azure Health Data Services](../healthcare-apis/business-continuity-disaster-recovery.md) |
+|Azure Health Data Services de-identification service || [Disaster recovery for Azure Health Data Services de-identification service](reliability-health-data-services-deidentification.md) |
|Azure Health Insights|[Reliability in Azure Health Insights](reliability-health-insights.md)|[Reliability in Azure Health Insights](reliability-health-insights.md)|
|Azure IoT Hub| [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
|Azure Machine Learning Service|| [Failover for business continuity and disaster recovery](/azure/machine-learning/how-to-high-availability-machine-learning?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
reliability Reliability Health Data Services Deidentification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-health-data-services-deidentification.md
+
+ Title: Reliability in Azure Health Data Services de-identification service
+description: Find out about reliability in the Azure Health Data Services de-identification service.
++++++ Last updated : 09/27/2024
+#Customer intent: As an IT admin, I want to understand reliability support for the de-identification service so that I can respond to and/or avoid failures in order to minimize downtime and data loss.
++
+# Reliability in the Azure Health Data Services de-identification service (preview)
+
+This article describes reliability support in the de-identification service (preview). For a more detailed overview of reliability principles in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+
+## Cross-region disaster recovery
++
+Each de-identification service (preview) is deployed to a single Azure region. In the event of a region-wide degradation or outage:
+- ARM control plane functionality is limited to read-only during the outage. Your service metadata (such as resource properties) is always backed up outside of the region by Microsoft. Once the outage is over, you can read and write to the control plane.
+- All data plane requests fail during the outage, such as de-identification or job API requests. No customer data is lost, but there is the potential for job progress metadata to be lost. Once the outage is over, you can read and write to the data plane.
+
+### Disaster recovery tutorial
+If an entire Azure region isn't available, you can still ensure high availability of your workloads. You can deploy two or more de-identification services in an active-active configuration, with Azure Front Door used to route traffic to both regions.
+
+With this example architecture:
+
+- Identical de-identification services are deployed in two separate regions.
+- Azure Front Door is used to route traffic to both regions.
+- During a disaster, one region becomes offline, and Azure Front Door routes traffic exclusively to the other region. The recovery time objective during such a geo-failover is limited to the time Azure Front Door takes to detect that one service is unhealthy.
+
+#### RTO and RPO
+
+If you adopt the active-active configuration, you should expect a recovery time objective (RTO) of **5 minutes**. In any configuration, you should expect a recovery point objective (RPO) of **0 minutes** (no customer data will be lost).
+
+### Validate disaster recovery plan
+#### Prerequisites
++
+To complete this tutorial:
++
+#### Create a resource group
+
+You need two instances of a de-identification service (preview) in different Azure regions for this tutorial. The tutorial uses the [region pair](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) East US/West US as your two regions, but feel free to choose your own regions.
+
+To make management and clean-up simpler, you use a single resource group for all resources in this tutorial. Consider using separate resource groups for each region/resource to further isolate your resources in a disaster recovery situation.
+
+Run the following command to create your resource group.
+
+```azurecli-interactive
+az group create --name my-deid --location eastus
+```
+
+#### Create de-identification services (preview)
+
+Follow the steps at [Quickstart: Deploy the de-identification service (preview)](/azure/healthcare-apis/deidentification/quickstart) to create two separate services, one in East US and one in West US.
+
+Note the service URL of each de-identification service so you can define the backend addresses when you deploy the Azure Front Door in the next step.
+
+#### Create an Azure Front Door
+
+A multi-region deployment can use an active-active or active-passive configuration. An active-active configuration distributes requests across multiple active regions. An active-passive configuration keeps running instances in the secondary region, but doesn't send traffic there unless the primary region fails.
+Azure Front Door has a built-in feature that allows you to enable these configurations. For more information on designing apps for high availability and fault tolerance, see [Architect Azure applications for resiliency and availability](/azure/architecture/reliability/architect).
+
+#### Create an Azure Front Door profile
+
+You now create an [Azure Front Door Premium](../frontdoor/front-door-overview.md) to route traffic to your services.
+
+Run [`az afd profile create`](/cli/azure/afd/profile#az-afd-profile-create) to create an Azure Front Door profile.
+
+> [!NOTE]
+> If you want to deploy Azure Front Door Standard instead of Premium, substitute the value of the `--sku` parameter with Standard_AzureFrontDoor. You can't deploy managed rules with WAF Policy if you choose the Standard tier. For a detailed comparison of the pricing tiers, see [Azure Front Door tier comparison](../frontdoor/standard-premium/tier-comparison.md).
+
+```azurecli-interactive
+az afd profile create --profile-name myfrontdoorprofile --resource-group my-deid --sku Premium_AzureFrontDoor
+```
+
+|Parameter |Value |Description |
+||||
+|`profile-name` |`myfrontdoorprofile` |Name for the Azure Front Door profile, which is unique within the resource group. |
+|`resource-group` |`my-deid` |The resource group that contains the resources from this tutorial. |
+|`sku` |`Premium_AzureFrontDoor` |The pricing tier of the Azure Front Door profile. |
++
+#### Add an endpoint
+
+Run [`az afd endpoint create`](/cli/azure/afd/endpoint#az-afd-endpoint-create) to create an endpoint in your profile. You can create multiple endpoints in your profile after finishing the create experience.
+
+```azurecli-interactive
+az afd endpoint create --resource-group my-deid --endpoint-name myendpoint --profile-name myfrontdoorprofile --enabled-state Enabled
+```
+
+|Parameter |Value |Description |
+||||
+|`endpoint-name` |`myendpoint` |Name of the endpoint under the profile, which is unique globally. |
+|`enabled-state` |`Enabled` |Whether to enable this endpoint. |
+
+#### Create an origin group
+
+Run [`az afd origin-group create`](/cli/azure/afd/origin-group#az-afd-origin-group-create) to create an origin group that contains your two de-identification services.
+
+```azurecli-interactive
+az afd origin-group create --resource-group my-deid --origin-group-name myorigingroup --profile-name myfrontdoorprofile --probe-request-type GET --probe-protocol Https --probe-interval-in-seconds 60 --probe-path /health --sample-size 1 --successful-samples-required 1 --additional-latency-in-milliseconds 50 --enable-health-probe
+```
+
+|Parameter |Value |Description |
+||||
+|`origin-group-name` |`myorigingroup` |Name of the origin group. |
+|`probe-request-type` |`GET` |The type of health probe request that is made. |
+|`probe-protocol` |`Https` |Protocol to use for health probe. |
+|`probe-interval-in-seconds` |`60` |The number of seconds between health probes. |
+|`probe-path` |`/health` |The path relative to the origin that is used to determine the health of the origin. |
+|`sample-size` |`1` |The number of samples to consider for load balancing decisions. |
+|`successful-samples-required` |`1` |The number of samples within the sample period that must succeed. |
+|`additional-latency-in-milliseconds` |`50` |The extra latency in milliseconds for probes to fall into the lowest latency bucket. |
+|`enable-health-probe` | | Switch to control the status of the health probe. |
+
+#### Add origins to the group
+
+Run [`az afd origin create`](/cli/azure/afd/origin#az-afd-origin-create) to add an origin to your origin group. For the `--host-name` and `--origin-host-header` parameters, replace the placeholder value `<service-url-east-us>` with your East US service URL, leaving out the scheme (`https://`). You should have a value like `abcdefghijk.api.eastus.deid.azure.com`.
+
+```azurecli-interactive
+az afd origin create --resource-group my-deid --host-name <service-url-east-us> --profile-name myfrontdoorprofile --origin-group-name myorigingroup --origin-name primarydeid --origin-host-header <service-url-east-us> --priority 1 --weight 1000 --enabled-state Enabled --https-port 443
+```
+
+|Parameter |Value |Description |
+||||
+|`host-name` |`<service-url-east-us>` |The hostname of the primary de-identification service. |
+|`origin-name` |`primarydeid` |Name of the origin. |
+|`origin-host-header` |`<service-url-east-us>` |The host header to send for requests to this origin. |
+|`priority` |`1` |Priority of the origin. Both origins in this tutorial use priority 1, so Azure Front Door treats them as active and distributes traffic across both. |
+|`weight` |`1000` |Weight of the origin in given origin group for load balancing. Must be between 1 and 1000. |
+|`enabled-state` |`Enabled` |Whether to enable this origin. |
+|`https-port` |`443` |The port used for HTTPS requests to the origin. |
+
+Repeat this step to add your second origin. For the `--host-name` and `--origin-host-header` parameters, replace the placeholder value `<service-url-west-us>` with your West US service URL, leaving out the scheme (`https://`).
+
+```azurecli-interactive
+az afd origin create --resource-group my-deid --host-name <service-url-west-us> --profile-name myfrontdoorprofile --origin-group-name myorigingroup --origin-name deid2 --origin-host-header <service-url-west-us> --priority 1 --weight 1000 --enabled-state Enabled --https-port 443
+```
+
+Pay attention to the `--priority` parameters in both commands. Because both origins are set to priority `1`, Azure Front Door treats both origins as active and directs traffic to both regions. If the priority for one origin is set to `2`, Azure Front Door treats that origin as secondary and directs all traffic to the other origin unless it goes down.
+
+#### Add a route
+
+Run [`az afd route create`](/cli/azure/afd/route#az-afd-route-create) to map your endpoint to the origin group. This route forwards requests from the endpoint to your origin group.
+
+```azurecli-interactive
+az afd route create --resource-group my-deid --profile-name myfrontdoorprofile --endpoint-name myendpoint --forwarding-protocol MatchRequest --route-name route --origin-group myorigingroup --supported-protocols Https --link-to-default-domain Enabled
+```
+
+|Parameter |Value |Description |
+||||
+|`endpoint-name` |`myendpoint` |Name of the endpoint. |
+|`forwarding-protocol` |MatchRequest |Protocol this rule uses when forwarding traffic to backends. |
+|`route-name` |`route` |Name of the route. |
+|`supported-protocols` |`Https` |List of supported protocols for this route. |
+|`link-to-default-domain` |`Enabled` |Whether this route is linked to the default endpoint domain. |
+
+Allow about 15 minutes for this step to complete as it takes some time for this change to propagate globally. After this period, your Azure Front Door is fully functional.
+
+#### Test the Front Door
+
+When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created.
+
+Run [`az afd endpoint show`](/cli/azure/afd/endpoint#az-afd-endpoint-show) to get the hostname of the Front Door endpoint. It should look like `abcdefg.azurefd.net`.
+
+```azurecli-interactive
+az afd endpoint show --resource-group my-deid --profile-name myfrontdoorprofile --endpoint-name myendpoint --query "hostName"
+```
+
+In a browser, go to the endpoint hostname that the previous command returned: `<endpoint>.azurefd.net/health`. Your request should automatically get routed to the primary de-identification service in East US.
+
+To test instant global failover:
+
+1. Open a browser and go to the endpoint hostname: `<endpoint>.azurefd.net/health`.
+1. Follow the steps at [Configure private access](/azure/healthcare-apis/deidentification/configure-private-endpoints#configure-private-access) to disable public network access for the de-identification service in East US.
+1. Refresh your browser. You should see the same information page because traffic is now directed to the de-identification service in West US.
+
+ > [!TIP]
+ > You might need to refresh the page a few times for the failover to complete.
+
+1. Now disable public network access for the de-identification service in West US.
+1. Refresh your browser. This time, you should see an error message.
+1. Re-enable public network access for one of the de-identification services. Refresh your browser and you should see the health status again.
+
+You've now validated that you can access your services through Azure Front Door and that failover functions as intended. Enable public network access on the other service if you're done with failover testing.
+
+#### Clean up resources
+
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command:
+
+```azurecli-interactive
+az group delete --name my-deid
+```
+
+This command might take a few minutes to complete.
+
+#### Initiate recovery
+In the case of a disaster, you can check the health status of your de-identification service (preview) by sending requests to `<service-url>/health`.
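+As a small illustrative sketch (the service URL is a placeholder), a health probe can be as simple as:
+
+```python
+import requests
+
+resp = requests.get("https://<service-url>/health", timeout=10)
+print(resp.status_code, resp.text)   # expect HTTP 200 when the service is healthy
+```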
+
+## Related content
+
+- [Reliability in Azure](/azure/reliability/overview)
sentinel Use Matching Analytics To Detect Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-matching-analytics-to-detect-threats.md
You must install one or more of the supported data connectors to produce high-fi
- Syslog
- Office activity logs
- Azure activity logs
+ - ASIM DNS logs
+ - ASIM Network sessions
:::image type="content" source="media/use-matching-analytics-to-detect-threats/data-sources.png" alt-text="A screenshot that shows the Microsoft Defender Threat Intelligence Analytics rule data source connections.":::
Microsoft Defender Threat Intelligence Analytics matches your logs with domain,
- **Syslog events**, where `Facility == "cron"` ingested into the `Syslog` table matches domain and IPv4 indicators directly from the `SyslogMessage` field.
- **Office activity logs** ingested into the `OfficeActivity` table match IPv4 indicators directly from the `ClientIP` field.
- **Azure activity logs** ingested into the `AzureActivity` table match IPv4 indicators directly from the `CallerIpAddress` field.
+- **ASIM DNS logs** ingested into the `ASimDnsActivityLogs` table match domain indicators if populated in the `DnsQuery` field, and IPv4 indicators in the `DnsResponseName` field.
+- **ASIM Network Sessions** ingested into the `ASimNetworkSessionLogs` table match IPv4 indicators if populated in one or more of the following fields: `DstIpAddr`, `DstNatIpAddr`, `SrcNatIpAddr`, `SrcIpAddr`, `DvcIpAddr`.
## Triage an incident generated by matching analytics
service-bus-messaging Service Bus Azure And Service Bus Queues Compared Contrasted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
This section compares advanced capabilities provided by Storage queues and Servi
| Storage metrics |Yes<br/><br/>Minute Metrics provides real-time metrics for availability, TPS, API call counts, error counts, and more. They're all in real time, aggregated per minute and reported within a few minutes from what just happened in production. For more information, see [About Storage Analytics Metrics](/rest/api/storageservices/fileservices/About-Storage-Analytics-Metrics). |Yes<br/><br/>For information about metrics supported by Azure Service Bus, see [Message metrics](monitor-service-bus-reference.md#message-metrics). |
| State management |No |Yes (Active, Disabled, SendDisabled, ReceiveDisabled. For details on these states, see [Queue status](entity-suspend.md#queue-status)) |
| Message autoforwarding |No |Yes |
-| Purge queue function |Yes |No |
+| Purge queue function |Yes |[Yes](/azure/service-bus-messaging/batch-delete)|
| Message groups |No |Yes<br/><br/>(by using messaging sessions) |
| Application state per message group |No |Yes |
| Duplicate detection |No |Yes<br/><br/>(configurable on the sender side) |
This section compares Storage queues and Service Bus queues from the perspective
| Comparison Criteria | Storage queues | Service Bus queues |
| --- | --- | --- |
-| Maximum queue size |500 TB<br/><br/>(limited to a [single storage account capacity](../storage/common/storage-introduction.md#queue-storage)) |1 GB to 80 GB<br/><br/>(defined upon creation of a queue and [enabling partitioning](service-bus-partitioning.md) ΓÇô see the ΓÇ£Additional InformationΓÇ¥ section) |
-| Maximum message size |64 KB<br/><br/>(48 KB when using Base 64 encoding)<br/><br/>Azure supports large messages by combining queues and blobs ΓÇô at which point you can enqueue up to 200 GB for a single item. |256 KB or 100 MB<br/><br/>(including both header and body, maximum header size: 64 KB).<br/><br/>Depends on the [service tier](service-bus-premium-messaging.md). |
+| Maximum queue size |500 TB<br/><br/>(limited to a [single storage account capacity](../storage/common/storage-introduction.md#queue-storage)) |1 GB to 80 GB<br/><br/>(Premium SKU or Standard SKU with partitioning)|
+| Maximum message size |64 KB<br/><br/>(48 KB when using Base 64 encoding)<br/><br/>Azure supports large messages by combining queues and blobs, at which point you can enqueue up to 200 GB for a single item. |256 KB, 1 MB, or 100 MB<br/><br/>(including both header and body, maximum header size: 64 KB).<br/><br/>Depends on the [service tier](service-bus-premium-messaging.md). |
| Maximum message TTL |Infinite (api-version 2017-07-27 or later) |TimeSpan.MaxValue |
-| Maximum number of queues |Unlimited |10,000<br/><br/>(per service namespace) |
+| Maximum number of queues |Unlimited |10,000 (Standard SKU)<br/>1,000 per Messaging Unit (Premium SKU)<br/>(per service namespace) |
| Maximum number of concurrent clients |Unlimited |5,000 |
### Additional information
This section compares Storage queues and Service Bus queues from the perspective
* With Storage queues, if the content of the message isn't XML-safe, then it must be **Base64** encoded. If you **Base64**-encode the message, the user payload can be up to 48 KB, instead of 64 KB (see the sketch after this list).
* With Service Bus queues, each message stored in a queue is composed of two parts: a header and a body. The total size of the message can't exceed the maximum message size supported by the service tier.
* When clients communicate with Service Bus queues over the TCP protocol, the maximum number of concurrent connections to a single Service Bus queue is limited to 100. This number is shared between senders and receivers. If this quota is reached, requests for additional connections will be rejected and an exception will be received by the calling code. This limit isn't imposed on clients connecting to the queues using REST-based API.
-* If you require more than 10,000 queues in a single Service Bus namespace, you can contact the Azure support team and request an increase. To scale beyond 10,000 queues with Service Bus, you can also create additional namespaces using the [Azure portal].
+* To scale beyond 10,000 queues with the Service Bus Standard SKU, or 1,000 queues per Messaging Unit with the Premium SKU, you can create additional namespaces using the [Azure portal].
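+To illustrate the Base64 arithmetic from the first bullet in this list: Base64 encodes every 3 input bytes as 4 output characters, so a 48 KB payload grows to the 64 KB storage-queue limit. A quick Python check:
+
+```python
+import base64
+
+payload = b"x" * (48 * 1024)          # 48 KB of raw payload
+encoded = base64.b64encode(payload)   # grows by a factor of 4/3
+print(len(payload), len(encoded))     # 49152 65536 (48 KB -> 64 KB)
+```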
## Management and operations
This section compares the management features provided by Storage queues and Service Bus queues.
static-web-apps Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-overview.md
Previously updated : 06/14/2022 Last updated : 10/02/2024
Key features of Azure Static Web Apps APIs include:
- **Integrated security** with direct access to user [authentication and role-based authorization](user-information.md) data.
-- **Seamless routing** that makes the `/api` route available to the front-end web app without requiring custom CORS rules.
+- **Seamless routing** that makes the back-end `/api` route available to the front-end web app without requiring custom CORS rules.
## API options
static-web-apps Deployment Token Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deployment-token-management.md
Previously updated : 06/24/2024 Last updated : 10/02/2024
# Reset deployment tokens in Azure Static Web Apps
To keep automated deployment running, after resetting a token you need to set th
1. Go to your project's repository on GitHub, and select the **Settings** tab.
-1. Select **Secrets** from the menu item. Find a secret generated during Static Web App provisioning named _AZURE_STATIC_WEB_APPS_API_TOKEN_... in the _Repository secrets_ section.
+1. Under the *Security* section, select **Actions**.
- :::image type="content" source="./media/deployment-token-management/github-repo-secrets.png" alt-text="Listing repository secrets":::
+1. Find a secret generated during Static Web App provisioning named _AZURE_STATIC_WEB_APPS_API_TOKEN_... in the _Repository secrets_ section.
> [!NOTE]
> If you created the Azure Static Web Apps site against multiple branches of this repository, you see multiple _AZURE_STATIC_WEB_APPS_API_TOKEN_... secrets in this list. Select the correct one by matching the file name listed in the _Edit workflow_ field on the _Overview_ tab of the Static Web Apps site.
-1. Select **Update**.
+1. Select the pen icon to update the value.
1. **Paste the value** of the deployment token to the _Value_ field.
static-web-apps External Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/external-providers.md
Title: Deploy to Azure Static Web Apps with external providers
-description: Learn to use CI/CD providers that aren't supported out-of-the-box to build and deploy your website in Azure Static Web Apps.
+ Title: Set up Azure Static Web Apps to deploy to external providers
+description: Learn how to set up your static web app to use CI/CD providers that aren't supported out-of-the-box.
Previously updated : 06/05/2023 Last updated : 10/02/2024
-# Deploy to Azure Static Web Apps with external providers
+# Set up Azure Static Web Apps to deploy to external providers
Azure Static Web Apps supports a series of built-in providers to help you publish your website. If you would like to use a provider beyond the out-of-the-box options, use the following guide to build and deploy your static web app.
static-web-apps Front Door Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/front-door-manual.md
Title: "Tutorial: Configure Azure Front Door for Azure Static Web Apps"
-description: Learn how to set up Azure Front Door for Azure Static Web Apps
+ Title: "Tutorial: Configure a CDN for Azure Static Web Apps"
+description: Learn how to set up a content delivery network (CDN) for Azure Static Web Apps
Previously updated : 01/24/2023 Last updated : 10/01/2024 zone_pivot_groups: static-web-apps-afd-methods
-# Tutorial: Configure Azure Front Door for Azure Static Web Apps
+# Tutorial: Configure a CDN for Azure Static Web Apps
By adding [Azure Front Door](../frontdoor/front-door-overview.md) as the CDN for your static web app, you benefit from a secure entry point for fast delivery of your web applications.
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
Previously updated : 08/22/2024 Last updated : 10/02/2024 ms.devlang: csharp
To learn more about generating and managing SAS tokens, see the following articl
- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
- [Create an account SAS with .NET](../common/storage-account-sas-create-dotnet.md)
-- [Create a service SAS for a container with .NET](sas-service-create-dotnet-container.md)
-- [Create a service SAS for a blob with .NET](sas-service-create-dotnet.md)
-- [Create a user delegation SAS for a container with .NET](storage-blob-container-user-delegation-sas-create-dotnet.md)
-- [Create a user delegation SAS for a blob with .NET](storage-blob-user-delegation-sas-create-dotnet.md)
+- [Create a service SAS with .NET](sas-service-create-dotnet.md)
+- [Create a user delegation SAS with .NET](storage-blob-user-delegation-sas-create-dotnet.md)
> [!NOTE]
> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key.
The following guides show you how to access data and perform specific actions us
| [Configure a retry policy](storage-retry-policy.md) | Implement retry policies for client operations. |
| [Copy blobs](storage-blob-copy.md) | Copy a blob from one location to another. |
| [Create a container](storage-blob-container-create.md) | Create containers. |
-| [Create a user delegation SAS (blobs)](storage-blob-user-delegation-sas-create-dotnet.md) | Create a user delegation SAS for a blob. |
-| [Create a user delegation SAS (containers))](storage-blob-container-user-delegation-sas-create-dotnet.md) | Create a user delegation SAS for a container. |
+| [Create a user delegation SAS](storage-blob-user-delegation-sas-create-dotnet.md) | Create a user delegation SAS for a container or blob. |
| [Create and manage blob leases](storage-blob-lease.md) | Establish and manage a lock on a blob. |
| [Create and manage container leases](storage-blob-container-lease.md) | Establish and manage a lock on a container. |
-| [Delete and restore](storage-blob-delete.md) | Delete blobs, and if soft-delete is enabled, restore deleted blobs. |
+| [Delete and restore blobs](storage-blob-delete.md) | Delete blobs, and if soft-delete is enabled, restore deleted blobs. |
| [Delete and restore containers](storage-blob-container-delete.md) | Delete containers, and if soft-delete is enabled, restore deleted containers. |
| [Download blobs](storage-blob-download.md) | Download blobs by using strings, streams, and file paths. |
| [Find blobs using tags](storage-blob-tags.md) | Set and retrieve tags, and use tags to find blobs. |
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
Previously updated : 08/22/2024 Last updated : 10/02/2024
To learn more about generating and managing SAS tokens, see the following articl
- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
- [Create an account SAS with Java](../common/storage-account-sas-create-java.md)
-- [Create a service SAS for a container with Java](sas-service-create-java-container.md)
-- [Create a service SAS for a blob with Java](sas-service-create-java.md)
-- [Create a user delegation SAS for a container with Java](storage-blob-container-user-delegation-sas-create-java.md)
-- [Create a user delegation SAS for a blob with Java](storage-blob-user-delegation-sas-create-java.md)
+- [Create a service SAS with Java](sas-service-create-java.md)
+- [Create a user delegation SAS with Java](storage-blob-user-delegation-sas-create-java.md)
> [!NOTE]
> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key.
The following guides show you how to access data and perform specific actions us
| [Configure a retry policy](storage-retry-policy-java.md) | Implement retry policies for client operations. |
| [Copy blobs](storage-blob-copy-java.md) | Copy a blob from one location to another. |
| [Create a container](storage-blob-container-create-java.md) | Create blob containers. |
-| [Create a user delegation SAS (blobs)](storage-blob-user-delegation-sas-create-java.md) | Create a user delegation SAS for a blob. |
-| [Create a user delegation SAS (containers)](storage-blob-container-user-delegation-sas-create-java.md) | Create a user delegation SAS for a container. |
+| [Create a user delegation SAS](storage-blob-user-delegation-sas-create-java.md) | Create a user delegation SAS for a container or blob. |
| [Create and manage blob leases](storage-blob-lease-java.md) | Establish and manage a lock on a blob. |
| [Create and manage container leases](storage-blob-container-lease-java.md) | Establish and manage a lock on a container. |
-| [Delete and restore](storage-blob-delete-java.md) | Delete blobs, and if soft-delete is enabled, restore deleted blobs. |
+| [Delete and restore blobs](storage-blob-delete-java.md) | Delete blobs, and if soft-delete is enabled, restore deleted blobs. |
| [Delete and restore containers](storage-blob-container-delete-java.md) | Delete containers, and if soft-delete is enabled, restore deleted containers. |
| [Download blobs](storage-blob-download-java.md) | Download blobs by using strings, streams, and file paths. |
| [Find blobs using tags](storage-blob-tags-java.md) | Set and retrieve tags as well as use tags to find blobs. |
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
Previously updated : 08/05/2024 Last updated : 10/02/2024 ai-usage: ai-assisted
To learn more about generating and managing SAS tokens, see the following articl
- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
- [Create an account SAS with Python](../common/storage-account-sas-create-python.md)
-- [Create a service SAS for a container with Python](sas-service-create-python-container.md)
-- [Create a service SAS for a blob with Python](sas-service-create-python.md)
-- [Create a user delegation SAS for a container with Python](storage-blob-container-user-delegation-sas-create-python.md)
-- [Create a user delegation SAS for a blob with Python](storage-blob-user-delegation-sas-create-python.md)
+- [Create a service SAS with Python](sas-service-create-python.md)
+- [Create a user delegation SAS with Python](storage-blob-user-delegation-sas-create-python.md)
> [!NOTE]
> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key.
The following guides show you how to access data and perform specific actions us
| [Configure a retry policy](storage-retry-policy-python.md) | Implement retry policies for client operations. |
| [Copy blobs](storage-blob-copy-python.md) | Copy a blob from one location to another. |
| [Create a container](storage-blob-container-create-python.md) | Create blob containers. |
-| [Create a user delegation SAS (blobs)](storage-blob-user-delegation-sas-create-python.md) | Create a user delegation SAS for a blob. |
-| [Create a user delegation SAS (containers))](storage-blob-container-user-delegation-sas-create-python.md) | Create a user delegation SAS for a container. |
+| [Create a user delegation SAS](storage-blob-user-delegation-sas-create-python.md) | Create a user delegation SAS for a container or blob. |
| [Create and manage blob leases](storage-blob-lease-python.md) | Establish and manage a lock on a blob. |
| [Create and manage container leases](storage-blob-container-lease-python.md) | Establish and manage a lock on a container. |
-| [Delete and restore](storage-blob-delete-python.md) | Delete blobs and restore soft-deleted blobs. |
+| [Delete and restore blobs](storage-blob-delete-python.md) | Delete blobs and restore soft-deleted blobs. |
| [Delete and restore containers](storage-blob-container-delete-python.md) | Delete containers and restore soft-deleted containers. |
| [Download blobs](storage-blob-download-python.md) | Download blobs by using strings, streams, and file paths. |
| [Find blobs using tags](storage-blob-tags-python.md) | Set and retrieve tags, and use tags to find blobs. |
storage Storage Use Azcopy V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-v10.md
You can install AzCopy by using a Linux package that is hosted on the [Linux Sof
4. Update the package index files.
   ```bash
- sudo dnf update
+ sudo zypper --gpg-auto-import-keys refresh
```-
+
5. Install AzCopy.
   ```bash
stream-analytics Stream Analytics Real Time Fraud Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-real-time-fraud-detection.md
Previously updated : 02/14/2024 Last updated : 10/01/2024 #Customer intent: As an IT admin/developer, I want to run a Stream Analytics job to analyze phone call data and visualize results in a Power BI dashboard.
The last step is to define an output sink where the job can write the transforme
1. From the Azure portal, open **All resources**, and select the *ASATutorial* Stream Analytics job.
2. In the **Job Topology** section of the Stream Analytics job, select the **Outputs** option.
-3. Select **+ Add** > **Power BI**.
-4. Fill the output form with the following details:
+3. Select **+ Add output** > **Power BI**.
+
+ :::image type="content" source="./media/stream-analytics-real-time-fraud-detection/select-output-type.png" alt-text="Screenshot that shows the Outputs page with Add output -> Power BI menu selected.":::
+1. Fill the output form with the following details:
|**Setting** |**Suggested value** |
|---|---|
When you use a join with streaming data, the join must provide some limits on ho
4. From your Power BI workspace, select **+ Create** to create a new dashboard named *Fraudulent Calls*.
-5. At the top of the window, select **Edit** and **Add tile**. Then select **Custom Streaming Data** and **Next**. Choose the **ASAdataset** under **Your Datasets**. Select **Card** from the **Visualization type** dropdown, and add **fraudulent calls** to **Fields**. Select **Next** to enter a name for the tile, and then select **Apply** to create the tile.
+5. At the top of the window, select **Edit** and **Add tile**.
+1. In the **Add tile** window, select **Custom Streaming Data** and **Next**.
+1. Choose the **ASAdataset** under **Your Datasets**, and select **Next**.
+1. Select **Card** from the **Visualization type** dropdown, add **fraudulent calls** to **Fields**, and then select **Next**.
- ![Create Power BI dashboard tiles](media/stream-analytics-real-time-fraud-detection/create-power-bi-dashboard-tiles.png)
+ :::image type="content" source="./media/stream-analytics-real-time-fraud-detection/chart-settings.png" alt-text="Screenshot that shows the chart settings for a Power BI dashboard." lightbox="./media/stream-analytics-real-time-fraud-detection/chart-settings.png":::
+1. Enter a name for the tile (for example, **Fraudulent calls**), and then select **Apply** to create the tile.
-6. Follow the step 5 again with the following options:
+ :::image type="content" source="./media/stream-analytics-real-time-fraud-detection/tile-details.png" alt-text="Screenshot that shows the Tile details page." lightbox="./media/stream-analytics-real-time-fraud-detection/tile-details.png":::
+1. Follow step 5 again with the following options:
* When you get to Visualization Type, select Line chart.
* Add an axis and select **windowend**.
* Add a value and select **fraudulent calls**.
* For **Time window to display**, select the last 10 minutes.
+ :::image type="content" source="./media/stream-analytics-real-time-fraud-detection/line-chart-settings.png" alt-text="Screenshot that shows settings for a line chart on the dashboard." lightbox="./media/stream-analytics-real-time-fraud-detection/line-chart-settings.png":::
7. Your dashboard should look like the following example once both tiles are added. Notice that, if your event hub sender application and Streaming Analytics application are running, your Power BI dashboard periodically updates as new data arrives.
- ![Screenshot of results in Power BI dashboard.](media/stream-analytics-real-time-fraud-detection/power-bi-results-dashboard.png)
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/power-bi-results-dashboard.png" alt-text="Screenshot that shows the Power BI dashboard." lightbox="media/stream-analytics-real-time-fraud-detection/power-bi-results-dashboard.png":::
## Embedding your Power BI Dashboard in a web application

For this part of the tutorial, you use a sample [ASP.NET](https://asp.net/) web application created by the Power BI team to embed your dashboard. For more information about embedding dashboards, see the [embedding with Power BI](/power-bi/developer/embedding) article.
-To set up the application, go to the [PowerBI-Developer-Samples](https://github.com/Microsoft/PowerBI-Developer-Samples) GitHub repository and follow the instructions under the **User Owns Data** section (use the redirect and homepage URLs under the **integrate-web-app** subsection). Since we're using the Dashboard example, use the **integrate-web-app** sample code located in the [GitHub repository](https://github.com/microsoft/PowerBI-Developer-Samples/tree/master/.NET%20Framework/Embed%20for%20your%20organization/).
+To set up the application, go to the [Power BI-Developer-Samples](https://github.com/Microsoft/PowerBI-Developer-Samples) GitHub repository and follow the instructions under the **User Owns Data** section (use the redirect and homepage URLs under the **integrate-web-app** subsection). Since we're using the Dashboard example, use the **integrate-web-app** sample code located in the [GitHub repository](https://github.com/microsoft/PowerBI-Developer-Samples/tree/master/.NET%20Framework/Embed%20for%20your%20organization/).
Once you have the application running in your browser, follow these steps to embed the dashboard you created earlier into the web page:

1. Select **Sign in to Power BI**, which grants the application access to the dashboards in your Power BI account.
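Before wiring up the sample app, it can help to confirm the dashboard is reachable through the Power BI REST API, which is what the embedding flow relies on. A minimal sketch follows; it assumes you've already acquired a Microsoft Entra access token for the Power BI API (the `$accessToken` variable here is hypothetical).

```azurepowershell-interactive
# $accessToken is a hypothetical, previously acquired token for the
# Power BI REST API (https://api.powerbi.com).
$headers = @{ Authorization = "Bearer $accessToken" }

# List the caller's dashboards and pick out the one created in this tutorial.
$dashboards = Invoke-RestMethod -Method Get `
    -Uri "https://api.powerbi.com/v1.0/myorg/dashboards" -Headers $headers
$fraud = $dashboards.value | Where-Object { $_.displayName -eq "Fraudulent Calls" }
$fraud.embedUrl   # the URL the sample web app embeds
```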
update-manager Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/scheduled-patching.md
We recommend that you stay within the following limits for each indicator.
| Total number of resource associations to a schedule | 3,000 |
| Resource associations on each dynamic scope | 1,000 |
| Number of dynamic scopes per resource group or subscription per region | 250 |
-| Number of dynamic scopes per schedule | 30 |
-| Total number of subscriptions attached to all dynamic scopes per schedule | 30 |
+| Number of dynamic scopes per schedule | 200 |
+| Total number of subscriptions attached to all dynamic scopes per schedule | 200 |
For more information, see the [service limits for Dynamic scope](dynamic-scope-overview.md#service-limits).
virtual-network Troubleshoot Outbound Smtp Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/troubleshoot-outbound-smtp-connectivity.md
Outbound email messages that are sent directly to external domains (such as outlook.com and gmail.com) over TCP port 25 might be blocked, depending on your subscription type.
## Recommended method of sending email
-We recommend you use authenticated SMTP relay services to send email from Azure VMs or from Azure App Service. (These relay services typically connect through TCP port 587, but they support other ports.) These services are used to maintain IP and domain reputation to minimize the possibility that external domains reject your messages or put them to the SPAM folder. [SendGrid](https://sendgrid.com/partners/azure/) is one such SMTP relay service, but there are others. You might also have an authenticated SMTP relay service on your on-premises servers.
+We recommend you use authenticated SMTP relay services to send email from Azure VMs or from Azure App Service. Connections to authenticated SMTP relay services are typically made on TCP port 587, which isn't blocked. These services are used in part to maintain IP reputation, which is critical for delivery reliability. [Azure Communication Services](/azure/communication-services/overview) offers an [authenticated SMTP relay service](/azure/communication-services/quickstarts/email/send-email-smtp/smtp-authentication). Ensure that the [default rate limits](/azure/communication-services/concepts/service-limits#rate-limits) are appropriate for your application, and open a support case to raise them if needed.
-Using these email delivery services isn't restricted in Azure, regardless of the subscription type.
+Using these email delivery services on authenticated SMTP port 587 isn't restricted in Azure, regardless of the subscription type.
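To illustrate the recommended pattern, here's a minimal PowerShell sketch of sending through an authenticated relay on port 587. The host name, addresses, and credentials are placeholders (verify the actual Azure Communication Services SMTP endpoint and credential setup in the linked quickstart), and `Send-MailMessage` is used only for brevity; it's flagged as obsolete in recent PowerShell versions.

```azurepowershell-interactive
# Placeholder values throughout; substitute your relay endpoint and credentials.
$cred = Get-Credential   # SMTP username and password for the relay service

Send-MailMessage -SmtpServer "smtp.azurecomm.net" -Port 587 -UseSsl `
    -From "donotreply@contoso.com" -To "user@example.com" `
    -Subject "Relay test" -Body "Sent over authenticated SMTP on port 587." `
    -Credential $cred
```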
## Enterprise and MCA-E
virtual-wan Cross Tenant Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/cross-tenant-vnet.md
In the following steps, you'll use commands to switch between the context of the two subscriptions.
   $remote = Get-AzVirtualNetwork -Name "[vnet name]" -ResourceGroupName "[resource group name]"
   ```
-1. Switch back to the parent account:
+1. Connect to the parent account:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount -TenantID "[parent tenant ID]"
+ ```
+
+1. Select the parent subscription:
   ```azurepowershell-interactive
   Select-AzSubscription -SubscriptionId "[parent ID]"
   ```
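With the parent context restored, the remote virtual network captured earlier in `$remote` can then be connected to the hub. A sketch of that step follows; the resource group, hub, and connection names are placeholders, not values from this excerpt.

```azurepowershell-interactive
# Connect the remote tenant's virtual network (captured earlier in $remote)
# to the virtual hub in the parent subscription. Names are placeholders.
New-AzVirtualHubVnetConnection -ResourceGroupName "[parent resource group]" `
    -VirtualHubName "[hub name]" -Name "[connection name]" `
    -RemoteVirtualNetwork $remote
```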