Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
api-management | Api Management Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md | |
api-management | Api Management Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md | Title: Feature-based comparison of the Azure API Management tiers | Microsoft Docs + Title: Feature-based comparison of Azure API Management tiers description: Compare API Management tiers based on the features they offer. See a table that summarizes the key features available in each pricing tier. - Previously updated : 03/13/2024+ Last updated : 10/15/2024 -Each API Management [pricing tier](api-management-key-concepts.md#api-management-tiers) offers a distinct set of features and per unit [capacity](api-management-capacity.md). The following table summarizes the key features available in each of the tiers. Some features might work differently or have different capabilities depending on the tier. In such cases the differences are called out in the documentation articles describing these individual features. +Each API Management [pricing tier](api-management-key-concepts.md#api-management-tiers) offers a distinct set of features to meet the needs of different customers. The following table summarizes the key features available in each of the tiers. Some features might work differently or have different capabilities depending on the tier. In such cases the differences are called out in the documentation articles describing these individual features. > [!IMPORTANT] > * The Developer tier is for non-production use cases and evaluations. It doesn't offer an SLA. Each API Management [pricing tier](api-management-key-concepts.md#api-management > * For information about APIs supported in the API Management gateway available in different tiers, see [API Management gateways overview](api-management-gateways-overview.md#backend-apis). 
-| Feature | Consumption | Developer | Basic | Basic v2 |Standard | Standard v2 | Premium | -| -- | -- | | | | -- | -- | - | -| Microsoft Entra integration<sup>1</sup> | No | Yes | No | Yes | Yes | Yes | Yes | -| Virtual Network (VNet) injection support | No | Yes | No | No | No | No | Yes | -| Private endpoint support for inbound connections | No | Yes | Yes | No | Yes | No | Yes | -| Outbound virtual network integration support | No | No | No | No | No | Yes | No | -| Multi-region deployment | No | No | No | No | No | No | Yes | -| Availability zones | No | No | No | No | No | No | Yes | -| Multiple custom domain names for gateway | No | Yes | No | No | No | No | Yes | -| Developer portal<sup>2</sup> | No | Yes | Yes | Yes | Yes | Yes | Yes | -| Built-in cache | No | Yes | Yes | Yes | Yes | Yes | Yes | -| [External cache](./api-management-howto-cache-external.md) | Yes | Yes | Yes | Yes | Yes | Yes |Yes | -| Autoscaling | No | No | Yes | No | Yes | No |Yes | -| API analytics | No | Yes | Yes | Yes | Yes | Yes | Yes | -| [Self-hosted gateway](self-hosted-gateway-overview.md)<sup>3</sup> | No | Yes | No | No | No | No | Yes | -| [Workspaces](workspaces-overview.md) | No | No | No | No | No | No | Yes | -| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | -| [Client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) | Yes | Yes | Yes | Yes | Yes | Yes |Yes | -| [Policies](api-management-howto-policies.md)<sup>4</sup> | Yes | Yes | Yes | Yes | Yes | Yes | Yes | -| [Credential manager](credentials-overview.md) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | -| [Backup and restore](api-management-howto-disaster-recovery-backup-restore.md) | No | Yes | Yes | No | Yes | No | Yes | -| [Management over Git](api-management-configuration-repository-git.md) | No | Yes | Yes |No | Yes | No | Yes | -| Direct management API | No | Yes | Yes | No | Yes |No | Yes | -| Azure Monitor metrics | 
Yes | Yes | Yes | Yes | Yes | Yes | Yes | -| Azure Monitor and Log Analytics request logs | No | Yes | Yes | Yes | Yes | Yes |Yes | -| Application Insights request logs | Yes | Yes | Yes | Yes | Yes | Yes |Yes | -| Static IP | No | Yes | Yes | No |Yes | No | Yes | +| Feature | Consumption | Developer | Basic | Basic v2 |Standard | Standard v2 | Premium | Premium v2 (preview) | +| -- | -- | | | | -- | -- | - | - | +| Microsoft Entra integration<sup>1</sup> | No | Yes | No | Yes | Yes | Yes | Yes | Yes | +| Virtual network injection support | No | Yes | No | No | No | No | Yes | Yes | +| Private endpoint support for inbound connections | No | Yes | Yes | No | Yes | No | Yes | No | +| Outbound virtual network integration support | No | No | No | No | No | Yes | No | Yes | +| Multi-region deployment | No | No | No | No | No | No | Yes | No | +| Availability zones | No | No | No | No | No | No | Yes | No | +| Multiple custom domain names for gateway | No | Yes | No | No | No | No | Yes | No | +| Developer portal<sup>2</sup> | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| Built-in cache | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| [External cache](./api-management-howto-cache-external.md) | Yes | Yes | Yes | Yes | Yes | Yes |Yes | Yes +| Autoscaling | No | No | Yes | No | Yes | No |Yes | No | +| API analytics | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| [Self-hosted gateway](self-hosted-gateway-overview.md)<sup>3</sup> | No | Yes | No | No | No | No | Yes | No | +| [Workspaces](workspaces-overview.md) | No | No | No | No | No | No | Yes | No | +| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| [Client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) | Yes | Yes | Yes | Yes | Yes | Yes |Yes | Yes | +| [Policies](api-management-howto-policies.md)<sup>4</sup> | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| [Credential 
manager](credentials-overview.md) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| [Backup and restore](api-management-howto-disaster-recovery-backup-restore.md) | No | Yes | Yes | No | Yes | No | Yes | No | +| [Management over Git](api-management-configuration-repository-git.md) | No | Yes | Yes |No | Yes | No | Yes | No | +| Azure Monitor metrics | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| Azure Monitor and Log Analytics request logs | No | Yes | Yes | Yes | Yes | Yes |Yes | Yes | +| Application Insights request logs | Yes | Yes | Yes | Yes | Yes | Yes |Yes | Yes | +| Static IP | No | Yes | Yes | No |Yes | No | Yes | No | <sup>1</sup> Enables the use of Microsoft Entra ID (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/> <sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/>-<sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/> +<sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier, self-hosted gateways are limited to a single gateway node. <br/> <sup>4</sup> See [Gateway overview](api-management-gateways-overview.md#policies) for differences in policy support in the classic, v2, consumption, and self-hosted gateways. 
<br/>++## Related content ++* [Overview of Azure API Management](api-management-key-concepts.md) +* [API Management limits](/azure/azure-resource-manager/management/azure-subscription-service-limits?toc=/azure/api-management/toc.json&bc=/azure/api-management/breadcrumb/toc.json#api-management-limits) +* [V2 tiers overview](v2-service-tiers-overview.md) +* [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/) |
api-management | Api Management Gateways Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md | API Management offers both managed and self-hosted gateways: The following tables compare features available in the following API Management gateways: * **Classic** - the managed gateway available in the Developer, Basic, Standard, and Premium service tiers (formerly grouped as *dedicated* tiers)-* **V2** - the managed gateway available in the Basic v2 and Standard v2 tiers +* **V2** - the managed gateway available in the Basic v2, Standard v2, and Premium v2 tiers * **Consumption** - the managed gateway available in the Consumption tier * **Self-hosted** - the optional self-hosted gateway available in select service tiers * **Workspace** - the managed gateway available in a [workspace](workspaces-overview.md) in select service tiers The following tables compare features available in the following API Management ### Infrastructure | Feature support | Classic | V2 | Consumption | Self-hosted | Workspace |-| | | -- | -- | - | +| | | -- | -- | - | -- | | [Custom domains](configure-custom-domain.md) | ✔️ | ✔️ | ✔️ | ✔️ | ❌ | | [Built-in cache](api-management-howto-cache.md) | ✔️ | ✔️ | ❌ | ❌ | ✔️ | | [External Redis-compatible cache](api-management-howto-cache-external.md) | ✔️ | ✔️ |✔️ | ✔️ | ❌ |-| [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | ❌ | ❌ | ✔️<sup>1,2</sup> | ✔️ | +| [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | Premium v2 | ❌ | ✔️<sup>1,2</sup> | ✔️ | | [Inbound private endpoints](private-endpoint.md) | Developer, Basic, Standard, Premium | ❌ | ❌ | ❌ | ❌ |-| [Outbound virtual network integration](integrate-vnet-outbound.md) | ❌ | Standard V2 | ❌ | ❌ | ✔️ | -| [Availability zones](zone-redundancy.md) | Premium | ✔️<sup>3</sup> | ❌ | ✔️<sup>1</sup> | ✔️<sup>3</sup> | +| [Outbound virtual network 
integration](integrate-vnet-outbound.md) | ❌ | Standard v2, Premium v2 | ❌ | ❌ | ✔️ | +| [Availability zones](zone-redundancy.md) | Premium | ❌ | ❌ | ✔️<sup>1</sup> | ✔️<sup>3</sup> | | [Multi-region deployment](api-management-howto-deploy-multi-region.md) | Premium | ❌ | ❌ | ✔️<sup>1</sup> | ❌ |-| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ✔️ | ❌ | ✔️<sup>4</sup> | ❌ | +| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ✔️ | ❌ | ✔️<sup>3</sup> | ❌ | | [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | Developer, Basic, Standard, Premium | ❌ | ✔️ | ❌ | ❌ | | [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |-| **HTTP/2** (Client-to-gateway) | ✔️<sup>5</sup> | ✔️<sup>5</sup> |❌ | ✔️ | ❌ | +| **HTTP/2** (Client-to-gateway) | ✔️<sup>4</sup> | ✔️<sup>4</sup> |❌ | ✔️ | ❌ | | **HTTP/2** (Gateway-to-backend) | ❌ | ❌ | ❌ | ✔️ | ❌ | | API threat detection with [Defender for APIs](protect-with-defender-for-apis.md) | ✔️ | ✔️ | ❌ | ❌ | ❌ | <sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/> <sup>2</sup> Connectivity to the self-hosted gateway v2 [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies) requires DNS resolution of the endpoint hostname.<br/>-<sup>3</sup> Two zones are enabled by default; not configurable.<br/> -<sup>4</sup> CA root certificates for self-hosted gateway are managed separately per gateway<br/> -<sup>5</sup> Client protocol needs to be enabled. +<sup>3</sup> CA root certificates for self-hosted gateway are managed separately per gateway<br/> +<sup>4</sup> Client protocol needs to be enabled. 
### Backend APIs The following tables compare features available in the following API Management | [Pass-through WebSocket](websocket-api.md) | ✔️ | ✔️ | ❌ | ✔️ | ❌ | | [Pass-through gRPC](grpc-api.md) | ❌ | ❌ | ❌ | ✔️ | ❌ | | [OData](import-api-from-odata.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |-| [Azure OpenAI](azure-openai-api-from-specification.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | +| [Azure OpenAI and LLM](azure-openai-api-from-specification.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | [Circuit breaker in backend](backends.md#circuit-breaker) | ✔️ | ✔️ | ❌ | ✔️ | ✔️ | | [Load-balanced backend pool](backends.md#load-balanced-pool) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | The following tables compare features available in the following API Management ### Policies -Managed and self-hosted gateways support all available [policies](api-management-policies.md) in policy definitions with the following exceptions. +Managed and self-hosted gateways support all available [policies](api-management-policies.md) in policy definitions with the following exceptions. See the policy reference for details about each policy. 
| Feature support | Classic | V2 | Consumption | Self-hosted<sup>1</sup> | Workspace | | | | -- | -- | - | -- | | [Dapr integration](api-management-policies.md#integration-and-external-communication) | ❌ | ❌ |❌ | ✔️ | ❌ | | [GraphQL resolvers](api-management-policies.md#graphql-resolvers) and [GraphQL validation](api-management-policies.md#content-validation)| ✔️ | ✔️ |✔️ | ❌ | ❌ | | [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ |✔️ | ❌ | ❌ |+| [Authenticate with managed identity](authentication-managed-identity-policy.md) | ✔️ | ✔️ |✔️ | ✔️ | ❌ | +| [Azure OpenAI and LLM semantic caching](api-management-policies.md#caching) | ❌ | ✔️ |❌ | ❌ | ❌ | | [Quota and rate limit](api-management-policies.md#rate-limiting-and-quotas) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup> | ✔️<sup>4</sup> | ✔️ | <sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/> |
api-management | Api Management Howto Aad B2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md | |
api-management | Api Management Howto Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md | |
api-management | Api Management Howto Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache.md | |
api-management | Api Management Howto Configure Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-notifications.md | |
api-management | Api Management Howto Create Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-groups.md | |
api-management | Api Management Howto Create Or Invite Developers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-or-invite-developers.md | |
api-management | Api Management Howto Developer Portal Customize | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal-customize.md | |
api-management | Api Management Howto Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md | API Management uses a public IP address for a connection outside the VNet or a p * When a request is sent from API Management to a public (internet-facing) backend, a public IP address will always be visible as the origin of the request. -## IP addresses of Consumption, Basic v2, and Standard v2 tier API Management service +## IP addresses of Consumption, Basic v2, Standard v2, and Premium v2 tier API Management service -If your API Management instance is created in a service tier that runs on a shared infrastructure, it doesn't have a dedicated IP address. Currently, instances in the following service tiers run on a shared infrastructure and without a deterministic IP address: Consumption, Basic v2, Standard v2. +If your API Management instance is created in a service tier that runs on a shared infrastructure, it doesn't have a dedicated IP address. Currently, instances in the following service tiers run on a shared infrastructure and without a deterministic IP address: Consumption, Basic v2, Standard v2, Premium v2. -If you need to add the outbound IP addresses used by your Consumption, Basic v2, or Standard v2 tier instance to an allowlist, you can add the instance's data center (Azure region) to an allowlist. You can [download a JSON file that lists IP addresses for all Azure data centers](https://www.microsoft.com/download/details.aspx?id=56519). Then find the JSON fragment that applies to the region that your instance runs in. +If you need to add the outbound IP addresses used by your Consumption, Basic v2, Standard v2, or Premium v2 tier instance to an allowlist, you can add the instance's data center (Azure region) to an allowlist. You can [download a JSON file that lists IP addresses for all Azure data centers](https://www.microsoft.com/download/details.aspx?id=56519). 
Then find the JSON fragment that applies to the region that your instance runs in. For example, the following JSON fragment is what the allowlist for Western Europe might look like: |
api-management | Api Management Howto Manage Protocols Ciphers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-manage-protocols-ciphers.md | By default, API Management enables TLS 1.2 for client and backend connectivity a > [!NOTE] > * If you're using the self-hosted gateway, see [self-hosted gateway security](self-hosted-gateway-overview.md#security) to manage TLS protocols and cipher suites.-> * The following tiers don't support changes to the default cipher configuration: **Consumption**, **Basic v2**, **Standard v2**. +> * The following tiers don't support changes to the default cipher configuration: **Consumption**, **Basic v2**, **Standard v2**, **Premium v2**. > * In [workspaces](workspaces-overview.md), the managed gateway doesn't support changes to the default protocol and cipher configuration. ## Prerequisites |
api-management | Api Management Howto Mutual Certificates For Clients | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md | To receive and verify client certificates over HTTP/2 in the Developer, Basic, S ![Negotiate client certificate](./media/api-management-howto-mutual-certificates-for-clients/negotiate-client-certificate.png) -### Consumption, Basic v2, Standard v2 tier -To receive and verify client certificates in the Consumption, Basic v2, or Standard v2 tier, you must enable the **Request client certificate** setting on the **Custom domains** blade as shown below. +### Consumption, Basic v2, Standard v2, or Premium v2 tier +To receive and verify client certificates in the Consumption, Basic v2, Standard v2, or Premium v2 tier, you must enable the **Request client certificate** setting on the **Custom domains** blade as shown below. ![Request client certificate](./media/api-management-howto-mutual-certificates-for-clients/request-client-certificate.png) |
api-management | Api Management Howto Oauth2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-oauth2.md | |
api-management | Api Management Howto Setup Delegation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-setup-delegation.md | |
api-management | Api Management Key Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md | Using the developer portal, developers can: API Management is offered in a variety of pricing tiers to meet the needs of different customers. Each tier offers a distinct combination of features, performance, capacity limits, scalability, SLA, and pricing for different scenarios. The tiers are grouped as follows: * **Classic** - The original API Management offering, including the Developer, Basic, Standard, and Premium tiers. The Premium tier is designed for enterprises requiring access to private backends, enhanced security features, multi-region deployments, availability zones, and high scalability. The Developer tier is an economical option for non-production use, while the Basic, Standard, and Premium tiers are production-ready tiers. -* **V2** - A new set of tiers that offer fast provisioning and scaling, including Basic v2 for development and testing, and Standard v2 for production workloads. Standard v2 supports simplified connection to network-isolated backends. +* **V2** - A new set of tiers that offer fast provisioning and scaling, including Basic v2 for development and testing, and Standard v2 and Premium v2 for production workloads. Standard v2 and Premium v2 support virtual network integration for simplified connection to network-isolated backends. Premium v2 also supports virtual network injection for full isolation of network traffic to and from the gateway. * **Consumption** - The Consumption tier is a serverless gateway for managing APIs that scales based on demand and is billed per execution. It is designed for applications with serverless compute, microservices-based architectures, and those with variable traffic patterns. **More information**: |
api-management | Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md | More information about policies: ## Rate limiting and quotas -|Policy |Description |Classic | V2 | Consumption | Self-hosted | -||||||--| -| [Limit call rate by subscription](rate-limit-policy.md) | Prevents API usage spikes by limiting call rate, on a per subscription basis. | Yes | Yes | Yes | Yes | -| [Limit call rate by key](rate-limit-by-key-policy.md) | Prevents API usage spikes by limiting call rate, on a per key basis. | Yes | Yes | No | Yes | -| [Set usage quota by subscription](quota-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. | Yes | Yes | Yes | Yes -| [Set usage quota by key](quota-by-key-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. | Yes | No | No | Yes | -| [Limit concurrency](limit-concurrency-policy.md) | Prevents enclosed policies from executing by more than the specified number of requests at a time. | Yes | Yes | Yes | Yes | -| [Limit Azure OpenAI Service token usage](azure-openai-token-limit-policy.md) | Prevents Azure OpenAI API usage spikes by limiting large language model tokens per calculated key. | Yes | Yes | No | No | -| [Limit large language model API token usage](llm-token-limit-policy.md) | Prevents large language model (LLM) API usage spikes by limiting LLM tokens per calculated key. | Yes | Yes | No | No | +|Policy |Description |Classic | V2 | Consumption | Self-hosted | Workspace | +||||||--|--| +| [Limit call rate by subscription](rate-limit-policy.md) | Prevents API usage spikes by limiting call rate, on a per subscription basis. | Yes | Yes | Yes | Yes | Yes | +| [Limit call rate by key](rate-limit-by-key-policy.md) | Prevents API usage spikes by limiting call rate, on a per key basis. 
| Yes | Yes | No | Yes | Yes | +| [Set usage quota by subscription](quota-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. | Yes | Yes | Yes | Yes | Yes | +| [Set usage quota by key](quota-by-key-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. | Yes | No | No | Yes | Yes | +| [Limit concurrency](limit-concurrency-policy.md) | Prevents enclosed policies from executing by more than the specified number of requests at a time. | Yes | Yes | Yes | Yes | Yes | +| [Limit Azure OpenAI Service token usage](azure-openai-token-limit-policy.md) | Prevents Azure OpenAI API usage spikes by limiting large language model tokens per calculated key. | Yes | Yes | No | No | Yes | +| [Limit large language model API token usage](llm-token-limit-policy.md) | Prevents large language model (LLM) API usage spikes by limiting LLM tokens per calculated key. | Yes | Yes | No | No | Yes | ## Authentication and authorization -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| -| [Check HTTP header](check-header-policy.md) | Enforces existence and/or value of an HTTP header. | Yes | Yes | Yes | Yes | -| [Get authorization context](get-authorization-context-policy.md) | Gets the authorization context of a specified [connection](credentials-overview.md) to a credential provider configured in the API Management instance. | Yes | Yes | Yes | No | -| [Restrict caller IPs](ip-filter-policy.md) | Filters (allows/denies) calls from specific IP addresses and/or address ranges. | Yes | Yes | Yes | Yes | -| [Validate Microsoft Entra token](validate-azure-ad-token-policy.md) | Enforces existence and validity of a Microsoft Entra (formerly called Azure Active Directory) JWT extracted from either a specified HTTP header, query parameter, or token value. 
| Yes | Yes | Yes | Yes | -| [Validate JWT](validate-jwt-policy.md) | Enforces existence and validity of a JWT extracted from either a specified HTTP header, query parameter, or token value. | Yes | Yes | Yes | Yes | -| [Validate client certificate](validate-client-certificate-policy.md) |Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims. | Yes | Yes | Yes | Yes | -| [Authenticate with Basic](authentication-basic-policy.md) | Authenticates with a backend service using Basic authentication. | Yes | Yes | Yes | Yes | -| [Authenticate with client certificate](authentication-certificate-policy.md) | Authenticates with a backend service using client certificates. | Yes | Yes | Yes | Yes | -| [Authenticate with managed identity](authentication-managed-identity-policy.md) | Authenticates with a backend service using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md). | Yes | Yes | Yes | Yes | +|Policy |Description | Classic | V2 | Consumption |Self-hosted | Workspace | +||||||--|--| +| [Check HTTP header](check-header-policy.md) | Enforces existence and/or value of an HTTP header. | Yes | Yes | Yes | Yes | Yes | +| [Get authorization context](get-authorization-context-policy.md) | Gets the authorization context of a specified [connection](credentials-overview.md) to a credential provider configured in the API Management instance. | Yes | Yes | Yes | No | No | +| [Restrict caller IPs](ip-filter-policy.md) | Filters (allows/denies) calls from specific IP addresses and/or address ranges. | Yes | Yes | Yes | Yes | Yes | +| [Validate Microsoft Entra token](validate-azure-ad-token-policy.md) | Enforces existence and validity of a Microsoft Entra (formerly called Azure Active Directory) JWT extracted from either a specified HTTP header, query parameter, or token value. 
| Yes | Yes | Yes | Yes | Yes | +| [Validate JWT](validate-jwt-policy.md) | Enforces existence and validity of a JWT extracted from either a specified HTTP header, query parameter, or token value. | Yes | Yes | Yes | Yes | Yes | +| [Validate client certificate](validate-client-certificate-policy.md) |Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims. | Yes | Yes | Yes | Yes | Yes | +| [Authenticate with Basic](authentication-basic-policy.md) | Authenticates with a backend service using Basic authentication. | Yes | Yes | Yes | Yes | Yes | +| [Authenticate with client certificate](authentication-certificate-policy.md) | Authenticates with a backend service using client certificates. | Yes | Yes | Yes | Yes | Yes | +| [Authenticate with managed identity](authentication-managed-identity-policy.md) | Authenticates with a backend service using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md). | Yes | Yes | Yes | Yes | No | ## Content validation -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| -| [Validate content](validate-content-policy.md) | Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML. | Yes | Yes | Yes | Yes | -| [Validate GraphQL request](validate-graphql-request-policy.md) | Validates and authorizes a request to a GraphQL API. | Yes | Yes | Yes | Yes | -| [Validate OData request](validate-odata-request-policy.md) | Validates a request to an OData API to ensure conformance with the OData specification. | Yes | Yes | Yes | Yes | -| [Validate parameters](validate-parameters-policy.md) | Validates the request header, query, or path parameters against the API schema. | Yes | Yes | Yes | Yes | -| [Validate headers](validate-headers-policy.md) | Validates the response headers against the API schema. 
| Yes | Yes | Yes | Yes | -| [Validate status code](validate-status-code-policy.md) | Validates the HTTP status codes in responses against the API schema. | Yes | Yes | Yes | Yes | +|Policy |Description | Classic | V2 | Consumption |Self-hosted |Workspace | +||||||--|| +| [Validate content](validate-content-policy.md) | Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML. | Yes | Yes | Yes | Yes | Yes | +| [Validate GraphQL request](validate-graphql-request-policy.md) | Validates and authorizes a request to a GraphQL API. | Yes | Yes | Yes | Yes | No | +| [Validate OData request](validate-odata-request-policy.md) | Validates a request to an OData API to ensure conformance with the OData specification. | Yes | Yes | Yes | Yes | Yes | +| [Validate parameters](validate-parameters-policy.md) | Validates the request header, query, or path parameters against the API schema. | Yes | Yes | Yes | Yes | Yes | +| [Validate headers](validate-headers-policy.md) | Validates the response headers against the API schema. | Yes | Yes | Yes | Yes | Yes | +| [Validate status code](validate-status-code-policy.md) | Validates the HTTP status codes in responses against the API schema. | Yes | Yes | Yes | Yes | Yes | ## Routing -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| -| [Forward request](forward-request-policy.md) | Forwards the request to the backend service. | Yes | Yes | Yes | Yes | -| [Set backend service](set-backend-service-policy.md) | Changes the backend service base URL of an incoming request to a URL or a [backend](backends.md). Referencing a backend resource allows you to manage the backend service base URL and other settings in a single place. Also implement [load balancing of traffic across a pool of backend services](backends.md#load-balanced-pool) and [circuit breaker rules](backends.md#circuit-breaker) to protect the backend from too many requests. 
| Yes | Yes | Yes | Yes | -| [Set HTTP proxy](proxy-policy.md) | Allows you to route forwarded requests via an HTTP proxy. | Yes | Yes | Yes | Yes | +|Policy |Description | Classic | V2 | Consumption |Self-hosted | Workspace | +||||||--|-| +| [Forward request](forward-request-policy.md) | Forwards the request to the backend service. | Yes | Yes | Yes | Yes | Yes | +| [Set backend service](set-backend-service-policy.md) | Changes the backend service base URL of an incoming request to a URL or a [backend](backends.md). Referencing a backend resource allows you to manage the backend service base URL and other settings in a single place. Also implement [load balancing of traffic across a pool of backend services](backends.md#load-balanced-pool) and [circuit breaker rules](backends.md#circuit-breaker) to protect the backend from too many requests. | Yes | Yes | Yes | Yes | Yes | +| [Set HTTP proxy](proxy-policy.md) | Allows you to route forwarded requests via an HTTP proxy. | Yes | Yes | Yes | Yes | Yes | ## Caching -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| -| [Get from cache](cache-lookup-policy.md) | Performs cache lookup and return a valid cached response when available. | Yes | Yes | Yes | Yes | -| [Store to cache](cache-store-policy.md) | Caches response according to the specified cache control configuration. | Yes | Yes | Yes | Yes | -| [Get value from cache](cache-lookup-value-policy.md) | Retrieves a cached item by key. | Yes | Yes | Yes | Yes | -| [Store value in cache](cache-store-value-policy.md) | Stores an item in the cache by key. | Yes | Yes | Yes | Yes | -| [Remove value from cache](cache-remove-value-policy.md) | Removes an item in the cache by key. | Yes | Yes | Yes | Yes | -| [Get cached responses of Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md) | Performs lookup in Azure OpenAI API cache using semantic search and returns a valid cached response when available. 
| Yes | Yes | Yes | Yes | -| [Store responses of Azure OpenAI API requests to cache](azure-openai-semantic-cache-store-policy.md) | Caches response according to the Azure OpenAI API cache configuration. | Yes | Yes | Yes | Yes | -| [Get cached responses of large language model API requests](llm-semantic-cache-lookup-policy.md) | Performs lookup in large language model API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes | -| [Store responses of large language model API requests to cache](llm-semantic-cache-store-policy.md) | Caches response according to the large language model API cache configuration. | Yes | Yes | Yes | Yes | +|Policy |Description | Classic | V2 | Consumption |Self-hosted | Workspace | +||||||--|--| +| [Get from cache](cache-lookup-policy.md) | Performs cache lookup and returns a valid cached response when available. | Yes | Yes | Yes | Yes | Yes | +| [Store to cache](cache-store-policy.md) | Caches response according to the specified cache control configuration. | Yes | Yes | Yes | Yes | Yes | +| [Get value from cache](cache-lookup-value-policy.md) | Retrieves a cached item by key. | Yes | Yes | Yes | Yes | Yes | +| [Store value in cache](cache-store-value-policy.md) | Stores an item in the cache by key. | Yes | Yes | Yes | Yes | Yes | +| [Remove value from cache](cache-remove-value-policy.md) | Removes an item in the cache by key. | Yes | Yes | Yes | Yes | Yes | +| [Get cached responses of Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md) | Performs lookup in Azure OpenAI API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes | No | +| [Store responses of Azure OpenAI API requests to cache](azure-openai-semantic-cache-store-policy.md) | Caches response according to the Azure OpenAI API cache configuration. 
| Yes | Yes | Yes | Yes | No | +| [Get cached responses of large language model API requests](llm-semantic-cache-lookup-policy.md) | Performs lookup in large language model API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes | No | +| [Store responses of large language model API requests to cache](llm-semantic-cache-store-policy.md) | Caches response according to the large language model API cache configuration. | Yes | Yes | Yes | Yes | No | ## Transformation -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| -| [Set request method](set-method-policy.md) | Allows you to change the HTTP method for a request. | Yes | Yes | Yes | Yes | -| [Set status code](set-status-policy.md) | Changes the HTTP status code to the specified value. | Yes | Yes | Yes | Yes | -| [Set variable](set-variable-policy.md) | Persists a value in a named [context](api-management-policy-expressions.md#ContextVariables) variable for later access. | Yes | Yes | Yes | Yes | -| [Set body](set-body-policy.md) | Sets the message body for a request or response. | Yes | Yes | Yes | Yes | -| [Set HTTP header](set-header-policy.md) | Assigns a value to an existing response and/or request header or adds a new response and/or request header. | Yes | Yes | Yes | Yes | -| [Set query string parameter](set-query-parameter-policy.md) | Adds, replaces value of, or deletes request query string parameter. | Yes | Yes | Yes | Yes | -| [Rewrite URL](rewrite-uri-policy.md) | Converts a request URL from its public form to the form expected by the web service. | Yes | Yes | Yes | Yes | -| [Convert JSON to XML](json-to-xml-policy.md) | Converts request or response body from JSON to XML. | Yes | Yes | Yes | Yes | -| [Convert XML to JSON](xml-to-json-policy.md) | Converts request or response body from XML to JSON. 
| Yes | Yes | Yes | Yes | -| [Find and replace string in body](find-and-replace-policy.md) | Finds a request or response substring and replaces it with a different substring. | Yes | Yes | Yes | Yes | -| [Mask URLs in content](redirect-content-urls-policy.md) | Rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. | Yes | Yes | Yes | Yes | -| [Transform XML using an XSLT](xsl-transform-policy.md) | Applies an XSL transformation to XML in the request or response body. | Yes | Yes | Yes | Yes | -| [Return response](return-response-policy.md) | Aborts pipeline execution and returns the specified response directly to the caller. | Yes | Yes | Yes | Yes | -| [Mock response](mock-response-policy.md) | Aborts pipeline execution and returns a mocked response directly to the caller. | Yes | Yes | Yes | Yes | +|Policy |Description | Classic | V2 | Consumption |Self-hosted | Workspace | +||||||--|-| +| [Set request method](set-method-policy.md) | Allows you to change the HTTP method for a request. | Yes | Yes | Yes | Yes | Yes | +| [Set status code](set-status-policy.md) | Changes the HTTP status code to the specified value. | Yes | Yes | Yes | Yes | Yes | +| [Set variable](set-variable-policy.md) | Persists a value in a named [context](api-management-policy-expressions.md#ContextVariables) variable for later access. | Yes | Yes | Yes | Yes | Yes | +| [Set body](set-body-policy.md) | Sets the message body for a request or response. | Yes | Yes | Yes | Yes | Yes | +| [Set HTTP header](set-header-policy.md) | Assigns a value to an existing response and/or request header or adds a new response and/or request header. | Yes | Yes | Yes | Yes | Yes | +| [Set query string parameter](set-query-parameter-policy.md) | Adds, replaces value of, or deletes request query string parameter. 
| Yes | Yes | Yes | Yes | Yes | +| [Rewrite URL](rewrite-uri-policy.md) | Converts a request URL from its public form to the form expected by the web service. | Yes | Yes | Yes | Yes | Yes | +| [Convert JSON to XML](json-to-xml-policy.md) | Converts request or response body from JSON to XML. | Yes | Yes | Yes | Yes | Yes | +| [Convert XML to JSON](xml-to-json-policy.md) | Converts request or response body from XML to JSON. | Yes | Yes | Yes | Yes | Yes | +| [Find and replace string in body](find-and-replace-policy.md) | Finds a request or response substring and replaces it with a different substring. | Yes | Yes | Yes | Yes | Yes | +| [Mask URLs in content](redirect-content-urls-policy.md) | Rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. | Yes | Yes | Yes | Yes | Yes | +| [Transform XML using an XSLT](xsl-transform-policy.md) | Applies an XSL transformation to XML in the request or response body. | Yes | Yes | Yes | Yes | Yes | +| [Return response](return-response-policy.md) | Aborts pipeline execution and returns the specified response directly to the caller. | Yes | Yes | Yes | Yes | Yes | +| [Mock response](mock-response-policy.md) | Aborts pipeline execution and returns a mocked response directly to the caller. | Yes | Yes | Yes | Yes | Yes | ## Cross-domain -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| -| [Allow cross-domain calls](cross-domain-policy.md) | Makes the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients. | Yes | Yes | Yes | Yes | -| [CORS](cors-policy.md) | Adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients. | Yes | Yes | Yes | Yes | -| [JSONP](jsonp-policy.md) | Adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients. 
| Yes | Yes | Yes | Yes | +|Policy |Description | Classic | V2 | Consumption |Self-hosted | Workspace | +||||||--|--| +| [Allow cross-domain calls](cross-domain-policy.md) | Makes the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients. | Yes | Yes | Yes | Yes | Yes | +| [CORS](cors-policy.md) | Adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients. | Yes | Yes | Yes | Yes | Yes | +| [JSONP](jsonp-policy.md) | Adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients. | Yes | Yes | Yes | Yes | Yes | ## Integration and external communication -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| - | [Send request](send-request-policy.md) | Sends a request to the specified URL. | Yes | Yes | Yes | Yes | - | [Send one way request](send-one-way-request-policy.md) | Sends a request to the specified URL without waiting for a response. | Yes | Yes | Yes | Yes | -| [Log to event hub](log-to-eventhub-policy.md) | Sends messages in the specified format to an event hub defined by a Logger entity.| Yes | Yes | Yes | Yes | -| [Send request to a service (Dapr)](set-backend-service-dapr-policy.md)| Uses Dapr runtime to locate and reliably communicate with a Dapr microservice. To learn more about service invocation in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md#service-invocation) file. | No | No | No | Yes | -| [Send message to Pub/Sub topic (Dapr)](publish-to-dapr-policy.md) | Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. 
| No | No | No | Yes | -| [Trigger output binding (Dapr)](invoke-dapr-binding-policy.md) | Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. | No | No | No | Yes | +|Policy |Description | Classic | V2 | Consumption |Self-hosted | Workspace | +||||||--|-| + | [Send request](send-request-policy.md) | Sends a request to the specified URL. | Yes | Yes | Yes | Yes | Yes | + | [Send one way request](send-one-way-request-policy.md) | Sends a request to the specified URL without waiting for a response. | Yes | Yes | Yes | Yes | Yes | +| [Log to event hub](log-to-eventhub-policy.md) | Sends messages in the specified format to an event hub defined by a Logger entity.| Yes | Yes | Yes | Yes | Yes | +| [Send request to a service (Dapr)](set-backend-service-dapr-policy.md)| Uses Dapr runtime to locate and reliably communicate with a Dapr microservice. To learn more about service invocation in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md#service-invocation) file. | No | No | No | Yes | No | +| [Send message to Pub/Sub topic (Dapr)](publish-to-dapr-policy.md) | Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. | No | No | No | Yes | No | +| [Trigger output binding (Dapr)](invoke-dapr-binding-policy.md) | Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. 
| No | No | No | Yes | No | ## Logging -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| -| [Trace](trace-policy.md) | Adds custom traces into the [request tracing](./api-management-howto-api-inspector.md) output in the test console, Application Insights telemetries, and resource logs. | Yes | Yes<sup>1</sup> | Yes | Yes | -| [Emit metrics](emit-metric-policy.md) | Sends custom metrics to Application Insights at execution. | Yes | Yes | Yes | Yes | -| [Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model tokens through Azure OpenAI service APIs. | Yes | Yes | No | Yes | -| [Emit large language model API token metrics](llm-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model (LLM) tokens through LLM APIs. | Yes | Yes | No | Yes | +|Policy |Description | Classic | V2 | Consumption |Self-hosted | Workspace | +||||||--|-| +| [Trace](trace-policy.md) | Adds custom traces into the [request tracing](./api-management-howto-api-inspector.md) output in the test console, Application Insights telemetries, and resource logs. | Yes | Yes<sup>1</sup> | Yes | Yes | Yes | +| [Emit metrics](emit-metric-policy.md) | Sends custom metrics to Application Insights at execution. | Yes | Yes | Yes | Yes | Yes | +| [Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model tokens through Azure OpenAI service APIs. | Yes | Yes | No | Yes | Yes | +| [Emit large language model API token metrics](llm-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model (LLM) tokens through LLM APIs. | Yes | Yes | No | Yes | Yes | <sup>1</sup> In the V2 gateway, the `trace` policy currently does not add tracing output in the test console. 
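As an illustrative sketch of the logging policies tabulated above, the following `emit-metric` policy sends a custom metric with one dimension to Application Insights. The metric name, namespace, and dimension shown here are placeholders for illustration, not values from the source commits:

```xml
<policies>
    <inbound>
        <base />
        <!-- Sends a custom metric (count of 1 per request) to Application Insights.
             "Request" and "apim-metrics" are illustrative placeholder names. -->
        <emit-metric name="Request" value="1" namespace="apim-metrics">
            <dimension name="API ID" />
        </emit-metric>
    </inbound>
</policies>
```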
## GraphQL resolvers -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| -| [Azure SQL data source for resolver](sql-data-source-policy.md) | Configures the Azure SQL request and optional response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | No | No | -| [Cosmos DB data source for resolver](cosmosdb-data-source-policy.md) | Configures the Cosmos DB request and optional response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | No | No | -| [HTTP data source for resolver](http-data-source-policy.md) | Configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | Yes | No | -| [Publish event to GraphQL subscription](publish-event-policy.md) | Publishes an event to one or more subscriptions specified in a GraphQL API schema. Configure the policy in a GraphQL resolver for a related field in the schema for another operation type such as a mutation. | Yes | Yes | Yes | No | +|Policy |Description | Classic | V2 | Consumption |Self-hosted | Workspace | +||||||--|-| +| [Azure SQL data source for resolver](sql-data-source-policy.md) | Configures the Azure SQL request and optional response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | No | No | No | +| [Cosmos DB data source for resolver](cosmosdb-data-source-policy.md) | Configures the Cosmos DB request and optional response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | No | No | No | +| [HTTP data source for resolver](http-data-source-policy.md) | Configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | Yes | No | No | +| [Publish event to GraphQL subscription](publish-event-policy.md) | Publishes an event to one or more subscriptions specified in a GraphQL API schema. 
Configure the policy in a GraphQL resolver for a related field in the schema for another operation type such as a mutation. | Yes | Yes | Yes | No | No | ## Policy control and flow -|Policy |Description | Classic | V2 | Consumption |Self-hosted | -||||||--| -| [Control flow](choose-policy.md) | Conditionally applies policy statements based on the results of the evaluation of Boolean [expressions](api-management-policy-expressions.md). | Yes | Yes | Yes | Yes | -| [Include fragment](include-fragment-policy.md) | Inserts a policy fragment in the policy definition. | Yes | Yes | Yes | Yes | -| [Retry](retry-policy.md) | Retries execution of the enclosed policy statements, if and until the condition is met. Execution will repeat at the specified time intervals and up to the specified retry count. | Yes | Yes | Yes | Yes | - | [Wait](wait-policy.md) | Waits for enclosed [Send request](send-request-policy.md), [Get value from cache](cache-lookup-value-policy.md), or [Control flow](choose-policy.md) policies to complete before proceeding. | Yes | Yes | Yes | Yes | +|Policy |Description | Classic | V2 | Consumption |Self-hosted | Workspace | +||||||--|-| +| [Control flow](choose-policy.md) | Conditionally applies policy statements based on the results of the evaluation of Boolean [expressions](api-management-policy-expressions.md). | Yes | Yes | Yes | Yes | Yes | +| [Include fragment](include-fragment-policy.md) | Inserts a policy fragment in the policy definition. | Yes | Yes | Yes | Yes | Yes | +| [Retry](retry-policy.md) | Retries execution of the enclosed policy statements, if and until the condition is met. Execution will repeat at the specified time intervals and up to the specified retry count. | Yes | Yes | Yes | Yes | Yes | + | [Wait](wait-policy.md) | Waits for enclosed [Send request](send-request-policy.md), [Get value from cache](cache-lookup-value-policy.md), or [Control flow](choose-policy.md) policies to complete before proceeding. 
| Yes | Yes | Yes | Yes | Yes | [!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)] |
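The routing and caching policies listed in the tables above are typically combined in one policy definition. As a hedged sketch (the backend ID and cache duration are illustrative placeholders, not values from the source commits), an API-scope policy might route requests to a configured backend resource and cache responses:

```xml
<policies>
    <inbound>
        <base />
        <!-- Route to a backend resource; "my-backend" is an illustrative backend ID. -->
        <set-backend-service backend-id="my-backend" />
        <!-- Return a cached response when one is available. -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
    </inbound>
    <outbound>
        <base />
        <!-- Cache successful responses for 5 minutes. -->
        <cache-store duration="300" />
    </outbound>
</policies>
```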
api-management | Api Management Region Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-region-availability.md | + + Title: Azure API Management - Availability of v2 tiers and workspace gateways +description: Availability of API Management v2 tiers and workspace gateways in Azure regions. This information supplements product availability by region. +++ ++ Last updated : 11/20/2024+++++# Azure API Management - Availability of v2 tiers and workspace gateways +++API Management [v2 tiers](v2-service-tiers-overview.md) and API Management [workspace gateways](workspaces-overview.md#workspace-gateway) are available in a subset of the regions where the classic tiers are available. For information about the availability of the API Management classic tiers, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). +++## Supported regions for v2 tiers and workspace gateways ++Information in the following table is updated regularly. Capacity availability in Azure regions may vary. 
+++| Region | Basic v2 | Standard v2 | Premium v2 (preview) | Workspace gateway (Premium) | +|--|::|::|::|::| +| Australia Central | ✅ | ✅ | | | +| Australia East | ✅ | ✅ | ✅ | ✅ | +| Australia Southeast | ✅ | ✅ | | | +| Brazil South | ✅ | ✅ | | | +| Central India | ✅ | ✅ | | | +| East Asia | ✅ | ✅ | | ✅ | +| East US | ✅ | ✅ | | | +| East US 2 | ✅ | ✅ | ✅ | ✅ | +| France Central | ✅ | ✅ | | ✅ | +| Germany West Central | ✅ | ✅ | ✅ | ✅ | +| Japan East | ✅ | ✅ | | ✅ | +| Korea Central | ✅ | ✅ | ✅ | | +| North Central US | ✅ | ✅ | | ✅ | +| North Europe | ✅ | ✅ | | ✅ | +| Norway East | ✅ | ✅ | ✅ | | +| South Africa North | ✅ | ✅ | | | +| South Central US | ✅ | ✅ | | | +| South India | ✅ | ✅ | | | +| Southeast Asia | ✅ | ✅ | | ✅ | +| Switzerland North | ✅ |✅ | | | +| UK South | ✅ | ✅ | ✅ | ✅ | +| UK West | ✅ | ✅ | | | +| West Europe | ✅ | ✅ | | | +| West US | ✅ | ✅ | | ✅ | +| West US 2 | ✅ | ✅ | | | +++## Related content ++Learn more about: ++* [API Management pricing](https://aka.ms/apimpricing) |
api-management | Automate Portal Deployments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/automate-portal-deployments.md | |
api-management | Azure Openai Token Limit Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-token-limit-policy.md | |
api-management | Identity Provider Adal Retirement Sep 2025 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/identity-provider-adal-retirement-sep-2025.md | |
api-management | Cosmosdb Data Source Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cosmosdb-data-source-policy.md | Use the policy to configure a single query request, read request, delete request | Attribute | Description | Required | Default | | -- | - | -- | - |-| template | Used to set the templating mode for the `max-item-count`. Currently the only supported value is:<br /><br />- `liquid` - the `max-item-count` will use the liquid templating engine. | No | N/A | +| template | Used to set the templating mode for the `max-item-count`. Currently the only supported value is:<br /><br />- `liquid` - the `max-item-count` uses the liquid templating engine. | No | N/A | #### continuation-token attribute | Attribute | Description | Required | Default | | -- | - | -- | - |-| template | Used to set the templating mode for the continuation token. Currently the only supported value is:<br /><br />- `liquid` - the continuation token will use the liquid templating engine. | No | N/A | +| template | Used to set the templating mode for the continuation token. Currently the only supported value is:<br /><br />- `liquid` - the continuation token uses the liquid templating engine. | No | N/A | ### read-request elements Use the policy to configure a single query request, read request, delete request |-|--|--| | id | Identifier of the item in the container. | Yes when `type` is `replace`. | | [etag](#etag-attribute) | Entity tag for the item in the container, used for [optimistic concurrency control](/azure/cosmos-db/nosql/database-transactions-optimistic-concurrency#implementing-optimistic-concurrency-control-using-etag-and-http-headers). | No |-| [set-body](set-body-policy.md) | Sets the body in the write request. If not provided, the request payload will map arguments into JSON format.| No | +| [set-body](set-body-policy.md) | Sets the body in the write request. 
If not provided, the request payload maps arguments into JSON format.| No | ### response elements Use the policy to configure a single query request, read request, delete request | Attribute | Description | Required | Default | | -- | - | -- | - | | data-type | The data type of the partition key: `string`, `number`, `bool`, `none`, or `null`. | No | `string` |-| template | Used to set the templating mode for the partition key. Currently the only supported value is:<br /><br />- `liquid` - the partition key will use the liquid templating engine | No | N/A | +| template | Used to set the templating mode for the partition key. Currently the only supported value is:<br /><br />- `liquid` - the partition key uses the liquid templating engine | No | N/A | #### etag attribute | Attribute | Description | Required | Default | | -- | - | -- | - | | type | String. One of the following values:<br /><br />- `match` - the `etag` value must match the system-generated entity tag for the item<br /><br /> - `no-match` - the `etag` value isn't required to match the system-generated entity tag for the item | No | `match` |-| template | Used to set the templating mode for the etag. Currently the only supported value is:<br /><br />- `liquid` - the etag will use the liquid templating engine | No | N/A | +| template | Used to set the templating mode for the etag. Currently the only supported value is:<br /><br />- `liquid` - the etag uses the liquid templating engine | No | N/A | ## Usage |
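Based on the elements and attributes described in the excerpt above, a hedged sketch of a `cosmosdb-data-source` read request in a GraphQL resolver might look as follows. The account endpoint, database, container, and liquid template arguments are illustrative assumptions; confirm the exact schema against the policy reference:

```xml
<cosmosdb-data-source>
    <!-- Connection details are illustrative placeholders. -->
    <connection-info>
        <connection-string use-managed-identity="true">
            AccountEndpoint=https://contoso-cosmos.documents.azure.com:443/;
        </connection-string>
        <database-name>mydb</database-name>
        <container-name>products</container-name>
    </connection-info>
    <read-request>
        <!-- The "liquid" template mode resolves GraphQL arguments at runtime. -->
        <id template="liquid">{{body.arguments.id}}</id>
        <partition-key data-type="string" template="liquid">{{body.arguments.id}}</partition-key>
    </read-request>
</cosmosdb-data-source>
```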
api-management | Developer Portal Alternative Processes Self Host | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-alternative-processes-self-host.md | |
api-management | Developer Portal Basic Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-basic-authentication.md | |
api-management | Developer Portal Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-faq.md | |
api-management | Developer Portal Integrate Application Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-integrate-application-insights.md | |
api-management | Developer Portal Integrate Google Tag Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-integrate-google-tag-manager.md | |
api-management | Developer Portal Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-overview.md | |
api-management | Developer Portal Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-testing.md | |
api-management | How To Server Sent Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-server-sent-events.md | Last updated 02/24/2022 # Configure API for server-sent events This article provides guidelines for configuring an API in API Management that implements server-sent events (SSE). SSE is based on the HTML5 `EventSource` standard for streaming (pushing) data automatically to a client over HTTP after a client has established a connection. |
api-management | Howto Protect Backend Frontend Azure Ad B2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md | Open the Azure AD B2C blade in the portal and do the following steps. > > We still have no IP security applied, if you have a valid key and OAuth2 token, anyone can call this from anywhere - ideally we want to force all requests to come via API Management. >- > If you're using the API Management Consumption, Basic v2, and Standard v2 tiers then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-basic-v2-and-standard-v2-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management classic (dedicated) tiers [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the tiers that run on shared infrastructure, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for these tiers - steps 12-17 below do not apply. + > If you're using the API Management Consumption, Basic v2, Standard v2, and Premium v2 tiers, then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-basic-v2-standard-v2-and-premium-v2-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management classic (dedicated) tiers [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the tiers that run on shared infrastructure, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for these tiers - steps 12-17 below do not apply. 1. 
Close the 'Authentication' blade from the App Service / Functions portal. 1. Open the *API Management blade of the portal*, then open *your instance*. |
api-management | Howto Use Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-use-analytics.md | With API analytics, analyze the usage and performance of the APIs in your API Ma > [!NOTE] > * API analytics provides data on requests, including failed and unauthorized requests. > * Geography values are approximate based on IP address mapping.-> * There may be a delay of 15 minutes or more in the availability of analytics data. +> * There may be a delay in the availability of analytics data. ## Azure Monitor-based dashboard If you need to configure one, the following are brief steps to send gateway logs 1. Make sure **Resource specific** is selected as the destination table. 1. Select **Save**. +> [!IMPORTANT] +> A new Log Analytics workspace can take up to 2 hours to start receiving data. An existing workspace should start receiving data within approximately 15 minutes. + ### Access the dashboard After a Log Analytics workspace is configured, access the Azure Monitor-based dashboard to analyze the usage and performance of your APIs. |
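Once gateway logs flow to the resource-specific table described above, the dashboard's underlying data can also be queried directly in Log Analytics. A hedged Kusto sketch, assuming the resource-specific `ApiManagementGatewayLogs` table (column names should be confirmed against the table schema):

```kusto
// Requests per API over the last day, with average gateway processing time.
ApiManagementGatewayLogs
| where TimeGenerated > ago(1d)
| summarize Requests = count(), AvgTimeMs = avg(TotalTime) by ApiId
| order by Requests desc
```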
api-management | Inject Vnet V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/inject-vnet-v2.md | + + Title: Inject API Management in virtual network - Premium v2 +description: Learn how to deploy (inject) an Azure API Management instance in the Premium v2 tier in a virtual network to isolate inbound and outbound traffic. ++++ Last updated : 11/18/2024+++# Inject an Azure API Management instance in a private virtual network - Premium v2 tier +++This article guides you through the requirements to inject your Azure API Management Premium v2 (preview) instance in a virtual network. ++> [!NOTE] +> To inject a classic Developer or Premium tier instance in a virtual network, the requirements and configuration are different. [Learn more](virtual-network-injection-resources.md). ++When an API Management Premium v2 instance is injected in a virtual network: ++* The API Management gateway endpoint is accessible through the virtual network at a private IP address. +* API Management can make outbound requests to API backends that are isolated in the network. ++This configuration is recommended for scenarios where you want to isolate network traffic to both the API Management instance and the backend APIs. +++If you want to enable *public* inbound access to an API Management instance in the Standard v2 or Premium v2 tier, but limit outbound access to network-isolated backends, see [Integrate with a virtual network for outbound connections](integrate-vnet-outbound.md). +++> [!IMPORTANT] +> * Virtual network injection described in this article is available only for API Management instances in the Premium v2 tier (preview). For networking options in the different tiers, see [Use a virtual network with Azure API Management](virtual-network-concepts.md). +> * Currently, you can inject a Premium v2 instance into a virtual network only when the instance is **created**. You can't inject an existing Premium v2 instance into a virtual network. 
However, you can update the subnet settings for injection after the instance is created. +> * Currently, you can't switch between virtual network injection and virtual network integration for a Premium v2 instance. ++## Prerequisites ++- An Azure API Management instance in the [Premium v2](v2-service-tiers-overview.md) pricing tier. +- A virtual network where your client apps and your API Management backend APIs are hosted. See the following sections for requirements and recommendations for the virtual network and subnet used for the API Management instance. ++### Network location ++* The virtual network must be in the same region and Azure subscription as the API Management instance. ++### Subnet requirements ++* The subnet for the API Management instance can't be shared with another Azure resource. ++### Subnet size ++* Minimum: /27 (32 addresses) +* Recommended: /24 (256 addresses) - to accommodate scaling of API Management instance ++### Network security group ++A network security group must be associated with the subnet. ++### Subnet delegation ++The subnet needs to be delegated to the **Microsoft.Web/hostingEnvironments** service. ++++> [!NOTE] +> You might need to register the `Microsoft.Web/hostingEnvironments` resource provider in the subscription so that you can delegate the subnet to the service. ++For more information about configuring subnet delegation, see [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md). 
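The subnet delegation described above can also be expressed declaratively. A hedged Bicep sketch, assuming illustrative names for the virtual network, subnet, and address prefix (the API version shown is also an assumption):

```bicep
// Delegates an existing subnet to Microsoft.Web/hostingEnvironments,
// as required for Premium v2 virtual network injection.
resource vnet 'Microsoft.Network/virtualNetworks@2024-01-01' existing = {
  name: 'my-vnet'
}

resource apimSubnet 'Microsoft.Network/virtualNetworks/subnets@2024-01-01' = {
  parent: vnet
  name: 'apim-subnet'
  properties: {
    addressPrefix: '10.0.1.0/24'
    delegations: [
      {
        name: 'apim-delegation'
        properties: {
          serviceName: 'Microsoft.Web/hostingEnvironments'
        }
      }
    ]
  }
}
```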
++### Permissions ++You must have at least the following role-based access control permissions on the subnet or at a higher level to configure virtual network integration: ++| Action | Description | +|-|-| +| Microsoft.Network/virtualNetworks/read | Read the virtual network definition | +| Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition | +| Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network | ++++## Inject API Management in a virtual network ++When you [create](get-started-create-service-instance.md) a Premium v2 instance using the Azure portal, you can optionally configure settings for virtual network injection. ++1. In the **Create API Management service** wizard, select the **Networking** tab. +1. In **Connectivity type**, select **Virtual network**. +1. In **Type**, select **Internal**. +1. In **Configure virtual networks**, select the virtual network and the delegated subnet that you want to integrate. ++ Optionally, provide a public IP address resource if you want to own and control an IP address that's used only for outbound connection to the internet. +1. Complete the wizard to create the API Management instance. ++## DNS settings for integration with private IP address ++When a Premium v2 API Management instance is injected in a virtual network, you have to manage your own DNS to enable inbound access to API Management. ++While you have the option to use your own custom DNS server, we recommend: ++1. Configure an Azure [DNS private zone](../dns/private-dns-overview.md). +1. Link the Azure DNS private zone to the virtual network. ++Learn how to [set up a private zone in Azure DNS](../dns/private-dns-getstarted-portal.md). 
++### Endpoint access on default hostname ++When you create an API Management instance in the Premium v2 tier, the following endpoint is assigned a default hostname: ++* **Gateway** - example: `contoso-apim.azure-api.net` ++### Configure DNS record ++Create an A record in your DNS server to access the API Management instance from within your virtual network. Map the endpoint record to the private VIP address of your API Management instance. ++For testing purposes, you might update the hosts file on a virtual machine in a subnet connected to the virtual network in which API Management is deployed. Assuming the private virtual IP address for your API Management instance is 10.1.0.5, you can map the hosts file as shown in the following example. The hosts mapping file is at `%windir%\System32\drivers\etc\hosts` (Windows) or `/etc/hosts` (Linux, macOS). ++| Internal virtual IP address | Gateway hostname | +| -- | -- | +| 10.1.0.5 | `contoso-apim.azure-api.net` | ++## Related content ++* [Use a virtual network with Azure API Management](virtual-network-concepts.md) +* [Configure a custom domain name for your Azure API Management instance](configure-custom-domain.md) ++++ |
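For the hosts-file approach described above, the entry itself is a single line pairing the private VIP with the gateway's default hostname (the values here are the article's examples):

```
# Map the API Management private VIP to the default gateway hostname
10.1.0.5    contoso-apim.azure-api.net
```

After saving the file, a quick check such as `ping contoso-apim.azure-api.net` or a `curl` request from the test VM confirms that the name resolves to the private address (note that `nslookup` bypasses the hosts file and queries DNS directly).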
api-management | Integrate Vnet Outbound | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/integrate-vnet-outbound.md | Title: Connect API Management instance to a private network | Microsoft Docs -description: Learn how to integrate an Azure API Management instance in the Standard v2 tier with a virtual network to access backend APIs hosted within the network. + Title: Integrate API Management in private network +description: Learn how to integrate an Azure API Management instance in the Standard v2 or Premium v2 tier with a virtual network to access backend APIs in the network. Previously updated : 05/20/2024 Last updated : 11/18/2024 -# Integrate an Azure API Management instance with a private VNet for outbound connections +# Integrate an Azure API Management instance with a private virtual network for outbound connections -This article guides you through the process of configuring *VNet integration* for your Azure API Management instance so that your API Management instance can make outbound requests to API backends that are isolated in the network. +This article guides you through the process of configuring *virtual network integration* for your Standard v2 or Premium v2 (preview) Azure API Management instance. With virtual network integration, your instance can make outbound requests to APIs hosted in a delegated subnet of a single connected virtual network. ++When an API Management instance is integrated with a virtual network for outbound requests, the gateway and developer portal endpoints remain publicly accessible. The API Management instance can reach both public and network-isolated backend services. +++If you want to inject a Premium v2 API Management instance into a virtual network to isolate both inbound and outbound traffic, see [Inject a Premium v2 instance into a virtual network](inject-vnet-v2.md). 
++> [!IMPORTANT] +> * Outbound virtual network integration described in this article is available only for API Management instances in the Standard v2 and Premium v2 tiers. For networking options in the different tiers, see [Use a virtual network with Azure API Management](virtual-network-concepts.md). +> * You can enable virtual network integration when you create an API Management instance in the Standard v2 or Premium v2 tier, or after the instance is created. +> * Currently, you can't switch between virtual network injection and virtual network integration for a Premium v2 instance. -When an API Management instance is integrated with a virtual network for outbound requests, the API Management itself is not deployed in a VNet; the gateway and other endpoints remain publicly accessible. In this configuration, the API Management instance can reach both public and network-isolated backend services. ## Prerequisites -- An Azure API Management instance in the [Standard v2](v2-service-tiers-overview.md) pricing tier-- A virtual network with a subnet where your API Management backend APIs are hosted- - The network must be deployed in the same region and subscription as your API Management instance. - - The subnet should be dedicated to VNet integration. - - A minimum subnet size of `/26` or `/27` is recommended when creating a new subnet; `/28` can be used with an existing subnet. +- An Azure API Management instance in the [Standard v2 or Premium v2](v2-service-tiers-overview.md) pricing tier - (Optional) For testing, a sample backend API hosted within a different subnet in the virtual network. For example, see [Tutorial: Establish Azure Functions private site access](../azure-functions/functions-create-private-site-access.md).+- A virtual network with a subnet where your API Management backend APIs are hosted. See the following sections for requirements and recommendations for the virtual network and subnet. 
++### Network location ++* The virtual network must be in the same region and Azure subscription as the API Management instance. ++### Subnet requirements ++* The subnet can't be shared with another Azure resource. ++### Subnet size ++* Minimum: /27 (32 addresses) +* Recommended: /24 (256 addresses) - to accommodate scaling of the API Management instance ++### Network security group ++A network security group must be associated with the subnet. ++### Subnet delegation ++The subnet needs to be delegated to the **Microsoft.Web/serverFarms** service. ++++> [!NOTE] +> You might need to register the `Microsoft.Web/serverFarms` resource provider in the subscription so that you can delegate the subnet to the service. +++For more information about configuring subnet delegation, see [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md). ### Permissions You must have at least the following role-based access control permissions on th | Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition | | Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network | -### Register Microsoft.Web resource provider --Ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). --## Delegate the subnet --The subnet used for integration must be delegated to the **Microsoft.Web/serverFarms** service. The subnet can't be delegated to another service. In the subnet settings, in **Delegate subnet to a service**, select **Microsoft.Web/serverFarms**. +## Configure virtual network integration -## Enable VNet integration +This section guides you through the process of configuring external virtual network integration for an existing Azure API Management instance.
-This section will guide you through the process of enabling VNet integration for your Azure API Management instance. 1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. 1. In the left menu, under **Deployment + Infrastructure**, select **Network**.-1. On the **Outbound traffic** card, select **VNET integration**. +1. On the **Outbound traffic** card, select **virtual network integration**. - :::image type="content" source="media/integrate-vnet-outbound/integrate-vnet.png" lightbox="media/integrate-vnet-outbound/integrate-vnet.png" alt-text="Screenshot of VNet integration in the portal."::: + :::image type="content" source="media/integrate-vnet-outbound/integrate-vnet.png" lightbox="media/integrate-vnet-outbound/integrate-vnet.png" alt-text="Screenshot of virtual network integration in the portal."::: 1. In the **Virtual network** blade, enable the **Virtual network** checkbox. 1. Select the location of your API Management instance. 1. In **Virtual network**, select the virtual network and the delegated subnet that you want to integrate. -1. Select **Apply**, and then select **Save**. The VNet is integrated. +1. Select **Apply**, and then select **Save**. The virtual network is integrated. - :::image type="content" source="media/integrate-vnet-outbound/vnet-settings.png" lightbox="media/integrate-vnet-outbound/vnet-settings.png" alt-text="Screenshot of VNet settings in the portal."::: + :::image type="content" source="media/integrate-vnet-outbound/vnet-settings.png" lightbox="media/integrate-vnet-outbound/vnet-settings.png" alt-text="Screenshot of virtual network settings in the portal."::: -## (Optional) Test VNet integration +## (Optional) Test virtual network integration -If you have an API hosted in the virtual network, you can import it to your Management instance and test the VNet integration. For basic steps, see [Import and publish an API](import-and-publish.md). 
+If you have an API hosted in the virtual network, you can import it to your API Management instance and test the virtual network integration. For basic steps, see [Import and publish an API](import-and-publish.md). ## Related content |
api-management | Llm Token Limit Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-token-limit-policy.md | |
api-management | Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/plan-manage-costs.md | When you create or use Azure resources with API Management, you'll get charged b | Tiers | Description | | -- | -- |-| Consumption | Incurs no fixed costs. You are billed based on the number of API calls to the service above a certain threshold. | -| Developer, Basic, Basic v2, Standard, Standard v2, and Premium | Incur monthly costs, based on the number of [units](./api-management-capacity.md) and [self-hosted gateways](./self-hosted-gateway-overview.md). Self-hosted gateways are free for the Developer tier. Different [upgrade](./upgrade-and-scale.md) options are available, depending on your service tier. | +| Consumption | Incurs no fixed costs. You are billed based on the number of API requests to the service above a certain threshold. | +| Developer, Basic, Standard, Premium | Incur monthly costs, based on the number of [units](./api-management-capacity.md), [workspaces](workspaces-overview.md), and [self-hosted gateways](./self-hosted-gateway-overview.md). Self-hosted gateways are free for the Developer tier. | +| Basic v2, Standard v2, Premium v2 | Incur monthly costs, based on the number of [units](./api-management-capacity.md). Above a certain threshold of API requests, additional requests are billed. | ++Different [upgrade](./upgrade-and-scale.md) options are available, depending on your service tier. You may also incur additional charges when you use other Azure resources with API Management, like virtual networks, availability zones, and multi-region writes. At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all API Management costs. There's a separate line item for each meter. In the preceding example, you see the current cost for the service. 
Costs by Azu You can create [budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy. -Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you when create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ## Export cost data You can also [export your cost data](../cost-management-billing/costs/tutorial-e ### Scale using capacity units -Except in the Consumption service tier, API Management supports scaling by adding or removing [*capacity units*](api-management-capacity.md). As the load increases on an API Management instance, adding capacity units may be more economical than upgrading to a higher service tier. 
The maximum number of units depends on the service tier. +Except in the Consumption and Developer service tiers, API Management supports scaling by adding or removing [*capacity units*](api-management-capacity.md). As the load increases on an API Management instance, adding capacity units may be more economical than upgrading to a higher service tier. The maximum number of units depends on the service tier. Each capacity unit has a certain request processing capability that depends on the service's tier. For example, a unit of the Basic tier has an estimated maximum throughput of approximately 1,000 requests per second. |
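Using the rough figure above (about 1,000 requests per second for a Basic-tier unit), a back-of-the-envelope unit-sizing calculation might look like the following; real per-unit throughput varies with policy complexity, payload size, and connection patterns, so treat `rps_per_unit` and the headroom factor as assumptions:

```python
import math

def units_needed(peak_rps: float, rps_per_unit: float = 1000.0,
                 headroom: float = 0.2) -> int:
    """Estimate capacity units for a peak request rate while keeping
    `headroom` (20% by default) of each unit's capacity spare.
    rps_per_unit is the rough Basic-tier figure from the docs."""
    return math.ceil(peak_rps / (rps_per_unit * (1 - headroom)))

print(units_needed(2500))  # peak of 2,500 req/s → 4
```

Always validate such estimates against the gateway's CPU and memory metrics under representative load before committing to a unit count.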
api-management | Protect With Defender For Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-defender-for-apis.md | |
api-management | Rate Limit By Key Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-by-key-policy.md | |
api-management | Secure Developer Portal Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/secure-developer-portal-access.md | |
api-management | Sql Data Source Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sql-data-source-policy.md | |
api-management | Upgrade And Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md | To follow the steps from this article, you must: ## Upgrade and scale -You can choose between the following dedicated tiers: **Developer**, **Basic**, **Basic v2**, **Standard**, **Standard v2**, and **Premium**. +You can choose between the following dedicated tiers: **Developer**, **Basic**, **Basic v2**, **Standard**, **Standard v2**, **Premium**, and **Premium v2**. * The **Developer** tier should be used to evaluate the service; it shouldn't be used for production. The **Developer** tier doesn't have SLA and you can't scale this tier (add/remove units). -* **Basic**, **Basic v2**, **Standard**, **Standard v2**, and **Premium** are production tiers that have SLA and can be scaled. For pricing details and scale limits, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/#pricing). +* **Basic**, **Basic v2**, **Standard**, **Standard v2**, **Premium**, and **Premium v2** are production tiers that have SLA and can be scaled. For pricing details and scale limits, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/#pricing). * The **Premium** tier enables you to distribute a single Azure API Management instance across any number of desired Azure regions. When you initially create an Azure API Management service, the instance contains only one unit and resides in a single Azure region (the **primary** region). You can choose between the following dedicated tiers: **Developer**, **Basic**, * You can upgrade and downgrade to and from certain dedicated services tiers: * You can upgrade and downgrade to and from classic tiers (**Developer**, **Basic**, **Standard**, and **Premium**). - * You can upgrade and downgrade to and from v2 tiers (**Basic v2** and **Standard v2**). 
+ * You can upgrade and downgrade to and from v2 tiers (**Basic v2**, **Standard v2**, and **Premium v2**). Downgrading can remove some features. For example, downgrading to **Standard** or **Basic** from the **Premium** tier can remove virtual networks or multi-region deployment. |
api-management | V2 Service Tiers Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md | -We're introducing a new set of pricing tiers (SKUs) for Azure API Management: the *v2 tiers*. The new tiers are built on a new, more reliable and scalable platform and are designed to make API Management accessible to a broader set of customers and offer flexible options for a wider variety of scenarios. The v2 tiers are in addition to the existing classic tiers (Developer, Basic, Standard, and Premium) and the Consumption tier. [Learn more](api-management-key-concepts.md#api-management-tiers). +The API Management v2 tiers (SKUs) are built on a new, more reliable and scalable platform and are designed to make API Management accessible to a broader set of customers and offer flexible options for a wider variety of scenarios. The v2 tiers are in addition to the existing classic tiers (Developer, Basic, Standard, and Premium) and the Consumption tier. [See detailed comparison of API Management tiers](api-management-features.md). The following v2 tiers are generally available: The following v2 tiers are generally available: * **Standard v2** - Standard v2 is a production-ready tier with support for network-isolated backends. +The following v2 tier is in preview: ++* **Premium v2** - Premium v2 offers enterprise features including full virtual network isolation and scaling for high volume workloads. ++ > [!NOTE] + > The Premium v2 tier is currently in limited preview. To sign up, fill [this form](https://aka.ms/premiumv2). + ## Key capabilities -* **Faster deployment, configuration, and scaling** - Deploy a production-ready API Management instance in minutes. Quickly apply configurations such as certificate and hostname updates. Scale a Basic v2 or Standard v2 instance quickly to up to 10 units to meet the needs of your API management workloads. 
+* **Faster deployment, configuration, and scaling** - Deploy a production-ready API Management instance in minutes. Quickly apply configurations such as certificate and hostname updates. Scale a Basic v2 or Standard v2 instance quickly to up to 10 units to meet the needs of your API management workloads. Scale a Premium v2 instance to up to 30 units. -* **Simplified networking** - The Standard v2 tier supports [outbound connections](#networking-options) to network-isolated backends. +* **Simplified networking** - The Standard v2 and Premium v2 tiers provide [networking options](#networking-options) to isolate API Management's inbound and outbound traffic. -* **More options for production workloads** - The v2 tiers are all supported with an SLA. Upgrade from Basic v2 to Standard v2 to add more production options. +* **More options for production workloads** - The v2 tiers are all supported with an SLA. * **Developer portal options** - Enable the [developer portal](api-management-howto-developer-portal.md) when you're ready to let API consumers discover your APIs. -## Networking options --The Standard v2 tier supports VNet integration to allow your API Management instance to reach API backends that are isolated in a single connected VNet. The API Management gateway, management plane, and developer portal remain publicly accessible from the internet. The VNet must be in the same region as the API Management instance. [Learn more](integrate-vnet-outbound.md). ## Features ### API version -The v2 tiers are supported in API Management API version **2023-05-01-preview** or later. +The latest capabilities of the v2 tiers are supported in API Management API version **2024-05-01** or later. ++## Networking options ++* **Standard v2** and **Premium v2** support **virtual network integration** to allow your API Management instance to reach API backends that are isolated in a single connected virtual network. 
The API Management gateway, management plane, and developer portal remain publicly accessible from the internet. The virtual network must be in the same region and subscription as the API Management instance. [Learn more](integrate-vnet-outbound.md). ++* **Premium v2** also supports simplified **virtual network injection** for complete isolation of inbound and outbound gateway traffic without requiring network security group rules, route tables, or service endpoints. The virtual network must be in the same region and subscription as the API Management instance. [Learn more](inject-vnet-v2.md). ### Supported regions-The v2 tiers are available in the following regions: -* East US -* East US 2 -* South Central US -* North Central US -* West US -* West US 2 -* France Central -* Germany West Central -* North Europe -* Norway East -* West Europe -* Switzerland North -* UK South -* UK West -* South Africa North -* Central India -* South India -* Brazil South -* Australia Central -* Australia East -* Australia Southeast -* East Asia -* Japan East -* Southeast Asia -* Korea Central --### Feature availability ++For a current list of regions where the v2 tiers are available, see [Availability of v2 tiers and workspace gateways](api-management-region-availability.md). ++### Classic feature availability Most capabilities of the classic API Management tiers are supported in the v2 tiers. However, the following capabilities aren't supported in the v2 tiers: * API Management service configuration using Git * Back up and restore of API Management instance * Enabling Azure DDoS Protection-* Built-in analytics (replaced with Azure Monitor-based dashboard) +* Direct Management API access ### Limitations The following API Management capabilities are currently unavailable in the v2 tiers. 
**Infrastructure and networking**-* Zone redundancy * Multi-region deployment * Multiple custom domain names -* Capacity metric - replaced by CPU Percentage of Gateway and Memory Percentage of Gateway metrics +* Capacity metric - *replaced by CPU Percentage of Gateway and Memory Percentage of Gateway metrics* +* Built-in analytics - *replaced by Azure Monitor-based dashboard* * Autoscaling * Inbound connection using a private endpoint-* Injection in a VNet in external mode or internal mode -* Upgrade to v2 tiers from v1 tiers -* Workspaces +* Upgrade to v2 tiers from classic tiers * CA Certificates **Developer portal** The following limits apply to the developer portal in the v2 tiers. ## Deployment -Deploy an instance of the Basic v2 or Standard v2 tier using the Azure portal, Azure REST API, or Azure Resource Manager or Bicep template. +Deploy a v2 tier instance using the Azure portal or using tools such as the Azure REST API, Azure Resource Manager, Bicep template, or Terraform. ## Frequently asked questions A: No. Currently you can't migrate an existing API Management instance (in the C ### Q: What's the relationship between the stv2 compute platform and the v2 tiers? -A: They're not related. stv2 is a [compute platform](compute-infrastructure.md) version of the Developer, Basic, Standard, and Premium tier service instances. stv2 is a successor to the stv1 platform [scheduled for retirement in 2024](./breaking-changes/stv1-platform-retirement-august-2024.md). --### Q: Will I still be able to provision Basic or Standard tier services? +A: They're not related. stv2 is a [compute platform](compute-infrastructure.md) version of the Developer, Basic, Standard, and Premium tier service instances. stv2 is a successor to the stv1 compute platform [that retired in 2024](./breaking-changes/stv1-platform-retirement-august-2024.md). -A: Yes, there are no changes to the Basic or Standard tiers. 
+### Q: Will I still be able to provision Developer, Basic, Standard, or Premium tier services? -### Q: What is the difference between VNet integration in Standard v2 tier and VNet support in the Premium tier? +A: Yes, there are no changes to the classic Developer, Basic, Standard, or Premium tiers. -A: A Standard v2 service instance can be integrated with a VNet to provide secure access to the backends residing there. A Standard v2 service instance integrated with a VNet will have a public IP address. The Premium tier supports a [fully private integration](api-management-using-with-internal-vnet.md) with a VNet (often referred to as injection into VNet) without exposing a public IP address. +### Q: What is the difference between virtual network integration in Standard v2 tier and virtual network injection in the Premium and Premium v2 tiers? -### Q: Can I deploy an instance of the Basic v2 or Standard v2 tier entirely in my VNet? +A: A Standard v2 service instance can be integrated with a virtual network to provide secure access to the backends residing there. A Standard v2 service instance integrated with a virtual network has a public IP address for inbound access. -A: No, such a deployment is only supported in the Premium tier. +The Premium tier and Premium v2 tier support full network isolation by deployment (injection) into a virtual network without exposing a public IP address. [Learn more about networking options in API Management](virtual-network-concepts.md). -### Q: Is a Premium v2 tier planned? +### Q: Can I deploy an instance of the Basic v2 or Standard v2 tier entirely in my virtual network? -A: Yes, a Premium v2 preview is planned and will be announced separately. +A: No, such a deployment is only supported in the Premium and Premium v2 tiers. ## Related content |
api-management | Virtual Network Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md | description: Learn about scenarios and requirements to secure inbound or outboun - Previously updated : 03/26/2024+ Last updated : 10/16/2024 # Use a virtual network to secure inbound or outbound traffic for Azure API Management -By default your API Management is accessed from the internet at a public endpoint, and acts as a gateway to public backends. API Management provides several options to secure access to your API Management instance and to backend APIs using an Azure virtual network. Available options depend on the [service tier](api-management-features.md) of your API Management instance. +By default your API Management instance is accessed from the internet at a public endpoint, and acts as a gateway to public backends. API Management provides several options to use an Azure virtual network to secure access to your API Management instance and to backend APIs. Available options depend on the [service tier](api-management-features.md) of your API Management instance. Choose networking capabilities to meet your organization's needs. -* **Injection** of the API Management instance into a subnet in the virtual network, enabling the gateway to access resources in the network. -- You can choose one of two injection modes: *external* or *internal*. They differ in whether inbound connectivity to the gateway and other API Management endpoints is allowed from the internet or only from within the virtual network. --* **Integration** of your API Management instance with a subnet in a virtual network so that your API Management gateway can make outbound requests to API backends that are isolated in the network. --* **Enabling secure and private inbound connectivity** to the API Management gateway using a *private endpoint*. The following table compares virtual networking options. 
For more information, see later sections of this article and links to detailed guidance. |Networking model |Supported tiers |Supported components |Supported traffic |Usage scenario | |||||-|-|**[Virtual network injection - external](#virtual-network-injection)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to internet, peered virtual networks, Express Route, and S2S VPN connections. | External access to private and on-premises backends | -|**[Virtual network injection - internal](#virtual-network-injection)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to peered virtual networks, Express Route, and S2S VPN connections. | Internal access to private and on-premises backends | -|**[Outbound integration](#outbound-integration)** | Standard v2 | Gateway only | Outbound request traffic can reach APIs hosted in a delegated subnet of a virtual network. | External access to private and on-premises backends | -|**[Inbound private endpoint](#inbound-private-endpoint)** | Developer, Basic, Standard, Premium | Gateway only (managed gateway supported, self-hosted gateway not supported) | Only inbound traffic can be allowed from internet, peered virtual networks, Express Route, and S2S VPN connections. | Secure client connection to API Management gateway | +|**[Virtual network injection (classic tiers) - external](#virtual-network-injection-classic-tiers)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to internet, peered virtual networks, ExpressRoute, and S2S VPN connections. 
| External access to private and on-premises backends | +|**[Virtual network injection (classic tiers) - internal](#virtual-network-injection-classic-tiers)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to peered virtual networks, ExpressRoute, and S2S VPN connections. | Internal access to private and on-premises backends | +|**[Virtual network injection (v2 tiers)](#virtual-network-injection-v2-tiers)** | Premium v2 | Gateway only | Inbound and outbound traffic can be allowed to a delegated subnet of a virtual network, peered virtual networks, ExpressRoute, and S2S VPN connections. | Internal access to private and on-premises backends | +|**[Virtual network integration (v2 tiers)](#virtual-network-integration-v2-tiers)** | Standard v2, Premium v2 | Gateway only | Outbound request traffic can reach APIs hosted in a delegated subnet of a single connected virtual network. | External access to private and on-premises backends | +|**[Inbound private endpoint](#inbound-private-endpoint)** | Developer, Basic, Standard, Premium | Gateway only (managed gateway supported, self-hosted gateway not supported) | Only inbound traffic can be allowed from internet, peered virtual networks, ExpressRoute, and S2S VPN connections. | Secure client connection to API Management gateway | -## Virtual network injection +## Virtual network injection (classic tiers) -With VNet injection, deploy ("inject") your API Management instance in a subnet in a non-internet-routable network to which you control access. In the virtual network, your API Management instance can securely access other networked Azure resources and also connect to on-premises networks using various VPN technologies. To learn more about Azure VNets, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md). 
+In the API Management classic Developer and Premium tiers, deploy ("inject") your API Management instance in a subnet in a non-internet-routable network to which you control access. In the virtual network, your API Management instance can securely access other networked Azure resources and also connect to on-premises networks using various VPN technologies. You can use the Azure portal, Azure CLI, Azure Resource Manager templates, or other tools for the configuration. You control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security groups](../virtual-network/network-security-groups-overview.md). For detailed deployment steps and network configuration, see: * [Network resource requirements for API Management injection into a virtual network](virtual-network-injection-resources.md). ### Access options-Using a virtual network, you can configure the developer portal, API gateway, and other API Management endpoints to be accessible either from the internet (external mode) or only within the VNet (internal mode). +Using a virtual network, you can configure the developer portal, API gateway, and other API Management endpoints to be accessible either from the internet (external mode) or only within the virtual network (internal mode). -* **External** - The API Management endpoints are accessible from the public internet via an external load balancer. The gateway can access resources within the VNet. +* **External** - The API Management endpoints are accessible from the public internet via an external load balancer. The gateway can access resources within the virtual network. - :::image type="content" source="media/virtual-network-concepts/api-management-vnet-external.png" alt-text="Diagram showing a connection to external VNet." ::: + :::image type="content" source="media/virtual-network-concepts/api-management-vnet-external.png" alt-text="Diagram showing a connection to external virtual network." 
::: Use API Management in external mode to access backend services deployed in the virtual network. -* **Internal** - The API Management endpoints are accessible only from within the VNet via an internal load balancer. The gateway can access resources within the VNet. +* **Internal** - The API Management endpoints are accessible only from within the virtual network via an internal load balancer. The gateway can access resources within the virtual network. - :::image type="content" source="media/virtual-network-concepts/api-management-vnet-internal.png" alt-text="Diagram showing a connection to internal VNet." lightbox="media/virtual-network-concepts/api-management-vnet-internal.png"::: + :::image type="content" source="media/virtual-network-concepts/api-management-vnet-internal.png" alt-text="Diagram showing a connection to internal virtual network." lightbox="media/virtual-network-concepts/api-management-vnet-internal.png"::: Use API Management in internal mode to: Using a virtual network, you can configure the developer portal, API gateway, an * Enable hybrid cloud scenarios by exposing your cloud-based APIs and on-premises APIs through a common gateway. * Manage your APIs hosted in multiple geographic locations, using a single gateway endpoint. -## Outbound integration +## Virtual network injection (v2 tiers) ++In the API Management Premium v2 tier, inject your instance into a delegated subnet of a virtual network to secure the gateway's inbound and outbound traffic. Currently, you can configure settings for virtual network injection at the time you create the instance. ++In this configuration: ++* The API Management gateway endpoint is accessible through the virtual network at a private IP address. +* API Management can make outbound requests to API backends that are isolated in the network. ++This configuration is recommended for scenarios where you want to isolate both the API Management instance and the backend APIs. 
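As a rough illustration of the classic-tier injection modes described above, an ARM template can set the network type on the service resource. The following is a sketch only — the API version, names, and subnet path are placeholders, not values from this article:

```json
{
  "type": "Microsoft.ApiManagement/service",
  "apiVersion": "2022-08-01",
  "name": "<apim-name>",
  "properties": {
    "virtualNetworkType": "Internal",
    "virtualNetworkConfiguration": {
      "subnetResourceId": "/subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<apim-subnet>"
    }
  }
}
```

Setting `virtualNetworkType` to `External` instead keeps the endpoints internet-facing while the gateway can still reach backends in the virtual network.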
Virtual network injection in the Premium v2 tier automatically manages network connectivity to most service dependencies for Azure API Management. + -The Standard v2 tier supports VNet integration to allow your API Management instance to reach API backends that are isolated in a single connected VNet. The API Management gateway, management plane, and developer portal remain publicly accessible from the internet. +For more information, see [Inject a Premium v2 instance into a virtual network](inject-vnet-v2.md). ++## Virtual network integration (v2 tiers) ++The Standard v2 and Premium v2 tiers support outbound virtual network integration to allow your API Management instance to reach API backends that are isolated in a single connected virtual network. The API Management gateway, management plane, and developer portal remain publicly accessible from the internet. Outbound integration enables the API Management instance to reach both public and network-isolated backend services. -For more information, see [Integrate an Azure API Management instance with a private VNet for outbound connections](integrate-vnet-outbound.md). +For more information, see [Integrate an Azure API Management instance with a private virtual network for outbound connections](integrate-vnet-outbound.md). ## Inbound private endpoint One example is to deploy an API Management instance in an internal virtual netwo For more information, see [Deploy API Management in an internal virtual network with Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md). -## Next steps +## Related content -Learn more about: --Virtual network configuration with API Management: +Learn more about virtual network configuration with API Management: * [Deploy your Azure API Management instance to a virtual network - external mode](./api-management-using-with-vnet.md). * [Deploy your Azure API Management instance to a virtual network - internal mode](./api-management-using-with-internal-vnet.md). 
* [Connect privately to API Management using a private endpoint](private-endpoint.md)-* [Integrate an Azure API Management instance with a private VNet for outbound connections](integrate-vnet-outbound.md) +* [Inject a Premium v2 instance into a virtual network](inject-vnet-v2.md) +* [Integrate an Azure API Management instance with a private virtual network for outbound connections](integrate-vnet-outbound.md) * [Defend your Azure API Management instance against DDoS attacks](protect-with-ddos-protection.md) -Related articles: --* [Connecting a Virtual Network to backend using Vpn Gateway](../vpn-gateway/design.md#s2smulti) -* [Connecting a Virtual Network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md) -* [Virtual Network Frequently asked Questions](../virtual-network/virtual-networks-faq.md) ----+To learn more about Azure virtual networks, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md). |
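For the inbound private endpoint option covered above, a hedged Azure CLI sketch follows — all names are placeholders, and `Gateway` is the sub-resource (group ID) used for API Management private endpoints:

```azurecli
az network private-endpoint create \
  --name <endpoint-name> \
  --resource-group <group-name> \
  --vnet-name <vnet-name> \
  --subnet <subnet-name> \
  --private-connection-resource-id <apim-resource-id> \
  --group-id Gateway \
  --connection-name <connection-name>
```

After creation, configure DNS (for example, a private DNS zone) so that the API Management hostname resolves to the private endpoint's IP address.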
api-management | Virtual Network Injection Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-injection-resources.md | Title: Azure API Management virtual network injection - network resources -description: Learn about requirements for network resources when you deploy (inject) your API Management instance in an Azure virtual network. +description: Learn about requirements for network resources when you deploy (inject) your API Management Developer or Premium tier instance in an Azure virtual network. -The following are virtual network resource requirements for API Management injection into a virtual network. Some requirements differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance. +The following are virtual network resource requirements for injection of an API Management Developer or Premium instance into a virtual network. Some requirements differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance. ++> [!NOTE] +> To inject a Premium v2 instance in a virtual network, the requirements and configuration are different. [Learn more](inject-vnet-v2.md) #### [stv2](#tab/stv2) |
api-management | Visualize Using Managed Grafana Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/visualize-using-managed-grafana-dashboard.md | |
api-management | Websocket Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md | |
api-management | Workspaces Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md | Manage gateway capacity by manually adding or removing scale units, similar to t ### Regional availability -Workspace gateways are currently available in the following regions: --> [!NOTE] -> These regions are a subset of those where API Management is available. --* West US -* North Central US -* East US 2 -* UK South -* France Central -* Germany West Central -* North Europe -* East Asia -* Southeast Asia -* Australia East -* Japan East +For a current list of regions where workspace gateways are available, see [Availability of v2 tiers and workspace gateways](api-management-region-availability.md). ### Gateway constraints The following constraints currently apply to workspace gateways: |
app-service | App Service Undelete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-undelete.md | Restore-AzDeletedWebApp -TargetResourceGroupName <my_rg> -Name <my_app> -TargetA >Restore used for scenarios where multiple apps with the same name have been deleted with `-DeletedSiteId` ```powershell-Restore-AzDeletedWebApp -ResourceGroupName <original_rg> -Name <original_app> -DeletedId /subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Web/locations/location/deletedSites/1234 -TargetAppServicePlanName <my_asp> +Restore-AzDeletedWebApp -ResourceGroupName <original_rg> -Name <original_app> -DeletedId /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/providers/Microsoft.Web/locations/location/deletedSites/1234 -TargetAppServicePlanName <my_asp> ``` Currently there's no support for Undelete (Restore-AzDeletedWebApp) Function app | **AzureWebJobsStorage** | Connection String for the storage account used by the deleted app. | | **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** | Connection String for the storage account used by the deleted app. | | **WEBSITE_CONTENTSHARE** | File share on storage account used by the deleted app. | -- |
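Before running `Restore-AzDeletedWebApp` with `-DeletedId`, you can list deleted sites with the Az.Websites module to find the deleted site's identifier. A sketch, using the same placeholder convention as the examples above:

```powershell
# List deleted apps matching the original name; the output should include
# the deleted site's ID to pass as -DeletedId when names are ambiguous.
Get-AzDeletedWebApp -ResourceGroupName <original_rg> -Name <original_app>
```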
app-service | Configure Authentication Api Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-api-version.md | The following steps will allow you to manually migrate the application to the V2 ```json {- "id": "/subscriptions/00d563f8-5b89-4c6a-bcec-c1b9f6d607e0/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/mywebapp/config/authsettings", + "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/mywebapp/config/authsettings", "name": "authsettings", "type": "Microsoft.Web/sites/config", "location": "Central US", |
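When migrating auth settings manually as described above, it can help to inspect the app's current authentication configuration first. One option is the following CLI sketch (resource names are placeholders):

```azurecli
az webapp auth show --resource-group <group-name> --name <app-name>
```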
app-service | Configure Basic Auth Disable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-basic-auth-disable.md | To confirm that the logs are shipped to your selected service(s), try logging in <pre> { "time": "2023-10-16T17:42:32.9322528Z",- "ResourceId": "/SUBSCRIPTIONS/EF90E930-9D7F-4A60-8A99-748E0EEA69DE/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.WEB/SITES/MY-DEMO-APP", + "ResourceId": "/SUBSCRIPTIONS/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.WEB/SITES/MY-DEMO-APP", "Category": "AppServiceAuditLogs", "OperationName": "Authorization", "Properties": { The following are corresponding policies for slots: #### Why do I get a warning in Visual Studio saying that basic authentication is disabled? -Visual Studio requires basic authentication to deploy to Azure App Service. The warning reminds you that the configuration on your app changed and you can no longer deploy to it. Either you disabled basic authentication on the app yourself, or your organization policy enforces that basic authentication is disabled for App Service apps. +Visual Studio requires basic authentication to deploy to Azure App Service. The warning reminds you that the configuration on your app changed and you can no longer deploy to it. Either you disabled basic authentication on the app yourself, or your organization policy enforces that basic authentication is disabled for App Service apps. |
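Basic authentication for the SCM (deployment) site is controlled by the `basicPublishingCredentialsPolicies` child resource; a hedged CLI sketch for disabling it (placeholders throughout):

```azurecli
az resource update \
  --resource-group <group-name> \
  --namespace Microsoft.Web \
  --parent sites/<app-name> \
  --resource-type basicPublishingCredentialsPolicies \
  --name scm \
  --set properties.allow=false
```

Using `--name ftp` in place of `scm` applies the same policy to FTP publishing.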
app-service | How To Custom Domain Suffix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md | You need to configure the managed identity and ensure it exists before assigning "identity": { "type": "UserAssigned", "userAssignedIdentities": {- "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/asev3-cdns-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-cdns-managed-identity" + "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/asev3-cdns-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-cdns-managed-identity" } }, "properties": { "customDnsSuffixConfiguration": { "dnsSuffix": "antares-test.net", "certificateUrl": "https://kv-sample-key-vault.vault.azure.net/secrets/wildcard-antares-test-net",- "keyVaultReferenceIdentity": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/asev3-cdns-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-cdns-managed-identity" + "keyVaultReferenceIdentity": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/asev3-cdns-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-cdns-managed-identity" }, "internalLoadBalancingMode": "Web, Publishing", etc... |
app-service | How To Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md | If you're using a user-assigned managed identity for your custom domain suffix c "customDnsSuffixConfiguration": { "dnsSuffix": "internal.contoso.com", "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",- "keyVaultReferenceIdentity": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity" + "keyVaultReferenceIdentity": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity" } } } |
app-service | How To Side By Side Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-side-by-side-migrate.md | If you're using a user assigned managed identity for your custom domain suffix c "customDnsSuffixConfiguration": { "dnsSuffix": "internal.contoso.com", "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",- "keyVaultReferenceIdentity": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity" + "keyVaultReferenceIdentity": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity" } } } |
app-service | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md | If you're using a user-assigned managed identity for your custom domain suffix c "customDnsSuffixConfiguration": { "dnsSuffix": "internal.contoso.com", "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",- "keyVaultReferenceIdentity": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity" + "keyVaultReferenceIdentity": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity" } } } |
app-service | Side By Side Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md | If you're using a user assigned managed identity for your custom domain suffix c "customDnsSuffixConfiguration": { "dnsSuffix": "internal.contoso.com", "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",- "keyVaultReferenceIdentity": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity" + "keyVaultReferenceIdentity": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity" } } } |
app-service | Overview Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md | To access App Service diagnostics, navigate to your App Service web app or App S For Azure Functions, navigate to your function app, and in the top navigation, click on **Platform features**, and select **Diagnose and solve problems** from the **Resource management** section. -In the App Service diagnostics homepage, you can perform a search for a symptom with your app, or choose a diagnostic category that best describes the issue with your app. Next, there is a new feature called Risk Alerts that provides an actionable report to improve your App. Finally, this page is where you can find **Diagnostic Tools**. See [Diagnostic tools](#diagnostic-tools). +The App Service diagnostics homepage provides many tools to diagnose app problems. For more information, see [Diagnostic tools](#diagnostic-tools) in this article. ![App Service Diagnose and solve problems homepage with diagnostic search box, Risk Alerts assessments, and Troubleshooting categories for discovering diagnostics for the selected Azure Resource.](./media/app-service-diagnostics/app-service-diagnostics-homepage-1.png) |
app-service | Tutorial Sidecar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-sidecar.md | First you create the resources that the tutorial uses. They're used for this par Open resource group in the portal: <b>https://portal.azure.com/#@/resource/subscriptions/<subscription-id>/resourceGroups/<group-name></b> </pre> -1. Open the resource group link in a browser tab. You'll need these output values later. +1. Copy these output values for later. You can also find them in the portal, in the management pages of the respective resources. > [!NOTE] > `azd provision` uses the included templates to create the following Azure resources: In this section, you add a sidecar container to your Linux app. The portal exper ### [Use ARM template](#tab/template) -1. In the Cloud Shell, run the following command to add to the web app the user-assigned managed identity that `azd provision` created. Use the value of `<managed-identity-resource-id>` in the `azd provision` output. +1. In the Cloud Shell, run the following command to add to the web app the user-assigned managed identity that `azd provision` created. Use the value of `<managed-identity-resource-id>` (a very long string) in the `azd provision` output. ```azurecli-interactive az webapp identity assign --identities <managed-identity-resource-id> In this step, you create the instrumentation for your app according to the steps The otel-collector sidecar should export data to Application Insights now. 1. Back in the browser tab for `https://<app-name>.azurewebsites.net`, refresh the page a few times to generate some web requests.-1. Go back to the resource group overview page, then select the Application Insights resource. You should now see some data in the default charts. +1. Go back to the resource group overview page, then select the Application Insights resource that `azd up` created. You should now see some data in the default charts. 
:::image type="content" source="media/tutorial-sidecar/app-insights-view.png" alt-text="Screenshot of the Application Insights page showing data in the default charts."::: |
application-gateway | Overview V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md | The v2 SKU includes the following enhancements: - **Autoscaling**: Application Gateway or WAF deployments under the autoscaling SKU can scale out or in based on changing traffic load patterns. Autoscaling also removes the requirement to choose a deployment size or instance count during provisioning. This SKU offers true elasticity. In the Standard_v2 and WAF_v2 SKU, Application Gateway can operate both in fixed capacity (autoscaling disabled) and in autoscaling enabled mode. Fixed capacity mode is useful for scenarios with consistent and predictable workloads. Autoscaling mode is beneficial in applications that see variance in application traffic. - **Zone redundancy**: An Application Gateway or WAF deployment can span multiple Availability Zones, removing the need to provision separate Application Gateway instances in each zone with a Traffic Manager. You can choose a single zone or multiple zones where Application Gateway instances are deployed, which makes it more resilient to zone failure. The backend pool for applications can be similarly distributed across availability zones. - Zone redundancy is available only where Azure Zones are available. In other regions, all other features are supported. For more information, see [Regions and Availability Zones in Azure](../reliability/availability-zones-service-support.md) + Zone redundancy is available only where Azure availability zones are available. In other regions, all other features are supported. For more information, see [Azure regions with availability zone support](../reliability/availability-zones-region-support.md). - **Static VIP**: Application Gateway v2 SKU supports the static VIP type exclusively. Static VIP ensures that the VIP associated with the application gateway doesn't change for the lifecycle of the deployment, even after a restart. 
You must use the application gateway URL for domain name routing to App Services via the application gateway, as v1 doesn't have a static VIP. - **Header Rewrite**: Application Gateway allows you to add, remove, or update HTTP request and response headers with v2 SKU. For more information, see [Rewrite HTTP headers with Application Gateway](./rewrite-http-headers-url.md) - **Key Vault Integration**: Application Gateway v2 supports integration with Key Vault for server certificates that are attached to HTTPS enabled listeners. For more information, see [TLS termination with Key Vault certificates](key-vault-certs.md). |
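The autoscaling and zone-redundancy enhancements described above are expressed as parameters at creation time. The following is an illustrative CLI sketch — names, capacity range, and zones are placeholders:

```azurecli
az network application-gateway create \
  --name <gateway-name> \
  --resource-group <group-name> \
  --sku Standard_v2 \
  --min-capacity 2 \
  --max-capacity 10 \
  --zones 1 2 3 \
  --public-ip-address <pip-name> \
  --vnet-name <vnet-name> \
  --subnet <subnet-name>
```

Omitting `--min-capacity`/`--max-capacity` and specifying `--capacity` instead runs the gateway in fixed-capacity mode.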
automation | Automation Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md | In the event when a zone is down, there's no action required by you to recover f ## Supported regions with availability zones -See [Regions and Availability Zones in Azure](../reliability/availability-zones-service-support.md) for the Azure regions that have availability zones. +See [Azure regions with availability zone support](../reliability/availability-zones-region-support.md) for the Azure regions that have availability zones. Automation accounts currently support the following regions: - Australia East There is no change to the [Service Level Agreement](https://azure.microsoft.com/ ## Next steps -- Learn more about [regions that support availability zones](../reliability/availability-zones-service-support.md).+- Learn more about [regions that support availability zones](../reliability/availability-zones-region-support.md). |
automation | Automation Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-disaster-recovery.md | -You must have a disaster recovery strategy to handle a region-wide service outage or zone-wide failure to help reduce the impact and effects arising from unpredictable events on your business and customers. You are responsible to set up disaster recovery of Automation accounts, and its dependent resources such as Modules, Connections, Credentials, Certificates, Variables and Schedules. An important aspect of a disaster recovery plan is preparing to failover to the replica of the Automation account created in advance in the secondary region, if the Automation account in the primary region becomes unavailable. Ensure that your disaster recovery strategy considers your Automation account and the dependent resources. +You must have a disaster recovery strategy to handle a region-wide service outage or zone-wide failure to help reduce the impact and effects arising from unpredictable events on your business and customers. You're responsible to set up disaster recovery of Automation accounts, and its dependent resources such as Modules, Connections, Credentials, Certificates, Variables and Schedules. An important aspect of a disaster recovery plan is preparing to fail over to the replica of the Automation account created in advance in the secondary region, if the Automation account in the primary region becomes unavailable. Ensure that your disaster recovery strategy considers your Automation account and the dependent resources. In addition to high availability offered by Availability zones, some regions are paired with another region to provide protection from regional or large geographical disasters. Irrespective of whether the primary region has a regional pair or not, the disaster recovery strategy for the Automation account remains the same. 
For more information about regional pairs, [learn more](../availability-zones/cross-region-replication-azure.md). requires a location that you must use for deployment. This would be the primary - Begin by [creating a replica Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal#create-automation-account) in any alternate [region](https://azure.microsoft.com/global-infrastructure/services/?products=automation®ions=all). - Select the secondary region of your choice - paired region or any other region where Azure Automation is available. - Apart from creating a replica of the Automation account, replicate the dependent resources such as Runbooks, Modules, Connections, Credentials, Certificates, Variables, Schedules and permissions assigned for the Run As account and Managed Identities in the Automation account in primary region to the Automation account in secondary region. You can use the [PowerShell script](#script-to-migrate-automation-account-assets-from-one-region-to-another) to migrate assets of the Automation account from one region to another.-- If you are using [ARM templates](../azure-resource-manager/management/overview.md) to define and deploy Automation runbooks, you can use these templates to deploy the same runbooks in any other Azure region where you create the replica Automation account. In case of a region-wide outage or zone-wide failure in the primary region, you can execute the runbooks replicated in the secondary region to continue business as usual. This ensures that the secondary region steps up to continue the work if the primary region has a disruption or failure. +- If you're using [ARM templates](../azure-resource-manager/management/overview.md) to define and deploy Automation runbooks, you can use these templates to deploy the same runbooks in any other Azure region where you create the replica Automation account. 
In case of a region-wide outage or zone-wide failure in the primary region, you can execute the runbooks replicated in the secondary region to continue business as usual. This ensures that the secondary region steps up to continue the work if the primary region has a disruption or failure. >[!NOTE] > Due to data residency requirements, jobs data and logs present in the primary region are not available in the secondary region. If the Linux Hybrid Runbook worker is deployed using agent-based approach in a r ### Scenario: Execute jobs on Hybrid Runbook Worker deployed in the primary region of failure-If the Hybrid Runbook worker is deployed in the primary region, and there is a compute failure in that region, the machine will not be available for executing Automation jobs. You must provision a new virtual machine in an alternate region and register it as Hybrid Runbook Worker in Automation account in the secondary region. +If the Hybrid Runbook worker is deployed in the primary region, and there's a compute failure in that region, the machine won't be available for executing Automation jobs. You must provision a new virtual machine in an alternate region and register it as Hybrid Runbook Worker in Automation account in the secondary region. - See the installation steps in [how to deploy an extension-based Windows or Linux User Hybrid Runbook Worker](extension-based-hybrid-runbook-worker-install.md?tabs=windows#create-hybrid-worker-group). - See the installation steps in [how to deploy an agent-based Windows Hybrid Worker](automation-windows-hrw-install.md#installation-options). If the Hybrid Runbook worker is deployed in the primary region, and there is a c ## Script to migrate Automation account assets from one region to another -You can use these scripts for migration of Automation account assets from the account in primary region to the account in the secondary region. 
These scripts are used to migrate only Runbooks, Modules, Connections, Credentials, Certificates and Variables. The execution of these scripts does not affect the Automation account and its assets present in the primary region. +You can use these scripts for migration of Automation account assets from the account in primary region to the account in the secondary region. These scripts are used to migrate only Runbooks, Modules, Connections, Credentials, Certificates and Variables. The execution of these scripts doesn't affect the Automation account and its assets present in the primary region. ### Prerequisites - 1. Ensure that the Automation account in the secondary region is created and available so that assets from primary region can be migrated to it. It is preferred if the destination automation account is one without any custom resources as it prevents potential resource clash due to same name and loss of data. + 1. Ensure that the Automation account in the secondary region is created and available so that assets from primary region can be migrated to it. It's preferred if the destination automation account is one without any custom resources as it prevents potential resource clash due to same name and loss of data. 1. Ensure that the system assigned managed identities are enabled in the Automation account in the primary region.- 1. Ensure that the system assigned managed identities of the primary Automation account has contributor access to the subscription it belongs to. + 1. Ensure that the system assigned managed identities of the primary Automation account have contributor access to the subscription it belongs to. 1. Ensure that the primary Automation account's managed identity has Contributor access with read and write permissions to the Automation account in secondary region. To enable, provide the necessary permissions in secondary Automation account's managed identities. [Learn more](../role-based-access-control/quickstart-assign-role-user-portal.md). 
1. Ensure that the script has access to the Automation account assets in primary region. Hence, it should be executed as a runbook in that Automation account for successful migration. 1. If the primary Automation account is deployed using a Run as account, then it must be switched to Managed Identity before migration. [Learn more](migrate-run-as-accounts-managed-identity.md). Type[] | True | Array consisting of all the types of assets that need to be migr ### Limitations-- The script migrates only Custom PowerShell modules. Default modules and Python packages would not be migrated to replica Automation account.-- The script does not migrate **Schedules** and **Managed identities** present in Automation account in primary region. These would have to be created manually in replica Automation account.-- Jobs data and activity logs would not be migrated to the replica account.+- The script migrates only Custom PowerShell modules. Default modules and Python packages wouldn't be migrated to replica Automation account. +- The script doesn't migrate **Schedules** and **Managed identities** present in Automation account in primary region. These would have to be created manually in replica Automation account. +- Jobs data and activity logs wouldn't be migrated to the replica account. ## Next steps -- Learn more about [regions that support availability zones](../availability-zones/az-region.md).+- Learn more about [regions that support availability zones](../reliability/availability-zones-region-support.md). |
automation | Manage Change Tracking Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-change-tracking-monitoring-agent.md | description: This article tells how to use change tracking and inventory to trac Previously updated : 09/19/2024 Last updated : 11/19/2024 To manage tracking and inventory, ensure that you enable Change tracking with AM 1. In the [Azure portal](https://portal.azure.com), select the virtual machine. 1. Select a specific VM for which you would like to configure the Change tracking settings. -1. Under **Operations**, select **Change tracking** +1. Under **Operations**, select **Change tracking**. + + :::image type="content" source="media/manage-change-tracking-monitoring-agent/configure-file-settings.png" alt-text="Screenshot of selecting the change tracking to configure file settings." lightbox="media/manage-change-tracking-monitoring-agent/configure-file-settings.png"::: + 1. Select **Settings** to view the **Data Collection Rule Configuration** (DCR) page. Here, you can do the following actions: 1. Configure changes on a VM at a granular level. 1. Select the filter to configure the workspace. |
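Behind the portal experience, Change tracking with the Azure Monitor agent works by associating a data collection rule (DCR) with the machine. A hedged CLI sketch of that association follows — the IDs are placeholders, and the exact parameter names may vary by CLI version, so treat them as assumptions:

```azurecli
az monitor data-collection rule association create \
  --name <association-name> \
  --resource <vm-resource-id> \
  --rule-id <dcr-resource-id>
```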
azure-app-configuration | Azure Pipeline Export Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/azure-pipeline-export-task.md | + + Title: Export settings from App Configuration with Azure Pipelines +description: Learn how to use Azure Pipelines to export key-values from an App Configuration Store ++++ Last updated : 10/29/2024++++# Export settings from App Configuration with Azure Pipelines ++The Azure App Configuration Export task exports key-values from your App Configuration store and sets them as Azure pipeline variables, which subsequent tasks can consume. This task complements the Azure App Configuration Import task that imports key-values from a configuration file into your App Configuration store. For more information, see [Import settings to App Configuration with Azure Pipelines](azure-pipeline-import-task.md). ++## Prerequisites ++- Azure subscription - [create one for free](https://azure.microsoft.com/free/) +- App Configuration store - [create one for free](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store) +- Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881) +- [Azure Pipelines agent version 2.144.0](https://github.com/microsoft/azure-pipelines-agent/releases/tag/v2.144.0) or later and [Node version 16](https://nodejs.org/en/blog/release/v16.16.0/) or later for running the task on self-hosted agents. ++## Create a service connection +++## Add role assignment ++Assign the proper App Configuration role assignments to the credentials being used within the task so that the task can access the App Configuration store. ++1. Go to your target App Configuration store. +1. In the left menu, select **Access control (IAM)**. +1. In the right pane, select **Add role assignments**. 
++ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-button.png" alt-text="Screenshot shows the Add role assignments button."::: +1. For **Role**, select **App Configuration Data Reader**. This role allows the task to read from the App Configuration store. +1. Select the service principal associated with the service connection that you created in the previous section. ++ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-data-reader.png" alt-text="Screenshot shows the Add role assignment dialog."::: +1. Select **Review + assign**. +1. If the store contains Key Vault references, go to the relevant Key Vault and assign the **Key Vault Secrets User** role to the service principal that you selected in the previous step. From the Key Vault menu, select **Access policies** and ensure [Azure role-based access control](/azure/key-vault/general/rbac-guide) is selected as the permission model. ++## Use in builds ++This section covers how to use the Azure App Configuration Export task in an Azure DevOps build pipeline. ++1. Navigate to the build pipeline page by clicking **Pipelines** > **Pipelines**. For build pipeline documentation, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?tabs=net%2Ctfs-2018-2%2Cbrowser). + - If you're creating a new build pipeline, on the last step of the process, on the **Review** tab, select **Show assistant** on the right side of the pipeline. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the Show assistant button for a new pipeline.](./media/new-pipeline-show-assistant.png) + - If you're using an existing build pipeline, click the **Edit** button at the top-right. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the Edit button for an existing pipeline.](./media/existing-pipeline-show-assistant.png) +1. Search for the **Azure App Configuration Export** Task. 
+ > [!div class="mx-imgBorder"] + > ![Screenshot shows the Add Task dialog with Azure App Configuration Export in the search box.](./media/add-azure-app-configuration-export-task.png) +1. To export the key-values from the App Configuration store, configure the necessary parameters for the task. Descriptions of the parameters are available in the **Parameters** section and in tooltips next to each parameter. + - Set the **Azure subscription** parameter to the name of the service connection you created in a previous step. + - Set the **App Configuration Endpoint** to the endpoint of your App Configuration store. + - Leave the default values for the remaining parameters. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the app configuration task parameters.](./media/azure-app-configuration-export-parameters.png) +1. Save and queue a build. The build log displays any failures that occurred during the execution of the task. ++## Use in releases ++This section covers how to use the Azure App Configuration Export task in an Azure DevOps release pipeline. ++1. Navigate to the release pipeline page by selecting **Pipelines** > **Releases**. For release pipeline documentation, see [Release pipelines](/azure/devops/pipelines/release). +1. Choose an existing release pipeline. If you don't have one, click **New pipeline** to create a new one. +1. Select the **Edit** button in the top-right corner to edit the release pipeline. +1. From the **Tasks** dropdown, choose the **Stage** to which you want to add the task. More information about stages can be found in [Add stages, dependencies, & conditions](/azure/devops/pipelines/release/environments). + > [!div class="mx-imgBorder"] + > ![Screenshot shows the selected stage in the Tasks dropdown.](./media/pipeline-stage-tasks.png) +1. Click **+** next to the Job to which you want to add a new task. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the plus button next to the job.](./media/add-task-to-job.png) +1. 
Search for the **Azure App Configuration Export** Task. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the Add Task dialog with Azure App Configuration Export in the search box.](./media/add-azure-app-configuration-export-task.png) +1. To export your key-values from your App Configuration store, configure the necessary parameters within the task. Descriptions of the parameters are available in the **Parameters** section and in tooltips next to each parameter. + - Set the **Azure subscription** parameter to the name of the service connection you created in a previous step. + - Set the **App Configuration Endpoint** to the endpoint of your App Configuration store. + - Leave the default values for the remaining parameters. +1. Save and queue a release. The release log displays any failures encountered during the execution of the task. ++## Parameters ++The following parameters are used by the Azure App Configuration Export task: ++- **Azure subscription**: A drop-down containing your available Azure service connections. To update and refresh your list of available Azure service connections, press the **Refresh Azure subscription** button to the right of the textbox. +- **App Configuration Endpoint**: A drop-down that loads your available configuration store endpoints under the selected subscription. To update and refresh your list of available configuration store endpoints, press the **Refresh App Configuration Endpoint** button to the right of the textbox. +- **Selection Mode**: Specifies how the key-values read from a configuration store are selected. The 'Default' selection mode allows the use of key and label filters. The 'Snapshot' selection mode allows key-values to be selected from a snapshot. Default value is **Default**. +- **Key Filter**: The filter can be used to select which key-values are requested from Azure App Configuration. A value of * selects all key-values. 
For more information, see [Query key-values](concept-key-value.md#query-key-values). +- **Label**: Specifies which label should be used when selecting key-values from the App Configuration store. If no label is provided, then key-values with no label are retrieved. The following characters aren't allowed: , *. +- **Snapshot Name**: Specifies the snapshot from which key-values should be retrieved in Azure App Configuration. +- **Trim Key Prefix**: Specifies one or more prefixes that should be trimmed from App Configuration keys before setting them as variables. A new-line character can be used to separate multiple prefixes. +- **Suppress Warning For Overridden Keys**: Default value is unchecked. Specifies whether to show warnings when existing keys are overridden. Enable this option when it's expected that the key-values downloaded from App Configuration have overlapping keys with what exists in pipeline variables. ++## Use key-values in subsequent tasks ++The key-values that are fetched from App Configuration are set as pipeline variables, which are accessible as environment variables. The key of the environment variable is the key of the key-value that is retrieved from App Configuration after trimming the prefix, if specified. ++For example, if a subsequent task runs a PowerShell script, it could consume a key-value with the key 'myBuildSetting' like this: +```powershell +echo "$env:myBuildSetting" +``` +And the value is printed to the console. ++> [!NOTE] +> Azure Key Vault references within App Configuration will be resolved and set as [secret variables](/azure/devops/pipelines/process/variables#secret-variables). In Azure pipelines, secret variables are masked out from logs. They aren't passed into tasks as environment variables and must instead be passed as inputs. ++## Troubleshooting ++If an unexpected error occurs, debug logs can be enabled by setting the pipeline variable `system.debug` to `true`. 
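In a YAML pipeline, that variable can be set at the pipeline level, for example:

```yaml
# Turn on verbose task logging for troubleshooting
variables:
  system.debug: true
```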
++## FAQ ++**How do I compose my configuration from multiple keys and labels?** ++There are times when configuration may need to be composed from multiple labels, for example, default and dev. Multiple App Configuration tasks may be used in one pipeline to implement this scenario. The key-values fetched by a task in a later step supersede any values from previous steps. In this example, a task can be used to select key-values with the default label while a second task can select key-values with the dev label. The keys with the dev label override the same keys with the default label. ++## Next step ++For a complete reference of the parameters or to use this pipeline task in YAML pipelines, refer to the following document. ++> [!div class="nextstepaction"] +> [Azure App Configuration Export Task reference](/azure/devops/pipelines/tasks/reference/azure-app-configuration-export-v10) ++To learn how to import key-values from a configuration file into your App Configuration store, continue to the following document. ++> [!div class="nextstepaction"] +> [Import settings to App Configuration with Azure pipelines](./azure-pipeline-import-task.md) ++To learn how to create a snapshot in an App Configuration store, continue to the following document. ++> [!div class="nextstepaction"] +> [Create snapshots in App Configuration with Azure Pipelines](./azure-pipeline-snapshot-task.md) + |
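For YAML pipelines, the export-then-consume flow described above can be sketched as follows. This is a hedged sketch: the service connection name and store endpoint are placeholders, and the task and input names (`AzureAppConfigurationExport@10`, `azureSubscription`, `AppConfigurationEndpoint`) follow the v10 task reference, so verify them there before use.

```yaml
steps:
  # Export key-values and set them as pipeline variables
  - task: AzureAppConfigurationExport@10
    inputs:
      azureSubscription: 'my-service-connection'               # placeholder service connection
      AppConfigurationEndpoint: 'https://my-store.azconfig.io' # placeholder endpoint
      KeyFilter: '*'
      Label: 'dev'
  # A later step consumes an exported key-value as an environment variable
  - powershell: echo "$env:myBuildSetting"
```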
azure-app-configuration | Azure Pipeline Import Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/azure-pipeline-import-task.md | + + Title: Import settings to App Configuration with Azure Pipelines +description: Learn how to use Azure Pipelines to import key-values to an App Configuration store ++++ Last updated : 10/29/2024++++# Import settings to App Configuration with Azure Pipelines ++The Azure App Configuration Import task imports key-values from a configuration file into your App Configuration store. Together with the export task, it enables round-trip functionality within the pipeline: you can export settings from the App Configuration store and import settings back into it. ++## Prerequisites ++- Azure subscription - [create one for free](https://azure.microsoft.com/free/) +- App Configuration store - [create one for free](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store) +- Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881) +- [Azure Pipelines agent version 2.144.0](https://github.com/microsoft/azure-pipelines-agent/releases/tag/v2.144.0) or later and [Node version 16](https://nodejs.org/en/blog/release/v16.16.0/) or later for running the task on self-hosted agents. ++## Create a service connection +++## Add role assignment ++Assign the proper App Configuration role assignments to the credentials being used within the task so that the task can access the App Configuration store. ++1. Go to your target App Configuration store. +1. In the left menu, select **Access control (IAM)**. +1. In the right pane, select **Add role assignments**. ++ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-button.png" alt-text="Screenshot shows the Add role assignments button."::: +1. For **Role**, select **App Configuration Data Owner**. 
This role allows the task to read from and write to the App Configuration store. +1. Select the service principal associated with the service connection that you created in the previous section. ++ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-data-owner.png" alt-text="Screenshot shows the Add role assignment dialog."::: +1. Select **Review + assign**. ++## Use in builds ++This section covers how to use the Azure App Configuration Import task in an Azure DevOps build pipeline. ++1. Navigate to the build pipeline page by clicking **Pipelines** > **Pipelines**. For more information about build pipelines, go to [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?tabs=tfs-2018-2). + - If you're creating a new build pipeline, on the last step of the process, on the **Review** tab, select **Show assistant** on the right side of the pipeline. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the Show assistant button for a new pipeline.](./media/new-pipeline-show-assistant.png) + - If you're using an existing build pipeline, click the **Edit** button at the top-right. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the Edit button for an existing pipeline.](./media/existing-pipeline-show-assistant.png) +1. Search for the **Azure App Configuration Import** Task. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the Add Task dialog with Azure App Configuration Import in the search box.](./media/add-azure-app-configuration-import-task.png) +1. Configure the necessary parameters for the task to import key-values from the configuration file to the App Configuration store. Explanations of the parameters are available in the **Parameters** section, and in tooltips next to each parameter. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the app configuration import task parameters.](./media/azure-app-configuration-import-parameters.png) +1. Save and queue a build. 
The build log displays any failures that occurred during the execution of the task. ++## Use in releases ++This section covers how to use the Azure App Configuration Import task in an Azure DevOps release pipeline. ++1. Navigate to the release pipeline page by selecting **Pipelines** > **Releases**. For more information about release pipelines, go to [Create your first release pipeline](/azure/devops/pipelines/release). +1. Choose an existing release pipeline. If you don't have one, select **+ New** to create a new one. +1. Select the **Edit** button in the top-right corner to edit the release pipeline. +1. From the **Tasks** dropdown, choose the **Stage** to which you want to add the task. More information about stages can be found in [Add stages, dependencies, & conditions](/azure/devops/pipelines/release/environments). + > [!div class="mx-imgBorder"] + > ![Screenshot shows the selected stage in the Tasks dropdown.](./media/pipeline-stage-tasks.png) +1. Click **+** next to the Job to which you want to add a new task. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the plus button next to the job.](./media/add-task-to-job.png) +1. In the **Add tasks** dialog, type **Azure App Configuration Import** into the search box and select it. +1. Configure the necessary parameters within the task to import your key-values from your configuration file to your App Configuration store. Explanations of the parameters are available in the **Parameters** section, and in tooltips next to each parameter. +1. Save and queue a release. The release log displays any failures encountered during the execution of the task. ++## Parameters ++The following parameters are used by the App Configuration Import task: ++- **Azure subscription**: A drop-down containing your available Azure service connections. To update and refresh your list of available Azure service connections, press the **Refresh Azure subscription** button to the right of the textbox. 
+- **App Configuration Endpoint**: A drop-down that loads your available configuration store endpoints under the selected subscription. To update and refresh your list of available configuration store endpoints, press the **Refresh App Configuration Endpoint** button to the right of the textbox. +- **Configuration File Path**: The path to your configuration file. The **Configuration File Path** parameter begins at the root of the file repository. You can browse through your build artifact to select a configuration file (`...` button to the right of the textbox). The supported file formats depend on the file content profile. For the default profile, the supported file formats are YAML, JSON, and properties. For the KVSet profile, the supported file format is JSON. +- **File Content Profile**: The configuration file's [content profile](./concept-config-file.md). Default value is **Default**. + - **Default**: Refers to the conventional configuration file formats that are directly consumable by applications. + - **Kvset**: Refers to a [file schema](https://aka.ms/latest-kvset-schema) that contains all properties of an App Configuration key-value, including key, value, label, content type, and tags. The task parameters 'Separator', 'Label', 'Content type', 'Prefix', 'Tags', and 'Depth' aren't applicable when using the Kvset profile. +- **Import Mode**: The default value is **All**. Determines the behavior when importing key-values. + - **All**: Imports all key-values in the configuration file to App Configuration. + - **Ignore-Match**: Imports only settings that have no matching key-value in App Configuration. Matching key-values are considered to be key-values with the same key, label, value, content type, and tags. +- **Dry Run**: Default value is **Unchecked**. + - **Checked**: No updates are performed to App Configuration. Instead, any updates that would have been performed in a normal run are printed to the console for review. 
+ - **Unchecked**: Performs any updates to App Configuration and doesn't print to the console. +- **Separator**: The separator that's used to flatten .json and .yml files. +- **Depth**: The depth that the .json and .yml files are flattened to. +- **Prefix**: A string appended to the beginning of each key imported to the App Configuration store. +- **Label**: A string added to each key-value as the label within the App Configuration store. +- **Content Type**: A string added to each key-value as the content type within the App Configuration store. +- **Tags**: A JSON object in the format of `{"tag1":"val1", "tag2":"val2"}`, which defines tags that are added to each key-value imported to your App Configuration store. +- **Delete key-values that are not included in the configuration file**: Default value is **Unchecked**. The behavior of this option depends on the configuration file content profile. + - **Checked**: + - **Default content profile**: Removes all key-values in the App Configuration store that match both the specified prefix and label before importing new key-values from the configuration file. + - **Kvset content profile**: Removes all key-values in the App Configuration store that aren't included in the configuration file before importing new key-values from the configuration file. + - **Unchecked**: Imports all key-values from the configuration file into the App Configuration store and leaves everything else in the App Configuration store intact. ++++## Troubleshooting ++If an unexpected error occurs, debug logs can be enabled by setting the pipeline variable `system.debug` to `true`. ++## FAQ ++**How can I upload multiple configuration files?** ++To import multiple configuration files to the App Configuration store, create multiple instances of the Azure App Configuration Import task within the same pipeline. 
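For example, in a YAML pipeline, two instances of the task can run back to back. This is a sketch with hypothetical file paths, service connection name, and endpoint; the task and input names (`AzureAppConfigurationImport@10`, `azureSubscription`, `AppConfigurationEndpoint`, `ConfigurationFile`) follow the v10 task reference and should be verified there.

```yaml
steps:
  # First instance imports application settings
  - task: AzureAppConfigurationImport@10
    inputs:
      azureSubscription: 'my-service-connection'
      AppConfigurationEndpoint: 'https://my-store.azconfig.io'
      ConfigurationFile: 'config/appsettings.json'
  # Second instance imports another file into the same store
  - task: AzureAppConfigurationImport@10
    inputs:
      azureSubscription: 'my-service-connection'
      AppConfigurationEndpoint: 'https://my-store.azconfig.io'
      ConfigurationFile: 'config/featureflags.json'
```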
++**How can I create Key Vault references or feature flags using this task?** ++Depending on the file content profile you selected, refer to the examples in [Azure App Configuration support for configuration files](./concept-config-file.md). ++**Why am I receiving a 409 error when attempting to import key-values to my configuration store?** ++A 409 Conflict error message occurs if the task tries to remove or overwrite a key-value that is locked in the App Configuration store. ++## Next step ++For a complete reference of the parameters or to use this pipeline task in YAML pipelines, refer to the following document. ++> [!div class="nextstepaction"] +> [Azure App Configuration Import Task reference](/azure/devops/pipelines/tasks/reference/azure-app-configuration-import-v10) ++To learn how to export key-values from your App Configuration store and set them as Azure pipeline variables, continue to the following document. ++> [!div class="nextstepaction"] +> [Export settings from App Configuration with Azure pipelines](./azure-pipeline-export-task.md) ++To learn how to create a snapshot in an App Configuration store, continue to the following document. ++> [!div class="nextstepaction"] +> [Create snapshots in App Configuration with Azure Pipelines](./azure-pipeline-snapshot-task.md) |
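The **Separator** and **Depth** parameters of the import task control how nested configuration files are flattened into keys. A minimal Python sketch of the idea (an illustration of the concept only, not the task's actual implementation):

```python
import json

def flatten(obj, separator=":", depth=None, prefix=""):
    """Flatten a nested JSON object into key-value pairs, joining levels
    with `separator` and stopping after `depth` levels (None = unlimited)."""
    if not isinstance(obj, dict) or depth == 0:
        # Leaf reached (or depth exhausted): keep any remaining structure as the value
        return {prefix: obj if isinstance(obj, str) else json.dumps(obj)}
    result = {}
    for key, value in obj.items():
        new_prefix = f"{prefix}{separator}{key}" if prefix else key
        result.update(flatten(value, separator,
                              None if depth is None else depth - 1, new_prefix))
    return result

config = {"Logging": {"LogLevel": {"Default": "Warning"}}}
print(flatten(config))           # {'Logging:LogLevel:Default': 'Warning'}
print(flatten(config, depth=2))  # {'Logging:LogLevel': '{"Default": "Warning"}'}
```

With the default separator `:` and unlimited depth, the nested object collapses into a single key; limiting the depth keeps the deeper structure as a JSON string value.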
azure-app-configuration | Azure Pipeline Snapshot Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/azure-pipeline-snapshot-task.md | + + Title: Create snapshots in App Configuration with Azure Pipelines +description: Learn to use Azure Pipelines to create a snapshot in an App Configuration Store ++++ Last updated : 09/09/2024++++# Create snapshots in App Configuration with Azure Pipelines ++The Azure App Configuration snapshot task is designed to create snapshots in Azure App Configuration. ++## Prerequisites ++- Azure subscription - [create one for free](https://azure.microsoft.com/free/) +- App Configuration store - [create one for free](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store) +- Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881) +- [Azure Pipelines agent version 2.144.0](https://github.com/microsoft/azure-pipelines-agent/releases/tag/v2.144.0) or later and [Node version 16](https://nodejs.org/en/blog/release/v16.16.0/) or later for running the task on self-hosted agents. ++## Create a service connection +++## Add role assignment ++Assign the proper App Configuration role assignment to the credentials being used within the task so that the task can access the App Configuration store. ++1. Go to your target App Configuration store. +1. In the left menu, select **Access control (IAM)**. +1. In the right pane, select **Add role assignment**. ++ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-button.png" alt-text="Screenshot shows the Add role assignments button."::: +1. For the **Role**, select **App Configuration Data Owner**. This role allows the task to read from and write to the App Configuration store. +1. Select the service principal associated with the service connection that you created in the previous section. 
++ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-data-owner.png" alt-text="Screenshot shows the Add role assignment dialog."::: +1. Select **Review + assign**. ++## Use in builds ++In this section, learn how to use the Azure App Configuration snapshot task in an Azure DevOps build pipeline. ++1. Navigate to the build pipeline page by clicking **Pipelines** > **Pipelines**. For more information about build pipelines, go to [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?tabs=tfs-2018-2). + - If you're creating a new build pipeline, on the last step of the process, on the **Review** tab, select **Show assistant** on the right side of the pipeline. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the Show assistant button for a new pipeline.](./media/new-pipeline-show-assistant.png) + - If you're using an existing build pipeline, click the **Edit** button at the top-right. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the Edit button for an existing pipeline.](./media/existing-pipeline-show-assistant.png) +1. Search for the **Azure App Configuration snapshot** Task. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the Add Task dialog with Azure App Configuration snapshot in the search box.](./media/add-azure-app-configuration-snapshot-task.png) +1. Configure the necessary parameters for the task to create a snapshot in an App Configuration store. Explanations of the parameters are available in the **Parameters** section below and in tooltips next to each parameter. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the app configuration snapshot task parameters.](./media/azure-app-configuration-snapshot-parameters.png) +1. Save and queue a build. The build log displays any failures that occurred during the execution of the task. 
++## Use in releases ++In this section, learn how to use the Azure App Configuration snapshot task in an Azure DevOps release pipeline. ++1. Navigate to the release pipeline page by selecting **Pipelines** > **Releases**. For more information about release pipelines, go to [Create your first pipeline](/azure/devops/pipelines/release). +1. Choose an existing release pipeline. If you don't have one, select **+ New** to create a new one. +1. Select the **Edit** button in the top-right corner to edit the release pipeline. +1. From the **Tasks** dropdown, choose the **Stage** to which you want to add the task. More information about stages can be found in [Add stages, dependencies, & conditions](/azure/devops/pipelines/release/environments). + > [!div class="mx-imgBorder"] + > ![Screenshot shows the selected stage in the Tasks dropdown.](./media/pipeline-stage-tasks.png) +1. Click **+** next to the job to which you want to add a new task. + > [!div class="mx-imgBorder"] + > ![Screenshot shows the plus button next to the job.](./media/add-task-to-job.png) +1. In the **Add tasks** dialog, type **Azure App Configuration snapshot** into the search box and select it. +1. Configure the necessary parameters within the task to create a snapshot within your App Configuration store. Explanations of the parameters are available in the **Parameters** section below, and in tooltips next to each parameter. +1. Save and queue a release. The release log displays any failures encountered during the execution of the task. ++## Parameters ++The following parameters are used by the App Configuration snapshot task: ++- **Azure subscription**: A drop-down containing your available Azure service connections. To update and refresh your list of available Azure service connections, press the **Refresh Azure subscription** button to the right of the textbox. ++- **App Configuration Endpoint**: A drop-down that loads your available configuration store endpoints under the selected subscription. 
To update and refresh your list of available configuration store endpoints, press the **Refresh App Configuration Endpoint** button to the right of the textbox. ++- **Snapshot Name**: Specify the name for the snapshot. ++- **Composition Type**: The default value is **Key**. + - **Key**: The filters are applied in order for this composition type. Each key-value in the snapshot is uniquely identified by the key only. If there are multiple key-values with the same key and multiple labels, only one key-value will be retained based on the last applicable filter. ++ - **Key-Label**: Filters will be applied and every key-value in the resulting snapshot will be uniquely identified by the key and label together. ++- **Filters**: Represents the key and label filters used to build an App Configuration snapshot. Filters must be valid JSON, for example `[{"key":"abc*", "label":"1.0.0"}]`. At least one filter must be specified, and a maximum of three filters can be specified. ++- **Retention period**: The default value is 30 days. Refers to the number of days the snapshot is retained after it's archived. Archived snapshots can be recovered during the retention period. ++- **Tags**: A JSON object in the format of `{"tag1":"val1", "tag2":"val2"}`, which defines tags that are added to each snapshot created in your App Configuration store. ++## Troubleshooting ++If an unexpected error occurs, debug logs can be enabled by setting the pipeline variable `system.debug` to `true`. ++## Next step ++For a complete reference of the parameters or to use this pipeline task in YAML pipelines, refer to the following document. ++> [!div class="nextstepaction"] +> [Azure App Configuration Snapshot Task reference](/azure/devops/pipelines/tasks/reference/azure-app-configuration-snapshot-v1) ++To learn how to export key-values from your App Configuration store and set them as Azure pipeline variables, continue to the following document. 
++> [!div class="nextstepaction"] +> [Export settings from App Configuration with Azure pipelines](./azure-pipeline-export-task.md) ++To learn how to import key-values from a configuration file into your App Configuration store, continue to the following document. ++> [!div class="nextstepaction"] +> [Import settings to App Configuration with Azure pipelines](./azure-pipeline-import-task.md) |
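The **Filters** parameter of the snapshot task must be valid JSON containing between one and three key/label filters. A small Python sketch that mirrors those constraints (illustrative validation only, not the task's actual code):

```python
import json

def validate_filters(filters_json: str) -> list:
    """Parse and sanity-check a snapshot Filters value: a JSON array of
    one to three objects, each carrying a 'key' filter (label optional)."""
    filters = json.loads(filters_json)  # raises ValueError for invalid JSON
    if not isinstance(filters, list) or not 1 <= len(filters) <= 3:
        raise ValueError("Specify between one and three filters.")
    for f in filters:
        if not isinstance(f, dict) or "key" not in f:
            raise ValueError("Each filter needs a 'key' property.")
    return filters

print(validate_filters('[{"key":"abc*", "label":"1.0.0"}]'))
# [{'key': 'abc*', 'label': '1.0.0'}]
```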
azure-app-configuration | Concept Config File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-config-file.md | Last updated 05/30/2024 Files are one of the most common ways to store configuration data. To help you start quickly, App Configuration has tools to assist you in [importing your configuration files](./howto-import-export-data.md), so you don't have to type in your data manually. This operation is a one-time data migration if you plan to manage your data in App Configuration after importing them. In some other cases, for example, where you adopt [configuration as code](./howto-best-practices.md#configuration-as-code), you may continue managing your configuration data in files and importing them as part of your CI/CD process recurrently. You may find one of these two scenarios applies to you: -- You keep the configuration file in the format you had before. This format is helpful if you want to use the file as the fallback configuration for your application or the local configuration during development. When you import the configuration file, specify how you want the data transformed to App Configuration key-values. This option is the [**default file content profile**](#file-content-profile-default) in App Configuration importing tools such as portal, Azure CLI, Azure Pipeline Push task, GitHub Actions, etc.-- You keep the configuration file in the format that contains all App Configuration key-value properties. When you import the file, you don't need to specify any transformation rules because all properties of a key-value is already in the file. This option is called [**KVSet file content profile**](#file-content-profile-kvset) in App Configuration importing tools. It's helpful if you want to manage all your App Configuration data, including regular key-values, Key Vault references, and feature flags, in one file and import them in one shot.+- You keep the configuration file in the format you had before. 
This format is helpful if you want to use the file as the fallback configuration for your application or the local configuration during development. When you import the configuration file, specify how you want the data transformed to App Configuration key-values and feature flags. This option is the [**default file content profile**](#file-content-profile-default) in App Configuration importing tools such as portal, Azure CLI, Azure Pipeline Import task, GitHub Actions, etc. +- You keep the configuration file in the format that contains all App Configuration key-value properties. When you import the file, you don't need to specify any transformation rules because all properties of a key-value are already in the file. This option is called [**KVSet file content profile**](#file-content-profile-kvset) in App Configuration importing tools. It's helpful if you want to manage all your App Configuration data, including regular key-values, Key Vault references, and feature flags, in one file and import them in one shot. -The rest of this document will discuss both file content profiles in detail and use Azure CLI as an example. The same concept applies to other App Configuration importing tools too. +The rest of this document discusses both file content profiles in detail and uses Azure CLI as an example. The same concept applies to other App Configuration importing tools too. ## File content profile: default The following table shows all the imported data in your App Configuration store. 
| Key | Value | Label | Content type | |||||-| .appconfig.featureflag/Beta | {"id":"Beta","description":"","enabled":false,"conditions":{"client_filters":[]}} | dev | application/vnd.microsoft.appconfig.ff+json;charset=utf-8 | +| .appconfig.featureflag/Beta | {"id":"Beta","description":"","enabled": false,"conditions":{"client_filters":[]}} | dev | application/vnd.microsoft.appconfig.ff+json;charset=utf-8 | | Logging:LogLevel:Default | Warning | dev | | | Database:ConnectionString | {\"uri\":\"https://\<your-vault-name\>.vault.azure.net/secrets/db-secret\"} | test | application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8 | az appconfig kv import --profile appconfig/kvset --name <your store name> --sour > [!NOTE] > The KVSet file content profile is currently supported in > - Azure CLI version 2.30.0 or later-> - [Azure App Configuration Push Task](./push-kv-devops-pipeline.md) version 3.3.0 or later +> - [Azure App Configuration Import Task](./azure-pipeline-import-task.md) version 10.0.0 or later +> - Azure portal The following table shows all the imported data in your App Configuration store. |
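The row above imports a file using the KVSet file content profile, in which every entry carries all of its key-value properties. As a rough illustration only (the field names below mirror the table shown above and are an assumption, not the normative KVSet schema), a Python sketch that builds such a file:

```python
import json

# Illustrative KVSet-style file: each entry carries its own key, value,
# label, and content type, so no transformation rules are needed at
# import time. Field names here are assumed for illustration.
kvset = {
    "items": [
        {
            "key": ".appconfig.featureflag/Beta",
            "value": json.dumps({"id": "Beta", "description": "",
                                 "enabled": False,
                                 "conditions": {"client_filters": []}}),
            "label": "dev",
            "content_type": "application/vnd.microsoft.appconfig.ff+json;charset=utf-8",
        },
        {
            "key": "Logging:LogLevel:Default",
            "value": "Warning",
            "label": "dev",
            "content_type": None,
        },
    ]
}

# Write the file that an importing tool would consume in one shot.
with open("appconfig-kvset.json", "w") as f:
    json.dump(kvset, f, indent=2)
```

A file shaped like this could then be imported in one shot with the `az appconfig kv import --profile appconfig/kvset` command quoted in the row above.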
azure-app-configuration | Howto Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md | Excessive requests to App Configuration can result in throttling or overage char ## Importing configuration data into App Configuration -App Configuration offers the option to bulk [import](./howto-import-export-data.md) your configuration settings from your current configuration files using either the Azure portal or CLI. You can also use the same options to export key-values from App Configuration, for example between related stores. If you have adopted Configuration as Code and manage your configurations in GitHub or Azure DevOps, you can set up ongoing configuration file import using [GitHub Actions](./push-kv-github-action.md) or [Azure Pipeline Push Task](./push-kv-devops-pipeline.md). +App Configuration offers the option to bulk [import](./howto-import-export-data.md) your configuration settings from your current configuration files using either the Azure portal or CLI. You can also use the same options to export key-values from App Configuration, for example between related stores. If you have adopted Configuration as Code and manage your configurations in GitHub or Azure DevOps, you can set up ongoing configuration file import using [GitHub Actions](./push-kv-github-action.md) or [Azure Pipeline Import Task](./azure-pipeline-import-task.md). ## Multi-region deployment in App Configuration A multitenant application is built on an architecture where a shared instance of Configuration as code is a practice of managing configuration files under your source control system, for example, a git repository. It gives you benefits like traceability and approval process for any configuration changes. 
If you adopt configuration as code, App Configuration has tools to assist you in [managing your configuration data in files](./concept-config-file.md) and deploying them as part of your build, release, or CI/CD process. This way, your applications can access the latest data from your App Configuration store(s). - For GitHub, you can import configuration files from your GitHub repository into your App Configuration store using [GitHub Actions](./push-kv-github-action.md)-- For Azure DevOps, you can include the [Azure App Configuration Push](push-kv-devops-pipeline.md), an Azure pipeline task, in your build or release pipelines for data synchronization. +- For Azure DevOps, you can include the [Azure App Configuration Import](azure-pipeline-import-task.md +), an Azure pipeline task, in your build or release pipelines for data synchronization. - You can also import configuration files to App Configuration using Azure CLI as part of your CI/CD system. For more information, see [az appconfig kv import](scripts/cli-import.md). This model allows you to include validation and testing steps before committing data to App Configuration. If you use multiple App Configuration stores, you can also push the configuration data to them incrementally or all at once. |
azure-app-configuration | Howto Import Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md | -This article provides a guide for importing and exporting data using either the [Azure portal](https://portal.azure.com) or the [Azure CLI](./scripts/cli-import.md). If you have adopted [Configuration as Code](./howto-best-practices.md#configuration-as-code) and manage your configurations in GitHub or Azure DevOps, you can set up ongoing configuration file import using [GitHub Actions](./push-kv-github-action.md) or use the [Azure Pipeline Push Task](./push-kv-devops-pipeline.md). +This article provides a guide for importing and exporting data using either the [Azure portal](https://portal.azure.com) or the [Azure CLI](./scripts/cli-import.md). If you have adopted [Configuration as Code](./howto-best-practices.md#configuration-as-code) and manage your configurations in GitHub or Azure DevOps, you can set up ongoing configuration file import using [GitHub Actions](./push-kv-github-action.md) or use the [Azure Pipeline Import Task](./azure-pipeline-import-task.md). ## Import data |
azure-app-configuration | Integrate Ci Cd Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-ci-cd-pipeline.md | This article explains how to use data from Azure App Configuration in a continuo ## Use App Configuration in your Azure DevOps Pipeline -If you have an Azure DevOps Pipeline, you can fetch key-values from App Configuration and set them as task variables. The Azure App Configuration DevOps extension is an add-on module that provides this functionality. [Get this module](https://go.microsoft.com/fwlink/?linkid=2091063) and refer to [Pull settings from App Configuration with Azure Pipelines](./pull-key-value-devops-pipeline.md) for instructions to use it in your Azure Pipelines. +If you have an Azure DevOps Pipeline, you can fetch key-values from App Configuration and set them as task variables. The Azure App Configuration DevOps extension is an add-on module that provides this functionality. [Get this module](https://go.microsoft.com/fwlink/?linkid=2091063) and refer to [Export settings from App Configuration with Azure Pipelines](./azure-pipeline-export-task.md) for instructions to use it in your Azure Pipelines. ## Deploy App Configuration data with your application |
azure-cache-for-redis | Cache How To Zone Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md | To create a cache, follow these steps: ### Why can't I enable zone redundancy when creating a Premium cache? -Zone redundancy is available only in Azure regions that have Availability Zones. See [Azure regions with Availability Zones](../availability-zones/az-region.md#azure-regions-with-availability-zones) for the latest list. +Zone redundancy is available only in Azure regions that have Availability Zones. See [Azure regions with Availability Zones](../reliability/availability-zones-region-support.md) for the latest list. ### Why can't I select all three zones during cache create? |
azure-cache-for-redis | Cache Troubleshoot Timeouts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-timeouts.md | Because of optimistic TCP settings in Linux, client applications hosted on Linux ### RedisSessionStateProvider retry timeout -If you're using `RedisSessionStateProvider`, ensure you set the retry timeout correctly. The `retryTimeoutInMilliseconds` value should be higher than the `operationTimeoutInMilliseconds` value. Otherwise, no retries occur. In the following example, `retryTimeoutInMilliseconds` is set to 3000. For more information, see [ASP.NET Session State Provider for Azure Cache for Redis](cache-aspnet-session-state-provider.md) and [How to use the configuration parameters of Session State Provider and Output Cache Provider](https://github.com/Azure/aspnet-redis-providers/wiki/Configuration). +If you're using `RedisSessionStateProvider`, ensure you set the retry timeout correctly. The `retryTimeoutInMilliseconds` value should be higher than the `operationTimeoutInMilliseconds` value. Otherwise, no retries occur. In the following example, `retryTimeoutInMilliseconds` is set to 3000. For more information, see [ASP.NET Session State Provider for Azure Cache for Redis](cache-aspnet-session-state-provider.md) and [ASP.NET Output Cache Provider for Azure Cache for Redis](cache-aspnet-output-cache-provider.md). ```xml <add |
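The retry rule in the row above (`retryTimeoutInMilliseconds` must be higher than `operationTimeoutInMilliseconds`, otherwise no retries occur) can be sketched with a toy timing model. This is only an illustration of the arithmetic, not the provider's actual retry implementation:

```python
def attempts_possible(retry_timeout_ms: int, operation_timeout_ms: int) -> int:
    """Toy model: each attempt consumes up to operation_timeout_ms of the
    overall retry window. When retry_timeout_ms <= operation_timeout_ms,
    the window is exhausted by the first attempt, so no retries fit."""
    if operation_timeout_ms <= 0:
        raise ValueError("operation timeout must be positive")
    elapsed, attempts = 0, 0
    while True:
        attempts += 1
        elapsed += operation_timeout_ms
        if elapsed >= retry_timeout_ms:  # window exhausted, stop retrying
            return attempts

# With the example value from the row above (retryTimeoutInMilliseconds=3000),
# a smaller per-operation timeout such as 1000 ms leaves room for retries;
# an equal or larger one does not.
with_retries = attempts_possible(3000, 1000)
no_retries = attempts_possible(1000, 1000)
```

Under this toy model, `with_retries` is greater than 1 while `no_retries` is exactly 1, which is the failure mode the guidance warns about.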
azure-cache-for-redis | Managed Redis Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/managed-redis/managed-redis-overview.md | Azure Managed Redis improves application performance by supporting common applic | [Deduplication](https://redis.io/solutions/deduplication/) | Often, you need to determine if an action already happened in a system, such as determining if a username is taken or if a customer was already sent an email. In Azure Managed Redis, bloom filters can be used to rapidly determine duplicates and prevent problems. | | [Leaderboards](../cache-web-app-cache-aside-leaderboard.md) | Redis offers simple and powerful support for developing leaderboards of all kinds using the [sorted set](https://redis.io/solutions/leaderboards/) data structure. Additionally, using [active geo-replication](managed-redis-how-to-active-geo-replication.md) can allow one leaderboard to be shared globally. | | Job and message queuing | Applications often add tasks to a queue when the operations associated with the request take time to execute. Longer running operations are queued to be processed in sequence, often by another server. This method of deferring work is called task queuing. Azure Managed Redis provides a distributed queue to enable this pattern in your application.|+| [PowerBI/Analytics Acceleration](https://techcommunity.microsoft.com/blog/analyticsonazure/how-to-use-redis-as-a-data-source-for-power-bi-with-redis-sql-odbc/3799471) | You can use the Redis ODBC driver to utilize Redis for BI, reporting, and analytics use-cases. Because Redis is typically much faster than relational databases, using Redis in this way can dramatically increase query responsiveness. | | Distributed transactions | Applications sometimes require a series of commands against a backend data-store to execute as a single atomic operation. All commands must succeed, or all must be rolled back to the initial state. 
Azure Managed Redis supports executing a batch of commands as a single [transaction](https://redis.io/topics/transactions). | ## Redis version |
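The distributed-transactions row above describes all-or-nothing semantics: every command in the batch succeeds, or the store is left in its initial state. A minimal Python sketch of that semantic (an illustration only, not the redis-py transaction API):

```python
def run_transaction(store: dict, commands) -> bool:
    """All-or-nothing batch: run every command against a scratch copy,
    and commit to the real store only if all of them succeed."""
    scratch = dict(store)
    try:
        for cmd in commands:
            cmd(scratch)
    except Exception:
        return False          # roll back: original store untouched
    store.clear()
    store.update(scratch)     # commit atomically
    return True

store = {"balance": 100}

# A batch where every command succeeds commits as a unit.
ok = run_transaction(store, [
    lambda s: s.__setitem__("balance", s["balance"] - 30),
    lambda s: s.__setitem__("audit", "debit 30"),
])

# A batch containing a failing command leaves the store unchanged.
failed = run_transaction(store, [
    lambda s: s.__setitem__("balance", s["balance"] - 999),
    lambda s: 1 / 0,  # simulated command failure
])
```

After both calls, the first batch's debit is visible but the second batch's partial change is not, which is the rollback behavior the row describes.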
azure-functions | Functions Bindings Service Bus Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md | public String pushToQueue( } ``` - In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@QueueOutput` annotation on function parameters whose value would be written to a Service Bus queue. The parameter type should be `OutputBinding<T>`, where `T` is any native Java type of a POJO. + In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@QueueOutput` annotation on function parameters whose value would be written to a Service Bus queue. The parameter type should be `OutputBinding<T>`, where `T` is any native Java type of a plan old Java object (POJO). Java functions can also write to a Service Bus topic. The following example uses the `@ServiceBusTopicOutput` annotation to describe the configuration for the output binding. To output multiple messages, return an array instead of a single object. For exa # [Model v3](#tab/nodejs-v3) -TypeScript samples are not documented for model v3. +TypeScript samples aren't documented for model v3. The following table explains the properties you can set using the attribute: | Property |Description| | | |-|**QueueName**|Name of the queue. Set only if sending queue messages, not for a topic. | +|**QueueName**|Name of the queue. Set only if sending queue messages, not for a topic. | |**TopicName**|Name of the topic. Set only if sending topic messages, not for a queue.| |**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|-|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. 
If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.| +|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that doesn't have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property isn't available because the latest version of the Service Bus SDK doesn't support manage operations.| Here's an example that shows the attribute applied to the return value of the function: public static string Run([HttpTrigger] dynamic input, ILogger log) For a complete example, see [Example](#example). -You can use the `ServiceBusAccount` attribute to specify the Service Bus account to use at class, method, or parameter level. For more information, see [Attributes](functions-bindings-service-bus-trigger.md#attributes) in the trigger reference. +You can use the `ServiceBusAccount` attribute to specify the Service Bus account to use at class, method, or parameter level. For more information, see [Attributes](functions-bindings-service-bus-trigger.md#attributes) in the trigger reference. For Python v2 functions defined using a decorator, the following properties on t | Property | Description | |-|--| | `arg_name` | The name of the variable that represents the queue or topic message in function code. |-| `queue_name` | Name of the queue. Set only if sending queue messages, not for a topic. | +| `queue_name` | Name of the queue. 
Set only if sending queue messages, not for a topic. | | `topic_name` | Name of the topic. Set only if sending topic messages, not for a queue. | | `connection` | The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections). | The following table explains the binding configuration properties that you set i | Property | Description | |||-|**type** |Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.| -|**direction** | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. | +|**type** |Must be set to `serviceBus`. This property is set automatically when you create the trigger in the Azure portal.| +|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. | |**name** | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. |-|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.| +|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.| |**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.| |**connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).| The following table explains the binding configuration properties that you set i |function.json property | Description| |||-|**type** |Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.| -|**direction** | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. | +|**type** |Must be set to `serviceBus`. 
This property is set automatically when you create the trigger in the Azure portal.| +|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. | |**name** | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. |-|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.| +|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.| |**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.| |**connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|-|**accessRights** (v1 only)|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.| +|**accessRights** (v1 only)|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that doesn't have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. 
In Azure Functions version 2.x and higher, this property isn't available because the latest version of the Service Bus SDK doesn't support manage operations.| [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] See the [Example section](#example) for complete examples. ::: zone pivot="programming-language-csharp" -The following output parameter types are supported by all C# modalities and extension versions: +All C# modalities and extension versions support the following output parameter types: | Type | Description | | | | The following output parameter types are supported by all C# modalities and exte | **byte[]** | Use for writing binary data messages. When the parameter value is null when the function exits, Functions doesn't create a message. | | **Object** | When a message contains JSON, Functions serializes the object into a JSON message payload. When the parameter value is null when the function exits, Functions creates a message with a null object.| -Messaging-specific parameter types contain additional message metadata. The specific types supported by the output binding depend on the Functions runtime version, the extension package version, and the C# modality used. +Messaging-specific parameter types contain extra message metadata and aren't compatible with JSON serialization. As a result, it isn't possible to use `ServiceBusMessage` with the output binding in the isolated model. The specific types supported by the output binding depend on the Functions runtime version, the extension package version, and the C# modality used. # [Extension v5.x](#tab/extensionv5/in-process) Use the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmes # [Functions 2.x and higher](#tab/functionsv2/isolated-process) -Earlier versions of this extension in the isolated worker process only support binding to messaging-specific types. 
Additional options are available to **Extension 5.x and higher** +Earlier versions of this extension in the isolated worker process only support binding to messaging-specific types. More options are available to **Extension 5.x and higher** # [Functions 1.x](#tab/functionsv1/isolated-process) |
azure-functions | Functions Consumption Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-consumption-costs.md | Azure Functions currently offers these different hosting options for your functi | [**Flex Consumption plan**](flex-consumption-plan.md)| You pay for execution time on the instances on which your functions are running, plus any _always ready_ instances. Instances are dynamically added and removed based on the number of incoming events. This is the recommended dynamic scale plan, which also supports virtual network integration. | | [**Premium**](functions-premium-plan.md) | Provides you with the same features and scaling mechanism as the Consumption plan, but with enhanced performance and virtual network integration. Cost is based on your chosen pricing tier. To learn more, see [Azure Functions Premium plan](functions-premium-plan.md). | | [**Dedicated (App Service)**](dedicated-plan.md) <br/>(basic tier or higher) | When you need to run in dedicated VMs or in isolation, use custom images, or want to use your excess App Service plan capacity. Uses [regular App Service plan billing](https://azure.microsoft.com/pricing/details/app-service/). Cost is based on your chosen pricing tier.|-| [**Container Apps**](functions-container-apps-hosting.md) | Create and deploy containerized function apps in a fully managed environment hosted by Azure Container Apps, which lets you rRun your functions alongside other microservices, APIs, websites, and workflows as container-hosted programs. | +| [**Container Apps**](functions-container-apps-hosting.md) | Create and deploy containerized function apps in a fully managed environment hosted by Azure Container Apps, which lets you run your functions alongside other microservices, APIs, websites, and workflows as container-hosted programs. | | [**Consumption**](consumption-plan.md) | You're only charged for the time that your function app runs. 
This plan includes a [free grant][pricing page] on a per subscription basis.| You should always choose the option that best supports the feature, performance, and cost requirements for your function executions. To learn more, see [Azure Functions scale and hosting](functions-scale.md). |
azure-functions | Functions Cli Create Function App Connect To Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md | az group delete --name $resourceGroup | [az functionapp create](/cli/azure/functionapp#az-functionapp-create) | Creates a function app in the serverless [Consumption plan](../consumption-plan.md). | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Create an Azure Cosmos DB database. | | [az cosmosdb show](/cli/azure/cosmosdb#az-cosmosdb-show)| Gets the database account connection. |-| [az cosmosdb list-keys](/cli/azure/cosmosdb#az-cosmosdb-list-keys)| Gets the keys for the database. | +| [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list)| Gets the keys for the database. | | [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-set) | Sets the connection string as an app setting in the function app. | ## Next steps |
azure-maps | Power Bi Visual Add Reference Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md | Optionally, select the **Use custom colors** toggle switch to toggle On/Off cust #### Semantic model -| Datapoint | Country | City | Office name | +| Datapoint | Country/region | City | Office name | |-||-|-| | Datapoint_1 | US | New York | Office C | | Datapoint_1 | US | Seattle | Office A | |
azure-netapp-files | Use Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md | Latency is subject to availability zone latency for within availability zone acc ## Azure regions with availability zones -For a list of regions that currently support availability zones, see [Azure regions with availability zone support](../reliability/availability-zones-service-support.md). +For a list of regions that currently support availability zones, see [Azure regions with availability zone support](../reliability/availability-zones-region-support.md). ## Next steps |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t With the announcement of cool access's general availability, you can now enable cool access for volumes in Premium and Ultra service level capacity pools, in addition to volumes in Standard service levels capacity pools. With cool access, you can transparently store data in a more cost effective manner on Azure storage accounts based on the data's access pattern. - The cool access feature provides the ability to configure a capacity pool with cool access, that moves cold (infrequently accessed) data transparently to Azure storage account to help you reduce the total cost of storage. There's a difference in data access latency as data blocks might be tiered to Azure storage account. The cool access feature provides options for the "coolness period" to optimize the days in which infrequently accessed data moves to cool tier and network transfer cost, based on your workload and read/write patterns. The "coolness period" feature is provided at the volume level. + The cool access feature provides the ability to configure a capacity pool with cool access, which moves cold (infrequently accessed) data transparently to Azure storage account to help you reduce the total cost of storage. There's a difference in data access latency as data blocks might be tiered to Azure storage account. The cool access feature provides options for the "coolness period" to optimize the days in which infrequently accessed data moves to cool tier and network transfer cost, based on your workload and read/write patterns. The "coolness period" feature is provided at the volume level. In a cross-region or cross-zone replication setting, cool access can now be configured for destination only volumes to ensure data protection. 
This capability provides cost savings without any latency impact on source volumes. Azure NetApp Files is updated regularly. This article provides a summary about t Cross-zone replication allows you to replicate your Azure NetApp Files volumes asynchronously from one Azure availability zone (AZ) to another within the same region. Using technology similar to the cross-region replication feature and Azure NetApp Files availability zone volume placement feature, cross-zone replication replicates data in-region across different zones; only changed blocks are sent over the network in a compressed, efficient format. It helps you protect your data from unforeseeable zone failures without the need for host-based data replication. This feature minimizes the amount of data required to replicate across the zones, limiting data transfers required and shortens the replication time so you can achieve a smaller Restore Point Objective (RPO). Cross-zone replication doesn't involve any network transfer costs and is highly cost-effective. - Cross-zone replication is available in all [AZ-enabled regions](../reliability/availability-zones-service-support.md) with [Azure NetApp Files presence](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=all&rar=true). + Cross-zone replication is available in all [regions with availability zones](../reliability/availability-zones-region-support.md) and with [Azure NetApp Files presence](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=all&rar=true). * [Transition a volume to customer-managed keys](configure-customer-managed-keys.md#transition) (Preview) |
azure-resource-manager | Bicep Functions Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md | resource location_lock 'Microsoft.Authorization/policyAssignments@2024-04-01' = `pickZones(providerNamespace, resourceType, location, [numberOfZones], [offset])` -Determines whether a resource type supports zones for a region. This function **only supports zonal resources**. Zone redundant services return an empty array. For more information, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md). +Determines whether a resource type supports zones for a region. This function **only supports zonal resources**. Zone redundant services return an empty array. For more information, see [Azure services that support availability zones](../../reliability/availability-zones-service-support.md). Namespace: [az](bicep-functions.md#namespaces-for-functions). When the resource type or region doesn't support zones, an empty array is return ### Remarks -There are different categories for Azure Availability Zones - zonal and zone-redundant. The `pickZones` function can be used to return an availability zone for a zonal resource. For zone redundant services (ZRS), the function returns an empty array. Zonal resources typically have a `zones` property at the top level of the resource definition. To determine the category of support for availability zones, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md). +There are different categories for Azure Availability Zones - zonal and zone-redundant. The `pickZones` function can be used to return an availability zone for a zonal resource. For zone redundant services (ZRS), the function returns an empty array. Zonal resources typically have a `zones` property at the top level of the resource definition. 
To determine the category of support for availability zones, see [Azure services that support availability zones](../../reliability/availability-zones-service-support.md). To determine if a given Azure region or location supports availability zones, call the `pickZones` function with a zonal resource type, such as `Microsoft.Network/publicIPAddresses`. If the response isn't empty, the region supports availability zones. |
azure-resource-manager | Operator Null Forgiving | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/operator-null-forgiving.md | The following example fails the design time validation: ```bicep param inputString string -output outString string = first(skip(split(input, '/'), 1)) +output outString string = first(skip(split(inputString, '/'), 1)) ``` The warning message is: To solve the problem, use the null-forgiving operator: ```bicep param inputString string -output outString string = first(skip(split(input, '/'), 1))! +output outString string = first(skip(split(inputString, '/'), 1))! ``` ## Next steps |
azure-resource-manager | Template Functions Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md | The next example shows a `list` function that takes a parameter. In this case, t `pickZones(providerNamespace, resourceType, location, [numberOfZones], [offset])` -Determines whether a resource type supports zones for the specified location or region. This function **only supports zonal resources**. Zone redundant services return an empty array. For more information, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md). +Determines whether a resource type supports zones for the specified location or region. This function **only supports zonal resources**. Zone redundant services return an empty array. For more information, see [Azure services that support availability zones](../../reliability/availability-zones-service-support.md). In Bicep, use the [pickZones](../bicep/bicep-functions-resource.md#pickzones) function. When the resource type or region doesn't support zones, an empty array is return ### Remarks -There are different categories for Azure Availability Zones - zonal and zone-redundant. The `pickZones` function can be used to return an availability zone for a zonal resource. For zone redundant services (ZRS), the function returns an empty array. Zonal resources typically have a `zones` property at the top level of the resource definition. To determine the category of support for availability zones, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md). +There are different categories for Azure Availability Zones - zonal and zone-redundant. The `pickZones` function can be used to return an availability zone for a zonal resource. For zone redundant services (ZRS), the function returns an empty array. Zonal resources typically have a `zones` property at the top level of the resource definition. 
To determine the category of support for availability zones, see [Azure services that support availability zones](../../reliability/availability-zones-service-support.md). To determine if a given Azure region or location supports availability zones, call the `pickZones` function with a zonal resource type, such as `Microsoft.Network/publicIPAddresses`. If the response isn't empty, the region supports availability zones. In the templates with [symbolic names](./resource-declaration.md#use-symbolic-na Returns an object representing a resource's runtime state. The output and behavior of the `reference` function relies heavily on how each resource provider (RP) implements its PUT and GET responses. To return an array of objects representing a resource collection's runtime states, see [references](#references). -Bicep provide the reference function, but in most cases, the reference function isn't required. It's recommended to use the symbolic name for the resource instead. See [reference](../bicep/bicep-functions-resource.md#reference). +Bicep provides the reference function, but in most cases, the reference function isn't required. It's recommended to use the symbolic name for the resource instead. See [reference](../bicep/bicep-functions-resource.md#reference). ### Parameters |
azure-signalr | Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/availability-zones.md | Azure SignalR Service uses [Azure availability zones](../availability-zones/az-o ## Zone redundancy -Zone-enabled Azure regions (not all [regions support availability zones](../availability-zones/az-region.md)) have a minimum of three availability zones. A zone is one or more datacenters, each with its own independent power and network connections. All the zones in a region are connected by a dedicated low-latency regional network. If a zone fails, Azure SignalR Service traffic running on the affected zone is routed to other zones in the region. +Zone-enabled Azure regions (not all [regions support availability zones](../reliability/availability-zones-region-support.md)) have a minimum of three availability zones. A zone is one or more datacenters, each with its own independent power and network connections. All the zones in a region are connected by a dedicated low-latency regional network. If a zone fails, Azure SignalR Service traffic running on the affected zone is routed to other zones in the region. Azure SignalR Service uses availability zones in a *zone-redundant* manner. Zone redundancy means the service isn't constrained to run in a specific zone. Instead, total service is evenly distributed across multiple zones in a region. Zone redundancy reduces the potential for data loss and service interruption if one of the zones fails. ## Next steps -* Learn more about [regions that support availability zones](../availability-zones/az-region.md). +* Learn more about [regions that support availability zones](../reliability/availability-zones-region-support.md). * Learn more about designing for [reliability](/azure/architecture/framework/resiliency/app-design) in Azure. |
azure-vmware | Azure Vmware Solution Horizon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-horizon.md | To understand the Azure virtual machine sizes that are required for the Horizon ## References [System Requirements For Horizon Agent for Linux](https://docs.vmware.com/en/VMware-Horizon/2012/linux-desktops-setup/GUID-E268BDBF-1D89-492B-8563-88936FD6607A.html)---## Next steps -To learn more about VMware Horizon on Azure VMware Solution, read the [VMware Horizon FAQ](https://www.vmware.com/docs/vmw-horizon-faqs). |
azure-vmware | Azure Vmware Solution Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md | description: This article provides details about the known issues of Azure VMwar Previously updated : 9/18/2024 Last updated : 11/20/2024 # Known issues: Azure VMware Solution Refer to the table to find details about resolution dates or possible workaround [VMSA-2024-0020](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/25047) VMware NSX command injection, local privilege escalation & content spoofing vulnerability| October 2024 | The vulnerability mentioned in the Broadcom document is not applicable to Azure VMware Solution, as the attack vector mentioned does not apply. | N/A | | New Stretched Clusters private cloud deploys with vSphere 7, not vSphere 8. | September 2024 | Stretched Clusters is waiting for a Hotfix to be deployed, which will resolve this issue. | Planned November 2024 | | New Standard private cloud deploys with vSphere 7, not vSphere 8 in Australia East region (Pods 4 and 5). | October 2024 | Pods 4 and 5 in Australia East are waiting for a Hotfix to be deployed, which will resolve this issue. | Planned November 2024 |+| vCenter Server vpxd crashes when using special characters in network names with VMware HCX. For more information, see [vpxd crashes with duplicate key value in "vpx_nw_assignment" when using HCX-IX for migrations (323283)](https://knowledge.broadcom.com/external/article?articleNumber=323283). | November 2024 | Avoid using special characters in your Azure VMware Solution network names. | November 2024 | In this article, you learned about the current known issues with the Azure VMware Solution. |
azure-vmware | Migrate Sql Server Always On Availability Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-always-on-availability-group.md | For details about configuring and managing the quorum, see [Failover Clustering - [Microsoft SQL Server 2019 Documentation](/sql/sql-server/) - [Microsoft SQL Server 2022 Documentation](/sql/sql-server/) - [Windows Server Technical Documentation](/windows-server/) -- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)+- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/docs/architecting-mssql-server-for-ha-on-vmware-vsphere-platform-final-0) - [VMware KB 1002951 – Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951) - [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf) - [Architecting Microsoft SQL Server on VMware vSphere – Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf) |
azure-vmware | Migrate Sql Server Failover Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-failover-cluster.md | Check the connectivity to SQL Server from other systems and applications in your - [Microsoft SQL Server 2019 Documentation](/sql/sql-server/?view=sql-server-ver15&preserve-view=true) - [Microsoft SQL Server 2022 Documentation](/sql/sql-server/?view=sql-server-ver16&preserve-view=true) - [Windows Server Technical Documentation](/windows-server/)-- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)+- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/docs/architecting-mssql-server-for-ha-on-vmware-vsphere-platform-final-0) - [VMware KB 1002951 – Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951) - [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf) - [Architecting Microsoft SQL Server on VMware vSphere – Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf) |
azure-vmware | Migrate Sql Server Standalone Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-standalone-cluster.md | Check the connectivity to SQL Server from other systems and applications in your - [Microsoft SQL Server 2019 Documentation](/sql/sql-server/?view=sql-server-ver15&preserve-view=true) - [Microsoft SQL Server 2022 Documentation](/sql/sql-server/?view=sql-server-ver16&preserve-view=true) - [Windows Server Technical Documentation](/windows-server/)-- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)+- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/docs/architecting-mssql-server-for-ha-on-vmware-vsphere-platform-final-0) - [VMware KB 1002951 – Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951) - [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf) - [Architecting Microsoft SQL Server on VMware vSphere – Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf) |
azure-web-pubsub | Concept Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-availability-zones.md | Azure Web PubSub Service uses [Azure availability zones](../availability-zones/a ## Zone redundancy -Zone-enabled Azure regions (not all [regions support availability zones](../availability-zones/az-region.md)) have a minimum of three availability zones. A zone is one or more datacenters, each with its own independent power and network connections. All the zones in a region are connected by a dedicated low-latency regional network. If a zone fails, Azure Web PubSub Service traffic running on the affected zone is routed to other zones in the region. +Zone-enabled Azure regions (not all [regions support availability zones](../reliability/availability-zones-region-support.md)) have a minimum of three availability zones. A zone is one or more datacenters, each with its own independent power and network connections. All the zones in a region are connected by a dedicated low-latency regional network. If a zone fails, Azure Web PubSub Service traffic running on the affected zone is routed to other zones in the region. Azure Web PubSub Service uses availability zones in a *zone-redundant* manner. Zone redundancy means the service isn't constrained to run in a specific zone. Instead, total service is evenly distributed across multiple zones in a region. Zone redundancy reduces the potential for data loss and service interruption if one of the zones fails. ## Next steps -* Learn more about [regions that support availability zones](../availability-zones/az-region.md). +* Learn more about [regions that support availability zones](../reliability/availability-zones-region-support.md). * Learn more about designing for [reliability](/azure/architecture/framework/resiliency/app-design) in Azure. |
azure-web-pubsub | Howto Develop Event Listener | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-event-listener.md | Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity ## Test your configuration with live demo -1. Open this [Event Hubs Consumer Client](https://awpseventlistenerdemo.blob.core.windows.net/eventhub-consumer/index.html) web app, input the Event Hubs connection string to connect to an event hub as a consumer. If you get the Event Hubs connection string from an Event Hubs namespace resource instead of an event hub instance, then you need to specify the event hub name. This event hub consumer client is connected with the mode that only reads new events; the events published before aren't seen here. You can change the consumer client connection mode to read all the available events in the production environment. +1. Open this [Event Hubs Consumer Client](https://awpseventlistenerdemo.z13.web.core.windows.net/eventhub-consumer/index.html) web app, input the Event Hubs connection string to connect to an event hub as a consumer. If you get the Event Hubs connection string from an Event Hubs namespace resource instead of an event hub instance, then you need to specify the event hub name. This event hub consumer client is connected with the mode that only reads new events; the events published before aren't seen here. You can change the consumer client connection mode to read all the available events in the production environment. -1. Use this [WebSocket Client](https://awpseventlistenerdemo.blob.core.windows.net/webpubsub-client/websocket-client.html) web app to generate client events. If you've configured to send system event `connected` to that event hub, you should be able to see a printed `connected` event in the Event Hubs consumer client after connecting to Web PubSub service successfully.
You can also generate a user event with the app. +1. Use this [WebSocket Client](https://awpseventlistenerdemo.z13.web.core.windows.net/webpubsub-client/websocket-client.html) web app to generate client events. If you've configured to send system event `connected` to that event hub, you should be able to see a printed `connected` event in the Event Hubs consumer client after connecting to Web PubSub service successfully. You can also generate a user event with the app. :::image type="content" source="media/howto-develop-event-listener/eventhub-consumer-connected-event.png" alt-text="Screenshot of a printed connected event in the Event Hubs consumer client app."::: :::image type="content" source="media/howto-develop-event-listener/web-pubsub-client-specify-event-name.png" alt-text="Screenshot showing the area of the WebSocket client app to generate a user event."::: |
backup | Azure File Share Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-backup-overview.md | Title: About Azure File share backup description: Learn how to back up Azure file shares in the Recovery Services vault Previously updated : 09/09/2024 Last updated : 11/20/2024 - engagement-fy23 The following diagram explains the lifecycle of the lease acquired by Azure Back :::image type="content" source="./media/azure-file-share-backup-overview/backup-lease-lifecycle-diagram.png" alt-text="Diagram explaining the lifecycle of the lease acquired by Azure Backup." border="false"::: +## How Cross Subscription Backup for Azure File share (preview) works ++Cross Subscription Backup (CSB) for Azure File share (preview) enables you to back up file shares across subscriptions. This feature is useful when you want to centralize backup management for file shares across different subscriptions. You can back up file shares from a source subscription to a Recovery Services vault in a target subscription. ++Learn about the [additional prerequisites](backup-azure-files.md#prerequisites) and [steps to configure Cross Subscription Backup for Azure File share (preview)](backup-azure-files.md#configure-the-backup). For information on the supported regions for Cross Subscription Backup, see the [Azure File share backup support matrix](azure-file-share-support-matrix.md#supported-regions-for-cross-subscription-backup-preview). + ## Next steps * [Back up Azure File shares](backup-afs.md). |
backup | Azure File Share Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md | Title: Support Matrix for Azure file share backup by using Azure Backup description: Provides a summary of support settings and limitations when backing up Azure file shares. Previously updated : 09/09/2024 Last updated : 11/20/2024 Vaulted backup for Azure Files (preview) is available in West Central US, Southe +### Supported regions for Cross Subscription Backup (preview) ++Cross Subscription Backup (CSB) for Azure File share (preview) is currently available in the following regions: East Asia, Southeast Asia, UK South, UK West, Central India. + ## Supported storage accounts **Choose a backup tier**: |
backup | Backup Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-files.md | Title: Back up Azure File shares in the Azure portal description: Learn how to use the Azure portal to back up Azure File shares in the Recovery Services vault Previously updated : 07/29/2024 Last updated : 11/20/2024 Azure File share backup is a native, cloud based backup solution that protects y ## Prerequisites -* Ensure that the file share is present in one of the supported storage account types. Review the [support matrix](azure-file-share-support-matrix.md). +* Ensure the file share is present in one of the supported storage account types. Review the [support matrix](azure-file-share-support-matrix.md). * Identify or create a [Recovery Services vault](#create-a-recovery-services-vault) in the same region and subscription as the storage account that hosts the file share. * In case you have restricted access to your storage account, check the firewall settings of the account to ensure that the exception "Allow Azure services on the trusted services list to access this storage account" is granted. You can refer to [this](../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions) link for the steps to grant an exception.+>[!Important] +>To perform [Cross Subscription Backup (CSB) for protecting Azure File share (preview)](azure-file-share-backup-overview.md#how-cross-subscription-backup-for-azure-file-share-preview-works) in another subscription, ensure you register `Microsoft.RecoveryServices` in the **subscription of the file share** in addition to the above prerequisites. Learn about the [supported regions for Cross Subscription Backup (preview)](azure-file-share-support-matrix.md#supported-regions-for-cross-subscription-backup-preview). 
+ [!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)] ## Configure the backup -You can configure *snapshot backup* and *vaulted backup (preview)* for Azure File share from *Backup center* or *File share blade*. +You can configure *snapshot backup* and *vaulted backup (preview)* for Azure File share from the *Recovery Services vault* or *File share blade*. **Choose an entry point** -# [Backup center](#tab/backup-center) +# [Recovery Services vault](#tab/recovery-services-vault) -To configure backup for multiple file shares from the Backup center, follow these steps: +To configure backup for multiple file shares from the Recovery Services vault, follow these steps: -1. In the [Azure portal](https://portal.azure.com/), go to **Backup center** and select **+Backup**. +1. In the [Azure portal](https://portal.azure.com/), go to the **Recovery Services vault** and select **+Backup**. - :::image type="content" source="./media/backup-afs/backup-center-configure-inline.png" alt-text="Screenshot showing to configure Backup for Azure File." lightbox="./media/backup-afs/backup-center-configure-expanded.png"::: + :::image type="content" source="./media/backup-afs/azure-file-configure-backup.png" alt-text="Screenshot showing to configure Backup for Azure File." lightbox="./media/backup-afs/azure-file-configure-backup.png"::: 1. On the **Start: Configure Backup** blade, select **Azure Files (Azure Storage)** as the datasource type, select the vault that you want to protect the file shares with, and then select **Continue**. :::image type="content" source="./media/backup-afs/azure-file-share-select-vault.png" alt-text="Screenshot showing to select Azure Files."::: -1. Click **Select** to select the storage account that contains the file shares to be backed-up. +1. Click **Select** to select the storage account that contains the file shares to be backed up. 
- The **Select storage account** blade opens on the right, which lists a set of discovered supported storage accounts. They're either associated with this vault or present in the same region as the vault, but not yet associated to any Recovery Services vault. + The **Select storage account** blade opens on the right, which lists a set of discovered supported storage accounts. They're either associated with this vault or present in the same region as the vault, but not yet associated with any Recovery Services vault. :::image type="content" source="./media/backup-azure-files/azure-file-share-select-storage-account.png" alt-text="Screenshot showing to select a storage account." lightbox="./media/backup-azure-files/azure-file-share-select-storage-account.png"::: -1. On the **Select storage account** blade, from the list of discovered storage accounts, select an account, and select **OK**. +1. On the **Select storage account** blade, by default it lists the storage accounts from the current subscription. Select an account, and select **OK**. ++ If you want to configure the backup operation with a storage account in a different subscription ([Cross Subscription Backup - preview](azure-file-share-backup-overview.md#how-cross-subscription-backup-for-azure-file-share-preview-works)), choose the other subscription from the **Subscription** filter. The storage accounts from the selected subscription appear. :::image type="content" source="./media/backup-azure-files/azure-file-share-confirm-storage-account.png" alt-text="Screenshot showing to select one of the discovered storage accounts." lightbox="./media/backup-azure-files/azure-file-share-confirm-storage-account.png"::: To configure backup for multiple file shares from the Backup center, follow thes - **Snapshot**: Enables only snapshot-based backups that are stored locally and can only provide protection in case of accidental deletions. - **Vault-Standard (Preview)**: Provides comprehensive data protection. - 1.
Configure the *backup schedule* as per the requirement. You can configure up to *six backups* a day. The snapshots are taken as per the schedule defined in the policy. In case of vaulted backup, the data from the last snapshot of the day is transferred to the vault. + 1. Configure the *backup schedule* as per the requirement. You can configure up to *six backups* per day. The snapshots are taken as per the schedule defined in the policy. In case of vaulted backup, the data from the last snapshot of the day is transferred to the vault. 1. Configure the *Snapshot retention* and *Vault retention (preview)* duration to determine the expiry date of the recovery points. The following steps explain how you can configure backup for individual file sha - **Snapshot**: Enables only snapshot-based backups that are stored locally and can only provide protection in case of accidental deletions. - **Vault-Standard (Preview)**: Provides comprehensive data protection. - 1. Configure the *backup schedule* as per the requirement. You can configure up to *six backups* a day. The snapshots are taken as per the schedule defined in the policy. In case of vaulted backup, the data from the last snapshot of the day is transferred to the vault. + 1. Configure the *backup schedule* as per the requirement. You can configure up to *six backups* per day. The snapshots are taken as per the schedule defined in the policy. In case of vaulted backup, the data from the last snapshot of the day is transferred to the vault. 1. Configure the *Snapshot retention* and *Vault retention (preview)* duration to determine the expiry date of the recovery points. Occasionally, you might want to generate a backup snapshot, or recovery point, o **Choose an entry point** -# [Backup center](#tab/backup-center) +# [Recovery Services vault](#tab/recovery-services-vault) To run an on-demand backup, follow these steps: -1. Go to **Backup center** and select **Backup Instances** from the menu. 
-- Filter for **Azure Files (Azure Storage)** as the datasource type. +1. Go to the **Recovery Services vault** and select **Backup items** from the menu. - :::image type="content" source="./media/backup-afs/azure-file-share-backup-instances-inline.png" alt-text="Screenshot showing to select Backup instances." lightbox="./media/backup-afs/azure-file-share-backup-instances-expanded.png"::: +1. On the **Backup items** blade, select the **Backup Management Type** as **Azure Storage (Azure Files)**. 1. Select the item for which you want to run an on-demand backup job. To run an on-demand backup, follow these steps: 1. Select **OK** to confirm the on-demand backup job that runs. -1. Monitor the portal notifications to keep a track of backup job run completion. +1. Monitor the portal notifications to keep track of backup job run completion. - To monitor the job progress in the **Backup center** dashboard, select **Backup center** -> **Backup Jobs** -> **In progress**. + To monitor the job progress in the **Recovery Services vault** dashboard, go to **Recovery Services vault** > **Backup Jobs** > **In progress**. # [File share blade](#tab/file-share-pane) |
backup | Backup Azure Immutable Vault Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-concept.md | description: This article explains about the concept of Immutable vault for Azur Previously updated : 11/11/2024 Last updated : 11/20/2024 -## Before you start +## Supported scenarios for WORM storage -- Use of WORM storage for immutable vaults in locked state is currently in GA for Recovery Services Vaults in the following regions: West Central US, West Europe, East US, North Europe, Australia East.+- Use of WORM storage for immutable vaults in locked state is currently in GA for Recovery Services Vaults in the following regions: Australia Central 2, Switzerland West, South Africa West, Korea Central, Germany North, Korea South, Spain Central. - Use of WORM storage for immutable vaults in locked state is applicable for the following workloads: Azure Virtual machines, SQL in Azure VM, SAP HANA in Azure VM, Azure Backup Server, Azure Backup Agent, DPM.++## Before you start + - Immutable vault is available in all Azure public and US Government regions. - Immutable vault is supported for Recovery Services vaults and Backup vaults. - Enabling Immutable vault blocks you from performing specific operations on the vault and its protected items. See the [restricted operations](#restricted-operations). |
backup | Backup Azure Private Endpoints Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md | The private IP addresses for the FQDNs can be found in **DNS configuration** pan The following diagram shows how the resolution works when using a private DNS zone to resolve these private service FQDNs. The workload extension running on Azure VM requires connection to at least two storage account endpoints: the first is used as a communication channel (via queue messages) and the second for storing backup data. The MARS agent requires access to at least one storage account endpoint that is used for storing backup data. In addition to the Azure Backup cloud services, the workload extension and agent The following diagram shows how the name resolution works for storage accounts using a private DNS zone. The following diagram shows how you can do Cross Region Restore over Private Endpoint by replicating the Private Endpoint in a secondary region. Learn [how to do Cross Region Restore to a private endpoint enabled vault](backup-azure-private-endpoints-configure-manage.md#cross-region-restore-to-a-private-endpoint-enabled-vault). |
backup | Blob Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md | Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 11/11/2024 Last updated : 11/20/2024 Operational backup of blobs uses blob point-in-time restore, blob versioning, so - Operational backup supports block blobs in standard general-purpose v2 storage accounts only. Storage accounts with hierarchical namespace enabled (that is, ADLS Gen2 accounts) aren't supported. <br><br> Also, any page blobs, append blobs, and premium blobs in your storage account won't be restored and only block blobs will be restored. - Blob backup is also supported when the storage account has private endpoints.-- The backup operation isn't supported for blobs that are uploaded by using [Data Lake Storage APIs](/rest/api/storageservices/data-lake-storage-gen2). **Other limitations**: Operational backup of blobs uses blob point-in-time restore, blob versioning, so - Currently, you can perform only *one backup* per day (that includes scheduled and on-demand backups). Backup fails if you attempt to perform more than one backup operation a day. - If you stop protection (vaulted backup) on a storage account, it doesn't delete the object replication policy created on the storage account. In these scenarios, you need to manually delete the *OR policies*. - Cool and archived blobs are currently not supported.-+- The backup operation isn't supported for blobs that are uploaded by using [Data Lake Storage APIs](/rest/api/storageservices/data-lake-storage-gen2). ## Next steps |
backup | Private Endpoints Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md | Title: Private endpoints overview description: Understand the use of private endpoints for Azure Backup and the scenarios where using private endpoints helps maintain the security of your resources. Previously updated : 10/01/2024 Last updated : 11/20/2024 If you've configured a DNS proxy server, using third-party proxy servers or fir The following example shows Azure firewall used as DNS proxy to redirect the domain name queries for Recovery Services vault, blob, queues and Microsoft Entra ID to *168.63.129.16*. For more information, see [Creating and using private endpoints](private-endpoints.md). The private IP addresses for the FQDNs can be found in the private endpoint blad The following diagram shows how the resolution works when using a private DNS zone to resolve these private service FQDNs. The workload extension running on Azure VM requires connection to at least two storage accounts - the first one is used as communication channel (via queue messages) and second one for storing backup data. The MARS agent requires access to one storage account used for storing backup data. As a pre-requisite, Recovery Services vault requires permissions for creating ad The following diagram shows how the name resolution works for storage accounts using a private DNS zone. ## Next steps |
backup | Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints.md | Title: Create and use private endpoints for Azure Backup description: Understand the process to creating private endpoints for Azure Backup where using private endpoints helps maintain the security of your resources. Previously updated : 04/16/2024 Last updated : 11/20/2024 To configure a proxy server for Azure VM or on-premises machine, follow these st The following diagram shows a setup (while using the Azure Private DNS zones) with a proxy server, whose VNet is linked to a private DNS zone with required DNS entries. The proxy server can also have its own custom DNS server, and the above domains can be conditionally forwarded to 168.63.129.16. If you're using a custom DNS server/host file for DNS resolution, see the sections on [managing DNS entries](#manage-dns-records) and [configuring protection](#configure-backup). ### Create DNS entries when the DNS server/DNS zone is present in another subscription |
backup | Secure By Default | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/secure-by-default.md | Title: Secure by Default with Azure Backup (Preview) description: Learn how to Secure by Default with Azure Backup (Preview). Previously updated : 11/11/2024 Last updated : 11/20/2024 Secure by default with soft delete for Azure Backup enables you to recover your *Soft delete* and *Enhanced Soft delete* are Generally available for Recovery Services vaults for a while; with enabling soft delete at vault level, we're now providing secure by default promise for all customers where all the backup data will be recoverable by default for 14 days. >[!Note]-> Secure by default and soft delete for vaults is currently in limited public preview in the following regions: East Asia<br> -> Since this is a preview feature, disabling soft delete is allowed from REST API, PS, CLI commands. A complete secure by default experience will be available from the GA of this feature. +>Secure by default and soft delete for vaults is currently in limited preview in the following region: East Asia. +> +>Since this is a preview feature, disabling soft delete is allowed from REST API, PS, CLI commands. A complete secure by default experience will be available from the GA of this feature. ## What's soft delete? |
backup | Troubleshoot Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/troubleshoot-azure-files.md | Title: Troubleshoot Azure file share backup description: This article is troubleshooting information about issues occurring when protecting your Azure file shares. Previously updated : 07/18/2024 Last updated : 11/20/2024 |
backup | Tutorial Restore Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-files.md | Title: Tutorial - Restore files to a VM with Azure Backup description: Learn how to perform file-level restores on an Azure VM with Backup and Recovery Services. Previously updated : 03/20/2024 Last updated : 11/20/2024 |
backup | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md | Title: What's new in the Azure Backup service description: Learn about the new features in the Azure Backup service. Previously updated : 11/19/2024 Last updated : 11/21/2024 - ignite-2023 Azure Backup is constantly improving and releasing new features that enhance the You can learn more about the new releases by bookmarking this page or by [subscribing to updates here](https://azure.microsoft.com/updates/?query=backup). ## Updates summary+ - November 2024- - [Back up SAP ASE (Sybase) database (preview)](#back-up-sap-ase-sybase-database-preview) - - [Vaulted backup and Cross Region Restore support for AKS is now generally available](#vaulted-backup-and-cross-region-restore-support-for-aks-is-now-generally-available) + - [Secure by Default with Vault soft delete (preview)](#secure-by-default-with-vault-soft-delete-preview) + - [WORM enabled Immutable Storage for Recovery Services vaults is now generally available](#worm-enabled-immutable-storage-for-recovery-services-vaults-is-now-generally-available) + - [Cross Subscription Backup support for Azure File Share (preview)](#cross-subscription-backup-support-for-azure-file-share-preview) + - [Back up SAP ASE (Sybase) database (preview)](#back-up-sap-ase-sybase-database-preview) + - [Vaulted backup and Cross Region Restore support for AKS is now generally available](#vaulted-backup-and-cross-region-restore-support-for-aks-is-now-generally-available) - October 2024 - [GRS and CRR support for Azure VMs using Premium SSD v2 and Ultra Disk is now generally available.](#grs-and-crr-support-for-azure-vms-using-premium-ssd-v2-and-ultra-disk-is-now-generally-available) - [Back up Azure VMs with Extended Zones](#back-up-azure-vms-with-extended-zones-preview) You can learn more about the new releases by bookmarking this page or by [subscr - February 2021 - [Backup for Azure Blobs (in 
preview)](#backup-for-azure-blobs-in-preview) +## Secure by Default with Vault soft delete (preview) + +Azure Backup now provides the **Secure by Default with Vault soft delete (preview)** feature that applies soft delete by default at all granularities - vaults, recovery points, containers, and backup items. Azure Backup now ensures that all the backup data is recoverable by default against ransomware attacks, at no cost, for *14 days*. You don't need to opt in to get a *fair* security level for your backup data. You can update the soft delete retention period as per your preference up to *180 days*. ++Soft delete provides data recoverability from malicious or accidental deletions and is enabled by default for all vaults. To make soft delete irreversible, you can use **always-on** soft delete. ++For more information, see [Secure by Default with Azure Backup (Preview)](secure-by-default.md). ++## WORM enabled Immutable Storage for Recovery Services vaults is now generally available ++Azure Backup now provides immutable WORM storage for your backups when immutability is enabled and locked on a Recovery Services vault. When immutability is enabled, Azure Backup ensures that a Recovery Point, once created, can't be deleted or have its retention period reduced before its intended expiry. ++When immutability is locked, Azure Backup also uses WORM-enabled immutable storage to meet any compliance requirements. This feature is applicable to both existing and new vaults with locked immutability. WORM immutability is available in [these regions](backup-azure-immutable-vault-concept.md#supported-scenarios-for-worm-storage). ++For more information, see [About Immutable vault for Azure Backup](backup-azure-immutable-vault-concept.md). 
++## Cross Subscription Backup support for Azure File Share (preview) ++Azure Backup now supports Cross Subscription Backup (CSB) for Azure File Shares (preview), allowing you to back up data across different subscriptions within the same tenant or Microsoft Entra ID. This capability offers greater flexibility and control, especially for enterprises managing multiple subscriptions with varying purposes and security policies. ++For more information, see [About Azure File share backup](azure-file-share-backup-overview.md#how-cross-subscription-backup-for-azure-file-share-preview-works). + ## Back up SAP ASE (Sybase) database (preview) Azure Backup now allows you to back up SAP Adaptive Server Enterprise (ASE) (Sybase) databases running on Azure VMs. All backups are streamed directly to the Azure Backup managed recovery services vault that provides security capabilities like Immutability, Soft Delete, and Multiuser Authorization. The vaulted backup data is stored in a Microsoft-managed Azure subscription, thus isolating the backups from the user's environment. These features ensure that the SAP ASE backup data is always secure and can be recovered safely even if the source machines are compromised. Azure Backup now allows you backing up SAP Adaptive Server Enterprise (ASE) (Syb For stream-based backup, Azure Backup can stream log backups every **15 minutes**. You can enable this feature in addition to the database backup, which provides **Point-In-Time recovery** capability. Azure Backup also offers **Multiple Database Restore** capabilities such as **Alternate Location Restore** (System refresh), **Original Location Restore**, and **Restore as Files**. Azure Backup also offers cost-effective Backup policies (Weekly full + daily differential backups), which result in lower storage cost.+ For more information, see [Back up SAP ASE (Sybase) database (preview)](sap-ase-database-about.md). ## Vaulted backup and Cross Region Restore support for AKS is now generally available |
batch | Create Pool Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-availability-zones.md | For example, you could create your pool with zonal policy in an Azure region whi ## Regional support and other requirements -Batch maintains parity with Azure on supporting Availability Zones. To use the zonal option, your pool must be created in a [supported Azure region](../availability-zones/az-region.md). +Batch maintains parity with Azure on supporting Availability Zones. To use the zonal option, your pool must be created in a [supported Azure region](../reliability/availability-zones-region-support.md). In order for your Batch pool to be allocated across availability zones, the Azure region in which the pool is created must support the requested VM SKU in more than one zone. You can validate this by calling the [Resource Skus List API](/rest/api/compute/resourceskus/list) and check the **locationInfo** field of [resourceSku](/rest/api/compute/resourceskus/list#resourcesku). Be sure that more than one zone is supported for the requested VM SKU. |
certification | Edge Secured Core Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/edge-secured-core-devices.md | This page contains a list of devices that have successfully passed the Edge Secu ||| |AAEON|[SRG-TG01](https://newdata.aaeon.com.tw/DOWNLOAD/2014%20datasheet/Systems/SRG-TG01.pdf)|Windows 10 IoT Enterprise|2022-06-14| |Advantech|[ITA-580](https://www.advantech.com/en-eu/products/5130beef-2b81-41f7-a89b-2c43c1f2b6e9/ita-580/mod_bf7b0383-e6b2-49d7-9181-b6fc752e188b)|Windows 10 IoT Enterprise|2024-07-08|+|Asus|[NUC14RVH-B (u5)](https://www.asus.com/us/displays-desktops/nucs/nuc-mini-pcs/asus-nuc-14-pro/)|Windows 11 IoT Enterprise|2024-11-20| +|Asus|[NUC14RVH-B (u7)](https://www.asus.com/us/displays-desktops/nucs/nuc-mini-pcs/asus-nuc-14-pro/)|Windows 10 IoT Enterprise|2024-11-20| |Asus|[NUC13L5K-B (i5)](https://www.asus.com/us/displays-desktops/nucs/nuc-mini-pcs/asus-nuc-13-pro/)|Windows 10 IoT Enterprise|2024-08-01| |Asus|[NUC13L5K-B (i7)](https://www.asus.com/us/displays-desktops/nucs/nuc-mini-pcs/asus-nuc-13-pro/)|Windows 10 IoT Enterprise|2024-08-01| |Asus|[PN64-E1 vPro](https://www.asus.com/ca-en/displays-desktops/mini-pcs/pn-series/asus-expertcenter-pn64-e1/)|Windows 10 IoT Enterprise|2023-08-08| |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md | Azure Communication Services supports three levels of user access control, using ### Chat Data Azure Communication Services stores chat threads according to the [data retention policy](/purview/create-retention-policies) in effect when the thread is created. You can update the retention policy if needed during the retention time period you set. After you delete a chat thread (by policy or by a Delete API request), it can't be retrieved. You can choose between indefinite thread retention, automatic deletion between 30 and 90 days via the retention policy on the [Create Chat Thread API](/rest/api/communication/chat/chat/create-chat-thread), or immediate deletion using the APIs [Delete Chat Message](/rest/api/communication/chat/chat-thread/delete-chat-message) or [Delete Chat Thread](/rest/api/communication/chat/chat/delete-chat-thread). |
communication-services | Monitor Logs Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/monitor-logs-metrics.md | -In this article, you will learn which Azure logs, Azure metrics & Teams logs are emitted for Teams external users when joining Teams meetings. Azure Communication Services user joining Teams meeting emits the following metrics: [Authentication API](../../metrics.md) and [Chat API](../../metrics.md). Communication Services resource additionally tracks the following logs: [Call Summary](../../analytics/logs/voice-and-video-logs.md) and [Call Diagnostic](../../analytics/logs/voice-and-video-logs.md) Log. Teams administrator can use [Teams Admin Center](https://aka.ms/teamsadmincenter) and [Teams Call Quality Dashboard](https://cqd.teams.microsoft.com) to review logs stored for Teams external users joining Teams meetings organized by the tenant. +In this article, you learn which Azure logs, Azure metrics & Teams logs are emitted for Teams external users when joining Teams meetings. Azure Communication Services user joining Teams meeting emits the following metrics: [Authentication API](../../metrics.md) and [Chat API](../../metrics.md). Communication Services resource additionally tracks the following logs: [Call Summary](../../analytics/logs/voice-and-video-logs.md) and [Call Diagnostic](../../analytics/logs/voice-and-video-logs.md) Log. Teams administrator can use [Teams Admin Center](https://aka.ms/teamsadmincenter) and [Teams Call Quality Dashboard](https://cqd.teams.microsoft.com) to review logs stored for Teams external users joining Teams meetings organized by the tenant. ## Azure logs & metrics -Authentication API metrics emit records for every operation called on the Identity SDK or API (for example, creating a user `CreateIdentity` or issue of a token `CreateToken`). 
Chat API metrics emit records for every chat API call made via chat SDKs or APIs (for example, creating a thread or sending a message). +Authentication API metrics emit records for every operation called on the Identity SDK or API (for example, creating a user `CreateIdentity` or issue of a token `CreateToken`). Chat API metrics emit records for every chat API call made via chat SDK or APIs (for example, creating a thread or sending a message). Call summary and call diagnostics logs are emitted only for the following participants of the meeting: - Organizer of the meeting if actively joined the meeting.-- Azure Communication Services users joining the meeting from the same tenant. This includes users rejected in the lobby and Azure Communication Services users from different resources but in the same tenant.-- Additional Teams users, phone users and bots joining the meeting only if the organizer and current Azure Communication Services resource are in the same tenant.+- Azure Communication Services users joining the meeting from the same tenant. Users rejected in the lobby and Azure Communication Services users from different resources but in the same tenant are included. +- Microsoft 365 users, phone users, and bots joining the meeting only if the organizer and current Azure Communication Services resource are in the same tenant. -If Azure Communication Services resource and Teams meeting organizer tenants are different, then some fields of the logs are redacted. You can find more information in the call summary & diagnostics logs [documentation](../../analytics/logs/voice-and-video-logs.md). Bots indicate service logic provided during the meeting. Here is a list of frequently used bots: +If Azure Communication Services resource and Teams meeting organizer tenants are different, then some fields of the logs are redacted. You can find more information in the call summary & diagnostics logs [documentation](../../analytics/logs/voice-and-video-logs.md). 
Bots indicate service logic provided during the meeting. Here's a list of frequently used bots: - b1902c3e-b9f7-4650-9b23-5772bd429747 - Teams convenient recording ## Microsoft Teams logs-Teams administrator can see Teams external users in the overview of the meeting (section `Manage users` -> `Select user` -> `Meetings & calls` -> `Select meeting`). The summary logs can be found when selecting individual Teams external users (continue `Participant details` -> `Anonymous user`). For more details about the call legs, proceed with [Teams Call Quality Dashboard](https://cqd.teams.microsoft.com). You can learn more about the call quality dashboard [here](/microsoftteams/cqd-what-is-call-quality-dashboard). +Teams administrator can see Teams external users in the overview of the meeting (section `Manage users` -> `Select user` -> `Meetings & calls` -> `Select meeting`). The summary logs can be found when selecting individual Teams external users (continue `Participant details` -> `Anonymous user`). For more details about the call logs, proceed with [Teams Call Quality Dashboard](https://cqd.teams.microsoft.com). You can learn more about the call quality dashboard [here](/microsoftteams/cqd-what-is-call-quality-dashboard). ## Next steps |
communication-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md | For more information on the SMS SDK and service, see [SMS SDK overview](./sms/sd ## Email -You can send a limited number of email messages. If you exceed the following limits for your subscription, your requests are rejected. You can attempt these requests again, after the `Retry-After` time passes. Take action before you reach the limit. Request to raise your sending volume limits, if needed. +You can send a limited number of email messages. If you exceed the [email rate limits](#rate-limits-for-email) for your subscription, your requests are rejected. You can attempt these requests again, after the Retry-After time passes. Take action before reaching the limit by requesting to raise your sending volume limits if needed. -The Azure Communication Services email service is designed to support high throughput. The service imposes initial rate limits to help customers onboard smoothly and avoid some of the issues that can occur when switching to a new email service. We recommend that you use Azure Communication Services email over a period of two to four weeks to gradually increase your email volume. During this time, closely monitor the delivery status of your emails. This gradual increase enables third-party email service providers to adapt to the change in IP for your domain's email traffic. The gradual change gives you time to protect your sender reputation and maintain the reliability of your email delivery. +The Azure Communication Services email service is designed to support high throughput. However, the service imposes initial rate limits to help customers onboard smoothly and avoid some of the issues that can occur when switching to a new email service. 
-### Rate limits for email +We recommend gradually increasing your email volume using Azure Communication Services Email over a period of two to four weeks, while closely monitoring the delivery status of your emails. This gradual increase enables third-party email service providers to adapt to the change in IP for your domain's email traffic. The gradual change gives you time to protect your sender reputation and maintain the reliability of your email delivery. -We approve higher limits for customers based on use case requirements, domain reputation, traffic patterns, and failure rates. To request higher limits, follow the instructions at [Quota increase for email domains](./email/email-quota-increase.md). Higher quotas are available only for verified custom domains, not Azure managed domains. +Azure Communication Services email service supports high volume up to 1-2 million messages per hour. High throughput can be enabled based on several factors, including: +- Customer peak traffic +- Business needs +- Ability to manage failure rates +- Domain reputation -The following table lists limits for [Custom domains](../quickstarts/email/add-custom-verified-domains.md). +### Failure Rate Requirements -| Operation | Scope | Time frame (minutes) | Limit (number of emails) | -||--|-|-| -|Send email|Per subscription|1|30| -|Send email|Per subscription|60|100| -|Get email status|Per subscription|1|60| -|Get email status|Per subscription|60|200| +To enable a high email quota, your email failure rate must be less than one percent (1%). If your failure rate is high, you must resolve the issues before requesting a quota increase. +Customers are expected to actively monitor their failure rates. ++If the failure rate increases after a quota increase, Azure Communication Services will contact the customer for immediate action and a resolution timeline. 
In extreme cases, if the failure rate isn't managed within the specified timeline, Azure Communication Services may reduce or suspend service until the issue is resolved. ++#### Related articles ++Azure Communication Services provides rich logs and analytics to help monitor and manage failure rates. For more information, see the following articles: ++- [Improve sender reputation in Azure Communication Services email](./email/sender-reputation-managed-suppression-list.md) +- [Email Insights](./analytics/insights/email-insights.md) +- [Enable logs via Diagnostic Settings in Azure Monitor](./analytics/enable-logging.md) +- [Quickstart: Handle Email events](../quickstarts/email/handle-email-events.md) +- [Quickstart: Manage domain suppression lists in Azure Communication Services using the management client libraries](../quickstarts/email/manage-suppression-list-management-sdks.md) ++> [!NOTE] +> To request higher limits, follow the instructions at [Quota increase for email domains](./email/email-quota-increase.md). Higher quotas are only available for verified custom domains, not Azure-managed domains. ++### Rate Limits for Email ++[Custom Domains](../quickstarts/email/add-custom-verified-domains.md) ++| Operation | Scope | Timeframe (minutes) | Limit (number of emails) | Higher limits available | +| | | | | | +| Send Email | Per Subscription | 1 | 30 | Yes | +| Send Email | Per Subscription | 60 | 100 | Yes | +| Get Email Status | Per Subscription | 1 | 60 | Yes | +| Get Email Status | Per Subscription | 60 | 200 | Yes | The following table lists limits for [Azure managed domains](../quickstarts/email/add-azure-managed-domains.md). 
-| Operation | Scope | Time frame (minutes) | Limit (number of emails) | -||--|-|-| -|Send email|Per subscription|1|5| -|Send email|Per subscription|60|10| -|Get email status|Per subscription|1|10| -|Get email status|Per subscription|60|20| +| Operation | Scope | Timeframe (minutes) | Limit (number of emails) | Higher limits available | +| | | | | | +| Send Email | Per Subscription | 1 | 5 | No | +| Send Email | Per Subscription | 60 | 10 | No | +| Get Email Status | Per Subscription | 1 |10 | No | +| Get Email Status | Per Subscription | 60 |20 | No | ### Size limits for email You can find more information about Microsoft Graph [throttling](/graph/throttli ## Related content -See the [help and support](../support.md) options. +- [Help and support options](../support.md) |
communication-services | Add Azure Managed Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-azure-managed-domains.md | ms.devlang: azurecli # Quickstart: How to add Azure Managed Domains to Email Communication Service -In this quick start, you learn how to provision the Azure Managed Domain to Email Communication Service in Azure Communication Services. +This article describes how to provision an Azure Managed Domain for Email Communication Service in Azure Communication Services. ::: zone pivot="platform-azp" [!INCLUDE [Azure portal](./includes/create-azure-managed-domain-resource-az-portal.md)] Before provisioning an Azure Managed Domain, review the following table to decid |**Pros:** | - Setup is quick & easy<br/>- No domain verification required<br /> | - Emails are sent from your own domain | |**Cons:** | - Sender domain is not personalized and cannot be changed<br/>- Sender usernames can't be personalized<br/>- Very limited sending volume<br />- User Engagement Tracking can't be enabled <br /> | - Requires verification of domain records <br /> - Longer setup for verification | +### Service limits ++Both Azure managed domains and Custom domains are subject to service limits. Service limits include failure, rate, and size limits. For more information, see [Service limits for Azure Communication Services > Email](../../concepts/service-limits.md#email). ## Sender authentication for Azure Managed Domain Azure Communication Services automatically configures the required email authent ## Related articles -* Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md) -* Learn how to send emails with custom verified domains in [Quickstart: How to add custom verified email domains](../../quickstarts/email/add-custom-verified-domains.md) +* Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md). 
+* Review email failure limits, rate limits, and size limits in [Service limits for Azure Communication Services > Email](../../concepts/service-limits.md#email). +* Learn how to send emails with custom verified domains in [Quickstart: How to add custom verified email domains](../../quickstarts/email/add-custom-verified-domains.md). |
communication-services | Add Custom Verified Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-custom-verified-domains.md | Before provisioning a custom email domain, review the following table to decide |**Pros:** | - Setup is quick & easy<br/>- No domain verification required<br /> | - Emails are sent from your own domain | |**Cons:** | - Sender domain isn't personalized and can't be changed<br/>- Sender usernames can't be personalized<br/>- Limited sending volume<br />- User Engagement Tracking can't be enabled<br /> | - Requires verification of domain records<br /> - Longer setup for verification | +### Service limits ++Both Azure managed domains and Custom domains are subject to service limits. Service limits include failure, rate, and size limits. For more information, see [Service limits for Azure Communication Services > Email](../../concepts/service-limits.md#email). + ## Change MailFrom and FROM display names for custom domains You can optionally configure your `MailFrom` address to be something other than the default `DoNotReply` and add more than one sender username to your domain. For more information about how to configure your sender address, see [Quickstart: How to add multiple sender addresses](add-multiple-senders.md). The following links provide more information about how to add a CNAME record usi ## Related articles * Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)+* Review email failure limits, rate limits, and size limits in [Service limits for Azure Communication Services > Email](../../concepts/service-limits.md#email). +* Learn how to send emails with Azure Managed Domains in [Quickstart: How to add Azure Managed Domains to Email Communication Service](../../quickstarts/email/add-azure-managed-domains.md). |
confidential-computing | Confidential Clean Rooms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-clean-rooms.md | + + Title: Perform secure multiparty data collaboration on Azure +description: Learn how Azure Confidential Clean Rooms enables multiparty collaborations while keeping your data safe from other collaborators. ++++ Last updated : 10/28/2024++++# Azure Confidential Clean Rooms ++> [!NOTE] +> Azure Confidential Clean Rooms is currently in Gated Preview. Please fill out the form at https://aka.ms/ACCRPreview and we will reach out to you with next steps. ++Azure Confidential Clean Rooms (ACCR) offers a secure and compliant environment that helps organizations overcome the challenges of using privacy-sensitive data for AI model development, inferencing, and data analytics. Built on top of [Confidential containers or C-ACI](../confidential-computing/confidential-containers.md), this service secures the data and the model from exfiltration outside the clean room boundary. +Organizations can safely collaborate on and analyze sensitive data within the sandbox, without violating compliance standards or risking data breaches, by using advanced privacy-enhancing technologies like secure governance & audit, secure collaboration (TEE), verifiable trust, differential privacy, and controlled access. ++## Who should use Azure Confidential Clean Rooms? +Azure Confidential Clean Rooms could be a great choice for you if you have these scenarios: ++- Data analytics and inferencing: Organizations looking to build insights on second-party data while ensuring data privacy can use ACCR. ACCR is useful when data providers are concerned about data exfiltration. ACCR ensures that data is only used for agreed purposes and safeguards against unauthorized access or egress (as it's a sandboxed environment). 
+- Data privacy ISVs: Independent Software Vendors (ISVs) who provide secure multiparty data collaboration services can use ACCR as an extensible platform. It allows them to add enforceable tamperproof contracts with governance and audit capabilities, and uses [Confidential containers or C-ACI](../confidential-computing/confidential-containers.md) underneath to ensure data is encrypted during processing so that their customers' data remains secure. +- ML fine tuning: ACCR provides a solution to organizations that require data from various sources to train or fine-tune machine learning models but face data sharing regulations. It allows any party to audit and confirm that data is being used only for the agreed purpose, such as ML modeling. +- ML inferencing: Organizations can use ACCR in machine learning (ML) inferencing to enable secure, collaborative data analysis without compromising privacy or data ownership. ACCR acts as a secure environment where multiple parties can combine sensitive data and apply ML models for inferencing while keeping raw data inaccessible to others. ++### Industries that can successfully utilize ACCR +- Healthcare- In the healthcare industry, Azure Confidential Clean Rooms enable secure collaboration on sensitive patient data. For example, healthcare providers can use clean rooms to train and fine-tune AI/ML models for predictive diagnostics, personalized medicine, and clinical decision support. By using confidential computing, healthcare organizations can protect patient privacy while collaborating with other institutions to improve healthcare outcomes. +ACCR can also be used for ML inferencing where partner hospitals can utilize the power of these models for early detection. +- Advertising- In the advertising industry, Azure Confidential Clean Rooms facilitates secure data sharing between advertisers and publishers. ACCR enables targeted advertising and campaign effectiveness measurement without exposing sensitive user data. 
+- Banking, Financial Services, and Insurance (BFSI) - The BFSI sector can use Azure Confidential Clean Rooms to securely collaborate on financial data, ensuring compliance with regulatory requirements. This enables financial institutions to perform joint data analysis and develop risk models, fraud detection models, and lending scenarios, among others, without exposing sensitive customer information. +- Retail- In the retail industry, Azure Confidential Clean Rooms enables secure collaboration on customer data to enhance personalized marketing and inventory management. Retailers can use clean rooms to analyze customer behavior and preferences to create personalized marketing campaigns without compromising data privacy. ++## Benefits +++Azure Confidential Clean Rooms (ACCR) provides a secure and compliant environment for multi-party data collaboration. Built on [Confidential containers or C-ACI](../confidential-computing/confidential-containers.md), ACCR ensures that sensitive data remains protected throughout the collaboration process. Here are some key benefits of using Azure Confidential Clean Rooms: ++- Secure collaboration and governance: +ACCR allows collaborators to create tamper-proof contracts. ACCR also enforces all the constraints that are part of the contract. Governance ensures the validity of constraints before allowing data to be released into clean rooms and drives transparency among collaborators by generating tamper-proof audit trails. ACCR uses the open-source [Confidential Consortium Framework](https://microsoft.github.io/CCF/main/overview/what_is_ccf.html) to enable these capabilities. +- Compliance: +Confidential computing can address some of the regulatory and privacy concerns by providing a secure environment for data collaboration. This capability is beneficial for industries such as financial services, healthcare, and telecom, which deal with highly sensitive data and personally identifiable information (PII).
+- Enhanced data security: +ACCR is built using confidential computing to provide a hardware-based trusted execution environment (TEE). This sandboxed environment allows only authorized workloads to execute and prevents unauthorized access to data or code during processing, ensuring that sensitive information remains secure. +- Verifiable trust: Cryptographic remote attestation at each step forms the cornerstone of Azure Confidential Clean Rooms. ++- Cost-effective: +By providing a secure and compliant environment for data collaboration, ACCR reduces the need for costly and complex data protection measures. This makes it a cost-effective solution for organizations looking to use sensitive data for analysis and insights. ++++## Onboarding to Azure Confidential Clean Rooms +ACCR is currently in Gated Preview. To express your interest in joining the gated preview, follow these steps: +- Fill out and submit the form at https://aka.ms/ACCR-Preview-Onboarding. +- After you submit, we'll review your details and reach out to you with detailed onboarding steps. +- For further questions on onboarding, reach out to CleanRoomPMTeam@microsoft.com. ++## Frequently asked questions ++- Question: Where are the Microsoft-published sidecars located? + Answer: The Microsoft-published sidecars are available at mcr.microsoft.com/cleanroom. The code repository for the sidecars is available [here](https://github.com/Azure/azure-cleanroom/). ++- Question: Is there a sample clean room application to try out? + Answer: You can find the clean room sample application [here](https://github.com/Azure-Samples/azure-cleanroom-samples). Please feel free to try out the sample after signing up for the Preview and receiving our response. ++- Question: Can more than two collaborators participate in a collaboration? + Answer: Yes, more than two collaborators can become part of a collaboration.
This allows multiple data providers to share data in the clean room. ++If you have questions about Azure Confidential Clean Rooms, reach out to <accrsupport@microsoft.com>. ++## Next steps ++- [Deploy Confidential container group with Azure Container Instances](/azure/container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm) +- [Microsoft Azure Attestation](/azure/attestation/overview) |
confidential-computing | Confidential Vm Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md | Pricing depends on your confidential VM size. For more information, see the [Pri Confidential VMs *don't support*: -- Azure Batch - Azure Backup - Azure Site Recovery - Limited Azure Compute Gallery support |
container-apps | Dapr Component Resiliency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-component-resiliency.md | Click **Run** to run the query and view the result with the log message indicati :::image type="content" source="media/dapr-component-resiliency/dapr-resiliency-query-results-loading.png" alt-text="Screenshot showing resiliency query results based on provided query example for checking if resiliency policy has loaded."::: -Or, you can find the actual resiliency policy by enabling debugging on your component and using a query similar to the following example: +Or, you can find the actual resiliency policy by enabling debug logs on your container app and querying to see if a resiliency resource is loaded. +++Once debug logs are enabled, use a query similar to the following: ``` ContainerAppConsoleLogs_CL |
container-apps | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md | With Azure Container Apps, you can: ## Introductory video -> [!VIDEO https://www.youtube.com/embed/b3dopSTnSRg] +> [!VIDEO https://www.youtube.com/embed/OxmVds31qL8] ### Next steps |
cost-management-billing | Grant Access To Create Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/grant-access-to-create-subscription.md | As an Azure customer with an [Enterprise Agreement (EA)](https://azure.microsoft ## Grant access -To [create subscriptions under an enrollment account](programmatically-create-subscription-enterprise-agreement.md), users must have the Azure RBAC [Owner role](../../role-based-access-control/built-in-roles.md#owner) on that account. You can grant a user or a group of users the Azure RBAC Owner role on an enrollment account by following these steps: +To [create subscriptions under an enrollment account](programmatically-create-subscription-preview.md), users must have the Azure RBAC [Owner role](../../role-based-access-control/built-in-roles.md#owner) on that account. You can grant a user or a group of users the Azure RBAC Owner role on an enrollment account by following these steps: 1. Get the object ID of the enrollment account you want to grant access to |
databox-online | Azure Stack Edge Gpu Connect Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-connect-resource-manager.md | Set the Azure Resource Manager environment and verify that your device to client ```powershell $pass = ConvertTo-SecureString "<Your password>" -AsPlainText -Force; $cred = New-Object System.Management.Automation.PSCredential("EdgeArmUser", $pass)- Connect-AzAccount -EnvironmentName AzASE -TenantId c0257de7-538f-415c-993a-1b87a031879d -credential $cred + Connect-AzAccount -EnvironmentName AzASE -TenantId aaaabbbb-0000-cccc-1111-dddd2222eeee -credential $cred ``` - Use the tenant ID c0257de7-538f-415c-993a-1b87a031879d as in this instance it's hard coded. + Use the tenant ID aaaabbbb-0000-cccc-1111-dddd2222eeee as in this instance it's hard coded. Use the following username and password. - **Username** - *EdgeArmUser* Set the Azure Resource Manager environment and verify that your device to client ```output PS C:\windows\system32> $pass = ConvertTo-SecureString "<Your password>" -AsPlainText -Force; PS C:\windows\system32> $cred = New-Object System.Management.Automation.PSCredential("EdgeArmUser", $pass)- PS C:\windows\system32> Connect-AzAccount -EnvironmentName AzASE -TenantId c0257de7-538f-415c-993a-1b87a031879d -credential $cred + PS C:\windows\system32> Connect-AzAccount -EnvironmentName AzASE -TenantId aaaabbbb-0000-cccc-1111-dddd2222eeee -credential $cred Account SubscriptionName TenantId Environment - - -- --- EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzASE + EdgeArmUser@localhost Default Provider Subscription aaaabbbb-0000-cccc-1111-dddd2222eeee AzASE PS C:\windows\system32> ``` An alternative way to sign in is to use the `login-AzAccount` cmdlet. 
- `login-AzAccount -EnvironmentName <Environment Name> -TenantId c0257de7-538f-415c-993a-1b87a031879d` + `login-AzAccount -EnvironmentName <Environment Name> -TenantId aaaabbbb-0000-cccc-1111-dddd2222eeee` Here's an example output. ```output- PS C:\WINDOWS\system32> login-AzAccount -EnvironmentName AzASE -TenantId c0257de7-538f-415c-993a-1b87a031879d + PS C:\WINDOWS\system32> login-AzAccount -EnvironmentName AzASE -TenantId aaaabbbb-0000-cccc-1111-dddd2222eeee Account SubscriptionName TenantId - - -- Set the Azure Resource Manager environment and verify that your device to client 2. You can connect via `login-AzureRMAccount` or via `Connect-AzureRMAccount` command. - 1. To sign in, type the following command. The tenant ID in this instance is hard coded - c0257de7-538f-415c-993a-1b87a031879d. Use the following username and password. + 1. To sign in, type the following command. The tenant ID in this instance is hard coded - aaaabbbb-0000-cccc-1111-dddd2222eeee. Use the following username and password. 
- **Username** - *EdgeArmUser* Set the Azure Resource Manager environment and verify that your device to client ```output PS C:\windows\system32> $pass = ConvertTo-SecureString "<Your password>" -AsPlainText -Force; PS C:\windows\system32> $cred = New-Object System.Management.Automation.PSCredential("EdgeArmUser", $pass)- PS C:\windows\system32> Connect-AzureRmAccount -EnvironmentName AzDBE -TenantId c0257de7-538f-415c-993a-1b87a031879d -credential $cred + PS C:\windows\system32> Connect-AzureRmAccount -EnvironmentName AzDBE -TenantId aaaabbbb-0000-cccc-1111-dddd2222eeee -credential $cred Account SubscriptionName TenantId Environment - - -- --- EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzDBE + EdgeArmUser@localhost Default Provider Subscription aaaabbbb-0000-cccc-1111-dddd2222eeee AzDBE PS C:\windows\system32> ``` Set the Azure Resource Manager environment and verify that your device to client An alternative way to sign in is to use the `login-AzureRmAccount` cmdlet. - `login-AzureRMAccount -EnvironmentName <Environment Name> -TenantId c0257de7-538f-415c-993a-1b87a031879d` + `login-AzureRMAccount -EnvironmentName <Environment Name> -TenantId aaaabbbb-0000-cccc-1111-dddd2222eeee` Here's a sample output of the command. ```output- PS C:\Users\Administrator> login-AzureRMAccount -EnvironmentName AzDBE -TenantId c0257de7-538f-415c-993a-1b87a031879d + PS C:\Users\Administrator> login-AzureRMAccount -EnvironmentName AzDBE -TenantId aaaabbbb-0000-cccc-1111-dddd2222eeee Account SubscriptionName TenantId Environment - - -- -- EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzDBE + EdgeArmUser@localhost Default Provider Subscription aaaabbbb-0000-cccc-1111-dddd2222eeee AzDBE PS C:\Users\Administrator> ``` Name : Default Provider Subscription (...) 
- EdgeArmUser@localhost Account : EdgeArmUser@localhostΓÇï Environment : AzDBE2ΓÇï Subscription : ...ΓÇï-Tenant : c0257de7-538f-415c-993a-1b87a031879dΓÇï +Tenant : aaaabbbb-0000-cccc-1111-dddd2222eeeeΓÇï TokenCache : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCacheΓÇï VersionProfile :ΓÇï ExtendedProperties : {}ΓÇï PS C:\WINDOWS\system32> Disconnect-AzAccountΓÇï ΓÇïΓÇï Id : EdgeArmUser@localhostΓÇï Type : UserΓÇï-Tenants : {c0257de7-538f-415c-993a-1b87a031879d}ΓÇï +Tenants : {aaaabbbb-0000-cccc-1111-dddd2222eeee}ΓÇï AccessToken :ΓÇï Credential :ΓÇï TenantMap : {}ΓÇï CertificateThumbprint :ΓÇï-ExtendedProperties : {[Subscriptions, ...], [Tenants, c0257de7-538f-415c-993a-1b87a031879d]} +ExtendedProperties : {[Subscriptions, ...], [Tenants, aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e]} ``` Sign into the other environment. The sample output is shown below. PS C:\WINDOWS\system32> Login-AzAccount -Environment "AzDBE1" -TenantId $ArmTena ΓÇï Account SubscriptionName TenantId EnvironmentΓÇï - - -- --ΓÇï-EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzDBE1 +EdgeArmUser@localhost Default Provider Subscription aaaabbbb-0000-cccc-1111-dddd2222eeee AzDBE1 ``` ΓÇï Run this cmdlet to confirm which environment you're connected to. Name : Default Provider Subscription (...) 
- EdgeArmUser@localhost Account : EdgeArmUser@localhostΓÇï Environment : AzDBE1ΓÇï Subscription : ...-Tenant : c0257de7-538f-415c-993a-1b87a031879dΓÇï +Tenant : aaaabbbb-0000-cccc-1111-dddd2222eeeeΓÇï TokenCache : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCacheΓÇï VersionProfile :ΓÇï ExtendedProperties : {} Name : Default Provider Subscription (A4257FDE-B946-4E01-ADE7-6747 Account : EdgeArmUser@localhostΓÇï Environment : AzDBE2ΓÇï Subscription : ...ΓÇï-Tenant : c0257de7-538f-415c-993a-1b87a031879dΓÇï +Tenant : aaaabbbb-0000-cccc-1111-dddd2222eeeeΓÇï TokenCache : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCacheΓÇï VersionProfile :ΓÇï ExtendedProperties : {}ΓÇï PS C:\WINDOWS\system32> Disconnect-AzureRmAccountΓÇï ΓÇïΓÇï Id : EdgeArmUser@localhostΓÇï Type : UserΓÇï-Tenants : {c0257de7-538f-415c-993a-1b87a031879d}ΓÇï +Tenants : {aaaabbbb-0000-cccc-1111-dddd2222eeee}ΓÇï AccessToken :ΓÇï Credential :ΓÇï TenantMap : {}ΓÇï CertificateThumbprint :ΓÇï-ExtendedProperties : {[Subscriptions, ...], [Tenants, c0257de7-538f-415c-993a-1b87a031879d]} +ExtendedProperties : {[Subscriptions, ...], [Tenants, aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e]} ``` Sign into the other environment. The sample output is shown below. PS C:\WINDOWS\system32> Login-AzureRmAccount -Environment "AzDBE1" -TenantId $Ar ΓÇï Account SubscriptionName TenantId EnvironmentΓÇï - - -- --ΓÇï-EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzDBE1 +EdgeArmUser@localhost Default Provider Subscription aaaabbbb-0000-cccc-1111-dddd2222eeee AzDBE1 ``` ΓÇï Run this cmdlet to confirm which environment you're connected to. Name : Default Provider Subscription (...) 
- EdgeArmUser@localhost Account : EdgeArmUser@localhostΓÇï Environment : AzDBE1ΓÇï Subscription : ...ΓÇï-Tenant : c0257de7-538f-415c-993a-1b87a031879dΓÇï +Tenant : aaaabbbb-0000-cccc-1111-dddd2222eeeeΓÇï TokenCache : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCacheΓÇï VersionProfile :ΓÇï ExtendedProperties : {} |
databox-online | Azure Stack Edge Gpu Create Virtual Machine Marketplace Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md | PS /home/user> az disk create -g $diskRG -n $diskName --image-reference $urn "createOption": "FromImage", "galleryImageReference": null, "imageReference": {- "id": "/Subscriptions/db4e2fdb-6d80-4e6e-b7cd-736098270664/Providers/Microsoft.Compute/Locations/eastus/Publishers/MicrosoftWindowsServer/ArtifactTypes/VMImage/Offers/WindowsServer/Skus/2019-Datacenter/Versions/17763.1935.2105080716", + "id": "/Subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/Providers/Microsoft.Compute/Locations/eastus/Publishers/MicrosoftWindowsServer/ArtifactTypes/VMImage/Offers/WindowsServer/Skus/2019-Datacenter/Versions/17763.1935.2105080716", "lun": null }, "logicalSectorSize": null, PS /home/user> az disk create -g $diskRG -n $diskName --image-reference $urn "encryptionSettingsCollection": null, "extendedLocation": null, "hyperVGeneration": "V1",- "id": "/subscriptions/db4e2fdb-6d80-4e6e-b7cd-736098270664/resourceGroups/newrgmd1/providers/Microsoft.Compute/disks/NewManagedDisk1", + "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/newrgmd1/providers/Microsoft.Compute/disks/NewManagedDisk1", "location": "eastus", "managedBy": null, "managedByExtended": null, |
databox-online | Azure Stack Edge Gpu Deploy Arc Kubernetes Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md | You can also register resource providers via the `az cli`. For more information, PS /home/user> az role assignment create --role 34e09817-6cbe-4d01-b1a2-e0eac5743d41 --assignee xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --scope /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myaserg1 { "canDelegate": null,- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myaserg1/providers/Microsoft.Authorization/roleAssignments/59272f92-e5ce-4aeb-9c0c-62532d8caf25", + "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myaserg1/providers/Microsoft.Authorization/roleAssignments/00000000-0000-0000-0000-000000000000", "name": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "principalType": "ServicePrincipal", |
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Cli Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-cli-python.md | Before you begin creating and managing a VM on your Azure Stack Edge Pro device "isDefault": true, "name": "Default Provider Subscription", "state": "Enabled",- "tenantId": "c0257de7-538f-415c-993a-1b87a031879d", + "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee", "user": { "name": "EdgeArmUser@localhost", "type": "user" Before you begin creating and managing a VM on your Azure Stack Edge Pro device The following environment variables need to be set to work as *service principal*: ```- $ENV:ARM_TENANT_ID = "c0257de7-538f-415c-993a-1b87a031879d" + $ENV:ARM_TENANT_ID = "aaaabbbb-0000-cccc-1111-dddd2222eeee" $ENV:ARM_CLIENT_ID = "cbd868c5-7207-431f-8d16-1cb144b50971" $ENV:ARM_CLIENT_SECRET - "<Your Azure Resource Manager password>" $ENV:ARM_SUBSCRIPTION_ID = "<Your subscription ID>" |
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Custom Script Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md | Etag : null Publisher : Microsoft.Compute ExtensionType : CustomScriptExtension TypeHandlerVersion : 1.10-Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/myasegpuvm1/providers/Microsoft.Compute/virtualMachines/VM5/extensions/CustomScriptExtension +Id : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myasegpuvm1/providers/Microsoft.Compute/virtualMachines/VM5/extensions/CustomScriptExtension PublicSettings : { "commandToExecute": "md C:\\Users\\Public\\Documents\\test" } Etag : null Publisher : Microsoft.Compute ExtensionType : CustomScriptExtension TypeHandlerVersion : 1.10-Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/myasegpuvm1/providers/Microsoft.Compute/virtualMachines/VM5/extensions/CustomScriptExtension +Id : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myasegpuvm1/providers/Microsoft.Compute/virtualMachines/VM5/extensions/CustomScriptExtension PublicSettings : { "commandToExecute": "md C:\\Users\\Public\\Documents\\test" } |
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md | Etag : null Publisher : Microsoft.HpcCompute ExtensionType : NvidiaGpuDriverWindows TypeHandlerVersion : 1.3-Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/myasegpuvm1/providers/Microsoft.Compute/virtualMachines/VM2/extensions/windowsgpuext +Id : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myasegpuvm1/providers/Microsoft.Compute/virtualMachines/VM2/extensions/windowsgpuext PublicSettings : { "DriverURL": "http://us.download.nvidia.com/tesla/442.50/442.50-tesla-desktop-winserver-2019-2016-international.exe", "DriverCertificateUrl": "https://go.microsoft.com/fwlink/?linkid=871664", Etag : null Publisher : Microsoft.HpcCompute ExtensionType : NvidiaGpuDriverLinux TypeHandlerVersion : 1.3-Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg2/providers/Microsoft.Compute/virtualMachines/VM1/extensions/gpuLinux +Id : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/rg2/providers/Microsoft.Compute/virtualMachines/VM1/extensions/gpuLinux PublicSettings : { "DRIVER_URL": "https://go.microsoft.com/fwlink/?linkid=874271", "PUBKEY_URL": "http://download.microsoft.com/download/F/F/A/FFAC979D-AD9C-4684-A6CE-C92BB9372A3B/7fa2af80.pub", |
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Install Password Reset Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension.md | Etag : null Publisher : Microsoft.Compute ExtensionType : VMAccessAgent TypeHandlerVersion : 2.4 -Id : /subscriptions/04a485ed-7a09-44ab-6671-66db7f111122/resourceGroups/myasepro2rg/provi +Id : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myasepro2rg/provi ders/Microsoft.Compute/virtualMachines/mywindowsvm/extensions/windowsVMAccessExt PublicSettings : { "username": "azureuser" Etag : null Publisher : Microsoft.OSTCExtensions ExtensionType : VMAccessForLinux TypeHandlerVersion : 1.5 -Id : /subscriptions/04a485ed-7a09-44ab-6671-66db7f111122/resourceGroups +Id : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups /myasepro2rg/providers/Microsoft.Compute/virtualMachines/mylinuxvm 5/extensions/linuxVMAccessExt PublicSettings : {} |
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Powershell Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script.md | Before you begin creating and managing a VM on your Azure Stack Edge Pro device 1. Before you run the script, make sure you are still connected to the local Azure Resource Manager of the device and the connection has not expired. ```powershell- PS C:\windows\system32> login-AzureRMAccount -EnvironmentName aztest1 -TenantId c0257de7-538f-415c-993a-1b87a031879d + PS C:\windows\system32> login-AzureRMAccount -EnvironmentName aztest1 -TenantId aaaabbbb-0000-cccc-1111-dddd2222eeee Account SubscriptionName TenantId Environment - - -- --- EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d aztest1 + EdgeArmUser@localhost Default Provider Subscription aaaabbbb-0000-cccc-1111-dddd2222eeee aztest1 PS C:\windows\system32> cd C:\Users\v2 PS C:\Users\v2> Before you begin creating and managing a VM on your Azure Stack Edge Pro device DiskSizeGB : 13 EncryptionSettings : ProvisioningState : Succeeded- Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Compute/disks/ld201221071831 + Id : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/rg201221071831/providers/Microsoft.Compute/disks/ld201221071831 Name : ld201221071831 Type : Microsoft.Compute/disks Location : DBELocal Before you begin creating and managing a VM on your Azure Stack Edge Pro device SourceVirtualMachine : StorageProfile : Microsoft.Azure.Management.Compute.Models.ImageStorageProfile ProvisioningState : Succeeded- Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Compute/images/ig201221071831 + Id : 
/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/rg201221071831/providers/Microsoft.Compute/images/ig201221071831 Name : ig201221071831 Type : Microsoft.Compute/images Location : dbelocal Before you begin creating and managing a VM on your Azure Stack Edge Pro device Created a new Image - Using Vnet /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/providers/Microsoft.Network/virtualNetworks/ASEVNET + Using Vnet /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/ASERG/providers/Microsoft.Network/virtualNetworks/ASEVNET Creating a new Newtork Interface WARNING: The output object type of this cmdlet will be modified in a future release. Before you begin creating and managing a VM on your Azure Stack Edge Pro device { "Name": "ip201221071831", "Etag": "W/\"27785dd5-d12a-4d73-9495-ffad7847261a\"",- "Id": "/subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Network/networkInterfaces/nic201221071831/ipConfigurations/ip201221071831", + "Id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/rg201221071831/providers/Microsoft.Network/networkInterfaces/nic201221071831/ipConfigurations/ip201221071831", "PrivateIpAddress": "10.57.51.61", "PrivateIpAllocationMethod": "Dynamic", "Subnet": {- "Id": "/subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/providers/Microsoft.Network/virtualNetworks/ASEVNET/subnets/ASEVNETsubNet", + "Id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/ASERG/providers/Microsoft.Network/virtualNetworks/ASEVNET/subnets/ASEVNETsubNet", "ResourceNavigationLinks": [], "ServiceEndpoints": [] }, Before you begin creating and managing a VM on your Azure Stack Edge Pro device TagsTable : Name : nic201221071831 Etag : W/"27785dd5-d12a-4d73-9495-ffad7847261a"- Id : 
/subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Network/networkInterfaces/nic201221071831 + Id : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/rg201221071831/providers/Microsoft.Network/networkInterfaces/nic201221071831 Created Network Interface Before you begin creating and managing a VM on your Azure Stack Edge Pro device Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine = Set-AzureRmVMOSDisk -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Name osld201221071831 -Caching ReadWrite -CreateOption FromImage -Windows -StorageAccountType StandardLRS - Add-AzureRmVMNetworkInterface -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Id /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Network/networkInterfaces/nic201221071831.Id + Add-AzureRmVMNetworkInterface -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Id /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/rg201221071831/providers/Microsoft.Network/networkInterfaces/nic201221071831.Id - Set-AzureRmVMSourceImage -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Id /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Compute/images/ig201221071831 + Set-AzureRmVMSourceImage -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Id /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/rg201221071831/providers/Microsoft.Compute/images/ig201221071831 New-AzureRmVM -ResourceGroupName rg201221071831 -Location DBELocal -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Verbose WARNING: Since the VM is created using premium storage or managed disk, existing standard storage account, myasesa1, is used for boot |
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-templates.md | ResourceGroupName : myaserg1 Location : dbelocal ProvisioningState : Succeeded Tags :-ResourceId : /subscriptions/04a485ed-7a09-44ab-6671-66db7f111122/resourceGroups/myaserg1 +ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myaserg1 PS C:\WINDOWS\system32> ``` ResourceGroupName : myasegpurgvm Location : dbelocal ProvisioningState : Succeeded Tags :-ResourceId : /subscriptions/DDF9FC44-E990-42F8-9A91-5A6A5CC472DB/resourceGroups/myasegpurgvm +ResourceId : /subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/myasegpurgvm PS C:\windows\system32> ``` Deploy the template `CreateImage.json`. This template deploys the image resource Here's a sample output of a successfully created image. ```powershell- PS C:\WINDOWS\system32> login-AzureRMAccount -EnvironmentName aztest -TenantId c0257de7-538f-415c-993a-1b87a031879d + PS C:\WINDOWS\system32> login-AzureRMAccount -EnvironmentName aztest -TenantId aaaabbbb-0000-cccc-1111-dddd2222eeee Account SubscriptionName TenantId Environment - - -- --- EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d aztest + EdgeArmUser@localhost Default Provider Subscription aaaabbbb-0000-cccc-1111-dddd2222eeee aztest PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\CreateImage\CreateImage.json" PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\CreateImage\CreateImage.parameters.json" Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack Name : ASEVNET ResourceGroupName : ASERG Location : dbelocal- Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/providers/Microsoft + Id : /subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a/resourceGroups/ASERG/providers/Microsoft 
.Network/virtualNetworks/ASEVNET Etag : W/"990b306d-18b6-41ea-a456-b275efe21105" ResourceGuid : f8309d81-19e9-42fc-b4ed-d573f00e61ed Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack { "Name": "ASEVNETsubNet", "Etag": "W/\"990b306d-18b6-41ea-a456-b275efe21105\"",- "Id": "/subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/provider + "Id": "/subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a/resourceGroups/ASERG/provider s/Microsoft.Network/virtualNetworks/ASEVNET/subnets/ASEVNETsubNet", "AddressPrefix": "10.57.48.0/21", "IpConfigurations": [], Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack Name : ASEVNET ResourceGroupName : ASERG Location : dbelocal- Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/providers/Microsoft + Id : /subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a/resourceGroups/ASERG/providers/Microsoft .Network/virtualNetworks/ASEVNET Etag : W/"990b306d-18b6-41ea-a456-b275efe21105" ResourceGuid : f8309d81-19e9-42fc-b4ed-d573f00e61ed Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack { "Name": "ASEVNETsubNet", "Etag": "W/\"990b306d-18b6-41ea-a456-b275efe21105\"",- "Id": "/subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/provider + "Id": "/subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a/resourceGroups/ASERG/provider s/Microsoft.Network/virtualNetworks/ASEVNET/subnets/ASEVNETsubNet", "AddressPrefix": "10.57.48.0/21", "IpConfigurations": [], |
databox-online | Azure Stack Edge Gpu Deploy Vm Specialized Image Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-vm-specialized-image-powershell.md | This article used only one resource group to create all the VM resource. Deletin ResourceGroupName : myasevm1rg ResourceType : Microsoft.Compute/disks Location : dbelocal- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myasevm1rg/providers/Microsoft.Compute/disk + ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myasevm1rg/providers/Microsoft.Compute/disk s/myasemd1 Name : myasetestvm1 ResourceGroupName : myasevm1rg ResourceType : Microsoft.Compute/virtualMachines Location : dbelocal- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myasevm1rg/providers/Microsoft.Compute/virt + ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myasevm1rg/providers/Microsoft.Compute/virt ualMachines/myasetestvm1 Name : myasevmnic1 ResourceGroupName : myasevm1rg ResourceType : Microsoft.Network/networkInterfaces Location : dbelocal- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myasevm1rg/providers/Microsoft.Network/netw + ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myasevm1rg/providers/Microsoft.Network/netw orkInterfaces/myasevmnic1 Name : myasevmsa ResourceGroupName : myasevm1rg ResourceType : Microsoft.Storage/storageaccounts Location : dbelocal- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myasevm1rg/providers/Microsoft.Storage/stor + ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myasevm1rg/providers/Microsoft.Storage/stor ageaccounts/myasevmsa PS C:\WINDOWS\system32> This article used only one resource group to create all the VM resource. 
Deletin Location : dbelocal ProvisioningState : Succeeded Tags :- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/ase-image-resourcegroup + ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/ase-image-resourcegroup ResourceGroupName : ASERG Location : dbelocal ProvisioningState : Succeeded Tags :- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/ASERG + ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/ASERG ResourceGroupName : myaserg Location : dbelocal ProvisioningState : Succeeded Tags :- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg + ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myaserg PS C:\WINDOWS\system32> ``` |
databox-online | Azure Stack Edge Gpu Enable Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-enable-azure-monitor.md | Take the following steps to enable Container Insights on your workspace. "contentVersion": "1.0.0.0", "parameters": { "workspaceResourceId": {- "value": "/subscriptions/fa68082f-8ff7-4a25-95c7-ce9da541242f/resourcegroups/myaserg/providers/microsoft.operationalinsights/workspaces/myaseloganalyticsws" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/myaserg/providers/microsoft.operationalinsights/workspaces/myaseloganalyticsws" }, "workspaceRegion": { "value": "westus" Take the following steps to enable Container Insights on your workspace. VERBOSE: Authenticating to Azure ... VERBOSE: Building your Azure drive ... - PS /home/myaccount> az account set -s fa68082f-8ff7-4a25-95c7-ce9da541242f + PS /home/myaccount> az account set -s aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e PS /home/myaccount> ls clouddrive containerSolution.json PS /home/myaccount> ls clouddrive containerSolution.json containerSolutionParams.json PS /home/myaccount> az deployment group create --resource-group myaserg --name Testdeployment1 --template-file containerSolution.json --parameters containerSolutionParams.json {- Finished ..- "id": "/subscriptions/fa68082f-8ff7-4a25-95c7-ce9da541242f/resourceGroups/myaserg/providers/Microsoft.Resources/deployments/Testdeployment1", + "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myaserg/providers/Microsoft.Resources/deployments/Testdeployment1", "location": null, "name": "Testdeployment1", "properties": {- "correlationId": "3a9045fe-2de0-428c-b17b-057508a8c575", + "correlationId": "aaaa0000-bb11-2222-33cc-444444dddddd", "debugSetting": null, "dependencies": [], "duration": "PT11.1588316S", Take the following steps to enable Container Insights on your workspace. 
"onErrorDeployment": null, "outputResources": [ {- "id": "/subscriptions/fa68082f-8ff7-4a25-95c7-ce9da541242f/resourceGroups/myaserg/providers/Microsoft.OperationsManagement/solutions/ContainerInsights(myaseloganalyticsws)", + "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myaserg/providers/Microsoft.OperationsManagement/solutions/ContainerInsights(myaseloganalyticsws)", "resourceGroup": "myaserg" } ], Take the following steps to enable Container Insights on your workspace. }, "workspaceResourceId": { "type": "String",- "value": "/subscriptions/fa68082f-8ff7-4a25-95c7-ce9da541242f/resourcegroups/myaserg/providers/microsoft.operationalinsights/workspaces/myaseloganalyticsws" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/myaserg/providers/microsoft.operationalinsights/workspaces/myaseloganalyticsws" } }, "parametersLink": null, |
databox-online | Azure Stack Edge Gpu Iot Edge Api Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-iot-edge-api-update.md | If you're currently performing IoT Edge role management via API, you should use "ioTDeviceDetails": { "deviceId": "iotdevice", "ioTHostHub": "iothub.azure-devices.net",- "ioTHostHubId": "/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", + "ioTHostHubId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", "authentication": { "symmetricKey": { "connectionString": { If you're currently performing IoT Edge role management via API, you should use "ioTEdgeDeviceDetails": { "deviceId": "iotEdge", "ioTHostHub": "iothub.azure-devices.net",- "ioTHostHubId": "/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", + "ioTHostHubId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", "authentication": { "symmetricKey": { "connectionString": { If you're currently performing IoT Edge role management via API, you should use "ioTDeviceDetails": { "deviceId": "iotdevice", "ioTHostHub": "iothub.azure-devices.net",- "ioTHostHubId": "/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", + "ioTHostHubId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", "authentication": { "symmetricKey": { "connectionString": { If you're currently performing IoT Edge role management via API, you should use "ioTEdgeDeviceDetails": { "deviceId": "iotEdge", "ioTHostHub": "iothub.azure-devices.net",- "ioTHostHubId": 
"/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", + "ioTHostHubId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", "authentication": { "symmetricKey": { "connectionString": { If you're currently performing IoT Edge role management via API, you should use "ioTDeviceDetails": { "deviceId": "iotdevice", "ioTHostHub": "iothub.azure-devices.net",- "ioTHostHubId": "/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", + "ioTHostHubId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", "authentication": { "symmetricKey": {} } If you're currently performing IoT Edge role management via API, you should use "ioTEdgeDeviceDetails": { "deviceId": "iotEdge", "ioTHostHub": "iothub.azure-devices.net",- "ioTHostHubId": "/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", + "ioTHostHubId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", "authentication": { "symmetricKey": {} } If you're currently performing IoT Edge role management via API, you should use "shareMappings": [], "roleStatus": "Enabled" },- "id": "/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/testedgedevice/roles/IoTRole1", + "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/testedgedevice/roles/IoTRole1", "name": "IoTRole1", "type": "dataBoxEdgeDevices/roles" } If you're currently performing IoT Edge role management via API, you should use 
"ioTDeviceDetails": { "deviceId": "iotdevice", "ioTHostHub": "iothub.azure-devices.net",- "ioTHostHubId": "/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", + "ioTHostHubId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", "authentication": { "symmetricKey": {} } If you're currently performing IoT Edge role management via API, you should use "ioTEdgeDeviceDetails": { "deviceId": "iotEdge", "ioTHostHub": "iothub.azure-devices.net",- "ioTHostHubId": "/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", + "ioTHostHubId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/Microsoft.Devices/IotHubs/testrxiothub", "authentication": { "symmetricKey": {} } }, "version": "0.1.0-beta10" },- "id": "/subscriptions/4385cf00-2d3a-425a-832f-f4285b1c9dce/resourceGroups/GroupForEdgeAutomation/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/res1/roles/kubernetesRole/addons/iotName", + "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GroupForEdgeAutomation/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/res1/roles/kubernetesRole/addons/iotName", "name": " iotName", "type": "Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/addon", } If you're using the SDK, after you've installed the January 2021 update, you'll ```csharp var iotRoleStatus = "Enabled"; var iotHostPlatform = "Linux";-var id = $@"/subscriptions/546ec571-2d7f-426f-9cd8-0d695fa7edba/resourceGroups/resourceGroup/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/deviceName/roles/iotrole"; +var id = $@"/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/resourceGroup/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/deviceName/roles/iotrole"; var name = "iotrole"; var type = 
"Microsoft.DataBoxEdge/dataBoxEdgeDevices/role"; var iotRoleName = "iotrole"; DataBoxEdgeManagementClient.Roles.CreateOrUpdate(deviceName, iotRoleName, role, ```csharp var k8sRoleStatus = "Enabled"; var k8sHostPlatform = "Linux";-var k8sId = $@"/subscriptions/546ec571-2d7f-426f-9cd8-0d695fa7edba/resourceGroups/resourceGroup/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/deviceName/roles/KubernetesRole"; +var k8sId = $@"/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/resourceGroup/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/deviceName/roles/KubernetesRole"; var k8sRoleName = "KubernetesRole"; var k8sClusterVersion = "v1.17.3"; //Final values will be updated here around January 2021 var k8sVmProfile = "DS1_v2"; //Final values will be updated here around January 2021 var k8sRole = new KubernetesRole( ); DataBoxEdgeManagementClient.Roles.CreateOrUpdate(deviceName, k8sRoleName, k8sRole, resourceGroup); //Final usage will be updated here around January 2021 -var ioTId = $@"/subscriptions/546ec571-2d7f-426f-9cd8-0d695fa7edba/resourceGroups/resourceGroup/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/deviceName/roles/KubernetesRole/addons/iotaddon"; +var ioTId = $@"/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/resourceGroup/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/deviceName/roles/KubernetesRole/addons/iotaddon"; var ioTAddonName = "iotaddon"; var ioTAddonType = "Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/addons"; var addon = new IoTAddon( IoT Edge is an add-on under the Kubernetes role, which implies that you'll first ## Next steps - [Learn how to apply updates](azure-stack-edge-gpu-install-update.md)- |
databox-online | Azure Stack Edge Gpu Manage Virtual Machine Tags Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-tags-powershell.md | Before you can deploy a VM on your device via PowerShell, make sure that: PS C:\WINDOWS\system32> Set-AzResource -ResourceID $VirtualMachine.ID -Tag $tags -Force Name : myazvm- ResourceId : /subscriptions/d64617ad-6266-4b19-45af-81112d213322/resourceGroups/myas + ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myas eazrg/providers/Microsoft.Compute/virtualMachines/myazvm ResourceName : myazvm ResourceType : Microsoft.Compute/virtualMachines ResourceGroupName : myaseazrg Location : dbelocal- SubscriptionId : d64617ad-6266-4b19-45af-81112d213322 + SubscriptionId : aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e Tags : {Organization} Properties : @{vmId=568a264f-c5d3-477f-a16c-4c5549eafa8c; hardwareProfile=; storageProfile=; osProfile=; networkProfile=; diagnosticsProfile=; Before you can deploy a VM on your device via PowerShell, make sure that: PS C:\WINDOWS\system32> Set-AzureRmResource -ResourceID $VirtualMachine.ID -Tag $tags -Force Name : myasetestvm1- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg2/providers/Microsoft.Compute/virtua + ResourceId : /subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/myaserg2/providers/Microsoft.Compute/virtua lMachines/myasetestvm1 ResourceName : myasetestvm1 ResourceType : Microsoft.Compute/virtualMachines ResourceGroupName : myaserg2 Location : dbelocal- SubscriptionId : 992601bc-b03d-4d72-598e-d24eac232122 + SubscriptionId : bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f Tags : {Organization} Properties : @{vmId=958c0baa-e143-4d8a-82bd-9c6b1ba45e86; hardwareProfile=; storageProfile=; osProfile=; networkProfile=; provisioningState=Succeeded} You can view the tags applied to a specific virtual machine running on your devi PS 
C:\WINDOWS\system32> $VirtualMachine ResourceGroupName : myaserg2- Id : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg2/providers/Microsoft.Compute/virtua + Id : /subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/myaserg2/providers/Microsoft.Compute/virtua lMachines/myasetestvm1 VmId : 958c0baa-e143-4d8a-82bd-9c6b1ba45e86 Name : myasetestvm1 The preceding output indicates that out of the three tags, 2 VMs are tagged as ` PS C:\WINDOWS\system32> $VirtualMachine ResourceGroupName : myaseazrg- Id : /subscriptions/d64617ad-6266-4b19-45af-81112d213322/resourceGroups/mya + Id : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/mya seazrg/providers/Microsoft.Compute/virtualMachines/myazvm VmId : 568a264f-c5d3-477f-a16c-4c5549eafa8c Name : myazvm The preceding output indicates that out of the three tags, 2 VMs are tagged as ` PS C:\WINDOWS\system32> Set-AzResource -ResourceId $VirtualMachine.Id -Tag $tags -Force Name : myazvm- ResourceId : /subscriptions/d64617ad-6266-4b19-45af-81112d213322/resourceGroups/myas + ResourceId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myas eazrg/providers/Microsoft.Compute/virtualMachines/myazvm ResourceName : myazvm ResourceType : Microsoft.Compute/virtualMachines ResourceGroupName : myaseazrg Location : dbelocal- SubscriptionId : d64617ad-6266-4b19-45af-81112d213322 + SubscriptionId : aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e Tags : {} Properties : @{vmId=568a264f-c5d3-477f-a16c-4c5549eafa8c; hardwareProfile=; storageProfile=; osProfile=; networkProfile=; diagnosticsProfile=; The preceding output indicates that out of the three tags, 2 VMs are tagged as ` ```output PS C:\WINDOWS\system32> $VirtualMachine = Get-AzureRMVM -ResourceGroupName $VMRG -Name $VMName ResourceGroupName : myaserg1- Id : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg1/providers/Microsoft.Compute/virtualMachines/myaselinuxvm1 + Id : 
/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/myaserg1/providers/Microsoft.Compute/virtualMachines/myaselinuxvm1 VmId : 290b3fdd-0c99-4905-9ea1-cf93cd6f25ee Name : myaselinuxvm1 Type : Microsoft.Compute/virtualMachines The preceding output indicates that out of the three tags, 2 VMs are tagged as ` True PS C:\WINDOWS\system32> Set-AzureRMResource -ResourceID $VirtualMachine.ID -Tag $tags -Force Name : myaselinuxvm1- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGrou + ResourceId : /subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGrou ps/myaserg1/providers/Microsoft.Compute/virtualMachines/myaselin uxvm1 ResourceName : myaselinuxvm1 ResourceType : Microsoft.Compute/virtualMachines ResourceGroupName : myaserg1 Location : dbelocal- SubscriptionId : 992601bc-b03d-4d72-598e-d24eac232122 + SubscriptionId : bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f Tags : {} Properties : @{vmId=290b3fdd-0c99-4905-9ea1-cf93cd6f25ee; hardwareProfile=; storageProfile=; osProfile=; networkProfile=; |
databox-online | Azure Stack Edge Gpu Set Azure Resource Manager Password | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-set-azure-resource-manager-password.md | This article describes how to set your Azure Resource Manager password. You need ```azurepowershell- PS Azure:\> Set-AzContext -SubscriptionId 8eb87630-972c-4c36-a270-f330e6c063df + PS Azure:\> Set-AzContext -SubscriptionId aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e Name Account SubscriptionName Environment TenantId - - - -- -- |
databox-online | Azure Stack Edge Gpu Troubleshoot Azure Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-azure-resource-manager.md | The following errors may indicate an issue with your Azure Resource Manager conf |Add-AzureRmEnvironment: An error occurred while sending the request.<br>At line:1 char:1<br>+ Add-AzureRmEnvironment -Name Az3 -ARMEndpoint "https://management.dbe ...|Your device isn't reachable or isn't configured properly. Verify that the device and the client are configured correctly. For guidance, see the **General issues** row in this table.| |Service returned error. Check InnerException for more details: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. | There was an error in the creation and installation of the certificate on your device. For more information, see [Create and install certificates](azure-stack-edge-gpu-connect-resource-manager.md#step-2-create-and-install-certificates). | |Operation returned an invalid status code 'ServiceUnavailable' <br> Response status code does not indicate success: 503 (Service Unavailable). | This error could be the result of any of these conditions:<li>ArmStsPool is in stopped state.</li><li>Either Azure Resource Manager is down or the website for the Security Token Service is down.</li><li>The Azure Resource Manager cluster resource is down.</li><br>Restarting the device may fix the issue. 
To debug further, [collect a Support package](azure-stack-edge-gpu-troubleshoot.md#collect-support-package).|-|AADSTS50126: Invalid username or password.<br>Trace ID: 29317da9-52fc-4ba0-9778-446ae5625e5a<br>Correlation ID: 1b9752c4-8cbf-4304-a714-8a16527410f4<br>Timestamp: 2019-11-15 09:21:57Z: The remote server returned an error: (400) Bad Request.<br>At line:1 char:1 |This error could be the result of any of these conditions:<li>For an invalid username and password, make sure that you have [reset the Azure Storage Manager password from the Azure portal](./azure-stack-edge-gpu-set-azure-resource-manager-password.md), and then use the correct password.<li>For an invalid tenant ID, make sure the tenant ID is set to `c0257de7-538f-415c-993a-1b87a031879d`</li>| -|connect-AzureRmAccount: AADSTS90056: The resource is disabled or does not exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you are trying to access.<br>Trace ID: e19bdbc9-5dc8-4a74-85c3-ac6abdfda115<br>Correlation ID: 75c8ef5a-830e-48b5-b039-595a96488ff9 Timestamp: 2019-11-18 07:00:51Z: The remote server returned an error: (400) Bad |The Azure Resource Manager endpoints used in the `Add-AzureRmEnvironment` command are incorrect.<br>To find the Azure Resource Manager endpoints, check **Device endpoints** on the **Device** page of your device's local web UI.<br>For PowerShell instructions, see [Set Azure Resource Manager environment](azure-stack-edge-gpu-connect-resource-manager.md#step-7-set-azure-resource-manager-environment). 
| +|AADSTS50126: Invalid username or password.<br>Trace ID: 0000aaaa-11bb-cccc-dd22-eeeeee333333<br>Correlation ID: aaaa0000-bb11-2222-33cc-444444dddddd<br>Timestamp: 2019-11-15 09:21:57Z: The remote server returned an error: (400) Bad Request.<br>At line:1 char:1 |This error could be the result of any of these conditions:<li>For an invalid username and password, make sure that you have [reset the Azure Storage Manager password from the Azure portal](./azure-stack-edge-gpu-set-azure-resource-manager-password.md), and then use the correct password.<li>For an invalid tenant ID, make sure the tenant ID is set to `aaaabbbb-0000-cccc-1111-dddd2222eeee`</li>| +|connect-AzureRmAccount: AADSTS90056: The resource is disabled or does not exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you are trying to access.<br>Trace ID: 3333dddd-44ee-ffff-aa55-bbbbbb666666<br>Correlation ID: cccc2222-dd33-4444-55ee-666666ffffff Timestamp: 2019-11-18 07:00:51Z: The remote server returned an error: (400) Bad |The Azure Resource Manager endpoints used in the `Add-AzureRmEnvironment` command are incorrect.<br>To find the Azure Resource Manager endpoints, check **Device endpoints** on the **Device** page of your device's local web UI.<br>For PowerShell instructions, see [Set Azure Resource Manager environment](azure-stack-edge-gpu-connect-resource-manager.md#step-7-set-azure-resource-manager-environment). | |Unable to get endpoints from the cloud.<br>Ensure you have network connection. Error detail: HTTPSConnectionPool(host='management.dbg-of4k6suvm.microsoftdatabox.com', port=30005): Max retries exceeded with url: /metadata/endpoints?api-version=2015-01-01 (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)) |This error appears mostly in a Mac or Linux environment. 
The error occurs because a PEM format certificate wasn't added to the Python certificate store. | |
defender-for-iot | Detect Windows Endpoints Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/detect-windows-endpoints-script.md | The script described in this article is supported for the following Windows oper - Windows XP - Windows 7 - Windows 10-- Windows Server 2003/2008/2012+- Windows 11 +- Windows Server 2003/2008/2012/2016/2019/2022 ## Download and run the script |
devtest-labs | Devtest Lab Create Lab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-lab.md | This quickstart walks you through creating a lab in Azure DevTest Labs by using - **Artifacts storage account access**: You can configure whether the lab uses a User-assigned Managed Identity or a Shared Key to access the lab storage account. To use a User-assigned Managed Identity, select the appropriate managed identity from the list, otherwise select the Storage Account Shared Key option from the list. - **Public environments**: Leave **On** for access to the [DevTest Labs public environment repository](https://github.com/Azure/azure-devtestlab/tree/master/Environments). Set to **Off** to disable access. For more information, see [Enable public environments when you create a lab](devtest-lab-create-environment-from-arm.md#set-public-environment-access-for-new-lab). - :::image type="content" source="./media/devtest-lab-create-lab/portal-create-basic-settings.png" alt-text="Screenshot of the Basic Settings tab in the Create DevTest Labs form."::: + :::image type="content" source="./media/devtest-lab-create-lab/portal-create-basic-settings-managed-identity.png" alt-text="Screenshot of the Basic Settings tab in the Create DevTest Labs form."::: 1. Optionally, select each tab at the top of the page, and customize those settings - [**Auto-shutdown**](#auto-shutdown-tab) |
devtest-labs | Tutorial Create Custom Lab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-create-custom-lab.md | To create a lab in Azure DevTest Labs, follow these steps. |**Artifacts storage account access**|You can configure whether the lab uses a User-assigned Managed Identity or a Shared Key to access the lab storage account. To use a User-assigned Managed Identity, select the appropriate managed identity from the list, otherwise select the Storage Account Shared Key option from the list.| |**Public environments**|Leave **On** for access to the [DevTest Labs public environment repository](https://github.com/Azure/azure-devtestlab/tree/master/Environments). Set to **Off** to disable access. For more information, see [Enable public environments when you create a lab](devtest-lab-create-environment-from-arm.md#set-public-environment-access-for-new-lab).| - :::image type="content" source="./media/tutorial-create-custom-lab/create-custom-lab-blade.png" alt-text="Screenshot of the Basic Settings tab of the Create DevTest Labs form."::: + :::image type="content" source="./media/tutorial-create-custom-lab/portal-create-basic-settings-managed-identity.png" alt-text="Screenshot of the Basic Settings tab of the Create DevTest Labs form."::: 1. Optionally, select the [Auto-shutdown](devtest-lab-create-lab.md#auto-shutdown-tab), [Networking](devtest-lab-create-lab.md#networking-tab), or [Tags](devtest-lab-create-lab.md#tags-tab) tabs at the top of the page, and customize those settings. You can also apply or change most of these settings after lab creation. |
dns | Private Dns Resiliency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-resiliency.md | In this example: The example shown here doesn't illustrate a disaster recovery scenario, however the global nature of private zones also makes it possible to recreate VM1 in another VNet and assume its workload. > [!NOTE]-> Azure Private DNS is an availability zone foundational, zone-reduntant service. For more information, see [Azure services with availability zone support](/azure/reliability/availability-zones-service-support#azure-services-with-availability-zone-support). +> Azure Private DNS is a zone-redundant service. For more information, see [Azure services with availability zone support](/azure/reliability/availability-zones-service-support). ## Next steps - To learn more about Private DNS zones, see [Using Azure DNS for private domains](private-dns-overview.md). |
dns | Private Resolver Reliability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-reliability.md | For more information about availability zones, see [Regions and availability zon ### Prerequisites -For a list of regions that support availability zones, see [Azure regions with availability zones](../availability-zones/az-region.md#azure-regions-with-availability-zones). If your Azure DNS Private Resolver is located in one of the regions listed, you don't need to take any other action beyond provisioning the service. +For a list of regions that support availability zones, see [Azure regions with availability zones](../reliability/availability-zones-region-support.md). If your Azure DNS Private Resolver is located in one of the regions listed, you don't need to take any other action beyond provisioning the service. #### Enabling availability zones with private resolver |
energy-data-services | How To Deploy Gcz | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-deploy-gcz.md | zone_pivot_groups: gcz-aks-or-windows # Deploy Geospatial Consumption Zone -This guide shows you how to deploy the Geospatial Consumption Zone (GCZ) service integrated with Azure Data Manager for Energy (ADME). --> [!IMPORTANT] -> While the Geospatial Consumption Zone (GCZ) service is a graduated service in the OSDU Forum, it has limitations in terms of security and usage. We will deploy some additional services and policies to secure the environment, but encourage you to follow the service's development on the [OSDU Gitlab](https://community.opengroup.org/osdu/platform/consumption/geospatial/-/wikis/home). --## Description - The OSDU Geospatial Consumption Zone (GCZ) is a service that enables enhanced management and utilization of geospatial data. The GCZ streamlines the handling of location-based information. It abstracts away technical complexities, allowing software applications to access geospatial data without needing to deal with intricate details. By providing ready-to-use map services, the GCZ facilitates seamless integration with OSDU-enabled applications. +This guide shows you how to deploy the Geospatial Consumption Zone (GCZ) service integrated with Azure Data Manager for Energy (ADME). + ## Create an App Registration in Microsoft Entra ID -To deploy the GCZ, you need to create an App Registration in Microsoft Entra ID. The App Registration is to authenticate the GCZ APIs with Azure Data Manager for Energy to be able to generate the cache of the geospatial data. +To deploy the GCZ, you need to create an App Registration in Microsoft Entra ID. The App Registration is used to authenticate the GCZ APIs with Azure Data Manager for Energy to be able to generate the cache of the geospatial data. 1. 
See [Create an App Registration in Microsoft Entra ID](/azure/active-directory/develop/quickstart-register-app) for instructions on how to create an App Registration. 1. Grant the App Registration permission to read the relevant data in Azure Data Manager for Energy. See [How to add members to an OSDU group](./how-to-manage-users.md#add-members-to-an-osdu-group-in-a-data-partition) for further instructions. To deploy the GCZ, you need to create an App Registration in Microsoft Entra ID. ## Setup There are two main deployment options for the GCZ service:- **Azure Kubernetes Service (AKS)**: Deploy the GCZ service on an AKS cluster. This deployment option is recommended for production environments. It requires more setup, configuration, and maintenance. It also has some limitations in the provided container images.- **Windows**: Deploy the GCZ service on a Windows. This deployment option recommended for development and testing environments, as it's easier to set up and configure, and requires less maintenance.++ - **Azure Kubernetes Service (AKS)**: Deploy the GCZ service on an AKS cluster. This deployment option is recommended for production environments. It requires more effort to set up, configure, and maintain. +- **Windows**: Deploy the GCZ service on a Windows machine. This deployment option is recommended for development and testing environments. ::: zone pivot="gcz-aks" Through APIM we can add policies to secure, monitor, and manage the APIs. #### Download the GCZ OpenAPI specifications 1. Download the two OpenAPI specifications to your local computer.- - [GCZ Provider](https://github.com/microsoft/adme-samples/blob/main/services/gcz/gcz-openapi-provider.yaml) - - [GCZ Transformer](https://github.com/microsoft/adme-samples/blob/main/services/gcz/gcz-openapi-transformer.yaml) -1. Open each OpenAPI specification file in a text editor and replace the `servers` section with the corresponding IPs of the AKS GCZ Services' Load Balancer (External IP).
+ - [GCZ Provider](https://github.com/microsoft/adme-samples/blob/main/services/gcz/gcz-openapi-provider.yaml) + - [GCZ Transformer](https://github.com/microsoft/adme-samples/blob/main/services/gcz/gcz-openapi-transformer.yaml) +1. Open each OpenAPI specification file in a text editor and replace the `servers` section with the corresponding IPs of the AKS GCZ Services' Load Balancer. - ```yaml - servers: - - url: "http://<GCZ-Service-External-IP>/ignite-provider" - ``` + ```yaml + servers: + - url: "http://<GCZ-Service-LoadBalancer-IP>/ignite-provider" + ``` ##### [Azure portal](#tab/portal) Through APIM we can add policies to secure, monitor, and manage the APIs. ## Testing the GCZ service -1. Download the API client collection from the [OSDU GitLab](https://community.opengroup.org/osdu/platform/consumption/geospatial/-/blob/master/docs/test-assets/postman/Geospatial%20Consumption%20Zone%20-%20Provider%20Postman%20Tests.postman_collection.json?ref_type=heads) and import it into your API client of choice (for example, Postman). +1. Download the API client collection from the [OSDU GitLab](https://community.opengroup.org/osdu/platform/consumption/geospatial/-/blob/master/docs/test-assets/postman/Geospatial%20Consumption%20Zone%20-%20Provider%20Postman%20Tests.postman_collection.json?ref_type=heads) and import it into your API client of choice (for example, Bruno or Postman). +1. 1. Add the following environment variables to your API client:- - `PROVIDER_URL` - The URL to the GCZ Provider API. - - `AMBASSADOR_URL` - The URL to the GCZ Transformer API. - - `access_token` - A valid ADME access token. ++ - `PROVIDER_URL` - The URL to the GCZ Provider API. + - `AMBASSADOR_URL` - The URL to the GCZ Transformer API. + - `access_token` - A valid ADME access token. 1. To verify that the GCZ is working as expected, run the API calls in the collection.
## Next steps+ After you have a successful deployment of GCZ, you can: - Visualize your GCZ data using the GCZ WebApps from the [OSDU GitLab](https://community.opengroup.org/osdu/platform/consumption/geospatial/-/tree/master/docs/test-assets/webapps?ref_type=heads). You can also ingest data into your Azure Data Manager for Energy instance: - [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md). - [Tutorial on manifest ingestion](tutorial-manifest-ingestion.md).- + ## References - For information about Geospatial Consumption Zone, see [OSDU GitLab](https://community.opengroup.org/osdu/platform/consumption/geospatial/). |
external-attack-surface-management | Easm Copilot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/easm-copilot.md | # required metadata Title: Copilot for Security and Defender EASM -description: You can use Copilot for Security to get information about your EASM data. + Title: Microsoft Security Copilot in Defender EASM +description: You can use Microsoft Security Copilot to get information about your EASM data. Previously updated : 10/25/2023 Last updated : 11/20/2024 ms.localizationpriority: high -# Microsoft Copilot for Security and Defender EASM +# Microsoft Security Copilot in Defender EASM Microsoft Defender External Attack Surface Management (Defender EASM) continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. This visibility enables security and IT teams to identify unknowns, prioritize risk, eliminate threats, and extend vulnerability and exposure control beyond the firewall. Attack Surface Insights are generated by analyzing vulnerability and infrastructure data to showcase the key areas of concern for your organization. -Defender EASM's integration with Copilot for Security enables users to interact with Microsoft's discovered attack surfaces. These attack surfaces allow users to quickly understand their externally facing infrastructure and relevant, critical risks to their organization. They provide insight into specific areas of risk, including vulnerabilities, compliance, and security hygiene. For more information about Copilot for Security, go to [What is Microsoft Copilot for Security](/security-copilot/microsoft-security-copilot). For more information on the embedded Copilot for Security experience, refer to [Query your attack surface with Defender EASM using Microsoft Copilot in Azure](/azure/copilot/query-attack-surface). 
+Defender EASM's integration with Microsoft Security Copilot enables users to interact with Microsoft's discovered attack surfaces. These attack surfaces allow users to quickly understand their externally facing infrastructure and relevant, critical risks to their organization. They provide insight into specific areas of risk, including vulnerabilities, compliance, and security hygiene. For more information about Microsoft Security Copilot, go to [What is Microsoft Security Copilot](/security-copilot/microsoft-security-copilot). For more information on the embedded Microsoft Security Copilot experience, refer to [Query your attack surface with Defender EASM using Microsoft Copilot in Azure](/azure/copilot/query-attack-surface). -**Copilot for Security integrates with Defender EASM**. +## Know before you begin -Copilot for Security can surface insights from Defender EASM about an organization's attack surface. You can use the system features built into Copilot for Security, and use prompts to get more information. This information can help you understand your security posture and mitigate vulnerabilities. +If you're new to Microsoft Security Copilot, you should familiarize yourself with it by reading these articles: +- [What is Microsoft Security Copilot?](/security-copilot/microsoft-security-copilot) +- [Microsoft Security Copilot experiences](/security-copilot/experiences-security-copilot) +- [Get started with Microsoft Security Copilot](/security-copilot/get-started-security-copilot) +- [Understand authentication in Microsoft Security Copilot](/security-copilot/authentication) +- [Prompting in Microsoft Security Copilot](/security-copilot/prompting-security-copilot) -This article introduces you to Copilot for Security and includes sample prompts that can help Defender EASM users. 
+## Microsoft Security Copilot integration in Defender EASM -## Connect Copilot to Defender EASM +Microsoft Security Copilot can surface insights from Defender EASM about an organization's attack surface. You can use the system features built into Microsoft Security Copilot, and use prompts to get more information. This information can help you understand your security posture and mitigate vulnerabilities. ++This article introduces you to Microsoft Security Copilot and includes sample prompts that can help Defender EASM users. ++## Key features ++The EASM Security Copilot integration can help you with: ++- Providing a snapshot of your external attack surface and generating insights into potential risks ++ This allows users to get a quick view of their external attack surface by analyzing internet-available information combined with Microsoft's proprietary discovery algorithm. It provides an easy-to-understand natural language explanation of the organization's externally facing assets, such as hosts, domains, webpages, and IP addresses, and highlights the critical risks associated with them. ++- Prioritizing remediation efforts based on asset risk and CVEs ++ EASM allows security teams to prioritize their remediation efforts by understanding which assets and Common Vulnerabilities and Exposures (CVEs) pose the greatest risk in their environment. It does this by analyzing vulnerability and infrastructure data to showcase key areas of concern, providing a natural language explanation of the risks and recommended actions. ++- Leveraging Security Copilot to surface insights ++ Users can leverage Security Copilot to ask questions in natural language to extract insights from Defender EASM about their organization's attack surface. This includes querying details such as the number of insecure SSL certificates, ports detected, and specific vulnerabilities impacting the attack surface. 
++- Expediting Attack Surface Curation ++ Utilize Security Copilot to curate your attack surface with labels, external IDs, and state modifications for a set of assets. This process speeds up curation, allowing you to organize your inventory faster and more efficiently. +++## Enable the Microsoft Security Copilot integration in Defender EASM ### Prerequisites -* Access to Copilot for Security, with permissions to activate new connections. +* Access to Microsoft Security Copilot, with permissions to activate new connections. ### Copilot for Security connection -1. Access [Copilot for Security](https://securitycopilot.microsoft.com/) and ensure you're authenticated. +1. Access [Microsoft Security Copilot](https://securitycopilot.microsoft.com/) and ensure you're authenticated. 1. Select the plugins icon on the upper-right side of the prompt input bar. ![Screenshot that shows the plugins icon.](media/copilot-2.png) This article introduces you to Copilot for Security and includes sample prompts ![Screenshot that shows Defender EASM activated in Copilot.](media/copilot-4.png) -4. If you would like Copilot for Security to pull data from your Microsoft Defender External Attack Surface Resource, click on the gear to open the plugin settings, and fill out the fields from your resource's "Essentials" section on the Overview blade. +4. If you would like Microsoft Security Copilot to pull data from your Microsoft Defender External Attack Surface Resource, click on the gear to open the plugin settings, and fill out the fields from your resource's "Essentials" section on the Overview blade. [ ![Screenshot that shows the Defender EASM fields that must be configured in Copilot.](media/copilot-6.png) ](media/copilot-6.png#lightbox) This article introduces you to Copilot for Security and includes sample prompts -## Getting started +## Sample Defender EASM prompts -Copilot for Security operates primarily with natural language prompts. 
When querying information from Defender EASM, you submit a prompt that guides Copilot for Security to select the Defender EASM plugin and invoke the relevant capability. +Microsoft Security Copilot operates primarily with natural language prompts. When querying information from Defender EASM, you submit a prompt that guides Microsoft Security Copilot to select the Defender EASM plugin and invoke the relevant capability. For success with Copilot prompts, we recommend the following: - Ensure that you reference the company name in your first prompt. Unless otherwise specified, all future prompts will provide data about the initially specified company. For success with Copilot prompts, we recommend the following: - Experiment with different prompts and variations to see what works best for your use case. Chat AI models vary, so iterate and refine your prompts based on the results you receive. -- Copilot for Security saves your prompt sessions. To see the previous sessions, in Copilot for Security, go to the menu > **My sessions**. +- Microsoft Security Copilot saves your prompt sessions. To see the previous sessions, in Microsoft Security Copilot, go to the menu > **My sessions**. - For a walkthrough on Copilot for Security, including the pin and share feature, go to [Navigating Microsoft Copilot for Security](/security-copilot/navigating-security-copilot). + For a walkthrough on Microsoft Security Copilot, including the pin and share feature, go to [Navigating Microsoft Security Copilot](/security-copilot/navigating-security-copilot). -For more information on writing Copilot for Security prompts, go to [Microsoft Copilot for Security prompting tips](/security-copilot/prompting-tips). +For more information on writing Microsoft Security Copilot prompts, go to [Microsoft Security Copilot prompting tips](/security-copilot/prompting-tips). 
-## Plugin capabilities reference +### Plugin capabilities reference | Capability | Description | Inputs | Behaviors | | -- | - | | -- | For more information on writing Copilot for Security prompts, go to [Microsoft C -## Switching between resource and company data +### Switching between resource and company data -Even though we have added resource integration for our skills, we still support pulling data from prebuilt attack surfaces for specific companies. To improve Copilot for Security's accuracy in determining when a customer wants to pull from their attack surface or a prebuilt, company attack surface, we recommend using "my", "my attack surface", etc. to convey they want to use their resource and "their", "{specific company name}", etc. to convey they want a prebuilt attack surface. While this does improve the experience in a single session, we strongly recommend having two separate sessions to avoid any confusion. +Even though we have added resource integration for our skills, we still support pulling data from prebuilt attack surfaces for specific companies. To improve Security Copilot's accuracy in determining when a customer wants to pull from their attack surface or a prebuilt, company attack surface, we recommend using "my", "my attack surface", etc. to convey they want to use their resource and "their", "{specific company name}", etc. to convey they want a prebuilt attack surface. While this does improve the experience in a single session, we strongly recommend having two separate sessions to avoid any confusion. ## Provide feedback -Your feedback on Copilot for Security generally, and the Defender EASM plugin specifically, is vital to guide current and planned development on the product. The optimal way to provide this feedback is directly in the product, using the feedback buttons at the bottom of each completed prompt. Select "Looks right," "Needs improvement" or "Inappropriate". 
We recommend "Looks right" when the result matches expectations, "Needs improvement" when it doesn't, and "Inappropriate" when the result is harmful in some way. +Your feedback on Microsoft Security Copilot generally, and the Defender EASM plugin specifically, is vital to guide current and planned development on the product. The optimal way to provide this feedback is directly in the product, using the feedback buttons at the bottom of each completed prompt. Select "Looks right," "Needs improvement" or "Inappropriate". We recommend "Looks right" when the result matches expectations, "Needs improvement" when it doesn't, and "Inappropriate" when the result is harmful in some way. -Whenever possible, and especially when the result is "Needs improvement," please write a few words explaining what we can do to improve the outcome. This also applies when you expected Copilot for Security to invoke the Defender EASM plugin, but another plugin was selected instead. +Whenever possible, and especially when the result is "Needs improvement," please write a few words explaining what we can do to improve the outcome. This also applies when you expected Microsoft Security Copilot to invoke the Defender EASM plugin, but another plugin was selected instead. -## Data processing and privacy +## Privacy and data security in Microsoft Security Copilot -When you interact with Copilot for Security to get Defender EASM data, Copilot pulls that data from Defender EASM. The prompts, the data that's retrieved, and the output shown in the prompt results is processed and stored within the Copilot for Security service. +When you interact with Microsoft Security Copilot to get Defender EASM data, Copilot pulls that data from Defender EASM. The prompts, the data that's retrieved, and the output shown in the prompt results is processed and stored within the Microsoft Security Copilot service. 
-For more information about data privacy in Copilot for Security, go to [Privacy and data security in Microsoft Copilot for Security](/security-copilot/privacy-data-security). +For more information about data privacy in Microsoft Security Copilot, go to [Privacy and data security in Microsoft Security Copilot](/security-copilot/privacy-data-security). ## Related articles -- [What is Microsoft Copilot for Security?](/security-copilot/microsoft-security-copilot)-- [Privacy and data security in Microsoft Copilot for Security](/security-copilot/privacy-data-security)+- [What is Microsoft Security Copilot?](/security-copilot/microsoft-security-copilot) +- [Privacy and data security in Microsoft Security Copilot](/security-copilot/privacy-data-security) - [Query your attack surface with Defender EASM using Microsoft Copilot in Azure](/azure/copilot/query-attack-surface) |
firewall | Basic Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/basic-features.md | Azure Firewall can be configured during deployment to span multiple Availability There's no extra cost for a firewall deployed in more than one Availability Zone. However, there are added costs for inbound and outbound data transfers associated with Availability Zones. For more information, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/). -Azure Firewall Availability Zones are available in regions that support Availability Zones. For more information, see [Regions that support Availability Zones in Azure](../reliability/availability-zones-service-support.md). +Azure Firewall Availability Zones are available in regions that support availability zones. For more information, see [Regions with availability zone support](../reliability/availability-zones-region-support.md). ## Application FQDN filtering rules |
firewall | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md | There's no extra cost for a firewall deployed in more than one Availability Zone As the firewall scales, it creates instances in the zones it's in. So, if the firewall is in Zone 1 only, new instances are created in Zone 1. If the firewall is in all three zones, then it creates instances across the three zones as it scales. -Azure Firewall Availability Zones are available in regions that support Availability Zones. For more information, see [Regions that support Availability Zones in Azure](../availability-zones/az-region.md). +Azure Firewall Availability Zones are available in regions that support Availability Zones. For more information, see [Azure regions with availability zones](../reliability/availability-zones-region-support.md). > [!NOTE] > Availability Zones can only be configured during deployment. You can't configure an existing firewall to include Availability Zones. |
iot-operations | Howto Configure Adlsv2 Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-adlsv2-endpoint.md | If you select **Create new**, enter the following settings: | Set activation date | If turned on, the date when the secret becomes active. | | Set expiration date | If turned on, the date when the secret expires. | -To learn more about secrets, see [Create and manage secrets in Azure IoT Operations Preview](../secure-iot-ops/howto-manage-secrets.md). +To learn more about secrets, see [Create and manage secrets in Azure IoT Operations](../secure-iot-ops/howto-manage-secrets.md). # [Bicep](#tab/bicep) |
iot-operations | Howto Configure Kafka Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md | After you select **Add reference**, if you select **Create new**, enter the foll | Set activation date | If turned on, the date when the secret becomes active. | | Set expiration date | If turned on, the date when the secret expires. | -To learn more about secrets, see [Create and manage secrets in Azure IoT Operations Preview](../secure-iot-ops/howto-manage-secrets.md). +To learn more about secrets, see [Create and manage secrets in Azure IoT Operations](../secure-iot-ops/howto-manage-secrets.md). # [Bicep](#tab/bicep) |
iot-operations | Howto Configure Mqtt Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-mqtt-endpoint.md | If you select **Create new**, enter the following settings: | Set activation date | If turned on, the date when the secret becomes active. | | Set expiration date | If turned on, the date when the secret expires. | -To learn more about secrets, see [Create and manage secrets in Azure IoT Operations Preview](../secure-iot-ops/howto-manage-secrets.md). +To learn more about secrets, see [Create and manage secrets in Azure IoT Operations](../secure-iot-ops/howto-manage-secrets.md). # [Bicep](#tab/bicep) |
iot-operations | Howto Enable Secure Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-enable-secure-settings.md | This article provides instructions for enabling secure settings if you didn't do ## Prerequisites -* An Azure IoT Operations instance deployed with test settings. For example, follow the instructions in [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces](../get-started-end-to-end-sample/quickstart-deploy.md). +* An Azure IoT Operations instance deployed with test settings. For example, follow the instructions in [Quickstart: Run Azure IoT Operations in GitHub Codespaces](../get-started-end-to-end-sample/quickstart-deploy.md). * Azure CLI installed on your development machine. This scenario requires Azure CLI version 2.64.0 or later. Use `az --version` to check your version and `az upgrade` to update, if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli). To set up secrets management: -Now that secret synchronization setup is complete, you can refer to [Manage secrets for your Azure IoT Operations Preview deployment](./howto-manage-secrets.md) to learn how to use secrets with Azure IoT Operations. +Now that secret synchronization setup is complete, you can refer to [Manage secrets for your Azure IoT Operations deployment](./howto-manage-secrets.md) to learn how to use secrets with Azure IoT Operations. ## Set up a user-assigned managed identity for cloud connections |
iot-operations | Howto Prepare Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md | Connect your cluster to Azure Arc so that it can be managed remotely. To prevent unplanned updates to Azure Arc and the system Arc extensions that Azure IoT Operations uses as dependencies, this command disables auto-upgrade. Instead, [manually upgrade agents](/azure/azure-arc/kubernetes/agent-upgrade#manually-upgrade-agents) as needed. + >[!IMPORTANT] + >If your environment uses a proxy server or Azure Arc Gateway, modify the `az connectedk8s connect` command with your proxy information: + > + >1. Follow the instructions in either [Connect using an outbound proxy server](/azure/azure-arc/kubernetes/quickstart-connect-cluster#connect-using-an-outbound-proxy-server) or [Onboard Kubernetes clusters to Azure Arc with Azure Arc Gateway](/azure/azure-arc/kubernetes/arc-gateway-simplify-networking#onboard-kubernetes-clusters-to-azure-arc-with-your-arc-gateway-resource). + >1. Add `169.254.169.254` to the `--proxy-skip-range` parameter of the `az connectedk8s connect` command. [Azure Device Registry](../discover-manage-assets/overview-manage-assets.md#store-assets-as-azure-resources-in-a-centralized-registry) uses this local endpoint to get access tokens for authorization. + > + >Azure IoT Operations doesn't support proxy servers that require a trusted certificate. + 1. Get the cluster's issuer URL. ```azurecli |
iot-operations | Overview Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/overview-deploy.md | To install Azure IoT Operations, have the following hardware requirements availa | Spec | Minimum | Recommended | |||-|-| RAM | 16-GB | 32-GB | +| Hardware memory capacity (RAM) | 16 GB | 32 GB | +| Available memory for Azure IoT Operations (RAM) | 10 GB | Depends on usage | | CPU | 4 vCPUs | 8 vCPUs | ## Choose your features |
iot-operations | Howto Use Media Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-use-media-connector.md | The media connector: ## Prerequisites -A deployed instance of Azure IoT Operations. If you don't already have an instance, see [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md). +A deployed instance of Azure IoT Operations. If you don't already have an instance, see [Quickstart: Run Azure IoT Operations in GitHub Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md). ## Deploy the media server |
iot-operations | Howto Use Onvif Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-use-onvif-connector.md | Last updated 11/06/2024 # Configure the connector for ONVIF (preview) -In Azure IoT Operations, the connector for ONVIF (preview) enables you to control an ONVIF compliant camera that's connected to your Azure IoT Operations Preview cluster. This article explains how to configure and use the connector for ONVIF to perform tasks such as reading and writing properties to control a camera or discovering the media streams a camera supports. +In Azure IoT Operations, the connector for ONVIF (preview) enables you to control an ONVIF compliant camera that's connected to your Azure IoT Operations cluster. This article explains how to configure and use the connector for ONVIF to perform tasks such as reading and writing properties to control a camera or discovering the media streams a camera supports. ## Prerequisites -A deployed instance of Azure IoT Operations. If you don't already have an instance, see [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md). +A deployed instance of Azure IoT Operations. If you don't already have an instance, see [Quickstart: Run Azure IoT Operations in GitHub Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md). An ONVIF compliant camera connected to your Azure IoT Operations cluster. If you don't have a camera, you can use a simulator such as the [ONVIF RTSP simulator](https://arcjumpstart.com/simulate_an_onvif_camera_with_rtsp). |
iot-operations | Tutorial Add Assets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/end-to-end-tutorials/tutorial-add-assets.md | This configuration deploys a new asset called `thermostat` to the cluster. You c kubectl get assets -n azure-iot-operations ``` +## View resources in the Azure portal ++To view the asset endpoint and asset you created in the Azure portal, go to the resource group that contains your Azure IoT Operations instance. You can see the thermostat asset in the **Azure IoT Operations** resource group. If you select **Show hidden types**, you can also see the asset endpoint: +++The portal enables you to view the asset details. Select **JSON View** for more details: ++ ## Verify data is flowing [!INCLUDE [deploy-mqttui](../includes/deploy-mqttui.md)] |
iot-operations | Howto Configure Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-authentication.md | The following rules apply to the relationship between BrokerListener and *Broker * Each BrokerListener can have multiple ports. Each port can be linked to a *BrokerAuthentication* resource. * Each *BrokerAuthentication* can support multiple authentication methods at once.-* Ports that do not link a *BrokerAuthentication* resource have authentication disabled. +* Ports that don't link a *BrokerAuthentication* resource have authentication disabled. To link a BrokerListener port to a *BrokerAuthentication* resource, specify the `authenticationRef` field in the `ports` setting of the BrokerListener resource. To learn more, see [BrokerListener resource](./howto-configure-brokerlistener.md). ## Default BrokerAuthentication resource -Azure IoT Operations deploys a default *BrokerAuthentication* resource named `default` linked with the *default* listener in the `azure-iot-operations` namespace. It's configured to only use [Kubernetes Service Account Tokens (SATs)](#kubernetes-service-account-tokens) for authentication. +Azure IoT Operations deploys a default *BrokerAuthentication* resource named `default` linked with the *default* listener in the `azure-iot-operations` namespace. It only uses [Kubernetes Service Account Tokens (SATs)](#kubernetes-service-account-tokens) for authentication. > [!IMPORTANT] > The service account token (SAT) authentication method in the default *BrokerAuthentication* resource is required for components in the Azure IoT Operations to function correctly. Avoid updating or deleting the default *BrokerAuthentication* resource. For more information about enabling secure settings by configuring an Azure Key ## X.509 -In X.509 authentication, MQTT broker uses a trusted CA certificate to validate client certificates. 
Clients present a certificate rooted in this CA for MQTT broker to authenticate them. Both EC and RSA keys are supported, but all certificates in the chain must use the same key algorithm. Since X.509 relies on TLS client certificates, TLS must be enabled for ports using X.509 authentication. +With X.509 authentication, the MQTT broker uses a **trusted CA certificate** to validate client certificates. This trusted CA can be a root or intermediate CA. The broker checks the client certificate chain against the trusted CA certificate. If the chain is valid, the client is authenticated. -To import a root certificate that can be used to validate client certificates, store the certificate PEM in a *ConfigMap*. For example: +To use X.509 authentication with a trusted CA certificate, the following requirements must be met: ++- **TLS**: Since X.509 relies on TLS client certificates, [TLS must be enabled for ports using X.509 authentication](./howto-configure-brokerlistener.md). +- **Key algorithms**: Both EC and RSA keys are supported, but all certificates in the chain must use the same key algorithm. +- **Format**: The CA certificate must be in PEM format. ++> [!TIP] +> PEM format is a common format for certificates and keys. PEM files are base64-encoded ASCII files with headers like `--BEGIN CERTIFICATE--` and `--BEGIN EC PRIVATE KEY--`. +> +> If you have a certificate in another format, you can convert it to PEM using OpenSSL. For more information, see [How to convert a certificate into the appropriate format](https://knowledge.digicert.com/solution/how-to-convert-a-certificate-into-the-appropriate-format). ++### Get a trusted CA certificate ++In a production setup, the CA certificate is provided by an organization's public key infrastructure (PKI) or a public certificate authority. ++For testing, create a self-signed CA certificate with OpenSSL. 
For example, run the following command to generate a self-signed CA certificate with an RSA key, a distinguished name `CN=Contoso Root CA Cert`, and a validity of 365 days: ```bash-kubectl create configmap client-ca --from-file=client_ca.pem -n azure-iot-operations +openssl req -x509 -newkey rsa:4096 -keyout ca-key.pem -out ca.pem -days 365 -nodes -subj "/CN=Contoso Root CA Cert" ``` -In this example, the CA certificate is imported under the key `client_ca.pem`. MQTT broker trusts all CA certificates in the ConfigMap, so the name of the key can be anything. ++The same command with [Step CLI](https://smallstep.com/docs/step-cli/installation/), which is a convenient tool for managing certificates, is: ++```bash +step certificate create "Contoso Root CA Cert" ca.pem ca-key.pem --profile root-ca --kty RSA --size 4096 --no-password --insecure --not-after 8760h +``` ++These commands create a CA certificate `ca.pem` and a private key `ca-key.pem` in PEM format. The CA certificate `ca.pem` can be imported into the MQTT broker for X.509 authentication. ++### Import a trusted CA certificate ++To get started with X.509 authentication, import the trusted CA certificate into a ConfigMap in the `azure-iot-operations` namespace. To import a trusted CA certificate `ca.pem` into a ConfigMap named `client-ca`, run: ++```bash +kubectl create configmap client-ca --from-file=ca.pem -n azure-iot-operations +``` ++In this example, the CA certificate is imported under the key `ca.pem`. MQTT broker trusts all CA certificates in the ConfigMap, so the name of the key can be anything. To check that the root CA certificate is properly imported, run `kubectl describe configmap`. The result shows the same base64 encoding of the PEM certificate file. Namespace: azure-iot-operations Data ====-client_ca.pem: +ca.pem: - --BEGIN CERTIFICATE---<Certificate> +MIIFDjCCAvagAwIBAgIRAKQWo1+S13GTwqZSUYLPemswDQYJKoZIhvcNAQELBQAw +...
--END CERTIFICATE-- BinaryData ==== ``` -Once the trusted CA certificate is imported, enable X.509 client authentication by adding it as one of the authentication methods in a *BrokerAuthentication* resource linked to a TLS-enabled listener port. --### Certificate attributes for authorization +### Configure X.509 authentication method -X.509 attributes can be specified in the *BrokerAuthentication* resource for authorizing clients based on their certificate properties. The attributes are defined in the `authorizationAttributes` field. +Once the trusted CA certificate is imported, enable X.509 client authentication by adding it as an authentication method in a *BrokerAuthentication* resource. Ensure this resource is linked to a TLS-enabled listener port. # [Portal](#tab/portal) X.509 attributes can be specified in the *BrokerAuthentication* resource for aut 1. Choose an existing authentication policy or create a new one. 1. Add a new method by selecting **Add method**. 1. Choose the method type **X.509** from the dropdown list then select **Add details** to configure the method.+1. In the **X.509 authentication details** pane, specify the trusted CA certificate ConfigMap name using JSON format. ++ ```json + { + "trustedClientCaCert": "<TRUSTED_CA_CONFIGMAP>" + } + ``` + + Replace `<TRUSTED_CA_CONFIGMAP>` with the name of the ConfigMap that contains the trusted CA certificate. For example, `"trustedClientCaCert": "client-ca"`. ++ :::image type="content" source="media/howto-configure-authentication/x509-method.png" alt-text="Screenshot using Azure portal to set MQTT broker X.509 authentication method." lightbox="media/howto-configure-authentication/x509-method.png"::: - :::image type="content" source="media/howto-configure-authentication/x509-method.png" alt-text="Screenshot using Azure portal to set MQTT broker X.509 authentication method." lightbox="media/howto-configure-authentication/x509-method.png"::: +1. 
Optionally, add authorization attributes for clients using X.509 certificates. To learn more, see [Certificate attributes for authorization](#optional-certificate-attributes-for-authorization). +1. Select **Apply** to save the changes. # [Bicep](#tab/bicep) X.509 attributes can be specified in the *BrokerAuthentication* resource for aut param aioInstanceName string = '<AIO_INSTANCE_NAME>' param customLocationName string = '<CUSTOM_LOCATION_NAME>' param policyName string = '<POLICY_NAME>'+param trustedCaConfigMap string = '<TRUSTED_CA_CONFIGMAP>' resource aioInstance 'Microsoft.IoTOperations/instances@2024-11-01' existing = { name: aioInstanceName resource myBrokerAuthentication 'Microsoft.IoTOperations/instances/brokers/authe { method: 'X509' x509Settings: {- authorizationAttributes: { - root: { - subject: 'CN = Contoso Root CA Cert, OU = Engineering, C = US' - attributes: { - organization: 'contoso' - } - } - intermediate: { - subject: 'CN = Contoso Intermediate CA' - attributes: { - city: 'seattle' - foo: 'bar' - } - } - smartfan: { - subject: 'CN = smart-fan' - attributes: { - building: '17' - } - } - } + trustedClientCaCert: trustedCaConfigMap + // authorizationAttributes: { + //// Optional authorization attributes + //// See the next section for more information + // } } } ] resource myBrokerAuthentication 'Microsoft.IoTOperations/instances/brokers/authe ``` +Replace `<TRUSTED_CA_CONFIGMAP>` with the name of the ConfigMap that contains the trusted CA certificate. For example, `client-ca`. + Deploy the Bicep file using Azure CLI. 
```azurecli spec: authenticationMethods: - method: X509 x509Settings:- authorizationAttributes: - root: - subject = "CN = Contoso Root CA Cert, OU = Engineering, C = US" - attributes: - organization = contoso - intermediate: - subject = "CN = Contoso Intermediate CA" - attributes: - city = seattle - foo = bar - smart-fan: - subject = "CN = smart-fan" - attributes: - building = 17 + trustedClientCaCert: <TRUSTED_CA_CONFIGMAP> + # authorizationAttributes: + ## Optional authorization attributes + ## See the next section for more information +``` ++Replace `<TRUSTED_CA_CONFIGMAP>` with the name of the ConfigMap that contains the trusted CA certificate. For example, `client-ca`. ++++#### Optional: Certificate attributes for authorization ++X.509 attributes can be specified in the *BrokerAuthentication* resource for authorizing clients based on their certificate properties. The attributes are defined in the `authorizationAttributes` field. ++For example: ++# [Portal](#tab/portal) ++In the Azure portal, when configuring the X.509 authentication method, add the authorization attributes in the **X.509 authentication details** pane in JSON format. 
++```json +{ + "trustedClientCaCert": "<TRUSTED_CA_CONFIGMAP>", + "authorizationAttributes": { + "root": { + "subject": "CN = Contoso Root CA Cert, OU = Engineering, C = US", + "attributes": { + "organization": "contoso" + } + }, + "intermediate": { + "subject": "CN = Contoso Intermediate CA", + "attributes": { + "city": "seattle", + "foo": "bar" + } + }, + "smartfan": { + "subject": "CN = smart-fan", + "attributes": { + "building": "17" + } + } + } +} +``` + ++# [Bicep](#tab/bicep) ++```bicep +x509Settings: { + trustedClientCaCert: '<TRUSTED_CA_CONFIGMAP>' + authorizationAttributes: { + root: { + subject: 'CN = Contoso Root CA Cert, OU = Engineering, C = US' + attributes: { + organization: 'contoso' + } + } + intermediate: { + subject: 'CN = Contoso Intermediate CA' + attributes: { + city: 'seattle' + foo: 'bar' + } + } + smartfan: { + subject: 'CN = smart-fan' + attributes: { + building: '17' + } + } + } +} +``` ++# [Kubernetes (preview)](#tab/kubernetes) ++```yaml +x509Settings: + trustedClientCaCert: <TRUSTED_CA_CONFIGMAP> + authorizationAttributes: + root: + subject = "CN = Contoso Root CA Cert, OU = Engineering, C = US" + attributes: + organization = contoso + intermediate: + subject = "CN = Contoso Intermediate CA" + attributes: + city = seattle + foo = bar + smart-fan: + subject = "CN = smart-fan" + attributes: + building = 17 ``` In this example, every client that has a certificate issued by the root CA with The matching for attributes always starts from the leaf client certificate and then goes along the chain. The attribute assignment stops after the first match. In previous example, even if `smart-fan` has the intermediate certificate `CN = Contoso Intermediate CA`, it doesn't get the associated attributes. -Authorization rules can be applied to clients using X.509 certificates with these attributes. To learn more, see [Authorize clients that use X.509 authentication](./howto-configure-authorization.md). 
+Authorization rules can be applied to clients using X.509 certificates with these attributes. To learn more, see [Authorize clients that use X.509 authentication](./howto-configure-authorization.md#authorize-clients-that-use-x509-authentication). ++### Enable X.509 authentication for a listener port ++After importing the trusted CA certificate and configuring the *BrokerAuthentication* resource, link it to a TLS-enabled listener port. For more details, see [Enable TLS manual certificate management for a port](./howto-configure-brokerlistener.md#enable-tls-manual-certificate-management-for-a-port) and [Enable TLS automatic certificate management for a port](./howto-configure-brokerlistener.md#enable-tls-automatic-certificate-management-for-a-port). + ### Connect mosquitto client to MQTT broker with X.509 client certificate The following are the steps for client authentication: 1. The TLS channel is open, but the client authentication or authorization isn't finished yet. 1. The client then sends a CONNECT packet to MQTT broker. 1. The CONNECT packet is routed to a frontend again.-1. The frontend collects all credentials the client presented so far, like username and password fields, authentication data from the CONNECT packet, and the client certificate chain presented during the TLS handshake. +1. The frontend collects all credentials the client presented so far, like authentication data from the CONNECT packet, and the client certificate chain presented during the TLS handshake. 1. The frontend sends these credentials to the authentication service. The authentication service checks the certificate chain once again and collects the subject names of all the certificates in the chain. 1. The authentication service uses its [configured authorization rules](./howto-configure-authorization.md) to determine what attributes the connecting client has. These attributes determine what operations the client can execute, including the CONNECT packet itself. 1.
The authentication service returns its decision to the frontend broker. For example, if the client is a pod that uses the token mounted as a volume, lik Extend client authentication beyond the provided authentication methods with custom authentication. It's *pluggable* since the service can be anything as long as it adheres to the API. -When a client connects to MQTT broker and custom authentication is enabled, MQTT broker delegates the verification of client credentials to a custom authentication server with an HTTPS request along with all credentials the client presents. The custom authentication server responds with approval or denial for the client with the client's [attributes for authorization](./howto-configure-authorization.md). +When a client connects to the MQTT broker with custom authentication enabled, the broker sends an HTTPS request to a custom authentication server with the client's credentials. The server then responds with either approval or denial, including the client's [authorization attributes](./howto-configure-authorization.md). ### Create custom authentication service MQTT broker disconnects clients when their credentials expire. Disconnect after - Clients authenticated with X.509 disconnect when their client certificate expires. - Clients authenticated with custom authentication disconnect based on the expiry time returned from the custom authentication server. -On disconnect, the client's network connection is closed. The client won't receive an MQTT DISCONNECT packet, but the broker logs a message that it disconnected the client. +On disconnect, the client's network connection is closed. The client doesn't receive an MQTT DISCONNECT packet, but the broker logs a message that it disconnected the client. MQTT v5 clients authenticated with SATs and custom authentication can reauthenticate with a new credential before their initial credential expires.
X.509 clients can't reauthenticate and must re-establish the connection since authentication is done at the TLS layer. |
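The X.509 trust checks described above can be exercised locally with openssl. The following is a minimal sketch, assuming openssl is installed; the file names (`ca.pem`, `client.pem`) and the `smart-fan` client name mirror the article's examples and are illustrative, not the broker's internal implementation:

```shell
# Create a root CA like the one imported into the client-ca ConfigMap
openssl req -x509 -newkey rsa:2048 -keyout ca-key.pem -out ca.pem \
  -days 365 -nodes -subj "/CN=Contoso Root CA Cert"

# Issue a client certificate signed by that CA (hypothetical client name)
openssl req -newkey rsa:2048 -keyout client-key.pem -out client.csr \
  -nodes -subj "/CN=smart-fan"
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out client.pem -days 30

# Conceptually, the authentication service performs a chain verification
# like this against the trusted CA before collecting subject names
openssl verify -CAfile ca.pem client.pem
```

A client certificate that verifies against the trusted CA can then be presented by a mosquitto client during the TLS handshake, as the connection steps above describe.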
iot-operations | Howto Configure Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-authorization.md | Clients that use [X.509 certificates for authentication](./howto-configure-authe ### Using attributes -To create rules based on properties from a client's certificate, its root CA, or intermediate CA, define the X.509 attributes in the *BrokerAuthorization* resource. For more information, see [Certificate attributes](howto-configure-authentication.md#certificate-attributes-for-authorization). +To create rules based on properties from a client's certificate, its root CA, or intermediate CA, define the X.509 attributes in the *BrokerAuthorization* resource. For more information, see [Certificate attributes](howto-configure-authentication.md#optional-certificate-attributes-for-authorization). ### With client certificate subject common name as username |
iot-operations | Howto Configure Brokerlistener | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-brokerlistener.md | Be careful when modifying the default listener using Bicep. Don't change the exi param aioInstanceName string = '<AIO_INSTANCE_NAME>' param customLocationName string = '<CUSTOM_LOCATION_NAME>' -resource aioInstance 'Microsoft.IoTOperations/instances@2024-09-15-preview' existing = { +resource aioInstance 'Microsoft.IoTOperations/instances@2024-11-01' existing = { name: aioInstanceName } resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-p name: customLocationName } -resource defaultBroker 'Microsoft.IoTOperations/instances/brokers@2024-09-15-preview' existing = { +resource defaultBroker 'Microsoft.IoTOperations/instances/brokers@2024-11-01' existing = { parent: aioInstance name: 'default' } -resource defaultListener 'Microsoft.IoTOperations/instances/brokers/listeners@2024-09-15-preview' = { +resource defaultListener 'Microsoft.IoTOperations/instances/brokers/listeners@2024-11-01' = { parent: defaultBroker name: 'default' extendedLocation: { To enable TLS with automatic certificate management, specify the TLS settings on ### Verify cert-manager installation -With automatic certificate management, you use cert-manager to manage the TLS server certificate. By default, cert-manager is installed alongside Azure IoT Operations Preview in the `cert-manager` namespace already. Verify the installation before proceeding. +With automatic certificate management, you use cert-manager to manage the TLS server certificate. By default, cert-manager is installed alongside Azure IoT Operations in the `cert-manager` namespace already. Verify the installation before proceeding. 1. Use `kubectl` to check for the pods matching the cert-manager app labels. |
iot-operations | Overview Broker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/overview-broker.md | -Azure IoT Operations features an enterprise-grade, standards-compliant MQTT broker that is scalable, highly available, and Kubernetes-native. It provides the messaging plane for Azure IoT Operations Preview, enables bi-directional edge/cloud communication and powers [event-driven applications](/azure/architecture/guide/architecture-styles/event-driven) at the edge. +Azure IoT Operations features an enterprise-grade, standards-compliant MQTT broker that is scalable, highly available, and Kubernetes-native. It provides the messaging plane for Azure IoT Operations, enables bi-directional edge/cloud communication, and powers [event-driven applications](/azure/architecture/guide/architecture-styles/event-driven) at the edge. ## MQTT compliance |
iot-operations | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/troubleshoot.md | Use Wireshark to open the trace file. Look for connection failures or unresponsi 1. Filter the packets with the *ip.addr == [IP address]* parameter. Input the IP address of your custom DNS service. 1. Review the DNS query and response, and check whether there's a domain name that isn't on the allowlist of Layered Network Management.++## Operations experience ++To sign in to the [operations experience](https://iotoperations.azure.com) web UI, you need a Microsoft Entra ID account with at least contributor permissions for the resource group that contains your **Kubernetes - Azure Arc** instance. ++If you receive one of the following error messages: ++- A problem occurred getting unassigned instances +- Message: The request is not authorized +- Code: PermissionDenied ++Verify that your Microsoft Entra ID account meets the requirements in the [prerequisites](../discover-manage-assets/howto-manage-assets-remotely.md#prerequisites) section for operations experience access. |
logic-apps | Quickstart Create Example Consumption Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-example-consumption-workflow.md | To create a Standard logic app workflow that runs in single-tenant Azure Logic A > [!NOTE] > > Availability zones are automatically enabled for new and existing Consumption logic app workflows in - > [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support). + > [Azure regions that support availability zones](../reliability/availability-zones-region-support.md). > For more information, see [Reliability in Azure Functions](../reliability/reliability-functions.md#availability-zone-support) and > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md). |
logic-apps | Set Up Zone Redundancy Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md | To provide resiliency and distributed availability, at least three separate avai For more information, see the following documentation: * [What are availability zones](../reliability/availability-zones-overview.md)?-* [Azure regions with availability zone support](../reliability/availability-zones-service-support.md) +* [Azure regions with availability zone support](../reliability/availability-zones-region-support.md) This guide provides a brief overview, considerations, and information about how to enable availability zones in Azure Logic Apps. Availability zones are supported with Standard logic app workflows, which run in ### [Consumption](#tab/consumption) -Availability zones are supported with Consumption logic app workflows, which run in multitenant Azure Logic Apps. This capability is automatically enabled for new and existing Consumption logic app workflows in [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support). +Availability zones are supported with Consumption logic app workflows, which run in multitenant Azure Logic Apps. This capability is automatically enabled for new and existing Consumption logic app workflows in [Azure regions that support availability zones](../reliability/availability-zones-region-support.md). |
logic-apps | Tutorial Build Schedule Recurring Logic App Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-build-schedule-recurring-logic-app-workflow.md | You can create a similar workflow with a Standard logic app resource. However, t > [!NOTE] > > Availability zones are automatically enabled for new and existing Consumption logic app workflows in - > [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support). + > [Azure regions that support availability zones](../reliability/availability-zones-region-support.md). > For more information, see [Reliability in Azure Functions](../reliability/reliability-functions.md#availability-zone-support) and > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md). |
logic-apps | Tutorial Process Email Attachments Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-process-email-attachments-workflow.md | After you confirm that your function works, create your logic app resource and w > [!NOTE] > > Availability zones are automatically enabled for new and existing Consumption logic app workflows in - > [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support). + > [Azure regions that support availability zones](../reliability/availability-zones-region-support.md). > For more information, see [Reliability in Azure Functions](../reliability/reliability-functions.md#availability-zone-support) and > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md). |
logic-apps | Tutorial Process Mailing List Subscriptions Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-process-mailing-list-subscriptions-workflow.md | You can create a similar workflow with a Standard logic app resource where some > [!NOTE] > > Availability zones are automatically enabled for new and existing Consumption logic app workflows in - > [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support). + > [Azure regions that support availability zones](../reliability/availability-zones-region-support.md). > For more information, see [Reliability in Azure Functions](../reliability/reliability-functions.md#availability-zone-support) and > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md). |
migrate | Tutorial Discover Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-discover-vmware.md | To view the remaining duration until end of support, that is, the number of mont The **Database instances** displays the number of instances discovered by Azure Migrate. Select the number of instances to view the database instance details. The **Database instance license support status** displays the support status of the database instance. Selecting the support status opens a pane on the right, which provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.- ++## Onboard to Azure Local (optional) ++> [!NOTE] +> Perform this step only if you are migrating to [Azure Local](/azure-stack/hci/overview). ++Provide the Azure Local instance information and the credentials to connect to the system. For more information, see [Download the Azure Local software](/azure-stack/hci/deploy/download-azure-stack-hci-software). ++ ## Next steps |
operational-excellence | Relocation Virtual Machine Scale Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-virtual-machine-scale-sets.md | This article covers the recommended approach, guidelines, and practices for relo Before you begin, ensure that you have the following prerequisites: -- If the source VM supports availability zones, then the target region must also support availability zones. To see which regions support availability zones, see [Azure regions with availability zone support](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support).+- If the source VM supports availability zones, then the target region must also support availability zones. To see which regions support availability zones, see [Azure regions with availability zone support](../reliability/availability-zones-region-support.md). - The subscription in the destination region needs enough quota to create the resources. If you've exceeded the quota, request an increase. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). |
operator-nexus | Concepts Cluster Upgrade Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-cluster-upgrade-overview.md | The runtime upgrade starts by upgrading the three management servers designated Once all management servers are upgraded, the upgrade progresses to the compute servers. Each rack is upgraded in alphanumeric order, and there are various configurations customers can use to dictate how the computes are upgraded to best limit disruption. As each rack progresses, there are various health checks performed in order to ensure that the release successfully upgrades and a sufficient number of computes in a rack return to operational status. When a rack completes, a customer-defined wait time starts to provide extra time for workloads to come online. Once all racks are upgraded, the upgrade completes and the cluster returns to `Running` status. +The steps to run a cluster runtime upgrade are located [here](./howto-cluster-runtime-upgrade.md). + ## Runtime upgrade strategies Each of the strategies explained provides various controls for how and when compute racks are upgraded. These values are applicable only to the compute servers and not the management servers. Each strategy uses a `thresholdType` and `thresholdValue` to define the number or percent of successfully upgraded compute servers in a rack before proceeding to the next rack. |
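The `thresholdType`/`thresholdValue` gate described above can be sketched as a small shell function. This illustrates only the arithmetic of the rule; the value names (`CountSuccess`, `PercentSuccess`) and the function name are assumptions for illustration, not the operator's exact API strings:

```shell
# Return success (0) when enough compute servers in a rack have upgraded
# for the rollout to proceed to the next rack.
rack_gate() {
  upgraded=$1; total=$2; threshold_type=$3; threshold_value=$4
  case "$threshold_type" in
    CountSuccess)   [ "$upgraded" -ge "$threshold_value" ] ;;
    PercentSuccess) [ $(( upgraded * 100 / total )) -ge "$threshold_value" ] ;;
    *)              return 2 ;;   # unknown threshold type
  esac
}

# 3 of 4 computes upgraded, 75% required: the rollout may continue
rack_gate 3 4 PercentSuccess 75 && echo "proceed to next rack"
```

With a count-based threshold, the same gate compares the absolute number of upgraded servers instead of a percentage.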
private-5g-core | Reliability Private 5G Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/reliability-private-5g-core.md | You can also deploy Azure Private 5G Core as a Highly Available (HA) service on [!INCLUDE [Availability zone description](../reliability/includes/reliability-availability-zone-description-include.md)] -The Azure Private 5G Core service is automatically deployed as zone-redundant in Azure regions that support availability zones, as listed in [Availability zone service and regional support](../reliability/availability-zones-service-support.md). If a region supports availability zones then all Azure Private 5G Core resources created in a region can be managed from any of the availability zones. +The Azure Private 5G Core service is automatically deployed as zone-redundant in Azure regions that support availability zones, as listed in [Azure regions with availability zone support](../reliability/availability-zones-region-support.md). If a region supports availability zones then all Azure Private 5G Core resources created in a region can be managed from any of the availability zones. No further work is required to configure or manage availability zones. Failover between availability zones is automatic. |
private-link | Network Security Perimeter Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/network-security-perimeter-concepts.md | A network security perimeter-aware private link resource is a PaaS resource that | Private link resource name | Resource type | Resources | |||--|-| Azure Monitor | Microsoft.Insights/dataCollectionEndpoints</br>Microsoft.Insights/ScheduledQueryRules</br>Microsoft.Insights/actionGroups</br>Microsoft.OperationalInsights/workspaces | Log Analytics Workspace, Application Insights, Alerts, Notification Service | -| Azure AI Search | Microsoft.Search/searchServices | - | -| Cosmos DB | Microsoft.DocumentDB/databaseAccounts | - | +| [Azure Monitor](/azure/azure-monitor/essentials/network-security-perimeter) | Microsoft.Insights/dataCollectionEndpoints</br>Microsoft.Insights/ScheduledQueryRules</br>Microsoft.Insights/actionGroups</br>Microsoft.OperationalInsights/workspaces | Log Analytics Workspace, Application Insights, Alerts, Notification Service | +| [Azure AI Search](/azure/search/search-security-network-security-perimiter) | Microsoft.Search/searchServices | - | +| [Cosmos DB](/azure/cosmos-db/how-to-configure-nsp) | Microsoft.DocumentDB/databaseAccounts | - | | Event Hubs | Microsoft.EventHub/namespaces | - |-| Key Vault | Microsoft.KeyVault/vaults | - | -| SQL DB | Microsoft.Sql/servers | - | +| [Key Vault](/azure/key-vault/general/network-security#network-security-perimeter-preview) | Microsoft.KeyVault/vaults | - | +| [SQL DB](/azure/azure-sql/database/network-security-perimeter) | Microsoft.Sql/servers | - | | [Storage](/azure/storage/common/storage-network-security) | Microsoft.Storage/storageAccounts | - | > [!NOTE] |
reliability | Availability Zones Baseline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-baseline.md | -This article shows you how to assess the availability-zone readiness of your application for the purposes of migrating from non-availability zone to availability zone support. We'll take you through the steps you'll need to determine how you can take advantage of availability zone support in alignment with your application and regional requirements. For more detailed information on availability zones and the regions that support them, see [What are Azure regions and availability zones](availability-zones-overview.md). +This article shows you how to assess the availability-zone readiness of your application for the purposes of migrating from non-availability zone to availability zone support. Understand how you can take advantage of availability zone support, and how to meet your application and resiliency requirements. For more detailed information on availability zones and the regions that support them, see [What are Azure regions and availability zones](availability-zones-overview.md). When creating reliable workloads, you can choose at least one of the following availability zone configurations: There are a number of possible ways to create a reliable Azure application with ### Step 1: Check if the Azure region supports availability zones -In this first step, you'll need to [validate](availability-zones-service-support.md) that your selected Azure region support availability zones as well as the required Azure services for your application. +In this first step, you'll need to [validate](availability-zones-region-support.md) that your selected Azure region supports availability zones as well as the required Azure services for your application. If your region supports availability zones, we highly recommend that you configure your workload for availability zones.
If your region doesn't support availability zones, you'll need to use [Azure Resource Mover guidance](/azure/resource-mover/move-region-availability-zone) to migrate to a region that offers availability zone support. To list the available VM SKUs by Azure region and zone, see [Check VM SKU availa If your region doesn't support the services and SKUs that your application requires, you'll need to go back to [Step 1: Check the product availability in the Azure region](#step-1-check-if-the-azure-region-supports-availability-zones) to find a new region that supports the services and SKUs that your application requires. We highly recommend that you configure your workload with zone-redundancy. -For zonal high availability of Azure IaaS Virtual Machines, use [Virtual Machine Scale Sets (VMSS) Flex](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes) to spread VMs across multiple availability zones. +For multi-zone high availability of Azure IaaS Virtual Machines, use [Virtual Machine Scale Sets Flex](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes) to spread VMs across multiple availability zones. ### Step 3: Consider your application requirements For critical application components that require physical proximity and low late #### Does your application code have the readiness to handle a distributed model? -For a [distributed microservices model](/azure/architecture/guide/architecture-styles/microservices) and depending on your application, there's the possibility of ongoing data exchange between microservices across zones. This continual data exchange through APIs, could affect performance. To improve performance and maintain a reliable architecture, you can choose zonal deployment. +For a [distributed microservices model](/azure/architecture/guide/architecture-styles/microservices) and depending on your application, there's the possibility of ongoing data exchange between microservices across zones.
This continual data exchange through APIs could affect performance. To improve performance and maintain a reliable architecture, you can choose zonal deployment. With a zonal deployment, you must: If the Azure service supports availability zones, we highly recommend that you use zone-redundancy by spreading nodes across the zones to get higher uptime SLA and protection against zonal outages. -For a 3-tier application it is important to understand the application, business, and data tiers; as well as their state (stateful or stateless) to architect in alignment with the best practices and guidance according to the type of workload. +For a 3-tier application it's important to understand the application, business, and data tiers; as well as their state (stateful or stateless) to architect in alignment with the best practices and guidance according to the type of workload. For specialized workloads on Azure, such as the examples below, refer to the respective landing zone architecture guidance and best practices. |
reliability | Availability Zones Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md | The table below lists each product that offers migration guidance and/or informa > [Azure availability zone migration baseline](availability-zones-baseline.md) > [!div class="nextstepaction"]-> [Azure services and regions with availability zones](availability-zones-service-support.md) +> [Azure services with availability zones](availability-zones-service-support.md) ++> [!div class="nextstepaction"] +> [Azure regions with availability zones](availability-zones-region-support.md) > [!div class="nextstepaction"] > [Availability of service by category](availability-service-by-category.md) |
reliability | Availability Zones Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-overview.md | The following diagram shows several example Azure regions. Regions 1 and 2 suppo :::image type="content" source="media/regions-availability-zones.png" alt-text="Screenshot of physically separate availability zone locations within an Azure region."::: -To see which regions support availability zones, see [Azure regions with availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +To see which regions support availability zones, see [Azure regions with availability zone support](availability-zones-region-support.md). > [!NOTE] > You need to deploy two or more Virtual Machines to different availability zones in the same region to get the highest possible [SLA connectivity percentage](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1). There are two ways that Azure services use availability zones: - **Zone-redundant** resources are spread across multiple availability zones. Microsoft manages spreading requests across zones and the replication of data across zones. If an outage occurs in a single availability zone, Microsoft manages failover automatically. -Azure services support one or both of these approaches. Platform as a service (PaaS) services typically support zone-redundant deployments. Infrastructure as a service (IaaS) services typically support zonal deployments. For more information about how Azure services work with availability zones, see [Azure regions with availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +Azure services support one or both of these approaches. Platform as a service (PaaS) services typically support zone-redundant deployments. Infrastructure as a service (IaaS) services typically support zonal deployments. 
For more information about how Azure services work with availability zones, see [Azure regions with availability zone support](availability-zones-region-support.md). For information on service-specific reliability support using availability zones, as well as recommended disaster recovery guidance, see [Reliability guidance overview](./reliability-guidance-overview.md). For more detailed information on how to use regions and availability zones in a ## Next steps -- [Azure services and regions with availability zones](availability-zones-service-support.md)+- [Azure services with availability zones](availability-zones-service-support.md) ++- [Azure regions with availability zones](availability-zones-region-support.md) - [Availability zone migration guidance](availability-zones-migration-overview.md) |
reliability | Availability Zones Region Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-region-support.md | + + Title: Azure regions with availability zone support +description: Learn which Azure regions offer availability zone support +++ Last updated : 11/20/2024++++++# Azure regions with availability zone support ++Azure provides the most extensive global footprint of any cloud provider and is rapidly opening new regions and availability zones. Azure supports availability zones in every geographic area that has an Azure datacenter. For more information on availability zones and regions, see [What are Azure regions and availability zones?](availability-zones-overview.md) ++The following regions currently support availability zones: ++| Americas | Europe | Middle East | Africa | Asia Pacific | +|||||| +| Brazil South | France Central | Qatar Central | South Africa North | Australia East | +| Canada Central | Italy North | UAE North | | Central India | +| Central US | Germany West Central | Israel Central | | Japan East | +| East US | Norway East | | | *Japan West | +| East US 2 | North Europe | | | Southeast Asia | +| South Central US | UK South | | | East Asia | +| US Gov Virginia | West Europe | | | China North 3 | +| West US 2 | Sweden Central | | |Korea Central | +| West US 3 | Switzerland North | | | *New Zealand North | +| Mexico Central | Poland Central |||| +||Spain Central |||| ++\* To learn more about availability zones and available services support in these regions, contact your Microsoft sales or customer representative. For upcoming regions that support availability zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/). 
++## Next steps ++> [!div class="nextstepaction"] +> [Azure services with availability zones](availability-zones-service-support.md) ++> [!div class="nextstepaction"] +> [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/) |
reliability | Availability Zones Service Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md | Title: Azure services that support availability zones -description: Learn which services offer availability zone support and understand resiliency across all Azure services. + Title: Azure services with availability zone support +description: Learn which Azure services offer availability zone support. Previously updated : 04/15/2024 Last updated : 11/20/2024 --+ -# Availability zone service and regional support +# Azure services with availability zone support -Azure availability zones are physically separate locations within each Azure region. This article shows you which regions and services support availability zones. +Azure availability zones are physically separate locations within each Azure region. This article shows you which services support availability zones. For more information on availability zones and regions, see [What are Azure regions and availability zones?](availability-zones-overview.md). -## Azure regions with availability zone support --Azure provides the most extensive global footprint of any cloud provider and is rapidly opening new regions and availability zones. Azure has availability zones in every country/region in which Azure operates a datacenter region. -->[!IMPORTANT] ->Some services may have limited support for availability zones. For example, some may only support availability zones for certain tiers, regions, or SKUs. To get more information on service limitations for availability zone support, select that service listed in the [Azure services with availability zone support](#azure-services-with-availability-zone-support) section of this document. 
--The following regions currently support availability zones: --| Americas | Europe | Middle East | Africa | Asia Pacific | -|||||| -| Brazil South | France Central | Qatar Central | South Africa North | Australia East | -| Canada Central | Italy North | UAE North | | Central India | -| Central US | Germany West Central | Israel Central | | Japan East | -| East US | Norway East | | | *Japan West | -| East US 2 | North Europe | | | Southeast Asia | -| South Central US | UK South | | | East Asia | -| US Gov Virginia | West Europe | | | China North 3 | -| West US 2 | Sweden Central | | |Korea Central | -| West US 3 | Switzerland North | | | *New Zealand North | -| Mexico Central | Poland Central |||| -||Spain Central |||| -+Azure is continually expanding the number of services that support availability zones, including zonal and zone-redundant offerings. +## Types of availability zone support +Azure services can provide two types of availability zone support: *zonal* and *zone-redundant*. Each service supports either one or both types. When designing your reliability strategy, make sure that you understand which availability zone types are supported by each service in your workload. +- **Zonal services**: A resource can be deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements. Resiliency is self-architected by replicating applications and data to one or more zones within the region. Resources are aligned to a selected zone. For example, virtual machines, managed disks, or standard IP addresses can be aligned to the same zone, which allows for increased resiliency by having multiple instances of resources deployed to different zones. -\* To learn more about availability zones and available services support in these regions, contact your Microsoft sales or customer representative. 
For upcoming regions that support availability zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/). +- **Zone-redundant services**: Resources are replicated or distributed across zones automatically. For example, zone-redundant services replicate the data across multiple zones so that a failure in one zone doesn't affect the high availability of the data. -## Azure services with availability zone support -Azure services that support availability zones, including zonal and zone-redundant offerings, are continually expanding. +>[!IMPORTANT] +>Some services may have limited support for availability zones. For example, some may only support availability zones for certain tiers, regions, or SKUs. To get more information on service limitations for availability zone support, select that service in the table. -Three types of Azure services support availability zones: *zonal*, *zone-redundant*, and *always-available* services. You can combine all three of these approaches to architecture when you design your reliability strategy. +## Always-available services -- **Zonal services**: A resource can be deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements. Resiliency is self-architected by replicating applications and data to one or more zones within the region. Resources are aligned to a selected zone. For example, virtual machines, managed disks, or standard IP addresses can be aligned to a same zone, which allows for increased resiliency by having multiple instances of resources deployed to different zones.+Some Azure services don't support availability zones because they are: -- **Zone-redundant services**: Resources are replicated or distributed across zones automatically. 
For example, zone-redundant services replicate the data across multiple zones so that a failure in one zone doesn't affect the high availability of the data.+- Available across multiple Azure regions within a geographic area, or even across all Azure regions globally. +- Resilient to zone-wide outages. +- Resilient to region-wide outages. -- **Always-available services**: Always available across all Azure geographies and are resilient to zone-wide outages and region-wide outages. For a complete list of always-available services, also called non-regional services, in Azure, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).+For a complete list of always-available services, also called non-regional services, in Azure, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). For more information on older-generation virtual machines, see [Previous generations of virtual machine sizes](/azure/virtual-machines/sizes-previous-gen). +## Azure services with availability zone support The following tables provide a summary of the current offering of zonal, zone-redundant, and always-available Azure services. They list Azure offerings according to the regional availability of each. >[!IMPORTANT] >To learn more about availability zones support and available services in your region, contact your Microsoft sales or customer representative. ++ ##### Legend+ ![Legend containing icons and meaning of each with respect to service category and regional availability of each service in the table.](media/legend.png) In the Product Catalog, always-available services are listed as "non-regional" services. 
Azure offerings are grouped into three categories that reflect their _regional_ | [Azure App Service](./reliability-app-service.md#availability-zone-support) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure App Service: App Service Environment](./reliability-app-service.md#availability-zone-support) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Backup](reliability-backup.md#availability-zone-support) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |-| [Azure Bastion](../bastion/bastion-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | +| [Azure Bastion](../bastion/bastion-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Batch](./reliability-batch.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Cache for Redis](./migrate-cache-redis.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure AI Search](/azure/search/search-reliability#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | You can access Azure availability zones by using your Azure subscription. 
To lea ## Next steps -> [!div class="nextstepaction"] -> [Azure services and regions with availability zones](availability-zones-service-support.md) --> [!div class="nextstepaction"] -> [Availability zone migration guidance overview](availability-zones-migration-overview.md) --> [!div class="nextstepaction"] -> [Availability of service by category](availability-service-by-category.md) +- [Azure regions with availability zones](availability-zones-region-support.md) -> [!div class="nextstepaction"] -> [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/) +- [Availability zone migration guidance overview](availability-zones-migration-overview.md) +- [Availability of service by category](availability-service-by-category.md) -> [!div class="nextstepaction"] -> [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) +- [Well-architected Framework: Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) |
reliability | Cross Region Replication Azure No Pair | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure-no-pair.md | To achieve geo-replication in nonpaired regions, use [Azure Site Recovery](/azur ## Next steps -- [Azure services and regions that support availability zones](availability-zones-service-support.md)+- [Azure services with availability zones](availability-zones-service-support.md) +- [Azure regions with availability zones](availability-zones-region-support.md) - [Disaster recovery guidance by service](disaster-recovery-guidance-overview.md) - [Reliability guidance](./reliability-guidance-overview.md) - [Business continuity management program in Azure](./business-continuity-management-program.md) |
reliability | Cross Region Replication Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md | Some Azure services support cross-region replication to ensure business continui Not all Azure services automatically replicate data or automatically fall back from a failed region to cross-replicate to another enabled region. In these scenarios, you are responsible for recovery and replication. These examples are illustrations of the *shared responsibility model*. It's a fundamental pillar in your disaster recovery strategy. For more information about the shared responsibility model and to learn about business continuity and disaster recovery in Azure, see [Business continuity management in Azure](business-continuity-management-program.md). -Shared responsibility becomes the crux of your strategic decision-making when it comes to disaster recovery. Azure doesn't require you to use cross-region replication, and you can use services to build resiliency without cross-replicating to another enabled region. But we strongly recommend that you configure your essential services across regions to benefit from [isolation](../security/fundamentals/isolation-choices.md) and improve [availability](availability-zones-service-support.md). +Shared responsibility becomes the crux of your strategic decision-making when it comes to disaster recovery. Azure doesn't require you to use cross-region replication, and you can use services to build resiliency without cross-replicating to another enabled region. But we strongly recommend that you configure your essential services across regions to benefit from [isolation](../security/fundamentals/isolation-choices.md) and improve [availability](availability-zones-overview.md). For applications that support multiple active regions, we recommend that you use multiple enabled regions where available. 
This practice ensures optimal availability for applications and minimized recovery time if an event affects availability. Whenever possible, design your application for [maximum resiliency](/azure/architecture/framework/resiliency/overview) and ease of [disaster recovery](/azure/architecture/framework/resiliency/backup-and-recovery). Many regions have a paired region to support cross-region replication based on p Azure continues to expand globally in regions without a regional pair and achieves high availability by leveraging [availability zones](../reliability/availability-zones-overview.md) and [locally redundant or zone-redundant storage (LRS/ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Regions without a pair will not have [geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage). However, [some services offer alternative options for cross-region replication](./cross-region-replication-azure-no-pair.md). -Non-paired regions follow [data residency](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) guidelines to allow for the option to keep data resident within the same region. Customers are responsible for data resiliency based on their Recovery Point Objective or Recovery Time Objective (RTO/RPO) needs and may move, copy, or access their data from any location globally. In the rare event that an entire Azure region is unavailable, customers will need to plan for their Cross Region Disaster Recovery per guidance from [Azure services that support high availability](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support) and [Azure Resiliency – Business Continuity and Disaster Recovery](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/resiliency-whitepaper-2022.pdf). 
+Non-paired regions follow [data residency](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) guidelines to allow for the option to keep data resident within the same region. Customers are responsible for data resiliency based on their Recovery Point Objective or Recovery Time Objective (RPO/RTO) needs and may move, copy, or access their data from any location globally. In the rare event that an entire Azure region is unavailable, customers will need to plan for their Cross Region Disaster Recovery per guidance from [Azure services that support high availability](../reliability/availability-zones-service-support.md) and [Azure Resiliency – Business Continuity and Disaster Recovery](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/resiliency-whitepaper-2022.pdf). The table below lists Azure regions without a region pair: ## Next steps - [Azure cross-region replication for non-paired regions](./cross-region-replication-azure-no-pair.md)-- [Azure services and regions that support availability zones](availability-zones-service-support.md)+- [Azure services with availability zones](availability-zones-service-support.md) +- [Azure regions with availability zones](availability-zones-region-support.md) - [Disaster recovery guidance by service](disaster-recovery-guidance-overview.md) - [Reliability guidance](./reliability-guidance-overview.md) - [Business continuity management program in Azure](./business-continuity-management-program.md) |
reliability | Migrate Api Mgt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-api-mgt.md | This article describes four scenarios for migrating an API Management instance t ## Prerequisites -* To configure availability zones for API Management, your instance must be in one of the [Azure regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +* To configure availability zones for API Management, your instance must be in one of the [Azure regions that support availability zones](availability-zones-region-support.md). * If you don't have an API Management instance, create one by following the [Create a new Azure API Management instance by using the Azure portal](../api-management/get-started-create-service-instance.md) quickstart. Select the Premium service tier. To add a new location to your API Management instance and enable availability zo * [Deploy an Azure API Management instance to multiple Azure regions](../api-management/api-management-howto-deploy-multi-region.md) * [Design review checklist for reliability](/azure/architecture/framework/resiliency/app-design)-* [Azure services and regions that support availability zones](availability-zones-service-support.md) +- [Azure services with availability zones](availability-zones-service-support.md) +- [Azure regions with availability zones](availability-zones-region-support.md) |
reliability | Migrate App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-configuration.md | The following steps walk you through the process of creating a new target store > [Building for reliability](/azure/architecture/framework/resiliency/app-design) in Azure. > [!div class="nextstepaction"]-> [Azure services and regions that support availability zones](availability-zones-service-support.md) +> [Azure services that support availability zones](availability-zones-service-support.md) +> [!div class="nextstepaction"] +> [Azure regions that support availability zones](availability-zones-region-support.md) |
reliability | Migrate App Gateway V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-gateway-v2.md | -[Application Gateway Standard v2](../application-gateway/overview-v2.md) and Application Gateway with [WAF v2](../web-application-firewall/ag/ag-overview.md) supports zonal and zone redundant deployments. For more information about zone redundancy, see [Azure services and regions that support availability zones](availability-zones-service-support.md). +[Application Gateway Standard v2](../application-gateway/overview-v2.md) and Application Gateway with [WAF v2](../web-application-firewall/ag/ag-overview.md) support zonal and zone-redundant deployments. For more information about zone redundancy, see [What are availability zones?](availability-zones-overview.md). If you previously deployed **Azure Application Gateway Standard v2** or **Azure Application Gateway Standard v2 + WAF v2** without zonal support, you must redeploy these services to enable zone redundancy. Two migration options to redeploy these services are described in this article. Learn more about: > [Scaling and Zone-redundant Application Gateway v2](../application-gateway/application-gateway-autoscaling-zone-redundant.md) > [!div class="nextstepaction"]-> [Azure services and regions that support availability zones](availability-zones-service-support.md) +> [Azure services that support availability zones](availability-zones-service-support.md) ++> [!div class="nextstepaction"] +> [Azure regions that support availability zones](availability-zones-region-support.md) |
reliability | Migrate Cache Redis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-cache-redis.md | Running multiple caches simultaneously as you convert your data to the new cache Learn more about: > [!div class="nextstepaction"]-> [Azure services and regions that support availability zones](availability-zones-service-support.md) +> [Azure services that support availability zones](availability-zones-service-support.md) ++> [!div class="nextstepaction"] +> [Azure regions that support availability zones](availability-zones-region-support.md) |
reliability | Migrate Container Instances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-container-instances.md | To delete and redeploy a container group: ## Next steps > [!div class="nextstepaction"]-> [Azure services and regions that support availability zones](availability-zones-service-support.md) +> [Azure services that support availability zones](availability-zones-service-support.md) ++> [!div class="nextstepaction"] +> [Azure regions that support availability zones](availability-zones-region-support.md) |
reliability | Migrate Cosmos Nosql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-cosmos-nosql.md | Enabling availability zones is a great way to increase resilience of your Cosmos - Serverless accounts can use availability zones, but this choice is only available during account creation. Existing accounts without availability zones cannot be converted to an availability zone configuration. For mission critical workloads, provisioned throughput is the recommended choice. -- Understand that enabling availability zones is not an account-wide choice. A single Cosmos DB account can span an arbitrary number of Azure regions, each of which can independently be configured to leverage availability zones and some regional pairs may not have availability zone support. This is important, as some regions do not yet support availability zones, but adding them to a Cosmos DB account will not prevent enabling availability zones in other regions configured for that account. The billing model also reflects this possibility. For more information on SLA for Cosmos DB, see [Reliability in Cosmos DB for NoSQL](./reliability-cosmos-db-nosql.md#sla-improvements). To see which regions support availability zones, see [Azure regions with availability zone support](./availability-zones-service-support.md#azure-regions-with-availability-zone-support)+- Understand that enabling availability zones is not an account-wide choice. A single Cosmos DB account can span an arbitrary number of Azure regions, each of which can independently be configured to leverage availability zones, and some regional pairs may not have availability zone support. This is important, as some regions do not yet support availability zones, but adding them to a Cosmos DB account will not prevent enabling availability zones in other regions configured for that account. The billing model also reflects this possibility. 
For more information on SLA for Cosmos DB, see [Reliability in Cosmos DB for NoSQL](./reliability-cosmos-db-nosql.md#sla-improvements). To see which regions support availability zones, see [Azure regions with availability zone support](./availability-zones-region-support.md). ## Downtime requirements Follow the steps below to enable availability zones for your account in select r ## Related content - [Move an Azure Cosmos DB account to another region](/azure/cosmos-db/how-to-move-regions)-- [Azure services and regions that support availability zones](availability-zones-service-support.md)+- [Azure services with availability zones](availability-zones-service-support.md) +- [Azure regions with availability zones](availability-zones-region-support.md) |
reliability | Migrate Monitor Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-monitor-log-analytics.md | Learn more about: - [Azure Monitor Logs Dedicated Clusters](/azure/azure-monitor/logs/logs-dedicated-clusters) -- [Azure Services that support Availability Zones](availability-zones-service-support.md)+- [Azure services with availability zones](availability-zones-service-support.md) ++- [Azure regions with availability zones](availability-zones-region-support.md) |
reliability | Migrate Recovery Services Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-recovery-services-vault.md | Follow these steps: ## Next steps -- [Reliability for Azure Backup](./reliability-backup.md)-- [Azure services and regions that support availability zones](availability-zones-service-support.md)+- [Reliability for Azure Backup](./reliability-backup.md) +- [Azure services with availability zones](availability-zones-service-support.md) +- [Azure regions with availability zones](availability-zones-region-support.md) |
reliability | Migrate Service Fabric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-service-fabric.md | Sample templates are available at [Service Fabric cross availability zone templa Required: * Standard SKU cluster.-* Three [availability zones in the region](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +* Three [availability zones in the region](availability-zones-region-support.md). Recommended: |
reliability | Migrate Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-sql-database.md | Enabling zone redundancy for Azure SQL Database guarantees high availability as ## Prerequisites -Before you migrate to availability zone support, refer to the following table to ensure that your Azure SQL Database is in a supported service tier and deployment model. Make sure that your tier and model is offered in a [region that supports availability zones](/azure/reliability/availability-zones-service-support). +Before you migrate to availability zone support, refer to the following table to ensure that your Azure SQL Database is in a supported service tier and deployment model. Make sure that your tier and model are offered in a [region that supports availability zones](availability-zones-region-support.md). | Service tier | Deployment model | Zone redundancy availability | |--|||-| Premium | Single database or Elastic Pool | [All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support)| -| Business Critical | Single database or Elastic Pool | [All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support) | +| Premium | Single database or Elastic Pool | [All regions that support availability zones](availability-zones-region-support.md)| +| Business Critical | Single database or Elastic Pool | [All regions that support availability zones](availability-zones-region-support.md) | | General Purpose | Single database or Elastic Pool | [Selected regions that support availability zones](/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell&preserve-view=true#general-purpose-service-tier-zone-redundant-availability)|-| Hyperscale | Single database | [All regions that support availability 
zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support) | +| Hyperscale | Single database | [All regions that support availability zones](availability-zones-region-support.md) | ## Downtime requirements To disable zone-redundancy for Hyperscale service tier, you can reverse the step ## Next steps > [!div class="nextstepaction"]-> [Azure services and regions that support availability zones](availability-zones-service-support.md) +> [Azure services that support availability zones](availability-zones-service-support.md) ++> [!div class="nextstepaction"] +> [Azure regions that support availability zones](availability-zones-region-support.md) |
reliability | Migrate Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-sql-managed-instance.md | ->Zone redundancy for SQL Managed Instance is currently in Preview. To learn which regions support SQL Instance zone redundancy, see [Services support by region](availability-zones-service-support.md). +>Zone redundancy for SQL Managed Instance is currently in Preview. To learn which regions support SQL Instance zone redundancy, see [Services support by region](availability-zones-region-support.md). SQL Managed Instance offers a zone redundant configuration that uses [Azure availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) to replicate your instances across multiple physical locations within an Azure region. With zone redundancy enabled, your Business Critical managed instances become resilient to a larger set of failures, such as catastrophic datacenter outages, without any changes to application logic. For more information on the availability model for SQL Database, see [Business Critical service tier zone redundant availability section in the Azure SQL documentation](/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell&preserve-view=true#premium-and-business-critical-service-tier-zone-redundant-availability). |
reliability | Migrate Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-storage.md | Learn more about: > [Azure Storage redundancy](../storage/common/storage-redundancy.md) > [!div class="nextstepaction"]-> [Azure services and regions that support availability zones](availability-zones-service-support.md) +> [Azure services that support availability zones](availability-zones-service-support.md) ++> [!div class="nextstepaction"] +> [Azure regions that support availability zones](availability-zones-region-support.md) |
reliability | Migrate Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-vm.md | The following table describes the support matrix for moving virtual machines fro | VMs within an Availability Set | Not supported | | | VMs inside Virtual Machine Scale Sets with uniform orchestration | Not supported | | | VMs inside Virtual Machine Scale Sets with flexible orchestration | Not supported | |-| Supported regions | Supported | Only availability zone supported regions are supported. Learn [more](../reliability/availability-zones-service-support.md) about the region details. | +| Supported regions | Supported | Only availability zone supported regions are supported. Learn [more](availability-zones-region-support.md) about the region details. | | VMs already located in an availability zone | Not supported | Cross-zone move isn't supported. Only VMs that are within the same region can be moved to another availability zone. | | VM extensions | Not Supported | VM move is supported, but extensions aren't copied to target zonal VM. | | VMs with trusted launch | Supported | Re-enable the **Integrity Monitoring** option in the portal and save the configuration after the move. | The following requirements should be part of a disaster recovery strategy that h ## Next Steps -- [Azure services and regions that support availability zones](availability-zones-service-support.md)+- [Azure services with availability zones](availability-zones-service-support.md) +- [Azure regions with availability zones](availability-zones-region-support.md) - [Reliability in Virtual Machines](./reliability-virtual-machines.md) - [Reliability in Virtual Machine Scale Sets](./reliability-virtual-machine-scale-sets.md) - [Move single instance Azure VMs from regional to zonal configuration using PowerShell](/azure/virtual-machines/move-virtual-machines-regional-zonal-powershell) |
reliability | Migrate Workload Aks Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-workload-aks-mysql.md | This migration guidance focuses mainly on the infrastructure and availability co To provide full workload support for availability zones, each service dependency in the workload must support availability zones. -There are two types of availability zone support +There are two types of availability zone support The AKS and MySQL workload architecture consists of the following component dependencies: For your application tier, please review the business continuity and disaster re ## Next Steps Learn more about:++> [!div class="nextstepaction"] +> [Azure services that support availability zones](availability-zones-service-support.md) + > [!div class="nextstepaction"]-> [Azure Services that support Availability Zones](availability-zones-service-support.md#azure-services-with-availability-zone-support) +> [Azure regions that support availability zones](availability-zones-region-support.md) |
reliability | Overview Reliability Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview-reliability-guidance.md | Azure Media Services| [High Availability with Media Services and Video on Demand ## Related content -- [Azure services and regions with availability zones](availability-zones-service-support.md)+- [Azure services with availability zones](availability-zones-service-support.md) +- [Azure regions with availability zones](availability-zones-region-support.md) - [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability) |
reliability | Reliability App Gateway Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-gateway-containers.md | Application Gateway for Containers (AGC) is always deployed in a highly availabl ### Prerequisites -To deploy with availability zone support, you must choose a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +To deploy with availability zone support, you must choose a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-region-support.md). ## Next steps |
reliability | Reliability App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md | For App Service plans that aren't configured as zone redundant, VM instances are ::: zone pivot="free-shared-basic,premium" -Zone-redundant App Service plans can be deployed in [any region that supports availability zones](./availability-zones-service-support.md#azure-regions-with-availability-zone-support). +Zone-redundant App Service plans can be deployed in [any region that supports availability zones](./availability-zones-region-support.md). ::: zone-end |
reliability | Reliability Azure Container Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-container-apps.md | By enabling Container Apps' zone redundancy feature, replicas are automatically Azure Container Apps offers the same reliability support regardless of your plan type. -Azure Container Apps uses [availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) in regions where they're available. For a list of regions that support availability zones, see [Availability zone service and regional support](availability-zones-service-support.md). +Azure Container Apps uses [availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) in regions where they're available. For a list of regions that support availability zones, see [Azure regions with availability zones](availability-zones-region-support.md). ### SLA improvements |
reliability | Reliability Azure Storage Mover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-storage-mover.md | If the region supports availability zones, the instance metadata is automaticall ### Prerequisites -- To deploy with availability zone support, you must choose a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +- To deploy with availability zone support, you must choose a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-region-support.md). - (Optional) If your target storage account doesn't support availability zones, and you would like to migrate the account to AZ support, see [Migrate Azure Storage accounts to availability zone support](migrate-storage.md). |
reliability | Reliability Bastion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-bastion.md | If transient faults affect your virtual machine or Azure Bastion host, clients u [!INCLUDE [AZ support description](includes/reliability-availability-zone-description-include.md)] -You can configure Azure Bastion to be *zone redundant* so that your resources are spread across multiple [availability zones](../reliability/availability-zones-overview.md). When you spread resources across availability zones, you can achieve resiliency and reliability for your production workloads. +Azure Bastion supports availability zones in both zonal and zone-redundant configurations: -You can specify which availability zone or zones an Azure Bastion resource should be deployed to. Azure Bastion spreads your instances across those zones. The following diagram shows Azure Bastion instances spread across three zones: +- *Zonal:* You can select a single availability zone for an Azure Bastion resource. + > [!NOTE] + > Pinning to a single zone doesn't increase resiliency. To improve resiliency, you need to either use a zone-redundant configuration or explicitly deploy resources into multiple zones. ++- *Zone-redundant:* Enabling zone redundancy for an Azure Bastion resource spreads your instances across multiple [availability zones](../reliability/availability-zones-overview.md). When you spread resources across availability zones, you can achieve resiliency and reliability for your production workloads. ++The following diagram shows a zone-redundant Azure Bastion resource, with its instances spread across three zones: + > [!NOTE] > If you specify more availability zones than you have instances, Azure Bastion spreads instances across as many zones as it can. If an availability zone is unavailable, the instance in the faulty zone is replaced with another instance in a healthy zone. 
### Regions supported -Zone-redundant Azure Bastion resources can be deployed into the following regions: +Zonal and zone-redundant Azure Bastion resources can be deployed into the following regions: | Americas | Europe | Middle East | Africa | Asia Pacific | |||||| Zone-redundant Azure Bastion resources can be deployed into the following region | East US 2 EUAP | Italy North | | | | Mexico Central| Spain Central | | | - ### Requirements -To configure Azure Bastion resources with zone redundancy, you must deploy with the Basic, Standard, or Premium SKUs. --Bastion requires a Standard SKU zone-redundant Public IP. -+- To configure Azure Bastion resources to be zonal or zone redundant, you must deploy with the Basic, Standard, or Premium SKUs. +- Azure Bastion requires a Standard SKU zone-redundant Public IP address. ### Cost There's no additional cost to use zone redundancy for Azure Bastion. >[!IMPORTANT] > You can't change the availability zone setting after you deploy your Azure Bastion resource. -When you select which availability zones to use, you're actually selecting the *logical availability zone*. If you deploy other workload components in a different Azure subscription, they might use a different logical availability zone number to access the same physical availability zone. For more information, see [Physical and logical availability zones](./availability-zones-overview.md#physical-and-logical-availability-zones). -**Migration:** It's not possible to add availability zone support to an existing resource that doesn't have it. Instead, you need to create an Azure Bastion resource in the new region and delete the old one. +**Migration:** It's not possible to change the availability zone configuration of an existing Azure Bastion resource. Instead, you need to create an Azure Bastion resource with the new configuration and delete the old one. 
### Traffic routing between zones When you initiate an SSH or RDP session, it can be routed to an Azure Bastion instance in any of the availability zones you selected. -A session might be sent to an Azure Bastion instance in an availability zone that's different from the virtual machine you're connecting to. In the following diagram, a request from the user is sent to an Azure Bastion instance in zone 2, although the virtual machine is in zone 1: +If you configure zone redundancy on Azure Bastion, a session might be sent to an Azure Bastion instance in an availability zone that's different from the virtual machine you're connecting to. In the following diagram, a request from the user is sent to an Azure Bastion instance in zone 2, although the virtual machine is in zone 1: :::image type="content" source="./media/reliability-bastion/bastion-cross-zone.png" alt-text="Diagram that shows Azure Bastion with three instances. A user request goes to an Azure Bastion instance in zone 2 and is sent to a VM in zone 1." border="false"::: In most scenarios, the small amount of cross-zone latency isn't significant. How ### Zone-down experience -**Detection and response:** Azure Bastion detects and responds to failures in an availability zone. You don't need to do anything to initiate an availability zone failover. +**Detection and response:** When you use zone redundancy, Azure Bastion detects and responds to failures in an availability zone. You don't need to do anything to initiate an availability zone failover. **Active requests:** When an availability zone is unavailable, any RDP or SSH connections in progress that use an Azure Bastion instance in the faulty availability zone are terminated and need to be retried. If the virtual machine you're connecting to isn't in the affected availability zone, the virtual machine continues to be accessible. 
See [Reliability in virtual machines: Zone down experience](./reliability-virtual-machines.md#zone-down-experience) for more information on the VM zone down experience. -**Traffic rerouting:** New connections use Azure Bastion instances in the surviving availability zones. Overall, Azure Bastion remains operational. +**Traffic rerouting:** When you use zone redundancy, new connections use Azure Bastion instances in the surviving availability zones. Overall, Azure Bastion remains operational. ### Failback |
reliability | Reliability Batch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-batch.md | Batch maintains parity with Azure on supporting availability zones. - Because InfiniBand doesn't support inter-zone communication, you can't create a pool with a zonal policy if it has inter-node communication enabled and uses a [VM SKU that supports InfiniBand](/azure/virtual-machines/workloads/hpc/enable-infiniband). -- Batch maintains parity with Azure on supporting availability zones. To use the zonal option, your pool must be created in an [Azure region with availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support).+- Batch maintains parity with Azure on supporting availability zones. To use the zonal option, your pool must be created in an [Azure region with availability zone support](availability-zones-region-support.md). - To allocate your Batch pool across availability zones, the Azure region in which the pool was created must support the requested VM SKU in more than one zone. To validate that the region supports the requested VM SKU in more than one zone, call the [Resource Skus List API](/rest/api/compute/resource-skus/list?tabs=HTTP) and check the `locationInfo` field of `resourceSku`. Ensure that more than one zone is supported for the requested VM SKU. You can also use the [Azure CLI](/rest/api/compute/resource-skus/list?tabs=CLI) to list all available Resource SKUs with the following command: |
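The Batch excerpt above says to call the Resource SKUs List API and check the `locationInfo` field of each `resourceSku` to confirm the requested VM SKU is offered in more than one zone. A minimal sketch of that check follows; the payload is a hypothetical, trimmed example of the API response (field names follow the documented schema, but the SKU names and values here are invented):

```python
# Sketch: confirm a VM SKU is offered in more than one availability zone,
# based on the `locationInfo` field of a Resource SKUs list response.
# `sample_skus` is an invented, trimmed stand-in for the real API payload.

def zones_for_sku(skus, sku_name, region):
    """Return the set of zones in which `sku_name` is offered in `region`."""
    zones = set()
    for sku in skus:
        if sku.get("name") != sku_name:
            continue
        for info in sku.get("locationInfo", []):
            if info.get("location", "").lower() == region.lower():
                zones.update(info.get("zones", []))
    return zones

sample_skus = [
    {
        "name": "Standard_D2s_v3",
        "resourceType": "virtualMachines",
        "locationInfo": [{"location": "eastus", "zones": ["1", "2", "3"]}],
    },
    {
        "name": "Standard_M128ms",
        "resourceType": "virtualMachines",
        "locationInfo": [{"location": "eastus", "zones": ["2"]}],
    },
]

zones = zones_for_sku(sample_skus, "Standard_D2s_v3", "eastus")
# A zonal Batch pool requires the SKU in more than one zone.
print(len(zones) > 1)  # True
```

With the sample data, `Standard_M128ms` would fail the check because it appears in only one zone of the region.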
reliability | Reliability Cosmos Db Nosql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-cosmos-db-nosql.md | With availability zones enabled, Azure Cosmos DB for NoSQL supports a *zone-redu ### Prerequisites -- Your replicas must be deployed in an Azure region that supports availability zones. To see if your region supports availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +- Your replicas must be deployed in an Azure region that supports availability zones. To see if your region supports availability zones, see the [list of supported regions](availability-zones-region-support.md). - Determine whether or not availability zones add enough value to your current configuration in [Impact of using availability zones](#impact-of-using-availability-zones). |
reliability | Reliability Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-event-grid.md | Event Grid resource definitions for topics, system topics, domains, and event su ### Prerequisites -For availability zone support, your Event Grid resources must be in a region that supports availability zones. To review which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +For availability zone support, your Event Grid resources must be in a region that supports availability zones. To review which regions support availability zones, see the [list of supported regions](availability-zones-region-support.md). ### Pricing |
reliability | Reliability Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-event-hubs.md | Event Hubs implements transparent failure detection and failover mechanisms so t ### Prerequisites -Availability zone support is only available in [Azure regions with availability zones](./availability-zones-service-support.md). +Availability zone support is only available in [Azure regions with availability zones](./availability-zones-region-support.md). ### Create a resource with availability zones enabled |
reliability | Reliability Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md | Availability zone support for Azure Functions is available on both Premium (Elas [!INCLUDE [Availability zone description](includes/reliability-availability-zone-description-include.md)] -Azure Functions supports a [zone-redundant deployment](availability-zones-service-support.md#azure-services-with-availability-zone-support). +Azure Functions supports a [zone-redundant deployment](availability-zones-service-support.md). When you configure Functions as zone redundant, the platform automatically spreads the function app instances across three zones in the selected region. |
reliability | Reliability Hdinsight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-hdinsight.md | This article describes reliability support in [Azure HDInsight](../hdinsight/hdi [!INCLUDE [Availability zone description](includes/reliability-availability-zone-description-include.md)] -Azure HDInsight supports a [zonal deployment configuration](availability-zones-service-support.md#azure-services-with-availability-zone-support). Azure HDInsight cluster nodes are placed in a single zone that you select in the selected region. A zonal HDInsight cluster is isolated from any outages that occur in other zones. However, if an outage impacts the specific zone chosen for the HDInsight cluster, the cluster won't be available. This deployment model provides inexpensive, low latency network connectivity within the cluster. Replicating this deployment model into multiple availability zones can provide a higher level of availability to protect against hardware failure. +Azure HDInsight supports a [zonal deployment configuration](availability-zones-service-support.md). Azure HDInsight cluster nodes are placed in a single zone that you select in the selected region. A zonal HDInsight cluster is isolated from any outages that occur in other zones. However, if an outage impacts the specific zone chosen for the HDInsight cluster, the cluster won't be available. This deployment model provides inexpensive, low latency network connectivity within the cluster. Replicating this deployment model into multiple availability zones can provide a higher level of availability to protect against hardware failure. >[!IMPORTANT] >For deployments where users don't specify a specific zone, node types are not zone resilient and can experience downtime during an outage in any zone in that region. |
reliability | Reliability Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-load-balancer.md | Although it's recommended that you deploy Load Balancer with zone-redundancy, a ### Prerequisites -- To use availability zones with Load Balancer, you need to create your load balancer in a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +- To use availability zones with Load Balancer, you need to create your load balancer in a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-region-support.md). - Use Standard SKU for load balancer and Public IP for availability zones support. |
reliability | Reliability Notification Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-notification-hubs.md | In a region that supports availability zones, Notification Hubs supports a zone- ### Prerequisites -- Azure Notification Hubs uses [availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) in regions where they're available. For a list of regions that support availability zones, see [Availability zone service and regional support](availability-zones-service-support.md).+- Azure Notification Hubs uses [availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) in regions where they're available. For a list of regions that support availability zones, see [Azure regions with availability zones](availability-zones-region-support.md). - Availability zones are supported by default only in specific tiers. To learn which tiers support availability zone deployments, see [Notification Hubs pricing](https://azure.microsoft.com/pricing/details/notification-hubs/). |
reliability | Reliability Postgresql Flexible Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgresql-flexible-server.md | Azure Database for PostgreSQL - Flexible Server offers high availability support [!INCLUDE [Availability zone description](includes/reliability-availability-zone-description-include.md)] -Azure Database for PostgreSQL - Flexible Server supports both [zone-redundant and zonal models](availability-zones-service-support.md#azure-services-with-availability-zone-support) for high availability configurations. Both high availability configurations enable automatic failover capability with zero data loss during both planned and unplanned events. +Azure Database for PostgreSQL - Flexible Server supports both [zone-redundant and zonal models](availability-zones-service-support.md) for high availability configurations. Both high availability configurations enable automatic failover capability with zero data loss during both planned and unplanned events. - **Zone-redundant**. Zone redundant high availability deploys a standby replica in a different zone with automatic failover capability. Zone redundancy provides the highest level of availability, but requires you to configure application redundancy across zones. For that reason, choose zone redundancy when you want protection from availability zone level failures and when latency across the availability zones is acceptable. |
reliability | Reliability Virtual Machine Scale Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machine-scale-sets.md | Virtual Machine Scale Sets supports both zonal and zone-redundant deployments wi ### Prerequisites -1. To use availability zones, your scale set must be created in a [supported Azure region](./availability-zones-service-support.md). +1. To use availability zones, your scale set must be created in a [supported Azure region](./availability-zones-region-support.md). 1. All VMs - even single instance VMs - should be deployed into a scale set using [flexible orchestration](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-flexible-orchestration) mode to future-proof your application for scaling and availability. |
reliability | Reliability Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md | This article contains detailed information on VM regional resiliency with [avail [!INCLUDE [Availability zone description](includes/reliability-availability-zone-description-include.md)] -Virtual machines support availability zones with three availability zones per supported Azure region and are also zone-redundant and zonal. For more information, see [availability zones support](availability-zones-service-support.md). The customer is responsible for configuring and migrating their virtual machines for availability. +Virtual machines support availability zones with three availability zones per supported Azure region and are also zone-redundant and zonal. For more information, see [Azure services with availability zones](availability-zones-service-support.md). The customer is responsible for configuring and migrating their virtual machines for availability. To learn more about availability zone readiness options, see: - See [availability options for VMs](/azure/virtual-machines/availability)-- Review [availability zone service and region support](availability-zones-service-support.md)+- Review [availability zone service support](./availability-zones-service-support.md) and [region support](availability-zones-region-support.md) - [Migrate existing VMs](migrate-vm.md) to availability zones ### Prerequisites -- Your virtual machine SKUs must be available across the zones in your region. To review which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support). +- Your virtual machine SKUs must be available across the zones in your region. To review which regions support availability zones, see the [list of supported regions](availability-zones-region-support.md). 
- Your VM SKUs must be available across the zones in your region. To check for VM SKU availability, use one of the following methods: |
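The virtual-machines prerequisite above amounts to confirming that a SKU's offered zones, minus any zone restrictions on your subscription, cover the zones you plan to use. The sketch below assumes you've saved output shaped like `az vm list-skus --location <region> --zone -o json`; the record and its values are invented for illustration:

```python
# Sketch: compute the zones actually usable for a VM SKU by subtracting
# subscription-level zone restrictions from the zones the SKU is offered in.
# `sample` is an invented record in the shape of `az vm list-skus` JSON output.

def usable_zones(sku):
    """Zones where the SKU is offered and not restricted for this subscription."""
    offered = set()
    for info in sku.get("locationInfo", []):
        offered.update(info.get("zones", []))
    restricted = set()
    for r in sku.get("restrictions", []):
        restricted.update(r.get("restrictionInfo", {}).get("zones", []))
    return offered - restricted

sample = {
    "name": "Standard_D4s_v5",
    "locationInfo": [{"location": "westeurope", "zones": ["1", "2", "3"]}],
    "restrictions": [
        {
            "reasonCode": "NotAvailableForSubscription",
            "restrictionInfo": {"zones": ["3"]},
        }
    ],
}

print(sorted(usable_zones(sample)))  # ['1', '2']
```

Here zone 3 is dropped because the subscription is restricted from it, so a deployment pinned to zone 3 would fail even though the region offers the SKU there.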
resource-mover | Move Region Availability Zone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-availability-zone.md | -[Azure availability zones](../availability-zones/az-overview.md#availability-zones) help protect your Azure deployment from datacenter failures. Each availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all [enabled regions](../availability-zones/az-region.md). Using Resource Mover, you can move: +[Azure availability zones](../availability-zones/az-overview.md#availability-zones) help protect your Azure deployment from datacenter failures. Each availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all [enabled regions](../reliability/availability-zones-region-support.md). Using Resource Mover, you can move: - A single instance VM to an availability zone/availability set in the target region. - A VM in an availability set to an availability zone/availability set in the target region. |
resource-mover | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/whats-new.md | -The [Azure Resource Mover](overview.md) service is updated and improved on an ongoing basis. To help you stay up-to-date, this article provides you with information about the latest releases, new features, and new content. This page is updated on a regular basis. +[Azure Resource Mover](overview.md) is constantly improving and releasing new features that simplify moving workloads in Azure. These new features expand the current capability of region-to-region migration by supporting more resource types or adding in new move capabilities. +You can learn more about the new releases by bookmarking this page or by subscribing to updates [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR85jvyZFzJ9Fij7HO6nPfn5UNFc3QTJXNFMwNFhKMDUwOEhOTzdFQzFEMi4u). +## Updates (March 2024) -## What's new for Resource Mover +### Capability to move Azure VMs to another subscription and region -### Updates (September-2020) +Azure Resource Mover now supports moving resources from one subscription to another, in addition to moving Azure VMs across regions. This feature helps consolidate, organize, manage, and bill resources more effectively. The **Edit target subscription** option is available on the move resources blade, along with options to add or remove resources, prepare, initiate, discard, and commit the move. After adding resources, you can select **Edit target subscription** to move resources to a different subscription than the source. ++For more information, see [Move Azure VMs to another subscription and region](./move-region-within-resource-group.md). +++### Updates (September 2020) Azure Resource Mover is now in public preview. |
route-server | Troubleshoot Route Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/troubleshoot-route-server.md | Although Azure VPN gateway can receive the default route from its BGP peers incl The ASN that the Route Server uses is 65515. Make sure you configure a different ASN for your NVA so that an *eBGP* session can be established between your NVA and Route Server so route propagation can happen automatically. Make sure you enable "multi-hop" in your BGP configuration because your NVA and the Route Server are in different subnets in the virtual network. +### Why does connectivity not work when I advertise routes with an ASN of 0 in the AS-Path? ++Azure Route Server drops routes with an ASN of 0 in the AS-Path. To ensure these routes are successfully advertised into Azure, the AS-Path should not include 0. + ### The BGP peering between my NVA and Route Server is up. I can see routes exchanged correctly between them. Why aren't the NVA routes in the effective routing table of my VM? * If your VM is in the same virtual network as your NVA and Route Server: - Route Server exposes two BGP peer IPs, which are hosted on two VMs that share the responsibility of sending the routes to all other VMs running in your virtual network. Each NVA must set up two identical BGP sessions (for example, use the same AS number, the same AS path and advertise the same set of routes) to the two VMs so that your VMs in the virtual network can get consistent routing info from Azure Route Server. + Route Server exposes two BGP peer IPs, which share the responsibility of sending the routes to all other VMs running in your virtual network. Each NVA must set up two identical BGP sessions (for example, use the same AS number, the same AS path and advertise the same set of routes) to the two BGP peer IPs so that your VMs in the virtual network can get consistent routing info from Azure Route Server. 
:::image type="content" source="./media/troubleshoot-route-server/network-virtual-appliances.png" alt-text="Diagram showing a network virtual appliance (NVA) with Azure Route Server."::: |
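The Route Server troubleshooting entries above reduce to two preflight checks on an NVA's BGP configuration: the NVA must not reuse Route Server's fixed ASN 65515 (so the session is eBGP), and advertised AS-paths must not contain ASN 0, because Route Server drops such routes. An illustrative sketch of those checks (this helper is not part of any Azure SDK):

```python
# Sketch: flag the two NVA misconfigurations called out in the Route Server
# troubleshooting guidance. Purely illustrative; not an Azure SDK function.

ROUTE_SERVER_ASN = 65515  # Azure Route Server's fixed ASN

def route_advertisement_problems(nva_asn, as_path):
    """Return a list of problems that would break route exchange with Route Server."""
    problems = []
    if nva_asn == ROUTE_SERVER_ASN:
        problems.append("NVA ASN equals 65515; pick a different ASN for eBGP")
    if 0 in as_path:
        problems.append("AS-path contains ASN 0; Route Server drops the route")
    return problems

print(route_advertisement_problems(65001, [65001, 64512]))  # []
print(route_advertisement_problems(65515, [65515, 0, 64512]))
```

An empty list means neither of these two documented pitfalls applies; other BGP issues (such as missing multi-hop) would still need separate checks.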
sap | Compliance Bcdr Reliabilty | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/compliance-bcdr-reliabilty.md | This article describes reliability support in Azure Center for SAP Solutions, an Azure Center for SAP solutions is an end-to-end solution that enables you to create and run SAP systems as a unified workload on Azure and provides a more seamless foundation for innovation. You can take advantage of the management capabilities for both new and existing Azure-based SAP systems. ## Availability zone support-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In case of a local zone failure, availability zones are designed such that, if one zone is affected, the remaining two zones can support regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Availability zone service and regional support](/azure/reliability/availability-zones-service-support). +Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In case of a local zone failure, availability zones are designed such that, if one zone is affected, the remaining two zones can support regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. 
For more detailed information on availability zones in Azure, see [What are availability zones?](../../reliability/availability-zones-overview.md). -There are three types of Azure services that support availability zones: zonal, zone-redundant, and always-available services. You can learn more about these types of services and how they promote resiliency in the [Azure services with availability zone support](/azure/reliability/availability-zones-service-support). +There are three types of Azure services that support availability zones: zonal, zone-redundant, and always-available services. You can learn more about these types of services and how they promote resiliency in the [Azure services with availability zone support](../../reliability/availability-zones-service-support.md). Azure Center for SAP Solutions supports zone-redundancy. When creating a new SAP system through Azure Center for SAP solutions, you can choose the Compute availability option for the infrastructure being deployed. You can choose to deploy the SAP system with zone redundancy based on your requirements, while the service is zone-redundant by default. [Learn more about deployment type options for SAP systems here](/azure/sap/center-sap-solutions/deploy-s4hana#deployment-types). |
sap | Deployment Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/deployment-checklist.md | This document should contain: - The current inventory of SAP components and applications, and a target application inventory for Azure. - A responsibility assignment matrix (RACI) that defines the responsibilities and assignments of the parties involved. Start at a high level, and work to more granular levels throughout planning and the first deployments. - A high-level solution architecture. Best practices and example architectures from [Azure Architecture Center](/azure/architecture/reference-architectures/sap/sap-overview) should be consulted.-- A decision about which Azure regions to deploy to. See the [list of Azure regions](https://azure.microsoft.com/global-infrastructure/regions/), and list of [regions with availability zone support](../../reliability/availability-zones-service-support.md). To learn which services are available in each region, see [products available by region](https://azure.microsoft.com/global-infrastructure/services/).+- A decision about which Azure regions to deploy to. See the [list of Azure regions](https://azure.microsoft.com/global-infrastructure/regions/), and list of [regions with availability zone support](../../reliability/availability-zones-region-support.md). To learn which services are available in each region, see [products available by region](https://azure.microsoft.com/global-infrastructure/services/). - A networking architecture to connect from on-premises to Azure. Start to familiarize yourself with the [Azure enterprise scale landing zone](/azure/cloud-adoption-framework/ready/enterprise-scale/) concept. - Security principles for running high-impact business data in Azure. To learn about data security, start with the Azure security documentation. 
- Storage strategy to cover block devices (Managed Disk) and shared filesystems (such as Azure Files or Azure NetApp Files) that should be further refined to file-system sizes and layouts in the technical design document. |
sap | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md | In the SAP workload documentation space, you can find the following areas: - **Azure Monitor for SAP solutions**: Microsoft developed monitoring solutions specifically for SAP supported OS and DBMS, as well as S/4HANA and NetWeaver. This section documents the deployment and usage of the service ## Change Log+- November 19, 2024: Update parameter `enque/encni/set_so_keepalive` to uppercase, as the parameter is case sensitive. Updated in [SAP workloads on Azure: planning and deployment checklist](deployment-checklist.md),[HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md), [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md),[Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md),[HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md),[Azure VMs high availability for SAP NetWeaver on SLES multi-SID guide](./high-availability-guide-suse-multi-sid.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-netapp-files.md),[Azure VMs high availability for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](./high-availability-guide-suse-nfs-simple-mount.md),[Azure VMs high availability for SAP NetWeaver on SLES](./high-availability-guide-suse.md),[HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md),[SAP ASCS/SCS instance multi-SID high availability with Windows server failover clustering and Azure shared disk](./sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md),[SAP ASCS/SCS 
installation on Windows with file share](sap-high-availability-installation-wsfc-file-share.md),[SAP ASCS/ERS installation on Windows with shared disk](sap-high-availability-installation-wsfc-shared-disk.md). - November 5, 2024: Add a missing step to start HANA in [High availability of SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md). - November 1, 2024: Adding HANA high-availability hook ChkSrv for [dying indexserver for RHEL based cluster setups](./sap-hana-high-availability-rhel.md#implement-sap-hana-system-replication-hooks). - October 29, 2024: Some changes on disk caching and smaller updates in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md), plus fixing some typos in HANA storage configuration documents |
sap | High Availability Guide Rhel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel.md | The following items are prefixed with: 7. **[A]** Configure RHEL. - Based on the RHEL version, perform the configuration mentioned in SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167), SAP Note [2772999](https://launchpad.support.sap.com/#/notes/2772999), or SAP Note [3108316](https://launchpad.support.sap.com/#/notes/2772999). + Based on the RHEL version, perform the configuration mentioned in SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167), SAP Note [2772999](https://launchpad.support.sap.com/#/notes/2772999), or SAP Note [3108316](https://launchpad.support.sap.com/#/notes/3108316). ### Install SAP NetWeaver ASCS/ERS |
sap | High Availability Guide Suse Netapp Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-netapp-files.md | The following items are prefixed with either **[A]** - applicable to all nodes, service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector # Add the keep alive parameter, if using ENSA1- enque/encni/set_so_keepalive = true + enque/encni/set_so_keepalive = TRUE ``` For both ENSA1 and ENSA2, make sure that the `keepalive` OS parameters are set as described in SAP note [1410736](https://launchpad.support.sap.com/#/notes/1410736). |
sap | High Availability Guide Suse Nfs Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-azure-files.md | The following items are prefixed with either **[A]** - applicable to all nodes, service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector # Add the keep alive parameter, if using ENSA1- enque/encni/set_so_keepalive = true + enque/encni/set_so_keepalive = TRUE ``` For both ENSA1 and ENSA2, make sure that the `keepalive` OS parameters are set as described in SAP note [1410736](https://launchpad.support.sap.com/#/notes/1410736). |
sap | High Availability Guide Suse Nfs Simple Mount | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-simple-mount.md | The instructions in this section are applicable only if you're using Azure NetAp service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector # Add the keepalive parameter, if you're using ENSA1.- enque/encni/set_so_keepalive = true + enque/encni/set_so_keepalive = TRUE ``` For Standalone Enqueue Server 1 and 2 (ENSA1 and ENSA2), make sure that the `keepalive` OS parameters are set as described in SAP Note [1410736](https://launchpad.support.sap.com/#/notes/1410736). |
sap | High Availability Guide Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse.md | The following items are prefixed with either **[A]** - applicable to all nodes, service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector # Add the keep alive parameter, if using ENSA1- enque/encni/set_so_keepalive = true + enque/encni/set_so_keepalive = TRUE ``` For both ENSA1 and ENSA2, make sure that the `keepalive` OS parameters are set as described in SAP note [1410736](https://launchpad.support.sap.com/#/notes/1410736). |
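The SUSE and RHEL rows above all correct `enque/encni/set_so_keepalive` to its case-sensitive value `TRUE`, and each article adds the companion requirement that the `keepalive` OS parameters match SAP note 1410736. As a minimal sketch (the recommended values belong to the SAP note and aren't reproduced here), the corresponding Linux kernel settings can be inspected read-only like this:

```shell
# Inspect the current TCP keepalive settings on a Linux node (read-only).
cat /proc/sys/net/ipv4/tcp_keepalive_time    # seconds of idle time before the first probe
cat /proc/sys/net/ipv4/tcp_keepalive_intvl   # seconds between successive probes
cat /proc/sys/net/ipv4/tcp_keepalive_probes  # unanswered probes before the connection drops
# To change a value persistently, add a line such as
#   net.ipv4.tcp_keepalive_time = <value from SAP note 1410736>
# to /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/) and run `sysctl -p`.
```

The commented `sysctl` step is illustrative; consult the SAP note for the actual values to set.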
sap | High Availability Guide Windows Netapp Files Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-netapp-files-smb.md | Update parameters in the SAP ASCS/SCS instance profile \<SID>_ASCS/SCS\<Nr>_\<Ho | Parameter name | Parameter value | | | | | gw/netstat_once | **0** |-| enque/encni/set_so_keepalive | **true** | +| enque/encni/set_so_keepalive | **TRUE** | | service/ha_check_node | **1** | Parameter `enque/encni/set_so_keepalive` is only needed if using ENSA1. |
sap | Sap Ascs Ha Multi Sid Wsfc Azure Shared Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md | ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325 Previously updated : 06/19/2024 Last updated : 11/19/2024 If you're running ERS1, add the SAP profile parameter `enque/encni/set_so_keepal 1. Add this profile parameter to the SAP ASCS/SCS instance profile, if you're using ERS1: ```powershell- enque/encni/set_so_keepalive = true + enque/encni/set_so_keepalive = TRUE ``` For both ERS1 and ERS2, make sure that the `keepalive` OS parameters are set as described in SAP note [1410736](https://launchpad.support.sap.com/#/notes/1410736). |
sap | Sap High Availability Installation Wsfc File Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-installation-wsfc-file-share.md | Update parameters in the SAP ASCS/SCS instance profile \<SID>_ASCS/SCS\<Nr>_\<Ho | Parameter name | Parameter value | | | | | gw/netstat_once | **0** |-| enque/encni/set_so_keepalive | **true** | +| enque/encni/set_so_keepalive | **TRUE** | | service/ha_check_node | **1** | Parameter `enque/encni/set_so_keepalive` is only needed if using ENSA1. |
sap | Sap High Availability Installation Wsfc Shared Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-installation-wsfc-shared-disk.md | Before you begin the installation, review these documents: * [Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS instance][sap-high-availability-infrastructure-wsfc-shared-disk] -We don't describe the DBMS setup in this article because setups vary depending on the DBMS system you use. We assume that high-availability concerns with the DBMS are addressed with the functionalities that different DBMS vendors support for Azure. Examples are Always On or database mirroring for SQL Server and Oracle Data Guard for Oracle databases. The high availability scenarios for the DBMS are not covered in this article. +We don't describe the DBMS setup in this article because setups vary depending on the DBMS system you use. We assume that high-availability concerns with the DBMS are addressed with the functionalities that different DBMS vendors support for Azure. Examples are Always On or database mirroring for SQL Server and Oracle Data Guard for Oracle databases. The high availability scenarios for the DBMS aren't covered in this article. There are no special considerations when different DBMS services interact with a clustered SAP ASCS or SCS configuration in Azure. Installing SAP with a high-availability ASCS/SCS instance involves these tasks: _Define the DNS entry for the SAP ASCS/SCS cluster virtual name and TCP/IP address_ -2. If are using the new SAP Enqueue Replication Server 2, which is also clustered instance, then you need to reserve in DNS a virtual host name for ERS2 as well. +2. If using the new SAP Enqueue Replication Server 2, which is also a clustered instance, then you need to reserve in DNS a virtual host name for ERS2 as well. 
> [!IMPORTANT] > The IP address that you assign to the virtual host name of the ERS2 instance must be the second IP address that you assigned to Azure Load Balancer. Installing SAP with a high-availability ASCS/SCS instance involves these tasks: ### <a name="e4caaab2-e90f-4f2c-bc84-2cd2e12a9556"></a> Modify the SAP profile of the ASCS/SCS instance -If you have Enqueue Replication Server 1, add SAP profile parameter `enque/encni/set_so_keepalive` as described below. The profile parameter prevents connections between SAP work processes and the enqueue server from closing when they are idle for too long. The SAP parameter is not required for ERS2. +If you have Enqueue Replication Server 1, add SAP profile parameter `enque/encni/set_so_keepalive` as described below. The profile parameter prevents connections between SAP work processes and the enqueue server from closing when they're idle for too long. The SAP parameter isn't required for ERS2. 1. Add this profile parameter to the SAP ASCS/SCS instance profile, if using ERS1. ```- enque/encni/set_so_keepalive = true + enque/encni/set_so_keepalive = TRUE ``` For both ERS1 and ERS2, make sure that the `keepalive` OS parameters are set as described in SAP note [1410736](https://launchpad.support.sap.com/#/notes/1410736). However, this won't work in some cluster configurations because only one instanc To add a probe port, run this PowerShell module on one of the cluster VMs: -- In the case of SAP ASC/SCS Instance +- For the SAP ASCS/SCS instance ```powershell Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID SID -ProbePort 62000 ``` -- If using ERS2, which is clustered. There's no need to configure a probe port for ERS1, as it isn't clustered. 
```powershell Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID SID -ProbePort 62001 -IsSAPERSClusteredInstance $True ``` To add a probe port run this PowerShell Module on one of the cluster VMs: ### <a name="4498c707-86c0-4cde-9c69-058a7ab8c3ac"></a> Open the Windows firewall probe port Open a Windows firewall probe port on both cluster nodes. Use the following script to open a Windows firewall probe port. Update the PowerShell variables for your environment. -If using ERS2, you will also need to open the firewall port for the ERS2 probe port. +If using ERS2, you'll also need to open the firewall port for the ERS2 probe port. ```powershell $ProbePort = 62000 # ProbePort of the Azure internal load balancer Install an SAP Additional Application Server (AAS) on all the virtual machines t For the outlined failover tests, we assume that SAP ASCS is active on node A. -1. Verify that the SAP system can successfully failover from node A to node B +1. Verify that the SAP system can successfully fail over from node A to node B Choose one of these options to initiate a failover of the SAP \<SID\> cluster group from cluster node A to cluster node B: - Failover Cluster Manager - Failover Cluster PowerShell |
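The excerpt above says to open the load balancer's probe port in the Windows firewall on both cluster nodes, with `$ProbePort = 62000`. A minimal sketch of such a rule, assuming probe port 62000 and the built-in `New-NetFirewallRule` cmdlet (the rule name and display name are illustrative, not from the article):

```powershell
$ProbePort = 62000   # probe port configured on the Azure internal load balancer

# Allow inbound TCP traffic from the load balancer's health probe; run on both cluster nodes.
New-NetFirewallRule -Name "AzureILBProbePort$ProbePort" `
    -DisplayName "Azure internal load balancer probe port $ProbePort" `
    -Direction Inbound -Protocol TCP -LocalPort $ProbePort -Action Allow
```

If ERS2 is clustered, repeat the rule for the ERS2 probe port (62001 in the excerpt above).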
sentinel | Create Nrt Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-nrt-rules.md | You create NRT rules the same way you create regular [scheduled-query analytics - You can automate responses to both alerts and incidents. + - You can run the rule query across multiple workspaces. + Because of the [**nature and limitations of NRT rules**](near-real-time-rules.md#considerations), however, the following features of scheduled analytics rules will *not be available* in the wizard: - **Query scheduling** is not configurable, since queries are automatically scheduled to run once per minute with a one-minute lookback period. - **Alert threshold** is irrelevant, since an alert is always generated. - **Event grouping** configuration is now available to a limited degree. You can choose to have an NRT rule generate an alert for each event for up to 30 events. If you choose this option and the rule results in more than 30 events, single-event alerts will be generated for the first 29 events, and a 30th alert will summarize all the events in the result set. - In addition, the query itself has the following requirements: -- - You can't run the query across workspaces. -- - Due to the size limits of the alerts, your query should make use of `project` statements to include only the necessary fields from your table. Otherwise, the information you want to surface could end up being truncated. + In addition, due to the size limits of the alerts, your query should make use of `project` statements to include only the necessary fields from your table. Otherwise, the information you want to surface could end up being truncated. ## Next steps |
sentinel | Near Real Time Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md | The following limitations currently govern the use of NRT rules: - Since NRT rules use the ingestion time rather than the event generation time (represented by the TimeGenerated field), you can safely ignore the data source delay and the ingestion time latency (see above). - - Queries can run only within a single workspace. There is no cross-workspace capability. + - Queries can now run across multiple workspaces. - Event grouping is now configurable to a limited degree. NRT rules can produce up to 30 single-event alerts. A rule with a query that results in more than 30 events will produce alerts for the first 29, then a 30th alert that summarizes all the applicable events. |
sentinel | Preparing Sap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md | For more information, see the [SAP documentation](https://help.sap.com/docs/ABAP Some installations of SAP systems might not have audit logging enabled by default. For best results in evaluating the performance and efficacy of the Microsoft Sentinel solution for SAP applications, enable auditing of your SAP system and configure the audit parameters. If you want to ingest SAP HANA DB logs, make sure to also enable auditing for SAP HANA DB. -We recommend that you configure auditing for all messages from the audit log, as this data is useful for Microsoft Sentinel detections and in post-compromise investigations and hunting. +We recommend that you configure auditing for *all* messages from the audit log, instead of only specific logs. Ingestion cost differences are generally minimal and the data is useful for Microsoft Sentinel detections and in post-compromise investigations and hunting. For more information, see the [SAP community](https://community.sap.com/t5/application-development-blog-posts/analysis-and-recommended-settings-of-the-security-audit-log-sm19-rsau/ba-p/13297094) and [Collect SAP HANA audit logs in Microsoft Sentinel](collect-sap-hana-audit-logs.md). |
sentinel | Sap Audit Controls Workbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-audit-controls-workbook.md | Before you can start using the **SAP - Security Audit log and Initial Access** w - At least one incident in your workspace, with at least one entry available in the `SecurityIncident` table. This doesn't need to be an SAP incident, and you can generate a demo incident using a basic analytics rule if you don't have another one. +We recommend that you configure auditing for *all* messages from the audit log, instead of only specific logs. Ingestion cost differences are generally minimal and the data is useful for Microsoft Sentinel detections and in post-compromise investigations and hunting. For more information, see [Configure SAP auditing](preparing-sap.md#configure-sap-auditing). + ## View a demo View a demonstration of this workbook: |
sentinel | Sap Audit Log Workbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-audit-log-workbook.md | Before you can start using the **SAP - Security Audit log and Initial Access** w - At least one incident in your Microsoft Sentinel workspace, with at least one entry available in the `SecurityIncident` table. This doesn't need to be an SAP incident, and you can generate a demo incident using a basic analytics rule if you don't have another one. -- If your Microsoft Entra data is in a different Log Analytics workspace, make sure you select the relevant subscriptions and workspaces at the top of the workbook, under **Azure audit and activities**. +- If your Microsoft Entra data is in a different Log Analytics workspace, make sure you select the relevant subscriptions and workspaces at the top of the workbook, under **Azure audit and activities**. ++We recommend that you configure auditing for *all* messages from the audit log, instead of only specific logs. Ingestion cost differences are generally minimal and the data is useful for Microsoft Sentinel detections and in post-compromise investigations and hunting. For more information, see [Configure SAP auditing](preparing-sap.md#configure-sap-auditing). ## Supported filters |
sentinel | Sap Deploy Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md | The change takes effect approximately two minutes after you save the file. You d 1. Enable any events as needed. 1. Verify whether messages arrive and exist in the SAP **SM20** or **RSAU_READ_LOG**, without any special errors appearing on the connector log. - ### Incorrect workspace ID or key in key vault If you realize that you entered an incorrect workspace ID or key in your deployment script, update the credentials stored in Azure key vault. Use the **RSAU_CONFIG_LOG** transaction for this step. For more information, see the [SAP documentation](https://community.sap.com/t5/application-development-blog-posts/analysis-and-recommended-settings-of-the-security-audit-log-sm19-rsau/ba-p/13297094) and [Collect SAP HANA audit logs in Microsoft Sentinel](collect-sap-hana-audit-logs.md). +We recommend that you configure auditing for *all* messages from the audit log, instead of only specific logs. Ingestion cost differences are generally minimal and the data is useful for Microsoft Sentinel detections and in post-compromise investigations and hunting. For more information, see [Configure SAP auditing](preparing-sap.md#configure-sap-auditing). + ### Missing IP address or transaction code fields in the SAP audit log In SAP systems with versions for SAP BASIS 7.5 SP12 and above, Microsoft Sentinel can reflect extra fields in the `ABAPAuditLog_CL` and `SAPAuditLog` tables. The data collector agent relies on time zone information to be correct. If you s There might also be issues with the clock on the virtual machine where the data collector agent container is hosted, and any deviation from the clock on the VM from UTC impacts data collection. Even more importantly, the clocks on both the SAP system machines and the data collector agent machines must match. 
+We recommend that you configure auditing for *all* messages from the audit log, instead of only specific logs. Ingestion cost differences are generally minimal and the data is useful for Microsoft Sentinel detections and in post-compromise investigations and hunting. For more information, see [Configure SAP auditing](preparing-sap.md#configure-sap-auditing). + ### Network connectivity issues If you're having network connectivity issues to the SAP environment or to Microsoft Sentinel, check your network connectivity to make sure data is flowing as expected. |
service-bus-messaging | Service Bus Outages Disasters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-outages-disasters.md | All Service Bus tiers support [availability zones](../availability-zones/az-over When you use availability zones, **both metadata and data (messages)** are replicated across data centers in the availability zone. > [!NOTE]-> The availability zones support is only available in [Azure regions](../availability-zones/az-region.md) where availability zones are present. +> The availability zones support is only available in [Azure regions](../reliability/availability-zones-region-support.md) where availability zones are present. When you create a namespace, the support for availability zones (if available in the selected region) is automatically enabled for the namespace. There's no extra cost for using this feature and you can't disable or enable this feature after namespace creation. When you create a namespace, the support for availability zones (if available in > Previously it was required to set the property `zoneRedundant` to `true` to enable availability zones, however this behavior has changed to enable availability zones by default. Existing namespaces are being migrated to availability zones where possible, and the property `zoneRedundant` is being deprecated. The property `zoneRedundant` might still show as `false`, even when availability zones has been enabled. > Existing namespaces that are being migrated: > - Currently does not have availability zones enabled.-> - The [region supports availability zones](/azure/reliability/availability-zones-service-support). +> - The [region supports availability zones](../reliability/availability-zones-region-support.md). > - The region has sufficient availability zone capacity. ## Protection against disasters - standard tier |
site-recovery | Azure To Azure How To Enable Zone To Zone Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md | Support for zone-to-zone disaster recovery is currently limited to the following When you use zone-to-zone disaster recovery, Site Recovery doesn't move or store data out of the region in which it's deployed. You can select a Recovery Services vault from a different region if you want one. The Recovery Services vault contains metadata but no actual customer data. -Learn more about [currently supported availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support). +Learn more about [Azure regions with availability zones](../reliability/availability-zones-region-support.md). > [!NOTE] > Zone-to-zone disaster recovery isn't supported for VMs that have managed disks via zone-redundant storage (ZRS). |
site-recovery | Move Azure Vms Avset Azone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-azure-VMs-AVset-Azone.md | -Availability Zones in Azure help protect your applications and data from datacenter failures. Each Availability Zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. The physical separation of Availability Zones within a region helps protect applications and data from datacenter failures. With Availability Zones, Azure offers a service-level agreement (SLA) of 99.99% for uptime of virtual machines (VMs). Availability Zones are supported in select regions, as mentioned in [Regions that support Availability Zones](../availability-zones/az-region.md). +Availability Zones in Azure help protect your applications and data from datacenter failures. Each Availability Zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. The physical separation of Availability Zones within a region helps protect applications and data from datacenter failures. With Availability Zones, Azure offers a service-level agreement (SLA) of 99.99% for uptime of virtual machines (VMs). Availability Zones are supported in select regions, as mentioned in [Regions with availability zones](../reliability/availability-zones-region-support.md). In a scenario where your virtual machines are deployed as *single instance* into a specific region, and you want to improve your availability by moving these virtual machines into an Availability Zone, you can do so by using Azure Site Recovery. 
This action can further be categorized into: In a scenario where your virtual machines are deployed as *single instance* into ## Check prerequisites -- Check whether the target region has [support for Availability Zones](../availability-zones/az-region.md). Check that your choice of [source region/target region combination is supported](./azure-to-azure-support-matrix.md#region-support). Make an informed decision on the target region.+- Check whether the target region has [support for availability zones](../reliability/availability-zones-region-support.md). Check that your choice of [source region/target region combination is supported](./azure-to-azure-support-matrix.md#region-support). Make an informed decision on the target region. - Make sure that you understand the [scenario architecture and components](azure-to-azure-architecture.md). - Review the [support limitations and requirements](azure-to-azure-support-matrix.md). - Check account permissions. If you just created your free Azure account, you're the admin of your subscription. If you aren't the subscription admin, work with the admin to assign the permissions you need. To enable replication for a virtual machine and eventually copy data to the target by using Azure Site Recovery, you must have: The following steps will guide you when using Azure Site Recovery to enable repl 1. In the Azure portal, select **Virtual machines**, and select the virtual machine you want to move into Availability Zones. 2. In **Backup + disaster recovery**, select **Disaster recovery**.-3. In **Configure disaster recovery** > **Target region**, select the target region to which you'll replicate. Ensure this region [supports](../availability-zones/az-region.md) Availability Zones. +3. In **Configure disaster recovery** > **Target region**, select the target region to which you'll replicate. Ensure this region [supports](../reliability/availability-zones-region-support.md) availability zones. 4. 
Select **Next: Advanced settings**. 5. Choose the appropriate values for the target subscription, target virtual machine resource group, and virtual network. 6. In the **Availability** section, choose the Availability Zone into which you want to move the virtual machine. |
spatial-anchors | Reliability Spatial Anchors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/concepts/reliability-spatial-anchors.md | SouthEastAsia region doesn't rely on Azure Paired Regions in order to be complia ### Prerequisites -For a list of regions that support availability zones, see [Azure regions with availability zones](../../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support). If your Azure Spatial Anchors account is located in one of the regions listed, you don't need to take any other action beyond provisioning the service. +For a list of regions that support availability zones, see [Azure regions with availability zones](../../reliability/availability-zones-region-support.md#azure-regions-with-availability-zone-support). If your Azure Spatial Anchors account is located in one of the regions listed, you don't need to take any other action beyond provisioning the service. #### Create a resource with availability zone enabled |
storage | Blob Storage Monitoring Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-storage-monitoring-scenarios.md | You can find the friendly name of that security principal by taking the value of ### Auditing data plane operations -Data plane operations are captured in [Azure resource logs for Storage](monitor-blob-storage.md#analyzing-logs). You can [configure Diagnostic setting](/azure/azure-monitor/platform/diagnostic-settings) to export logs to Log Analytics workspace for a native query experience. +Data plane operations are captured in [Azure resource logs for Storage](monitor-blob-storage.md#azure-monitor-resource-logs). You can [configure Diagnostic settings](/azure/azure-monitor/platform/diagnostic-settings) to export logs to a Log Analytics workspace for a native query experience. Here's a Log Analytics query that retrieves the "when", "who", "what", and "how" information in a list of log entries. |
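The query itself is elided from this excerpt. As an illustrative sketch only — assuming the diagnostic settings route blob resource logs to the workspace's built-in `StorageBlobLogs` table — a query of this shape surfaces those four pieces of information:

```kusto
// Sketch: "when", "who", "what", and "how" for recent blob operations.
// Assumes blob resource logs are exported to this Log Analytics workspace.
StorageBlobLogs
| where TimeGenerated > ago(3d)
| project TimeGenerated,        // when
          RequesterObjectId,    // who (Microsoft Entra object ID)
          OperationName,        // what
          AuthenticationType,   // how the request was authorized
          Uri, CallerIpAddress
```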
storage | Storage Sas Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md | Azure Storage supports three types of shared access signatures: ### User delegation SAS -A user delegation SAS is secured with Microsoft Entra credentials and also by the permissions specified for the SAS. A user delegation SAS applies to Blob storage only. +A user delegation SAS is secured with Microsoft Entra credentials and also by the permissions specified for the SAS. A user delegation SAS is supported for Azure Blob Storage and Azure Data Lake Storage. It's not currently supported for Azure Files, Azure Queue Storage, or Azure Table Storage. For more information about the user delegation SAS, see [Create a user delegation SAS (REST API)](/rest/api/storageservices/create-user-delegation-sas). |
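In practice, a user delegation SAS is recognizable by its `sk*` query parameters (`skoid`, `sktid`, `ske`, `sks`, `skv`), which describe the user delegation key; a service SAS signed with the account key doesn't carry them. The following sketch inspects a token for those parameters — the GUIDs and signature below are fabricated placeholders, not real values:

```python
from urllib.parse import parse_qs

# Fabricated SAS token for illustration only: the signature and GUIDs are
# placeholders. The skoid/sktid parameters carry the Microsoft Entra object
# ID and tenant ID of the security principal behind the user delegation key.
sas = (
    "sv=2022-11-02&sr=b&sp=r&se=2024-01-01T00%3A00%3A00Z"
    "&skoid=00000000-0000-0000-0000-000000000000"
    "&sktid=11111111-1111-1111-1111-111111111111"
    "&ske=2024-01-02T00%3A00%3A00Z&sks=b&skv=2022-11-02"
    "&sig=placeholder"
)
params = parse_qs(sas)
is_user_delegation = "skoid" in params and "sktid" in params
print(is_user_delegation)  # True for a user delegation SAS
```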
storage | Files Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-redundancy.md | A write request to a storage account that is using ZRS happens synchronously. Th An advantage of using ZRS for Azure Files workloads is that if a zone becomes unavailable, no remounting of Azure file shares from the connected clients is required. We recommend using ZRS in the primary region for scenarios that require high availability. We also recommend ZRS for restricting replication of data to a particular country or region to meet data governance requirements. > [!NOTE]-> Azure File Sync is zone-redundant in all regions that [support zones](../../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support) except US Gov Virginia. In most cases, we recommend that Azure File Sync users configure storage accounts to use ZRS or GZRS. +> Azure File Sync is zone-redundant in all regions that [support availability zones](../../reliability/availability-zones-region-support.md) except US Gov Virginia. In most cases, we recommend that Azure File Sync users configure storage accounts to use ZRS or GZRS. The following diagram shows how your data is replicated across availability zones in the primary region with ZRS: The following diagram shows how your data is replicated across availability zone ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself might not protect your data against a regional disaster where multiple zones are permanently affected. For protection against regional disasters, we recommend using [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a secondary region. 
-For more information about which regions support ZRS, see [Availability zone service and regional support](../../reliability/availability-zones-service-support.md). +For more information about which regions support ZRS, see [Azure regions with availability zones](../../reliability/availability-zones-region-support.md). #### Standard storage accounts |
storage | Files Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md | Nconnect is a client-side Linux mount option that increases performance at scale Azure File Sync is now a zone-redundant service, which means an outage in a zone has limited impact while improving the service resiliency to minimize customer impact. To fully leverage this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or geo-zone redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Files redundancy](files-redundancy.md). -Note: Azure File Sync is zone-redundant in all regions that [support zones](../../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support) except US Gov Virginia. +> [!NOTE] +> Azure File Sync is zone-redundant in all regions that [support availability zones](../../reliability/availability-zones-region-support.md) except US Gov Virginia. ## What's new in 2022 |
synapse-analytics | Source Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/source-control.md | Title: Source control in Synapse Studio -description: Learn how to configure source control in Azure Synapse Studio +description: Learn how to configure source control in Azure Synapse Studio. This guide includes best practices and troubleshooting steps. - Previously updated : 11/20/2020+ Last updated : 11/15/2024 - # Source control in Synapse Studio -By default, Synapse Studio authors directly against the Synapse service. If you have a need for collaboration using Git for source control, Synapse Studio allows you to associate your workspace with a Git repository, Azure DevOps, or GitHub. +By default, Synapse Studio authors directly against the Synapse service. If you need Git-based collaboration for source control, Synapse Studio allows you to associate your workspace with a Git repository in Azure DevOps or GitHub. This article outlines how to configure and work in a Synapse workspace with a git repository enabled. It also highlights best practices and troubleshooting steps. This article outlines how to configure and work in a Synapse workspace with git >To use GitHub in Azure Gov and Microsoft Azure operated by 21Vianet, you can bring your own GitHub OAuth application in Synapse Studio for git integration. The configuration experience is the same as in ADF. For more information, see the [announcement blog](https://techcommunity.microsoft.com/t5/azure-data-factory/cicd-improvements-with-github-support-in-azure-government-and/ba-p/2686918). ## Prerequisites-Users must have the Azure Contributor (Azure RBAC) or higher role on the Synapse workspace to configure, edit settings and disconnect a Git repository with Synapse. 
-## Configure Git repository in Synapse Studio +- Users must have the Azure Contributor (Azure RBAC) or higher role on the Synapse workspace to configure, edit settings, and disconnect a Git repository with Synapse. -After launching your Synapse Studio, you can configure a git repository in your workspace. A Synapse Studio workspace can be associated with only one git repository at a time. +## Configure Git repository in Synapse Studio ++After launching your Synapse Studio, you can configure a git repository in your workspace. A Synapse Studio workspace can be associated with only one git repository at a time. ### Configuration method 1: global bar -In the Synapse Studio global bar, select the **Synapse Live** drop-down menu, and then select **Set up code repository**. +In the Synapse Studio global bar at the top of the data, develop, integrate, and manage hubs, select the **Synapse Live** drop-down menu, and then select **Set up code repository**. ![Configure the code repository settings from authoring](media/configure-repo-1.png) ### Configuration method 2: Manage hub -Go to the Manage hub of Synapse Studio. Select **Git configuration** in the **Source control** section. If you have no repository connected, click **Configure**. +Go to the Manage hub of Synapse Studio. Select **Git configuration** in the **Source control** section. If you have no repository connected, select **Configure**. ![Configure the code repository settings from management hub](media/configure-repo-2.png) You can connect either Azure DevOps or GitHub git repository in your workspace. -## Connect with Azure DevOps Git +## Connect with Azure DevOps Git You can associate a Synapse workspace with an Azure DevOps Repository for source control, collaboration, versioning, and so on. If you don't have an Azure DevOps repository, follow [these instructions](/azure/devops/organizations/accounts/create-organization-msa-or-work-student) to create your repository resources. 
### Azure DevOps Git repository settings -When connecting to your git repository, first select your repository type as Azure DevOps git, and then select one Microsoft Entra tenant from the dropdown list, and click **Continue**. +When connecting to your git repository, first select your repository type as Azure DevOps git, then select a Microsoft Entra tenant from the dropdown list, and select **Continue**. ![Configure the code repository settings](media/connect-with-azure-devops-repo-selected.png) The configuration pane shows the following Azure DevOps git settings: | **Collaboration branch** | Your Azure Repos collaboration branch that is used for publishing. By default, it's `master`. Change this setting if you want to publish resources from another branch. You can select an existing branch or create a new one. | `<your collaboration branch name>` | | **Root folder** | Your root folder in your Azure Repos collaboration branch. | `<your root folder name>` | | **Import existing resources to repository** | Specifies whether to import existing resources from Synapse Studio into an Azure Repos Git repository. Check the box to import your workspace resources (except pools) into the associated Git repository in JSON format. This action exports each resource individually. When this box isn't checked, the existing resources aren't imported. | Checked (default) |-| **Import resource into this branch** | Select which branch the resources (sql script, notebook, spark job definition, dataset, dataflow etc.) are imported to. +| **Import resource into this branch** | Select which branch the resources (sql script, notebook, spark job definition, dataset, dataflow, etc.) are imported to. -Your can also use repository link to quickly point to the git repository you want to connect with. +You can also use the repository link to quickly point to the git repository you want to connect with. -> [!NOTE]
+> Azure Synapse doesn't support connection to an on-premises Azure DevOps repository. <a name='use-a-different-azure-active-directory-tenant'></a> ### Use a different Microsoft Entra tenant -The Azure Repos Git repo can be in a different Microsoft Entra tenant. To specify a different Microsoft Entra tenant, you have to have administrator permissions for the Azure subscription that you're using. For more info, see [change subscription administrator](../../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator) +The Azure Repos Git repo can be in a different Microsoft Entra tenant. To specify a different Microsoft Entra tenant, you must have administrator permissions for the Azure subscription that you're using. For more information, see [change subscription administrator](../../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator). > [!IMPORTANT]-> To connect to another Microsoft Entra ID, the user logged in must be a part of that active directory. +> To connect to a different Microsoft Entra tenant, the signed-in user must be a member of that directory. ### Use your personal Microsoft account To use a personal Microsoft account for Git integration, you can link your perso 1. Add your personal Microsoft account to your organization's Active Directory as a guest. For more info, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../../active-directory/external-identities/add-users-administrator.md). -2. Log in to the Azure portal with your personal Microsoft account. Then switch to your organization's Active Directory. +2. Sign in to the Azure portal with your personal Microsoft account. Then switch to your organization's Active Directory. 3. Go to the Azure DevOps section, where you now see your personal repo. Select the repo and connect with Active Directory. 
After these configuration steps, your personal repo is available when you set up For more info about connecting Azure Repos to your organization's Active Directory, see [Connect your organization to Microsoft Entra ID](/azure/devops/organizations/accounts/connect-organization-to-azure-ad). -### Use a cross tenant Azure DevOps account -When your Azure DevOps isn't in the same tenant as the Synapse workspace, you can configure the workspace with cross tenant Azure DevOps account. +### Use a cross-tenant Azure DevOps organization ++When your Azure DevOps organization isn't in the same tenant as the Synapse workspace, you can configure the workspace with a cross-tenant Azure DevOps organization. -1. Select the **Cross tenant sign in** option and click **Continue** +1. Select the **Cross tenant sign in** option and select **Continue**. ![Select the cross tenant sign in ](media/cross-tenant-sign-in.png) When your Azure DevOps isn't in the same tenant as the Synapse workspace, you ca ![Confirm the cross tenant sign in ](media/cross-tenant-sign-in-confirm.png) -1. click **Use another account** and login with your Azure DevOps account. +1. Select **Use another account** and sign in with your Azure DevOps account. ![Use another account ](media/use-another-account.png) When your Azure DevOps isn't in the same tenant as the Synapse workspace, you ca > To sign in to the workspace, use your Synapse workspace user account for the first sign-in. Your cross-tenant Azure DevOps account is only used for signing in to and getting access to the Azure DevOps repo associated with this Synapse workspace. -## Connect with GitHub +## Connect with GitHub You can associate a workspace with a GitHub repository for source control, collaboration, and versioning. If you don't have a GitHub account or repository, follow [these instructions](https://github.com/join) to create your resources. 
The GitHub integration with Synapse Studio supports both public GitHub (that is, When connecting to your git repository, first select your repository type as GitHub, and then provide your GitHub account, your GitHub Enterprise Server URL if you use GitHub Enterprise Server, or your GitHub Enterprise organization name if you use GitHub Enterprise Cloud. Select **Continue**. > [!NOTE]-> If you're using GitHub Enterprise Cloud, leave the **Use GitHub Enterprise Server** checkbox cleared. +> If you're using GitHub Enterprise Cloud, leave the **Use GitHub Enterprise Server** checkbox cleared. ![GitHub repository settings](media/connect-with-github-repo-1.png) The configuration pane shows the following GitHub repository settings: |: |: |: | | **Repository Type** | The type of the Azure Repos code repository. | GitHub | | **Use GitHub Enterprise** | Checkbox to select GitHub Enterprise | unselected (default) |-| **GitHub Enterprise URL** | The GitHub Enterprise root URL (must be HTTPS for local GitHub Enterprise server). For example: `https://github.mydomain.com`. Required only if **Use GitHub Enterprise** is selected | `<your GitHub enterprise url>` | +| **GitHub Enterprise URL** | The GitHub Enterprise root URL (must be HTTPS for local GitHub Enterprise server). For example: `https://github.mydomain.com`. Required only if **Use GitHub Enterprise** is selected | `<your GitHub enterprise url>` | | **GitHub account** | Your GitHub account name. This name can be found from https:\//github.com/{account name}/{repository name}. Navigating to this page prompts you to enter GitHub OAuth credentials to your GitHub account. | `<your GitHub account name>` | | **Repository Name** | Your GitHub code repository name. GitHub accounts contain Git repositories to manage your source code. You can create a new repository or use an existing repository that's already in your account. 
| `<your repository name>` | | **Collaboration branch** | Your GitHub collaboration branch that is used for publishing. By default, it's `master`. Change this setting if you want to publish resources from another branch. | `<your collaboration branch>` | | **Root folder** | Your root folder in your GitHub collaboration branch. |`<your root folder name>` | | **Import existing resources to repository** | Specifies whether to import existing resources from Synapse Studio into a Git repository. Check the box to import your workspace resources (except pools) into the associated Git repository in JSON format. This action exports each resource individually. When this box isn't checked, the existing resources aren't imported. | Selected (default) |-| **Import resource into this branch** | Select which branch the resources (sql script, notebook, spark job definition, dataset, dataflow etc.) is imported. +| **Import resource into this branch** | Select which branch the resources (sql script, notebook, spark job definition, dataset, dataflow, etc.) are imported to. ### GitHub organizations Connecting to a GitHub organization requires the organization to grant permissio If you're connecting to GitHub from Synapse Studio for the first time, follow these steps to connect to a GitHub organization. -1. In the Git configuration pane, enter the organization name in the *GitHub Account* field. A prompt to login into GitHub appears. +1. In the Git configuration pane, enter the organization name in the *GitHub Account* field. A prompt to sign in to GitHub appears. -1. Login using your user credentials. +1. Sign in using your user credentials. -1. You are asked to authorize Synapse as an application called *Azure Synapse*. On this screen, you see an option to grant permission for Synapse to access the organization. If you don't see the option to grant permission, ask an admin to manually grant the permission through GitHub. +1. 
You're asked to authorize Synapse as an application called *Azure Synapse*. On this screen, you see an option to grant permission for Synapse to access the organization. If you don't see the option to grant permission, ask an admin to manually grant the permission through GitHub. Once you follow these steps, your workspace is able to connect to both public and private repositories within your organization. If you're unable to connect, try clearing the browser cache and retrying. Version control systems (also known as _source control_) allow developers to col ### Creating feature branches -Each Git repository that's associated with a Synapse Studio has a collaboration branch. (`main` or `master` is the default collaboration branch). Users can also create feature branches by clicking **+ New Branch** in the branch dropdown. +Each Git repository that's associated with Synapse Studio has a collaboration branch (`main` or `master` by default). Users can also create feature branches by selecting **+ New Branch** in the branch dropdown. ![Create new branch](media/create-new-branch.png) Once the new branch pane appears, enter the name of your feature branch and sele ![Create branch based on private branch ](media/create-branch-from-private-branch.png) -When you're ready to merge the changes from your feature branch to your collaboration branch, click on the branch dropdown and select **Create pull request**. This action takes you to Git provider where you can raise pull requests, do code reviews, and merge changes to your collaboration branch. You're only allowed to publish to the Synapse service from your collaboration branch. +When you're ready to merge the changes from your feature branch to your collaboration branch, select the branch dropdown and select **Create pull request**. This action takes you to your Git provider, where you can raise pull requests, do code reviews, and merge changes to your collaboration branch. 
You're only allowed to publish to the Synapse service from your collaboration branch. ![Create a new pull request](media/create-pull-request.png) By default, Synapse Studio generates the workspace templates and saves them into } ``` -Synapse Studio can only have one publish branch at a time. When you specify a new publish branch, the original publish branch would not been deleted. If you want to remove the previous publish branch, delete it manually. +Synapse Studio can only have one publish branch at a time. When you specify a new publish branch, the original publish branch won't be deleted. If you want to remove the previous publish branch, delete it manually. ### Publish code changes -After merging changes to the collaboration branch, click **Publish** to manually publish your code changes in the collaboration branch to the Synapse service. +After merging changes to the collaboration branch, select **Publish** to manually publish your code changes in the collaboration branch to the Synapse service. ![Publish changes](media/gitmode-publish.png) -A side pane opens where you confirm that the publish branch and pending changes are correct. Once you verify your changes, click **OK** to confirm the publish. +A side pane opens where you confirm that the publish branch and pending changes are correct. Once you verify your changes, select **OK** to confirm the publish. ![Confirm the correct publish branch](media/publish-change.png) A side pane opens where you confirm that the publish branch and pending changes ## Switch to a different Git repository -To switch to a different Git repository, go to Git configuration page in the management hub under **Source control**. Select **Disconnect**. +To switch to a different Git repository, go to Git configuration page in the management hub under **Source control**. Select **Disconnect**. 
![Git icon](media/remove-repository.png) -Enter your workspace name and click **Disconnect** to remove the Git repository associated with your workspace. +Enter your workspace name and select **Disconnect** to remove the Git repository associated with your workspace. After you remove the association with the current repo, you can configure your Git settings to use a different repo and then import existing resources to the new repo. After you remove the association with the current repo, you can configure your G ## Best practices for Git integration -- **Permissions**. After you have a git repository connected to your workspace, anyone who can access to your git repo with any role in your workspace is able to update artifacts, like sql script, notebook, spark job definition, dataset, dataflow and pipeline in git mode. Typically you don't want every team member to have permissions to update workspace. -Only grant git repository permission to Synapse workspace artifact authors. +- **Permissions**. After you have a git repository connected to your workspace, anyone who can access your git repo with any role in your workspace is able to update artifacts, like sql script, notebook, spark job definition, dataset, dataflow, and pipeline in git mode. Typically you don't want every team member to have permissions to update the workspace. +Only grant git repository permission to Synapse workspace artifact authors. - **Collaboration**. We recommend that you don't allow direct check-ins to the collaboration branch. This restriction can help prevent bugs, as every check-in goes through a pull request review process described in [Creating feature branches](source-control.md#creating-feature-branches).-- **Synapse live mode**. After publishing in git mode, all changes are reflected in Synapse live mode. In Synapse live mode, publishing is disabled. And you can view, run artifacts in live mode if you have been granted the right permission. -- **Edit artifacts in Studio**. 
Synapse studio is the only place you can enable workspace source control and sync changes to git automatically. Any change via SDK, PowerShell,is not synced to git. We recommend you always edit artifact in Studio when git is enabled.+- **Synapse live mode**. After publishing in git mode, all changes are reflected in Synapse live mode. In Synapse live mode, publishing is disabled. You can view and run artifacts in live mode if you've been granted the right permission. +- **Edit artifacts in Studio**. Synapse Studio is the only place where you can enable workspace source control and sync changes to git automatically. Any change made via the SDK or PowerShell isn't synced to git. We recommend you always edit artifacts in Studio when git is enabled. ## Troubleshooting git integration -### Access to git mode +### Access to git mode -If you have been granted the permission to the GitHub git repository linked with your workspace, but you cannot access to Git mode: +If you've been granted permission to the GitHub git repository linked with your workspace, but you can't access Git mode: -1. Clear your cache and refresh the page. +1. Clear your cache and refresh the page. -1. Login your GitHub account. +1. Sign in to your GitHub account. ### Stale publish branch If the publish branch is out of sync with the collaboration branch and contains 1. Reconfigure Git with the same settings, but make sure **Import existing resources to repository** is checked and choose the same branch. -1. Create a pull request to merge the changes to the collaboration branch +1. Create a pull request to merge the changes to the collaboration branch. ## Unsupported features -- Synapse Studio doesn't allow cherry-picking of commits or selective publishing of resources. +- Synapse Studio doesn't allow cherry-picking of commits or selective publishing of resources. - Synapse Studio doesn't support custom commit messages. 
- By design, the delete action in Studio is committed to git directly. -## Next steps +## Next step -* To implement continuous integration and deployment, see [Continuous integration and delivery (CI/CD)](continuous-integration-delivery.md). +> [!div class="nextstepaction"] +> [Implement continuous integration and deployment](continuous-integration-delivery.md) |
synapse-analytics | Data Explorer Ingest Event Hub Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-csharp.md | Title: Use C\# ingestion to ingest data from Event Hub into Azure Synapse Data Explorer (Preview) -description: Learn how to use C\# to ingest (load) data into Azure Synapse Data Explorer from Event Hub. + Title: Use C\# ingestion to ingest data from Event Hubs into Azure Synapse Data Explorer (Preview) +description: Learn how to use C\# to ingest (load) data into Azure Synapse Data Explorer from Event Hubs. Last updated 11/02/2021 -# Create an Event Hub data connection for Azure Synapse Data Explorer by using C# (Preview) +# Create an Event Hubs data connection for Azure Synapse Data Explorer by using C# (Preview) > [!div class="op_single_selector"] > * [Portal](data-explorer-ingest-event-hub-portal.md)-In this article, you create an Event Hub data connection for Azure Synapse Data Explorer by using C\#. +In this article, you create an Event Hubs data connection for Azure Synapse Data Explorer by using C\#. ## Prerequisites [!INCLUDE [data-explorer-ingest-prerequisites](../includes/data-explorer-ingest-prerequisites.md)] -- [Event Hub with data for ingestion](data-explorer-ingest-event-hub-portal.md#create-an-event-hub).+- [Event hub with data for ingestion](data-explorer-ingest-event-hub-portal.md#create-an-event-hub). > [!NOTE] > Ingesting data from an Event Hub into Data Explorer pools will not work if your Synapse workspace uses a managed virtual network with data exfiltration protection enabled. In this article, you create an Event Hub data connection for Azure Synapse Data [!INCLUDE [data-explorer-authentication](../includes/data-explorer-authentication.md)] -## Add an Event Hub data connection +## Add an Event Hubs data connection -The following example shows you how to add an Event Hub data connection programmatically. 
See [connect to the Event Hub](data-explorer-ingest-event-hub-portal.md#connect-to-the-event-hub) for information about adding an Event Hub data connection using the Azure portal. +The following example shows you how to add an Event Hubs data connection programmatically. See [connect to the Event Hubs](data-explorer-ingest-event-hub-portal.md#connect-to-the-event-hubs) for information about adding an Event Hubs data connection using the Azure portal. ```csharp var tenantId = "xxxxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxx";//Directory (tenant) ID await kustoManagementClient.DataConnections.CreateOrUpdateAsync(resourceGroupNam | tableName | *StormEvents* | The name of the target table in the target database.| | mappingRuleName | *StormEvents_CSV_Mapping* | The name of your column mapping related to the target table.| | dataFormat | *csv* | The data format of the message.|-| eventHubResourceId | *Resource ID* | The resource ID of your Event Hub that holds the data for ingestion. | -| consumerGroup | *$Default* | The consumer group of your Event Hub.| +| eventHubResourceId | *Resource ID* | The resource ID of your event hub that holds the data for ingestion. | +| consumerGroup | *$Default* | The consumer group of your event hub.| | location | *Central US* | The location of the data connection resource.| | compression | *Gzip* or *None* | The type of data compression. | ## Generate data -See the [sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) that generates data and sends it to an Event Hub. +See the [sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) that generates data and sends it to an event hub. An event can contain one or more records, up to its size limit. In the following sample we send two events, each has five records appended: |
synapse-analytics | Data Explorer Ingest Event Hub One Click | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-one-click.md | Title: Use one-click ingestion to ingest data from Event Hub into Azure Synapse Data Explorer (Preview) -description: Learn how to use one-click to ingest (load) data into Azure Synapse Data Explorer from Event Hub. + Title: Use one-click ingestion to ingest data from Event Hubs into Azure Synapse Data Explorer (Preview) +description: Learn how to use one-click ingestion to ingest (load) data into Azure Synapse Data Explorer from Event Hubs. Last updated 11/02/2021 -# Use one-click ingestion to create an Event Hub data connection for Azure Synapse Data Explorer (Preview) +# Use one-click ingestion to create an Event Hubs data connection for Azure Synapse Data Explorer (Preview) > [!div class="op_single_selector"] > * [Portal](data-explorer-ingest-event-hub-portal.md)-Azure Synapse Data Explorer offers ingestion (data loading) from Event Hubs, a big data streaming platform and event ingestion service. [Event Hubs](../../../event-hubs/event-hubs-about.md) can process millions of events per second in near real-time. In this article, you connect an Event Hub to a table in Azure Synapse Data Explorer using the [one-click ingestion](data-explorer-ingest-data-one-click.md) experience. +Azure Synapse Data Explorer offers ingestion (data loading) from Event Hubs, a big data streaming platform and event ingestion service. [Event Hubs](../../../event-hubs/event-hubs-about.md) can process millions of events per second in near real-time. In this article, you connect an event hub to a table in Azure Synapse Data Explorer using the [one-click ingestion](data-explorer-ingest-data-one-click.md) experience. 
## Prerequisites [!INCLUDE [data-explorer-ingest-prerequisites](../includes/data-explorer-ingest-prerequisites.md)] -- [Event Hub with data for ingestion](data-explorer-ingest-event-hub-portal.md#create-an-event-hub).+- [An event hub with data for ingestion](data-explorer-ingest-event-hub-portal.md#create-an-event-hub). > [!NOTE] > Ingesting data from an Event Hub into Data Explorer pools will not work if your Synapse workspace uses a managed virtual network with data exfiltration protection enabled. Azure Synapse Data Explorer offers ingestion (data loading) from Event Hubs, a b 1. In the left menu of the Web UI, select the **Data** tab. - :::image type="content" source="../media/ingest-data-event-hub/one-click-ingestion-event-hub.png" alt-text="Select one-click ingest data from Event Hub in the web UI."::: + :::image type="content" source="../media/ingest-data-event-hub/one-click-ingestion-event-hub.png" alt-text="Screenshot showing the Azure Data Explorer Data menu with ingest from event hub highlighted."::: 1. In the **Ingest data from Event Hub** card, select **Ingest**. The **Ingest new data** window opens with the **Destination** tab selected. 1.
Under **Data Connection**, fill in the following fields: - :::image type="content" source="../media/ingest-data-one-click/select-azure-data-explorer-ingest-event-hub-details.png" alt-text="Screenshot of source tab with project details fields to be filled in - ingest new data to Azure Synapse Data Explorer with Event Hub in the one click experience."::: + :::image type="content" source="../media/ingest-data-one-click/select-azure-data-explorer-ingest-event-hub-details.png" alt-text="Screenshot of source tab with project details fields to be filled in - ingest new data to Azure Synapse Data Explorer with Event Hubs in the one-click experience."::: |**Setting** | **Suggested value** | **Field description** |||| | Data connection name | *ContosoDataConnection* | The name that identifies your data connection.- | Subscription | | The subscription ID where the Event Hub resource is located. | - | Event Hub namespace | | The name that identifies your namespace. | - | Event Hub | | The Event Hub you wish to use. | - | Consumer group | | The consumer group defined in your Event Hub. | - | Event system properties | Select relevant properties | The [Event Hub system properties](../../../service-bus-messaging/service-bus-amqp-protocol-guide.md#message-annotations). If there are multiple records per event message, the system properties will be added to the first one. When adding system properties, [create](/azure/data-explorer/kusto/management/create-table-command?context=/azure/synapse-analytics/context/context) or [update](/azure/data-explorer/kusto/management/alter-table-command?context=/azure/synapse-analytics/context/context) table schema and [mapping](/azure/data-explorer/kusto/management/mappings?context=/azure/synapse-analytics/context/context) to include the selected properties. | + | Subscription | | The subscription ID where the Event Hubs resource is located. | + | Event Hubs namespace | | The name that identifies your namespace.
| + | Event hub | | The event hub you wish to use. | + | Consumer group | | The consumer group defined in your event hub. | + | Event system properties | Select relevant properties | The [Event Hubs system properties](../../../service-bus-messaging/service-bus-amqp-protocol-guide.md#message-annotations). If there are multiple records per event message, the system properties will be added to the first one. When adding system properties, [create](/azure/data-explorer/kusto/management/create-table-command?context=/azure/synapse-analytics/context/context) or [update](/azure/data-explorer/kusto/management/alter-table-command?context=/azure/synapse-analytics/context/context) table schema and [mapping](/azure/data-explorer/kusto/management/mappings?context=/azure/synapse-analytics/context/context) to include the selected properties. | 1. Select **Next: Schema**. ## Schema tab -Data is read from the Event Hub in form of [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) objects. Supported formats are CSV, JSON, PSV, SCsv, SOHsv TSV, TXT, and TSVE. +Data is read from the event hub in the form of [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) objects. Supported formats are CSV, JSON, PSV, SCsv, SOHsv, TSV, TXT, and TSVE. <!-- For information on schema mapping with JSON-formatted data, see [Edit the schema](one-click-ingestion-existing-table.md#edit-the-schema). For information on schema mapping with CSV-formatted data, see [Edit the schema](one-click-ingestion-new-table.md#edit-the-schema). --> -1. If the data you see in the preview window is not complete, you may need more data to create a table with all necessary data fields. +1. If the data you see in the preview window isn't complete, you may need more data to create a table with all necessary data fields.
Use the following commands to fetch new data from your event hub: * **Discard and fetch new data**: discards the data presented and searches for new events. * **Fetch more data**: searches for more events in addition to the events already found. For information on schema mapping with CSV-formatted data, see [Edit the schema] 1. Select **Next: Summary**. -## Continuous ingestion from Event Hub +## Continuous ingestion from Event Hubs -In the **Continuous ingestion from Event Hub established** window, all steps will be marked with green check marks when establishment finishes successfully. The cards below these steps give you options to explore your data with **Quick queries**, undo changes made using **Tools**, or **Monitor** the Event Hub connections and data. +In the **Continuous ingestion from Event Hub established** window, all steps will be marked with green check marks when establishment finishes successfully. The cards below these steps give you options to explore your data with **Quick queries**, undo changes made using **Tools**, or **Monitor** the Event Hubs connections and data. ## Next steps |
synapse-analytics | Data Explorer Ingest Event Hub Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-overview.md | Title: Event Hub data connection for Azure Synapse Data Explorer (Preview) -description: This article provides an overview of how to ingest (load) data into Azure Synapse Data Explorer from Event Hub. + Title: Event Hubs data connection for Azure Synapse Data Explorer (Preview) +description: This article provides an overview of how to ingest (load) data into Azure Synapse Data Explorer from Event Hubs. Last updated 11/02/2021 -# Event Hub data connection (Preview) +# Event Hubs data connection (Preview) [Azure Event Hubs](../../../event-hubs/event-hubs-about.md) is a big data streaming platform and event ingestion service. Azure Synapse Data Explorer offers continuous ingestion from customer-managed Event Hubs. -The Event Hub ingestion pipeline transfers events to Azure Synapse Data Explorer in several steps. You first create an Event Hub in the Azure portal. You then create a target table in Azure Synapse Data Explorer into which the [data in a particular format](#data-format), will be ingested using the given [ingestion properties](#ingestion-properties). The Event Hub connection needs to know [events routing](#events-routing). Data is embedded with selected properties according to the [event system properties mapping](#event-system-properties-mapping). [Create a connection](#event-hub-connection) to Event Hub to [create an Event Hub](#create-an-event-hub) and [send events](#send-events). This process can be managed through the [Azure portal](data-explorer-ingest-event-hub-portal.md), programmatically with [C#](data-explorer-ingest-event-hub-csharp.md) or [Python](data-explorer-ingest-event-hub-python.md), or with the [Azure Resource Manager template](data-explorer-ingest-event-hub-resource-manager.md). 
+The Event Hubs ingestion pipeline transfers events to Azure Synapse Data Explorer in several steps. You first create an event hub in the Azure portal. You then create a target table in Azure Synapse Data Explorer into which the [data in a particular format](#data-format) will be ingested using the given [ingestion properties](#ingestion-properties). The Event Hubs connection needs to know [events routing](#events-routing). Data is embedded with selected properties according to the [event system properties mapping](#event-system-properties-mapping). [Create a connection](#event-hubs-connection) to Event Hubs to [create an event hub](#create-an-event-hubs) and [send events](#send-events). This process can be managed through the [Azure portal](data-explorer-ingest-event-hub-portal.md), programmatically with [C#](data-explorer-ingest-event-hub-csharp.md) or [Python](data-explorer-ingest-event-hub-python.md), or with the [Azure Resource Manager template](data-explorer-ingest-event-hub-resource-manager.md). For general information about data ingestion in Azure Synapse Data Explorer, see [Azure Synapse Data Explorer data ingestion overview](data-explorer-ingest-data-overview.md). ## Data format -* Data is read from the Event Hub in form of [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) objects. +* Data is read from the event hub in the form of [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) objects. * See [supported formats](data-explorer-ingest-data-supported-formats.md). > [!NOTE] > Event Hubs doesn't support the .raw format. Ingestion properties instruct the ingestion process, where to route the data, an ## Events routing -When you set up an Event Hub connection to Azure Synapse Data Explorer cluster, you specify target table properties (table name, data format, compression, and mapping). The default routing for your data is also referred to as `static routing`.
+When you set up an Event Hubs connection to an Azure Synapse Data Explorer cluster, you specify target table properties (table name, data format, compression, and mapping). The default routing for your data is also referred to as `static routing`. You can also specify target table properties for each event, using event properties. The connection will dynamically route the data as specified in the [EventData.Properties](/dotnet/api/microsoft.servicebus.messaging.eventdata.properties#Microsoft_ServiceBus_Messaging_EventData_Properties), overriding the static properties for this event. -In the following example, set Event Hub details and send weather metric data to table `WeatherMetrics`. +In the following example, set event hub details and send weather metric data to table `WeatherMetrics`. Data is in `json` format. `mapping1` is pre-defined on the table `WeatherMetrics`. +>[!WARNING] +>This example uses connection string authentication to connect to Event Hubs for simplicity of the example. However, hard-coding a connection string into your script requires a very high degree of trust in the application, and carries security risks. +> +>For long-term, secure solutions, use one of these options: +> +>* [Passwordless authentication](../../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md?tabs=passwordless) +>* [Store your connection string in an Azure Key Vault](/azure/key-vault/secrets/quick-create-portal) and use [this method](/azure/key-vault/secrets/quick-create-net#retrieve-a-secret) to retrieve it in your code. + ```csharp var eventHubNamespaceConnectionString=<connection_string>; var eventHubName=<event_hub>; eventHubClient.Close(); ## Event system properties mapping -System properties store properties that are set by the Event Hubs service, at the time the event is enqueued. The Azure Synapse Data Explorer Event Hub connection will embed the selected properties into the data landing in your table.
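The routing behavior described above, static defaults on the connection that individual events can override through `EventData.Properties`, can be sketched in Python. The property names `Table`, `Format`, and `IngestionMappingReference` and the resolver function below are illustrative assumptions, not the connection's actual implementation:

```python
# Static routing configured on the connection (values from the example above).
STATIC_DEFAULTS = {
    "Table": "WeatherMetrics",
    "Format": "json",
    "IngestionMappingReference": "mapping1",
}

def resolve_routing(event_properties, defaults=STATIC_DEFAULTS):
    """Hypothetical resolver: per-event properties override static defaults."""
    routing = dict(defaults)
    for key in ("Table", "Format", "IngestionMappingReference"):
        if key in event_properties:
            routing[key] = event_properties[key]
    return routing

# An event without properties falls back to the static defaults...
print(resolve_routing({})["Table"])                         # WeatherMetrics
# ...while an event can redirect itself to another table.
print(resolve_routing({"Table": "AlertMetrics"})["Table"])  # AlertMetrics
```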
+System properties store properties that are set by the Event Hubs service at the time the event is enqueued. The Azure Synapse Data Explorer Event Hubs connection will embed the selected properties into the data landing in your table. [!INCLUDE [event-hub-system-mapping](../includes/data-explorer-event-hub-system-mapping.md)] ### System properties -Event Hub exposes the following system properties: +Event Hubs exposes the following system properties: |Property |Data Type |Description| |||| | x-opt-enqueued-time |datetime | UTC time when the event was enqueued |-| x-opt-sequence-number |long | The logical sequence number of the event within the partition stream of the Event Hub -| x-opt-offset |string | The offset of the event from the Event Hub partition stream. The offset identifier is unique within a partition of the Event Hub stream | +| x-opt-sequence-number |long | The logical sequence number of the event within the partition stream of the event hub | +| x-opt-offset |string | The offset of the event from the event hub partition stream. The offset identifier is unique within a partition of the event hub stream | | x-opt-publisher |string | The publisher name, if the message was sent to a publisher endpoint | | x-opt-partition-key |string |The partition key of the corresponding partition that stored the event | If you selected **Event system properties** in the **Data Source** section of th [!INCLUDE [data-explorer-container-system-properties](../includes/data-explorer-container-system-properties.md)] -## Event Hub connection +## Event Hubs connection > [!Note] > For best performance, create all resources in the same region as the Azure Synapse Data Explorer cluster. -### Create an Event Hub +### Create an Event Hubs -If you don't already have one, [Create an Event Hub](../../../event-hubs/event-hubs-create.md).
Connecting to Event Hub can be managed through the [Azure portal](data-explorer-ingest-event-hub-portal.md), programmatically with [C#](data-explorer-ingest-event-hub-csharp.md) or [Python](data-explorer-ingest-event-hub-python.md), or with the [Azure Resource Manager template](data-explorer-ingest-event-hub-resource-manager.md). +If you don't already have one, [create an event hub](../../../event-hubs/event-hubs-create.md). Connecting to Event Hubs can be managed through the [Azure portal](data-explorer-ingest-event-hub-portal.md), programmatically with [C#](data-explorer-ingest-event-hub-csharp.md) or [Python](data-explorer-ingest-event-hub-python.md), or with the [Azure Resource Manager template](data-explorer-ingest-event-hub-resource-manager.md). > [!Note] > * The partition count isn't changeable, so you should consider long-term scale when setting partition count. If you don't already have one, [Create an Event Hub](../../../event-hubs/event-h ## Send events -See the [sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) that generates data and sends it to an Event Hub. +See the [sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) that generates data and sends it to an event hub. -For an example of how to generate sample data, see [Ingest data from Event Hub into Azure Synapse Data Explorer](data-explorer-ingest-event-hub-portal.md#generate-sample-data) +For an example of how to generate sample data, see [Ingest data from Event Hubs into Azure Synapse Data Explorer](data-explorer-ingest-event-hub-portal.md#generate-sample-data). ## Set up Geo-disaster recovery solution -Event Hub offers a [Geo-disaster recovery](../../../event-hubs/event-hubs-geo-dr.md) solution. -Azure Synapse Data Explorer doesn't support `Alias` Event Hub namespaces. To implement the Geo-disaster recovery in your solution, create two Event Hub data connections: one for the primary namespace and one for the secondary namespace.
Azure Synapse Data Explorer will listen to both Event Hub connections. +Event Hubs offers a [Geo-disaster recovery](../../../event-hubs/event-hubs-geo-dr.md) solution. +Azure Synapse Data Explorer doesn't support `Alias` Event Hubs namespaces. To implement the Geo-disaster recovery in your solution, create two Event Hubs data connections: one for the primary namespace and one for the secondary namespace. Azure Synapse Data Explorer will listen to both Event Hubs connections. > [!NOTE] > It's the user's responsibility to implement a failover from the primary namespace to the secondary namespace. ## Next steps -- [Ingest data from Event Hub into Azure Synapse Data Explorer](data-explorer-ingest-event-hub-portal.md)-- [Create an Event Hub data connection for Azure Synapse Data Explorer using C#](data-explorer-ingest-event-hub-csharp.md)-- [Create an Event Hub data connection for Azure Synapse Data Explorer using Python](data-explorer-ingest-event-hub-python.md)-- [Create an Event Hub data connection for Azure Synapse Data Explorer using Azure Resource Manager template](data-explorer-ingest-event-hub-resource-manager.md)+- [Ingest data from Event Hubs into Azure Synapse Data Explorer](data-explorer-ingest-event-hub-portal.md) +- [Create an Event Hubs data connection for Azure Synapse Data Explorer using C#](data-explorer-ingest-event-hub-csharp.md) +- [Create an Event Hubs data connection for Azure Synapse Data Explorer using Python](data-explorer-ingest-event-hub-python.md) +- [Create an Event Hubs data connection for Azure Synapse Data Explorer using Azure Resource Manager template](data-explorer-ingest-event-hub-resource-manager.md) |
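The geo-disaster recovery guidance above boils down to two data connections that differ only in namespace while sharing the same target table settings (the `StormEvents` values from the parameter table earlier). A minimal Python sketch, with hypothetical namespace names and a plain dict standing in for the real data connection resource:

```python
# Target settings shared by both connections (values from the parameter
# table earlier in this article).
SHARED_TARGET = {
    "tableName": "StormEvents",
    "dataFormat": "csv",
    "mappingRuleName": "StormEvents_CSV_Mapping",
}

def make_connection(namespace, consumer_group="$Default"):
    """Build one data connection definition; namespace names are made up."""
    connection = dict(SHARED_TARGET)
    connection["eventHubNamespace"] = namespace
    connection["consumerGroup"] = consumer_group
    return connection

# One connection per namespace; Data Explorer listens to both, and
# failing over between the namespaces remains your responsibility.
connections = [make_connection("contoso-primary-ns"),
               make_connection("contoso-secondary-ns")]
print(len(connections))  # 2
```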
synapse-analytics | Data Explorer Ingest Event Hub Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-portal.md | Title: Ingest data from Event Hub into Azure Synapse Data Explorer (Preview) -description: Learn how to ingest (load) data into Azure Synapse Data Explorer from Event Hub. + Title: Ingest data from Event Hubs into Azure Synapse Data Explorer (Preview) +description: Learn how to ingest (load) data into Azure Synapse Data Explorer from Event Hubs. Last updated 11/02/2021 -# Ingest data from Event Hub into Azure Synapse Data Explorer +# Ingest data from Event Hubs into Azure Synapse Data Explorer > [!div class="op_single_selector"] > * [Portal](data-explorer-ingest-event-hub-portal.md)-Azure Synapse Data Explorer offers ingestion (data loading) from Event Hubs, a big data streaming platform and event ingestion service. [Event Hubs](../../../event-hubs/event-hubs-about.md) can process millions of events per second in near real time. In this article, you create an Event Hub, connect to it from Azure Synapse Data Explorer and see data flow through the system. +Azure Synapse Data Explorer offers ingestion (data loading) from Event Hubs, a big data streaming platform and event ingestion service. [Event Hubs](../../../event-hubs/event-hubs-about.md) can process millions of events per second in near real time. In this article, you create an event hub, connect to it from Azure Synapse Data Explorer, and see data flow through the system.
## Prerequisites Azure Synapse Data Explorer offers ingestion (data loading) from Event Hubs, a b ``` - We recommend using a [user assigned managed identity](../../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#user-assigned-managed-identity) or [system assigned managed identity](../../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) for the data connection (optional).-- [A sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) that generates data and sends it to an Event Hub. Download the sample app to your system.+- [A sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) that generates data and sends it to an event hub. Download the sample app to your system. - [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) to run the sample app. ## Sign in to the Azure portal Sign in to the [Azure portal](https://portal.azure.com/). -## Create an Event Hub +## Create an event hub -Create an Event Hub by using an Azure Resource Manager template in the Azure portal. +Create an event hub by using an Azure Resource Manager template in the Azure portal. -1. To create an Event Hub, use the following button to start the deployment. Right-click and select **Open in new window**, so you can follow the rest of the steps in this article. +1. To create an event hub, use the following button to start the deployment. Right-click and select **Open in new window**, so you can follow the rest of the steps in this article. :::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." 
border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.eventhub%2Fevent-hubs-create-event-hub-and-consumer-group%2Fazuredeploy.json"::: The **Deploy to Azure** button takes you to the Azure portal. -1. Select the subscription where you want to create the Event Hub, and create a resource group named *test-hub-rg*. +1. Select the subscription where you want to create the event hub, and create a resource group named *test-hub-rg*. ![Create a resource group](../media/ingest-data-event-hub/create-resource-group.png) Create an Event Hub by using an Azure Resource Manager template in the Azure por **Setting** | **Suggested value** | **Field description** ||||- | Subscription | Your subscription | Select the Azure subscription that you want to use for your Event Hub.| + | Subscription | Your subscription | Select the Azure subscription that you want to use for your Event Hubs.| | Resource group | *test-hub-rg* | Create a new resource group. |- | Location | *West US* | Select *West US* for this article. For a production system, select the region that best meets your needs. Create the Event Hub namespace in the same Location as the Azure Synapse Data Explorer cluster for best performance (most important for Event Hub namespaces with high throughput). + | Location | *West US* | Select *West US* for this article. For a production system, select the region that best meets your needs. Create the Event Hubs namespace in the same Location as the Azure Synapse Data Explorer cluster for best performance (most important for Event Hubs namespaces with high throughput). | Namespace name | A unique namespace name | Choose a unique name that identifies your namespace. For example, *mytestnamespace*. The domain name *servicebus.windows.net* is appended to the name you provide. The name can contain only letters, numbers, and hyphens. 
The name must start with a letter, and it must end with a letter or number. The value must be between 6 and 50 characters long.- | Event Hub name | *test-hub* | The Event Hub sits under the namespace, which provides a unique scoping container. The Event Hub name must be unique within the namespace. | + | Event hub name | *test-hub* | The event hub sits under the namespace, which provides a unique scoping container. The event hub name must be unique within the namespace. | | Consumer group name | *test-group* | Consumer groups enable multiple consuming applications to each have a separate view of the event stream. | | | | Create an Event Hub by using an Azure Resource Manager template in the Azure por 1. Review the **Summary** of resources created. Select **Create**, which acknowledges that you're creating resources in your subscription. - :::image type="content" source="../media/ingest-data-event-hub/review-create.png" alt-text="Screen shot of Azure portal for reviewing and creating Event Hub namespace, Event Hub, and consumer group."::: + :::image type="content" source="../media/ingest-data-event-hub/review-create.png" alt-text="Screenshot of Azure portal for reviewing and creating an Event Hubs namespace, event hub, and consumer group."::: 1. Select **Notifications** on the toolbar to monitor the provisioning process. It might take several minutes for the deployment to succeed, but you can move on to the next step now. Create an Event Hub by using an Azure Resource Manager template in the Azure por ### Authentication considerations -Depending on the type of identity, you are using to authenticate with the Event Hub, you may need some additional configurations. +Depending on the type of identity you're using to authenticate with the event hub, you might need additional configuration.
-- If you are authenticating with Event Hub using a user assigned managed identity, go to your Event Hub > **Networking**, and then under **Allow access from**, select **All networks** and save the changes.+- If you're authenticating with the event hub using a user-assigned managed identity, go to your event hub > **Networking**, and then under **Allow access from**, select **All networks** and save the changes. - :::image type="content" source="../media/ingest-data-event-hub/configure-event-hub-all-networks.png" alt-text="Screenshot of the Event Hub networking page, showing the selection of allowing access to all networks."::: + :::image type="content" source="../media/ingest-data-event-hub/configure-event-hub-all-networks.png" alt-text="Screenshot of the Event Hubs networking page, showing the selection of allowing access to all networks."::: -- If you are authenticating with the Event Hub using a system assigned managed identity, go to your Event Hub > **Networking**, and then either allow access from all networks or under **Allow access from**, select **Selected networks**, select **Allow trusted Microsoft services to bypass this firewall** and save the changes.+- If you're authenticating with the event hub using a system-assigned managed identity, go to your event hub > **Networking**, and then either allow access from all networks or under **Allow access from**, select **Selected networks**, select **Allow trusted Microsoft services to bypass this firewall** and save the changes.
- :::image type="content" source="../media/ingest-data-event-hub/configure-event-hub-trusted-services.png" alt-text="Screenshot of the Event Hub networking page, showing the selection of allowing access to trusted services."::: + :::image type="content" source="../media/ingest-data-event-hub/configure-event-hub-trusted-services.png" alt-text="Screenshot of the Event Hubs networking page, showing the selection of allowing access to trusted services."::: -## Connect to the Event Hub +## Connect to the Event Hubs -Now you connect to the Event Hub from Data Explorer pool. When this connection is in place, data that flows into the Event Hub streams to the test table you created earlier in this article. +Now you connect to the event hub from the Data Explorer pool. When this connection is in place, data that flows into the event hub streams to the test table you created earlier in this article. -1. Select **Notifications** on the toolbar to verify that the Event Hub deployment was successful. +1. Select **Notifications** on the toolbar to verify that the event hub deployment was successful. 1. Under the Data Explorer pool you created, select **Databases** > **TestDatabase**. - :::image type="content" source="../media/ingest-data-event-hub/select-test-database.png" alt-text="Select test database."::: + :::image type="content" source="../media/ingest-data-event-hub/select-test-database.png" alt-text="Screenshot of the test database pool, showing select test database."::: 1. Select **Data connections** and **Add data connection**. Now you connect to the Event Hub from Data Explorer pool. When this connection i Fill out the form with the following information, and then select **Create**. **Setting** | **Suggested value** | **Field description** |||| | Data connection name | *test-hub-connection* | The name of the connection you want to create in Azure Synapse Data Explorer.|-| Subscription | | The subscription ID where the Event Hub resource is located.
This field is autopopulated. | -| Event Hub namespace | A unique namespace name | The name you chose earlier that identifies your namespace. | -| Event Hub | *test-hub* | The Event Hub you created. | -| Consumer group | *test-group* | The consumer group defined in the Event Hub you created. | -| Event system properties | Select relevant properties | The [Event Hub system properties](../../../service-bus-messaging/service-bus-amqp-protocol-guide.md#message-annotations). If there are multiple records per event message, the system properties will be added to the first record. When adding system properties, [create](/azure/data-explorer/kusto/management/create-table-command?context=/azure/synapse-analytics/context/context) or [update](/azure/data-explorer/kusto/management/alter-table-command?context=/azure/synapse-analytics/context/context) table schema and [mapping](/azure/data-explorer/kusto/management/mappings?context=/azure/synapse-analytics/context/context) to include the selected properties. | -| Compression | *None* | The compression type of the Event Hub messages payload. Supported compression types: *None, Gzip*.| -| Managed Identity | System-assigned | The managed identity used by the Data Explorer cluster for access to read from the Event Hub.<br /><br />**Note**:<br />When the data connection is created:<br/>\- *System-assigned* identities are automatically created if they don't exist<br />\- The managed identity is automatically assigned the *Azure Event Hubs Data Receiver* role and is added to your Data Explorer cluster. We recommend verifying that the role was assigned and that the identity was added to the cluster. | +| Subscription | | The subscription ID where the Event Hubs resource is located. This field is autopopulated. | +| Event Hubs namespace | A unique namespace name | The name you chose earlier that identifies your namespace. | +| Event Hubs | *test-hub* | The Event Hubs you created. 
| +| Consumer group | *test-group* | The consumer group defined in the event hub you created. | +| Event system properties | Select relevant properties | The [Event Hubs system properties](../../../service-bus-messaging/service-bus-amqp-protocol-guide.md#message-annotations). If there are multiple records per event message, the system properties will be added to the first record. When adding system properties, [create](/azure/data-explorer/kusto/management/create-table-command?context=/azure/synapse-analytics/context/context) or [update](/azure/data-explorer/kusto/management/alter-table-command?context=/azure/synapse-analytics/context/context) table schema and [mapping](/azure/data-explorer/kusto/management/mappings?context=/azure/synapse-analytics/context/context) to include the selected properties. | +| Compression | *None* | The compression type of the event hub message payload. Supported compression types: *None, Gzip*.| +| Managed Identity | System-assigned | The managed identity used by the Data Explorer cluster for access to read from the event hub.<br /><br />**Note**:<br />When the data connection is created:<br/>\- *System-assigned* identities are automatically created if they don't exist<br />\- The managed identity is automatically assigned the *Azure Event Hubs Data Receiver* role and is added to your Data Explorer cluster. We recommend verifying that the role was assigned and that the identity was added to the cluster. | #### Target table There are two options for routing the ingested data: *static* and *dynamic*.-For this article, you use static routing, where you specify the table name, data format, and mapping as default values.
If the Event Hubs message includes data routing information, this routing information will override the default settings. 1. Fill out the following routing settings: - :::image type="content" source="../media/ingest-data-event-hub/default-routing-settings.png" alt-text="Default routing settings for ingesting data to Event Hub - Azure Synapse Data Explorer."::: + :::image type="content" source="../media/ingest-data-event-hub/default-routing-settings.png" alt-text="Default routing settings for ingesting data to Event Hubs - Azure Synapse Data Explorer."::: |**Setting** | **Suggested value** | **Field description** |||| If you selected **Event system properties** in the **Data Source** section of th ## Copy the connection string -When you run the [sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) listed in Prerequisites, you need the connection string for the Event Hub namespace. +When you run the [sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) listed in Prerequisites, you need the connection string for the Event Hubs namespace. -1. Under the Event Hub namespace you created, select **Shared access policies**, then **RootManageSharedAccessKey**. +1. Under the Event Hubs namespace you created, select **Shared access policies**, then **RootManageSharedAccessKey**. ![Shared access policies.](../media/ingest-data-event-hub/shared-access-policies.png) When you run the [sample app](https://github.com/Azure-Samples/event-hubs-dotnet Use the [sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) you downloaded to generate data. +>[!WARNING] +>This sample uses connection string authentication to connect to Event Hubs for simplicity of the example. However, hard-coding a connection string into your script requires a very high degree of trust in the application, and carries security risks. 
+> +>For long-term, secure solutions, use one of these options: +> +>* [Passwordless authentication](../../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md?tabs=passwordless) +>* [Store your connection string in an Azure Key Vault](/azure/key-vault/secrets/quick-create-portal) and use [this method](/azure/key-vault/secrets/quick-create-net#retrieve-a-secret) to retrieve it in your code. + 1. Open the sample app solution in Visual Studio.-1. In the *program.cs* file, update the `eventHubName` constant to the name of your Event Hub and update the `connectionString` constant to the connection string you copied from the Event Hub namespace. +1. In the *program.cs* file, update the `eventHubName` constant to the name of your event hub and update the `connectionString` constant to the connection string you copied from the Event Hubs namespace. ```csharp const string eventHubName = "test-hub"; Use the [sample app](https://github.com/Azure-Samples/event-hubs-dotnet-ingest) const string connectionString = @"<YourConnectionString>"; ``` -1. Build and run the app. The app sends messages to the Event Hub, and prints its status every 10 seconds. -1. After the app has sent a few messages, move on to the next step: reviewing the flow of data into your Event Hub and test table. +1. Build and run the app. The app sends messages to the event hub and prints its status every 10 seconds. +1. After the app has sent a few messages, move on to the next step: reviewing the flow of data into your event hub and test table. ## Review the data flow -With the app generating data, you can now see the flow of that data from the Event Hub to the table in your cluster. +With the app generating data, you can now see the flow of that data from the event hub to the table in your cluster. -1. In the Azure portal, under your Event Hub, you see the spike in activity while the app is running. -1.
In the Azure portal, under your event hub, you see the spike in activity while the app is running. ![Event Hub graph.](../media/ingest-data-event-hub/event-hub-graph.png) With the app generating data, you can now see the flow of that data from the Eve ## Clean up resources -If you don't plan to use your Event Hub again, clean up **test-hub-rg**, to avoid incurring costs. +If you don't plan to use your event hub again, clean up **test-hub-rg** to avoid incurring costs. 1. In the Azure portal, select **Resource groups** on the far left, and then select the resource group you created. |
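The static-versus-dynamic routing behavior described in this article reduces to a simple precedence rule: the data connection's static defaults apply unless the message carries its own routing properties. A sketch of that precedence (plain Python; the property names here are illustrative, not the exact system-property keys):

```python
# Defaults configured on the data connection (static routing).
static_defaults = {
    "Table": "TestTable",
    "Format": "JSON",
    "IngestionMappingReference": "TestMapping",
}

def effective_routing(message_properties: dict) -> dict:
    """Per-message routing properties override the static defaults."""
    return {**static_defaults, **message_properties}

print(effective_routing({}))                       # falls back to the defaults
print(effective_routing({"Table": "OtherTable"}))  # message overrides the table
```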
synapse-analytics | Data Explorer Ingest Event Hub Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-python.md | In this article, you create an Event Hub data connection for Azure Synapse Data ## Add an Event Hub data connection -The following example shows you how to add an Event Hub data connection programmatically. See [connect to the event hub](data-explorer-ingest-event-hub-portal.md#connect-to-the-event-hub) for adding an Event Hub data connection using the Azure portal. +The following example shows you how to add an Event Hub data connection programmatically. See [connect to the event hub](data-explorer-ingest-event-hub-portal.md#connect-to-the-event-hubs) for adding an Event Hub data connection using the Azure portal. ```Python from azure.mgmt.kusto import KustoManagementClient |
synapse-analytics | Data Explorer Ingest Event Hub Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-resource-manager.md | Title: Create an Event Hub data connection for Azure Synapse Data Explorer by using Azure Resource Manager template (Preview) -description: In this article, you learn how to create an Event Hub data connection for Azure Synapse Data Explorer by using Azure Resource Manager template. + Title: Create an Event Hubs data connection for Azure Synapse Data Explorer by using Azure Resource Manager template (Preview) +description: In this article, you learn how to create an Event Hubs data connection for Azure Synapse Data Explorer by using Azure Resource Manager template. Last updated 11/02/2021 -# Create an Event Hub data connection for Azure Synapse Data Explorer by using Azure Resource Manager template (Preview) +# Create an Event Hubs data connection for Azure Synapse Data Explorer by using Azure Resource Manager template (Preview) > [!div class="op_single_selector"] > * [Portal](data-explorer-ingest-event-hub-portal.md)-In this article, you create an Event Hub data connection for Azure Synapse Data Explorer by using Azure Resource Manager template. +In this article, you create an Event Hubs data connection for Azure Synapse Data Explorer by using Azure Resource Manager template. ## Prerequisites [!INCLUDE [data-explorer-ingest-prerequisites](../includes/data-explorer-ingest-prerequisites.md)] -- [Event Hub with data for ingestion](data-explorer-ingest-event-hub-portal.md#create-an-event-hub).+- [Event Hubs with data for ingestion](data-explorer-ingest-event-hub-portal.md#create-an-event-hub). 
[!INCLUDE [data-explorer-ingest-event-hub-table-mapping](../includes/data-explorer-ingest-event-hub-table-mapping.md)] -## Azure Resource Manager template for adding an Event Hub data connection +## Azure Resource Manager template for adding an Event Hubs data connection -The following example shows an Azure Resource Manager template for adding an Event Hub data connection. You can [edit and deploy the template in the Azure portal](/azure/azure-resource-manager/resource-manager-quickstart-create-templates-use-the-portal#edit-and-deploy-the-template) by using the form. +The following example shows an Azure Resource Manager template for adding an Event Hubs data connection. You can [edit and deploy the template in the Azure portal](/azure/azure-resource-manager/resource-manager-quickstart-create-templates-use-the-portal#edit-and-deploy-the-template) by using the form. ```json { |
synapse-analytics | Get Started Analyze Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-spark.md | Title: 'Quickstart: Get started analyzing with Spark' -description: In this tutorial, you'll learn to analyze data with Apache Spark. +description: In this tutorial, you'll learn to analyze some sample data with Apache Spark in Azure Synapse Analytics. - Previously updated : 11/18/2022+ Last updated : 11/15/2024 -# Analyze with Apache Spark +# Quickstart: Analyze with Apache Spark In this tutorial, you'll learn the basic steps to load and analyze data with Apache Spark for Azure Synapse. +## Prerequisites ++Make sure you have [placed the sample data in the primary storage account](get-started-create-workspace.md#place-sample-data-into-the-primary-storage-account). + ## Create a serverless Apache Spark pool 1. In Synapse Studio, on the left-side pane, select **Manage** > **Apache Spark pools**.-1. Select **New** +1. Select **New** 1. For **Apache Spark pool name**, enter **Spark1**. 1. For **Node size**, enter **Small**. 1. For **Number of nodes**, set the minimum to 3 and the maximum to 3. 1. Select **Review + create** > **Create**. Your Apache Spark pool will be ready in a few seconds. -## Understanding serverless Apache Spark pools +## Understand serverless Apache Spark pools A serverless Spark pool is a way of indicating how a user wants to work with Spark. When you start using a pool, a Spark session is created if needed. The pool controls how many Spark resources will be used by that session and how long the session will last before it automatically pauses. You pay for Spark resources used during that session and not for the pool itself. This way a Spark pool lets you use Apache Spark without managing clusters. This is similar to how a serverless SQL pool works. Data is available via the dataframe named **df**.
Load it into a Spark database spark.sql("CREATE DATABASE IF NOT EXISTS nyctaxi") df.write.mode("overwrite").saveAsTable("nyctaxi.trip") ```+ ## Analyze the NYC Taxi data using Spark and notebooks 1. Create a new code cell and enter the following code. Data is available via the dataframe named **df**. Load it into a Spark database 1. In the cell results, select **Chart** to see the data visualized. -## Next steps +## Next step > [!div class="nextstepaction"] > [Analyze data with dedicated SQL pool](get-started-analyze-sql-pool.md) |
synapse-analytics | Get Started Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-pipelines.md | -# Integrate with pipelines +# Tutorial: Integrate with pipelines In this tutorial, you'll learn how to integrate pipelines and activities using Synapse Studio. ## Create a pipeline and add a notebook activity 1. In Synapse Studio, go to the **Integrate** hub.-1. Select **+** > **Pipeline** to create a new pipeline. Click on the new pipeline object to open the Pipeline designer. +1. Select **+** > **Pipeline** to create a new pipeline. Select the new pipeline object to open the Pipeline designer. 1. Under **Activities**, expand the **Synapse** folder, and drag a **Notebook** object into the designer. 1. Select the **Settings** tab of the Notebook activity properties. Use the drop-down list to select a notebook from your current Synapse workspace. In this tutorial, you'll learn how to integrate pipelines and activities using S 1. In the pipeline, select **Add trigger** > **New/edit**. 1. In **Choose trigger**, select **New**, and set the **Recurrence** to "every 1 hour".-1. Select **OK**. -1. Select **Publish All**. +1. Select **OK**. +1. Select **Publish All**. ## Forcing a pipeline to run immediately -Once the pipeline is published, you may want to run it immediately without waiting for an hour to pass. +Once the pipeline is published, you might want to run it immediately without waiting for an hour to pass. 1. Open the pipeline.-1. Click **Add trigger** > **Trigger now**. -1. Select **OK**. +1. Select **Add trigger** > **Trigger now**. +1. Select **OK**. ## Monitor pipeline execution 1. Go to the **Monitor** hub. 1. Select **Pipeline runs** to monitor pipeline execution progress.-1. In this view you can switch between tabular **List** display a graphical **Gantt** chart. -1. Click on a pipeline name to see the status of activities in that pipeline. +1. 
In this view, you can switch between a tabular **List** display and a graphical **Gantt** chart. +1. Select a pipeline name to see the status of activities in that pipeline. -## Next steps +## Next step > [!div class="nextstepaction"] > [Visualize data with Power BI](get-started-visualize-power-bi.md) |
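The "every 1 hour" schedule configured for the trigger above boils down to a frequency plus an interval. A small sketch (stdlib Python; the real trigger JSON schema is richer than this) computing the next few run times from such a recurrence:

```python
from datetime import datetime, timedelta

# Mirrors the "every 1 hour" recurrence set in the trigger.
recurrence = {"frequency": "Hour", "interval": 1}
step = {"Minute": "minutes", "Hour": "hours", "Day": "days"}[recurrence["frequency"]]

def next_runs(start: datetime, count: int) -> list:
    """Return the next `count` scheduled run times after `start`."""
    delta = timedelta(**{step: recurrence["interval"]})
    return [start + delta * i for i in range(1, count + 1)]

runs = next_runs(datetime(2024, 1, 1, 9, 0), 3)
print([r.isoformat() for r in runs])
# ['2024-01-01T10:00:00', '2024-01-01T11:00:00', '2024-01-01T12:00:00']
```

Triggering the pipeline with **Trigger now** simply bypasses this schedule for a single immediate run.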
synapse-analytics | How To Move Workspace From One Region To Another | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/how-to-move-workspace-from-one-region-to-another.md | New-AzStorageAccount -ResourceGroupName $resourceGroupName ` -EnableHierarchicalNamespace $true ``` - #### Create an Azure Synapse workspace ```powershell |
synapse-analytics | Tutorial Score Model Predict Spark Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool.md | Make sure all prerequisites are in place before following these steps for using > [!NOTE] > Update tenant, client, subscription, resource group, AML workspace and linked service details in this script before running it. - - **Through service principal:** You can use service principal client ID and secret directly to authenticate to AML workspace. Service principal must have "Contributor" access to the AML workspace. + - **(Recommended) Through linked service:** You can use linked service to authenticate to AML workspace. Linked service can use "service principal" or Synapse workspace's "Managed Service Identity (MSI)" for authentication. "Service principal" or "Managed Service Identity (MSI)" must have "Contributor" access to the AML workspace. ++ ```python + #AML workspace authentication using linked service + from notebookutils.mssparkutils import azureML + ws = azureML.getWorkspace("<linked_service_name>") # "<linked_service_name>" is the linked service name, not AML workspace name. Also, linked service supports MSI and service principal both + ``` ++ - **Through service principal:** Though not recommended, you can use service principal client ID and secret directly to authenticate to AML workspace. Providing the service principal password directly poses some security risk, so we suggest using a linked service where possible. Service principal must have "Contributor" access to the AML workspace. ```python #AML workspace authentication using service principal Make sure all prerequisites are in place before following these steps for using ) ``` - - **Through linked service:** You can use linked service to authenticate to AML workspace. Linked service can use "service principal" or Synapse workspace's "Managed Service Identity (MSI)" for authentication. 
"Service principal" or "Managed Service Identity (MSI)" must have "Contributor" access to the AML workspace. -- ```python - #AML workspace authentication using linked service - from notebookutils.mssparkutils import azureML - ws = azureML.getWorkspace("<linked_service_name>") # "<linked_service_name>" is the linked service name, not AML workspace name. Also, linked service supports MSI and service principal both - ``` - 4. **Enable PREDICT in spark session:** Set the spark configuration `spark.synapse.ml.predict.enabled` to `true` to enable the library. ```python Make sure all prerequisites are in place before following these steps for using from azureml.core import Workspace, Model from azureml.core.authentication import ServicePrincipalAuthentication+ from notebookutils.mssparkutils import azureML AZURE_TENANT_ID = "xyz" AZURE_CLIENT_ID = "xyz" Make sure all prerequisites are in place before following these steps for using AML_RESOURCE_GROUP = "xyz" AML_WORKSPACE_NAME = "xyz" - svc_pr = ServicePrincipalAuthentication( - tenant_id=AZURE_TENANT_ID, - service_principal_id=AZURE_CLIENT_ID, - service_principal_password=AZURE_CLIENT_SECRET - ) -- ws = Workspace( - workspace_name = AML_WORKSPACE_NAME, - subscription_id = AML_SUBSCRIPTION_ID, - resource_group = AML_RESOURCE_GROUP, - auth=svc_pr - ) + #AML workspace authentication using linked service + ws = azureML.getWorkspace("<linked_service_name>") # "<linked_service_name>" is the linked service name, not AML workspace name. Also, linked service supports MSI and service principal both model = Model.register( model_path="./artifacts/output", |
synapse-analytics | Tutorial Text Analytics Use Mmlspark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md | description: Learn how to use text analytics in Azure Synapse Analytics. Previously updated : 11/02/2021 Last updated : 11/19/2024 +# customer intent: As a Synapse Analytics user, I want to be able to analyze my text using Azure AI services. # Tutorial: Text Analytics with Azure AI services -[Text Analytics](/azure/ai-services/language-service/) is an [Azure AI services](/azure/ai-services/) that enables you to perform text mining and text analysis with Natural Language Processing (NLP) features. In this tutorial, you'll learn how to use [Text Analytics](/azure/ai-services/language-service/) to analyze unstructured text on Azure Synapse Analytics. +In this tutorial, you learn how to use [Text Analytics](/azure/ai-services/language-service/) to analyze unstructured text on Azure Synapse Analytics. [Text Analytics](/azure/ai-services/language-service/) is one of the [Azure AI services](/azure/ai-services/) and enables you to perform text mining and text analysis with Natural Language Processing (NLP) features. This tutorial demonstrates using text analytics with [SynapseML](https://github.com/microsoft/SynapseML) to: If you don't have an Azure subscription, [create a free account before you begin - [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Spark pool in your Azure Synapse Analytics workspace.
For details, see [Create a Spark pool in Azure Synapse](../quickstart-create-sql-pool-studio.md).-- Pre-configuration steps described in the tutorial [Configure Azure AI services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md).-+- Preconfiguration steps described in the tutorial [Configure Azure AI services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md). ## Get started-Open Synapse Studio and create a new notebook. To get started, import [SynapseML](https://github.com/microsoft/SynapseML). ++Open Synapse Studio and create a new notebook. To get started, import [SynapseML](https://github.com/microsoft/SynapseML). ```python import synapse.ml-from synapse.ml.cognitive import * +from synapse.ml.services import * from pyspark.sql.functions import col ``` ## Configure text analytics -Use the linked text analytics you configured in the [pre-configuration steps](tutorial-configure-cognitive-services-synapse.md) . +Use the linked text analytics you configured in the [preconfiguration steps](tutorial-configure-cognitive-services-synapse.md). ```python-ai_service_name = "<Your linked service for text analytics>" +linked_service_name = "<Your linked service for text analytics>" ``` ## Text Sentiment-The Text Sentiment Analysis provides a way for detecting the sentiment labels (such as "negative", "neutral" and "positive") and confidence scores at the sentence and document-level. See the [Supported languages in Text Analytics API](/azure/ai-services/language-service/language-detection/overview?tabs=sentiment-analysis) for the list of enabled languages. ++The Text Sentiment Analysis provides a way for detecting the sentiment labels (such as "negative", "neutral", and "positive") and confidence scores at the sentence and document-level. See the [Supported languages in Text Analytics API](/azure/ai-services/language-service/language-detection/overview?tabs=sentiment-analysis) for the list of enabled languages. 
```python # Create a dataframe that's tied to it's column names df = spark.createDataFrame([- ("I am so happy today, its sunny!", "en-US"), + ("I am so happy today, it's sunny!", "en-US"), ("I am frustrated by this rush hour traffic", "en-US"), ("The Azure AI services on spark aint bad", "en-US"), ], ["text", "language"]) display(results .select("text", "sentiment")) ```+ ### Expected results |text|sentiment| |||-|I am so happy today, its sunny!|positive| -|I am frustrated by this rush hour traffic|negative| -|The Azure AI services on spark aint bad|positive| +|I'm so happy today, it's sunny!|positive| +|I'm frustrated by this rush hour traffic|negative| +|The Azure AI services on spark aint bad|neutral| ner = (NER() display(ner.transform(df).select("text", col("replies").getItem("document").getItem("entities").alias("entities"))) ```+ ### Expected results+ ![Expected results for named entity recognition v3.1](./media/tutorial-text-analytics-use-mmlspark/expected-output-ner-v-31.png) ## Personally Identifiable Information (PII) V3.1+ The PII feature is part of NER and it can identify and redact sensitive entities in text that are associated with an individual person such as: phone number, email address, mailing address, passport number. See the [Supported languages in Text Analytics API](/azure/ai-services/language-service/language-detection/overview?tabs=pii) for the list of enabled languages. ```python pii = (PII() display(pii.transform(df).select("text", col("replies").getItem("document").getItem("entities").alias("entities"))) ```+ ### Expected results+ ![Expected results for personal identifiable information v3.1](./media/tutorial-text-analytics-use-mmlspark/expected-output-pii-v-31.png) ## Clean up resources+ To ensure the Spark instance is shut down, end any connected sessions(notebooks). The pool shuts down when the **idle time** specified in the Apache Spark pool is reached. 
You can also select **stop session** from the status bar at the upper right of the notebook. ![Screenshot showing the Stop session button on the status bar.](./media/tutorial-build-applications-use-mmlspark/stop-session.png) -## Next steps +## Related content * [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning) * [SynapseML GitHub Repo](https://github.com/microsoft/SynapseML) |
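Under the hood, sentiment analysis returns per-label confidence scores, and the displayed label is the highest-scoring one, which is how the same sentence can shift from *positive* to *neutral* between API versions. A small illustration (plain Python; the score values are made up for this sketch):

```python
def label_from_scores(scores: dict) -> str:
    """Pick the sentiment label with the highest confidence score."""
    return max(scores, key=scores.get)

# Hypothetical confidence scores for an ambiguous sentence.
scores = {"positive": 0.18, "neutral": 0.75, "negative": 0.07}
print(label_from_scores(scores))  # neutral
```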
synapse-analytics | Synapse Workspace Managed Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-managed-private-endpoints.md | Title: Managed private endpoints description: An article that explains Managed private endpoints in Azure Synapse Analytics -+ Previously updated : 01/12/2020 Last updated : 11/15/2024 -# Synapse Managed private endpoints --This article will explain Managed private endpoints in Azure Synapse Analytics. --## Managed private endpoints +# Azure Synapse Analytics managed private endpoints Managed private endpoints are private endpoints created in a Managed Virtual Network associated with your Azure Synapse workspace. Managed private endpoints establish a private link to Azure resources. Azure Synapse manages these private endpoints on your behalf. You can create Managed private endpoints from your Azure Synapse workspace to access Azure services (such as Azure Storage or Azure Cosmos DB) and Azure hosted customer/partner services. Learn more about [private links and private endpoints](../../private-link/index. >[!NOTE] >When creating an Azure Synapse workspace, you can choose to associate a Managed Virtual Network to it. If you choose to have a Managed Virtual Network associated to your workspace, you can also choose to limit outbound traffic from your workspace to only approved targets. You must create Managed private endpoints to these targets. - A private endpoint connection is created in a "Pending" state when you create a Managed private endpoint in Azure Synapse. An approval workflow is started. The private link resource owner is responsible to approve or reject the connection. If the owner approves the connection, the private link is established. But, if the owner doesn't approve the connection, then the private link won't be established. In either case, the Managed private endpoint will be updated with the status of the connection. 
Only a Managed private endpoint in an approved state can be used to send traffic to the private link resource that is linked to the Managed private endpoint. ## Managed private endpoints for dedicated SQL pool and serverless SQL pool -Dedicated SQL pool and serverless SQL pool are analytic capabilities in your Azure Synapse workspace. These capabilities use multi-tenant infrastructure that isn't deployed into the [Managed workspace Virtual Network](./synapse-workspace-managed-vnet.md). +Dedicated SQL pool and serverless SQL pool are analytic capabilities in your Azure Synapse workspace. These capabilities use multitenant infrastructure that isn't deployed into the [Managed workspace Virtual Network](./synapse-workspace-managed-vnet.md). -When a workspace is created, Azure Synapse creates two Managed private endpoints in the workspace, one for dedicated SQL pool and one for serverless SQL pool. +When a workspace is created, Azure Synapse creates two Managed private endpoints in the workspace, one for dedicated SQL pool and one for serverless SQL pool. These two Managed private endpoints are listed in Synapse Studio. Select **Manage** in the left navigation, then select **Managed private endpoints** to see them in the Studio. The Managed private endpoint that targets SQL pool is called *synapse-ws-sql--\< These two Managed private endpoints are automatically created for you when you create your Azure Synapse workspace. You aren't charged for these two Managed private endpoints. - ## Supported data sources Azure Synapse Spark supports over 25 data sources to connect to using managed private endpoints. Users need to specify the resource identifier, which can be found in the **Properties** settings page of their data source in the Azure portal. 
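The approval workflow described above is a small state machine: a connection starts in "Pending", and only the private link resource owner's decision moves it to "Approved" or "Rejected". A toy sketch of those transitions (plain Python, illustrative only):

```python
# Allowed transitions: only a pending connection can be decided by the owner.
VALID_TRANSITIONS = {"Pending": {"Approved", "Rejected"}}

def decide(state: str, decision: str) -> str:
    """Apply the owner's decision to a private endpoint connection state."""
    if decision in VALID_TRANSITIONS.get(state, set()):
        return decision
    raise ValueError(f"cannot move from {state} to {decision}")

print(decide("Pending", "Approved"))  # Approved: the private link is established
```

Only the "Approved" terminal state permits traffic to the linked resource.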
Azure Synapse Spark supports over 25 data sources to connect to using managed pr | Azure App Services | /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/sites/{app-service-name} -## Next steps +## Get started -To learn more, advance to the [Create Managed private endpoints to your data sources](./how-to-create-managed-private-endpoints.md) article. +To learn more, advance to the [create managed private endpoints to your data sources](./how-to-create-managed-private-endpoints.md) article. |
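Every resource identifier in the table above follows the same `/subscriptions/.../resourceGroups/.../providers/...` shape. A quick sketch (stdlib Python; the ID values are illustrative) that splits such an ID into components, handy for sanity-checking the value copied from the **Properties** page:

```python
def parse_resource_id(resource_id: str) -> dict:
    """Split a flat Azure resource ID into alternating key/value segments.

    Note: this simple pairing doesn't handle nested child-resource types.
    """
    segments = resource_id.strip("/").split("/")
    return dict(zip(segments[::2], segments[1::2]))

# Illustrative resource ID for an App Service site.
rid = ("/subscriptions/0000-0000/resourceGroups/my-rg"
       "/providers/Microsoft.Web/sites/my-app")
parts = parse_resource_id(rid)
print(parts["resourceGroups"])  # my-rg
print(parts["sites"])           # my-app
```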
synapse-analytics | Apache Spark Azure Create Spark Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-create-spark-configuration.md | -In this tutorial, you will learn how to create an Apache Spark configuration for your synapse studio. The created Apache Spark configuration can be managed in a standardized manner and when you create Notebook or Apache spark job definition can select the Apache Spark configuration that you want to use with your Apache Spark pool. When you select it, the details of the configuration are displayed. +In this article, you learn how to create an Apache Spark configuration in Synapse Studio. The created Apache Spark configuration can be managed in a standardized manner, and when you create a notebook or Apache Spark job definition, you can select the Apache Spark configuration that you want to use with your Apache Spark pool. When you select it, the details of the configuration are displayed. -## Create an Apache Spark Configuration +## Create an Apache Spark Configuration You can create custom configurations from different entry points, such as from the Apache Spark configuration page of an existing Spark pool. You can create custom configurations from different entry points, such as from t Follow the steps below to create an Apache Spark Configuration in Synapse Studio.
For **Configuration properties**, customize the configuration by clicking **Add** button to add properties. If you do not add a property, Azure Synapse will use the default value when applicable. - - ![Screenshot that create spark configuration.](./media/apache-spark-azure-log-analytics/create-spark-configuration.png) - - 8. Click on **Continue** button. - 9. Click on **Create** button when the validation succeeded. - 10. Publish all ---> [!NOTE] +1. Select **Manage** > **Apache Spark configurations**. +1. Select **New** button to create a new Apache Spark configuration, or select **Import** a local .json file to your workspace. +1. **New Apache Spark configuration** page will be opened after you select **New** button. +1. For **Name**, you can enter your preferred and valid name. +1. For **Description**, you can input some description in it. +1. For **Annotations**, you can add annotations by clicking the **New** button, and also you can delete existing annotations by selecting and clicking **Delete** button. +1. For **Configuration properties**, customize the configuration by clicking **Add** button to add properties. If you don't add a property, Azure Synapse will use the default value when applicable. ++ ![Screenshot that create spark configuration.](./media/apache-spark-azure-log-analytics/create-spark-configuration.png) ++1. Select **Continue** button. +1. Select **Create** button when the validation succeeded. +1. Publish all. ++> [!NOTE] +> **Upload Apache Spark configuration** feature has been removed. >-> **Upload Apache Spark configuration** feature has been removed, but Synapse Studio will keep your previously uploaded configuration. +> Pools using an uploaded configuration need to be updated. [Update your pool's configuration](#create-an-apache-spark-configuration-in-already-existing-apache-spark-pool) by selecting an existing configuration or creating a new configuration in the **Apache Spark configuration** menu for the pool. 
If no new configuration is selected, jobs for these pools will be run using the default configuration in the Spark system settings. ## Create an Apache Spark Configuration in already existing Apache Spark pool Follow the steps below to create an Apache Spark configuration in an existing Apache Spark pool. - 1. Select an existing Apache Spark pool, and click on action "..." button. - 2. Select the **Apache Spark configuration** in the content list. - - ![Screenshot that apache spark configuration.](./media/apache-spark-azure-create-spark-configuration/create-spark-configuration-by-right-click-on-spark-pool.png) -- 3. For Apache Spark configuration, you can select an already created configuration from the drop-down list, or click on **+New** to create a new configuration. - - * If you click **+New**, the Apache Spark Configuration page will open, and you can create a new configuration by following the steps in [Create custom configurations in Apache Spark configurations](#create-custom-configurations-in-apache-spark-configurations). - * If you select an existing configuration, the configuration details will be displayed at the bottom of the page, you can also click the **Edit** button to edit the existing configuration. - - ![Screenshot that edit spark configuration.](./media/apache-spark-azure-create-spark-configuration/edit-spark-config.png) - - 4. Click **View Configurations** to open the **Select a Configuration** page. All configurations will be displayed on this page. You can select a configuration that you want to use on this Apache Spark pool. - - ![Screenshot that select a configuration.](./media/apache-spark-azure-create-spark-configuration/select-a-configuration.png) +1. Select an existing Apache Spark pool, and select action "..." button. +1. Select the **Apache Spark configuration** in the content list. - 5. Click on **Apply** button to save your action. 
+ ![Screenshot that apache spark configuration.](./media/apache-spark-azure-create-spark-configuration/create-spark-configuration-by-right-click-on-spark-pool.png) +1. For Apache Spark configuration, you can select an already created configuration from the drop-down list, or select **+New** to create a new configuration. ++ * If you select **+New**, the Apache Spark Configuration page will open, and you can create a new configuration by following the steps in [Create custom configurations in Apache Spark configurations](#create-custom-configurations-in-apache-spark-configurations). + * If you select an existing configuration, the configuration details will be displayed at the bottom of the page, you can also select the **Edit** button to edit the existing configuration. ++ ![Screenshot that edit spark configuration.](./media/apache-spark-azure-create-spark-configuration/edit-spark-config.png) ++1. Select **View Configurations** to open the **Select a Configuration** page. All configurations will be displayed on this page. You can select a configuration that you want to use on this Apache Spark pool. + + ![Screenshot that select a configuration.](./media/apache-spark-azure-create-spark-configuration/select-a-configuration.png) ++1. Select **Apply** button to save your action. ## Create an Apache Spark Configuration in the Notebook's configure session If you need to use a custom Apache Spark Configuration when creating a Notebook, you can create and configure it in the **configure session** by following the steps below. - 1. Create a new/Open an existing Notebook. - 2. Open the **Properties** of this notebook. - 3. Click on **Configure session** to open the Configure session page. - 4. Scroll down the configure session page, for Apache Spark configuration, expand the drop-down menu, you can click on New button to [create a new configuration](#create-custom-configurations-in-apache-spark-configurations). 
Or select an existing configuration, if you select an existing configuration, click the **Edit** icon to go to the Edit Apache Spark configuration page to edit the configuration. - 5. Click **View Configurations** to open the **Select a Configuration** page. All configurations will be displayed on this page. You can select a configuration that you want to use. +1. Create a new Notebook, or open an existing one. +1. Open the **Properties** of this notebook. +1. Select **Configure session** to open the Configure session page. +1. Scroll down the configure session page. For Apache Spark configuration, expand the drop-down menu and select the **New** button to [create a new configuration](#create-custom-configurations-in-apache-spark-configurations). Or select an existing configuration; if you do, select the **Edit** icon to go to the Edit Apache Spark configuration page to edit the configuration. +1. Select **View Configurations** to open the **Select a Configuration** page. All configurations will be displayed on this page. You can select a configuration that you want to use. - ![Screenshot that create configuration in configure session.](./media/apache-spark-azure-create-spark-configuration/create-spark-config-in-configure-session.png) + ![Screenshot that create configuration in configure session.](./media/apache-spark-azure-create-spark-configuration/create-spark-config-in-configure-session.png) ## Create an Apache Spark Configuration in Apache Spark job definitions -When you are creating a spark job definition, you need to use Apache Spark configuration, which can be created by following the steps below: -- 1. Create a new/Open an existing Apache Spark job definitions. - 2. For **Apache Spark configuration**, you can click on New button to [create a new configuration](#create-custom-configurations-in-apache-spark-configurations).
Or select an existing configuration in the drop-down menu, if you select an existing configuration, click the **Edit** icon to go to the Edit Apache Spark configuration page to edit the configuration. - 3. Click **View Configurations** to open the **Select a Configuration** page. All configurations will be displayed on this page. You can select a configuration that you want to use. +When you're creating a spark job definition, you need to use Apache Spark configuration, which can be created by following the steps below: - ![Screenshot that create configuration in spark job definitions.](./media/apache-spark-azure-create-spark-configuration/create-spark-config-in-spark-job-definition.png) +1. Create a new Apache Spark job definition, or open an existing one. +1. For **Apache Spark configuration**, you can select the **New** button to [create a new configuration](#create-custom-configurations-in-apache-spark-configurations). Or select an existing configuration in the drop-down menu; if you do, select the **Edit** icon to go to the Edit Apache Spark configuration page to edit the configuration. +1. Select **View Configurations** to open the **Select a Configuration** page. All configurations will be displayed on this page. You can select a configuration that you want to use. + ![Screenshot that create configuration in spark job definitions.](./media/apache-spark-azure-create-spark-configuration/create-spark-config-in-spark-job-definition.png) -> [!NOTE] > > If no Apache Spark configuration is specified in the Notebook or Apache Spark job definition, the default configuration will be used when running the job. - ## Import and Export an Apache Spark configuration You can import a configuration in .txt, .conf, or .json format, convert it to an artifact, and publish it. You can also export a configuration to one of these three formats. -- Import .txt/.conf/.json configuration from local.+* Import .txt/.conf/.json configuration from local.
![Screenshot that import config.](./media/apache-spark-azure-create-spark-configuration/import-config.png) --- Export .txt/.conf/.json configuration to local.+* Export .txt/.conf/.json configuration to local. ![Screenshot that export config.](./media/apache-spark-azure-create-spark-configuration/export-config.png) - For .txt config file and .conf config file, you can refer to the following examples: - ```txt +```txt - spark.synapse.key1 sample - spark.synapse.key2 true - # spark.synapse.key3 sample2 +spark.synapse.key1 sample +spark.synapse.key2 true +# spark.synapse.key3 sample2 - ``` +``` For .json config file, you can refer to the following examples: - ```json - { - "configs": { - "spark.synapse.key1": "hello world", - "spark.synapse.key2": "true" - }, - "annotations": [ - "Sample" - ] - } - ``` --> [!NOTE] -> +```json +{ + "configs": { + "spark.synapse.key1": "hello world", + "spark.synapse.key2": "true" + }, + "annotations": [ + "Sample" + ] +} +``` ++> [!NOTE] > Synapse Studio will continue to support terraform or bicep-based configuration files. +## Related content -## Next steps -+* [Use serverless Apache Spark pool in Synapse Studio](../quickstart-create-apache-spark-pool-studio.md). +* [Run a Spark application in notebook](./apache-spark-development-using-notebooks.md). +* [Collect Apache Spark applications logs and metrics with Azure Storage account](./azure-synapse-diagnostic-emitters-azure-storage.md). |
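The .txt/.conf and .json forms above carry the same configuration information. As a quick local illustration of that equivalence (a sketch, not part of Synapse Studio; the `conf_to_json` helper is hypothetical), the key-value form can be converted into the JSON `configs` shape, skipping `#`-commented lines:

```python
import json

def conf_to_json(conf_text: str) -> str:
    """Convert "key value" .conf lines into the JSON 'configs' shape.
    Blank lines and lines starting with '#' are skipped as comments."""
    configs = {}
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The key is everything up to the first space; the rest is the value.
        key, _, value = line.partition(" ")
        configs[key] = value.strip()
    return json.dumps({"configs": configs}, indent=2)

sample = """spark.synapse.key1 sample
spark.synapse.key2 true
# spark.synapse.key3 sample2
"""
print(conf_to_json(sample))
```

The commented `spark.synapse.key3` line is dropped, matching how the examples above comment out a property.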
synapse-analytics | Apache Spark Azure Portal Add Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md | Title: Manage Apache Spark packages description: Learn how to add and manage libraries used by Apache Spark in Azure Synapse Analytics. -+ Previously updated : 04/15/2023 Last updated : 11/15/2024 To learn more about how to manage session-scoped packages, see the following art - [R session packages](./apache-spark-manage-session-packages.md#session-scoped-r-packages-preview): Within your session, you can install packages across all nodes within your Spark pool by using `install.packages` or `devtools`. - ## Automate the library management process through Azure PowerShell cmdlets and REST APIs If your team wants to manage libraries without visiting the package management UIs, you have the option to manage the workspace packages and pool-level package updates through Azure PowerShell cmdlets or REST APIs for Azure Synapse Analytics. For more information, see the following articles: - [Manage your Spark pool libraries through REST APIs](apache-spark-manage-packages-outside-ui.md#manage-packages-through-rest-apis) - [Manage your Spark pool libraries through Azure PowerShell cmdlets](apache-spark-manage-packages-outside-ui.md#manage-packages-through-azure-powershell-cmdlets) -## Next steps +## Related content - [View the default libraries and supported Apache Spark versions](apache-spark-version-support.md) - [Troubleshoot library installation errors](apache-spark-troubleshoot-library-errors.md) |
synapse-analytics | Apache Spark External Metastore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-external-metastore.md | The feature works with Spark 3.1. The following table shows the supported Hive M Follow below steps to set up a linked service to the external Hive Metastore in Synapse workspace. -1. Open Synapse Studio, go to **Manage > Linked services** at left, click **New** to create a new linked service. +1. Open Synapse Studio, go to **Manage > Linked services** at left, select **New** to create a new linked service. :::image type="content" source="./media/use-external-metastore/set-up-hive-metastore-linked-service.png" alt-text="Set up Hive Metastore linked service" border="true"::: -2. Choose **Azure SQL Database** or **Azure Database for MySQL** based on your database type, click **Continue**. +2. Choose **Azure SQL Database** or **Azure Database for MySQL** based on your database type, select **Continue**. 3. Provide **Name** of the linked service. Record the name of the linked service, this info will be used to configure Spark shortly. Follow below steps to set up a linked service to the external Hive Metastore in 6. **Test connection** to verify the username and password. -7. Click **Create** to create the linked service. +7. Select **Create** to create the linked service. ### Test connection and get the metastore version in notebook-Some network security rule settings may block access from Spark pool to the external Hive Metastore DB. Before you configure the Spark pool, run below code in any Spark pool notebook to test connection to the external Hive Metastore DB. ++Some network security rule settings could block access from Spark pool to the external Hive Metastore DB. Before you configure the Spark pool, run below code in any Spark pool notebook to test connection to the external Hive Metastore DB. You can also get your Hive Metastore version from the output results. 
The Hive Metastore version will be used in the Spark configuration. +>[!WARNING] +>Don't publish the test scripts in your notebook with your password hardcoded as this could cause a potential security risk for your Hive Metastore. + #### Connection testing code for Azure SQL+ ```scala %%spark import java.sql.DriverManager try { ``` #### Connection testing code for Azure Database for MySQL+ ```scala %%spark import java.sql.DriverManager try { ``` ## Configure Spark to use the external Hive Metastore-After creating the linked service to the external Hive Metastore successfully, you need to set up a few Spark configurations to use the external Hive Metastore. You can both set up the configuration at Spark pool level, or at Spark session level. ++After creating the linked service to the external Hive Metastore successfully, you need to set up a few Spark configurations to use the external Hive Metastore. You can both set up the configuration at Spark pool level, or at Spark session level. Here are the configurations and descriptions: Here are the configurations and descriptions: |Spark config|Description| |--|--|-|`spark.sql.hive.metastore.version`|Supported versions: <ul><li>`2.3`</li><li>`3.1`</li></ul> Make sure you use the first 2 parts without the 3rd part| +|`spark.sql.hive.metastore.version`|Supported versions: <ul><li>`2.3`</li><li>`3.1`</li></ul> Make sure you use the first two parts without the third part| |`spark.sql.hive.metastore.jars`|<ul><li>Version 2.3: `/opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 3.1: `/opt/hive-metastore/lib-3.1/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*`</li></ul>| |`spark.hadoop.hive.synapse.externalmetastore.linkedservice.name`|Name of your linked service| |`spark.sql.hive.metastore.sharedPrefixes`|`com.mysql.jdbc,com.microsoft.sqlserver,com.microsoft.vegas`| spark.sql.hive.metastore.jars /opt/hive-metastore/lib-<your hms version, 
2 parts spark.sql.hive.metastore.sharedPrefixes com.mysql.jdbc,com.microsoft.sqlserver,com.microsoft.vegas ``` -Here is an example for metastore version 2.3 with linked service named as HiveCatalog21: +Here's an example for metastore version 2.3 with linked service named as HiveCatalog21: ```properties spark.sql.hive.metastore.version 2.3 spark.sql.hive.metastore.sharedPrefixes com.mysql.jdbc,com.microsoft.sqlserver,c ``` ### Configure at Spark session level-For notebook session, you can also configure the Spark session in notebook using `%%configure` magic command. Here is the code. +For notebook session, you can also configure the Spark session in notebook using `%%configure` magic command. Here's the code. ```json %%configure -f If the underlying data of your Hive tables are stored in Azure Blob storage acco :::image type="content" source="./media/use-external-metastore/connect-to-storage-account.png" alt-text="Connect to storage account" border="true"::: -2. Choose **Azure Blob Storage** and click **Continue**. +2. Choose **Azure Blob Storage** and select **Continue**. 3. Provide **Name** of the linked service. Record the name of the linked service, this info will be used in Spark configuration shortly. 4. Select the Azure Blob Storage account. Make sure Authentication method is **Account key**. Currently Spark pool can only access Blob Storage account via account key.-5. **Test connection** and click **Create**. +5. **Test connection** and select **Create**. 6. After creating the linked service to Blob Storage account, when you run Spark queries, make sure you run below Spark code in the notebook to get access to the Blob Storage account for the Spark session. Learn more about why you need to do this [here](./apache-spark-secure-credentials-with-tokenlibrary.md). 
```python After setting up storage connections, you can query the existing tables in the H - [SQL <-> Spark synchronization](../sql/develop-storage-files-spark-tables.md) doesn't work when using external HMS. - Only Azure SQL Database and Azure Database for MySQL are supported as external Hive Metastore database. Only SQL authorization is supported. - Currently Spark only works on external Hive tables and non-transactional/non-ACID managed Hive tables. It doesn't support Hive ACID/transactional tables.-- Apache Ranger integration is not supported.+- Apache Ranger integration isn't supported. ## Troubleshooting ### See below error when querying a Hive table with data stored in Blob Storage spark.hadoop.hive.synapse.externalmetastore.schema.usedefault false If you need to migrate your HMS version, we recommend using [hive schema tool](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool). And if the HMS has been used by HDInsight clusters, we suggest using [HDI provided version](../../hdinsight/interactive-query/apache-hive-migrate-workloads.md). ### HMS schema change for OSS HMS 3.1-Synapse aims to work smoothly with computes from HDI. However HMS 3.1 in HDI 4.0 is not fully compatible with the OSS HMS 3.1. So please apply the following manually to your HMS 3.1 if it's not provisioned by HDI. +Synapse aims to work smoothly with computes from HDI. However HMS 3.1 in HDI 4.0 isn't fully compatible with the OSS HMS 3.1. Apply the following manually to your HMS 3.1 if it's not provisioned by HDI. ```sql -- HIVE-19416 ALTER TABLE TBLS ADD WRITE_ID bigint NOT NULL DEFAULT(0); ALTER TABLE PARTITIONS ADD WRITE_ID bigint NOT NULL DEFAULT(0); ``` -### When sharing the metastore with HDInsight 4.0 Spark cluster, I cannot see the tables -If you want to share the Hive catalog with a spark cluster in HDInsight 4.0, please ensure your property `spark.hadoop.metastore.catalog.default` in Synapse spark aligns with the value in HDInsight spark.
The default value for HDI spark is `spark` and the default value for Synapse spark is `hive`. +### When sharing the metastore with HDInsight 4.0 Spark cluster, I can't see the tables +If you want to share the Hive catalog with a spark cluster in HDInsight 4.0, ensure your property `spark.hadoop.metastore.catalog.default` in Synapse spark aligns with the value in HDInsight spark. The default value for HDI spark is `spark` and the default value for Synapse spark is `hive`. ### When sharing the Hive Metastore with HDInsight 4.0 Hive cluster, I can list the tables successfully, but only get empty result when I query the table As mentioned in the limitations, Synapse Spark pool only supports external hive tables and non-transactional/ACID managed tables, it doesn't support Hive ACID/transactional tables currently. In HDInsight 4.0 Hive clusters, all managed tables are created as ACID/transactional tables by default, that's why you get empty results when querying those tables. |
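The catalog alignment described in the sharing scenario above can be pinned as a Spark property. A minimal sketch, following the doc's pool-level configuration format (the value `spark` shown here is the HDI default mentioned above; use whatever value your HDInsight cluster is configured with):

```properties
spark.hadoop.metastore.catalog.default spark
```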
synapse-analytics | Apache Spark What Is Delta Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-what-is-delta-lake.md | Title: What is Delta Lake? -description: Overview of Delta Lake and how it works as part of Azure Synapse Analytics +description: Overview of Delta Lake's key features and how it brings atomicity, consistency, isolation, and durability to Azure Synapse Analytics. -+ Previously updated : 12/06/2022 Last updated : 11/15/2024 Delta Lake is an open-source storage layer that brings ACID (atomicity, consiste The current version of Delta Lake included with Azure Synapse has language support for Scala, PySpark, and .NET and is compatible with Linux Foundation Delta Lake. There are links at the bottom of the page to more detailed examples and documentation. You can learn more from the [Introduction to Delta Tables video](https://www.youtube.com/watch?v=B_wyRXlLKok). -## Key features - | Feature | Description | | | | | **ACID Transactions** | Data lakes are typically populated through multiple processes and pipelines, some of which are writing data concurrently with reads. Prior to Delta Lake and the addition of transactions, data engineers had to go through a manual, error-prone process to ensure data integrity. Delta Lake brings familiar ACID transactions to data lakes. It provides serializability, the strongest isolation level. Learn more at [Diving into Delta Lake: Unpacking the Transaction Log](https://databricks.com/blog/2019/08/21/diving-into-delta-lake-unpacking-the-transaction-log.html).| | **Scalable Metadata Handling** | In big data, even the metadata itself can be "big data." Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. 
| | **Time Travel (data versioning)** | The ability to "undo" a change or go back to a previous version is one of the key features of transactions. Delta Lake provides snapshots of data enabling you to revert to earlier versions of data for audits, rollbacks or to reproduce experiments. Learn more in [Introducing Delta Lake Time Travel for Large Scale Data Lakes](https://databricks.com/blog/2019/02/04/introducing-delta-time-travel-for-large-scale-data-lakes.html). | | **Open Format** | Apache Parquet is the baseline format for Delta Lake, enabling you to leverage the efficient compression and encoding schemes that are native to the format. |-| **Unified Batch and Streaming Source and Sink** | A table in Delta Lake is both a batch table, as well as a streaming source and sink. Streaming data ingest, batch historic backfill, and interactive queries all just work out of the box. | +| **Unified Batch and Streaming Source and Sink** | A table in Delta Lake is both a batch table, and a streaming source and sink. Streaming data ingest, batch historic backfill, and interactive queries all just work out of the box. | | **Schema Enforcement** | Schema enforcement helps ensure that the data types are correct and required columns are present, preventing bad data from causing data inconsistency. For more information, see [Diving Into Delta Lake: Schema Enforcement & Evolution](https://databricks.com/blog/2019/09/24/diving-into-delta-lake-schema-enforcement-evolution.html) | | **Schema Evolution** | Delta Lake enables you to make changes to a table schema that can be applied automatically, without having to write migration DDL. For more information, see [Diving Into Delta Lake: Schema Enforcement & Evolution](https://databricks.com/blog/2019/09/24/diving-into-delta-lake-schema-enforcement-evolution.html) | | **Audit History** | Delta Lake transaction log records details about every change made to data providing a full audit trail of the changes. 
| For full documentation, see the [Delta Lake Documentation Page](https://docs.del For more information, see [Delta Lake Project](https://github.com/delta-io/delta). -## Next steps +## Related content - [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet) - [Azure Synapse Analytics](../index.yml) |
synapse-analytics | Sql Synapse Link Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-synapse-link-overview.md | Title: What is Azure Synapse Link for SQL? description: Learn about Azure Synapse Link for SQL, the benefits it offers, and price. -+ Previously updated : 11/16/2022 Last updated : 11/18/2024 The following image shows the Azure Synapse Link integration with Azure SQL DB, :::image type="content" source="../media/sql-synapse-link-overview/synapse-link-sql-architecture.png" alt-text="Diagram of the Azure Synapse Link for SQL architecture."::: -## Benefit - Azure Synapse Link for SQL provides a fully managed and turnkey experience for you to land operational data in Azure Synapse Analytics dedicated SQL pools. It does this by continuously replicating the data from Azure SQL Database or SQL Server 2022 with full consistency. By using Azure Synapse Link for SQL, you can get the following benefits: * **Minimum impact on operational workload** With the new change feed technology in Azure SQL Database and SQL Server 2022, Azure Synapse Link for SQL can automatically extract incremental changes from Azure SQL Database or SQL Server 2022. It then replicates to Azure Synapse Analytics dedicated SQL pool with minimal impact on the operational workload. * **Reduced complexity with No ETL jobs to manage**-After going through a few clicks including selecting your operational database and tables, updates made to the operational data in Azure SQL Database or SQL Server 2022 are visible in the Azure Synapse Analytics dedicated SQL pool. They're available in near real-time with no ETL or data integration logic. You can focus on analytical and reporting logic against operational data via all the capabilities within Azure Synapse Analytics. 
+After selecting your operational database and tables, updates made to the operational data in Azure SQL Database or SQL Server 2022 are visible in the Azure Synapse Analytics dedicated SQL pool. They're available in near real-time with no ETL or data integration logic. You can focus on analytical and reporting logic against operational data via all the capabilities within Azure Synapse Analytics. * **Near real-time insights into your operational data**-You can now get rich insights by analyzing operational data in Azure SQL Database or SQL Server 2022 in near real-time to enable new business scenarios including operational BI reporting, real time scoring and personalization, or supply chain forecasting etc. via Azure Synapse link for SQL. +You can now get rich insights by analyzing operational data in Azure SQL Database or SQL Server 2022 in near real-time to enable new business scenarios including operational BI reporting, real time scoring and personalization, or supply chain forecasting etc. via Azure Synapse Link for SQL. ++## Related content -## Next steps +* [Azure Synapse Link for Azure SQL Database](sql-database-synapse-link.md) and how to [configure Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md). +* [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md) and how to [configure Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md). -* [Azure Synapse Link for Azure SQL Database](sql-database-synapse-link.md). -* [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md). -* How to [Configure Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md). -* How to [Configure Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md). |
synapse-analytics | Troubleshoot Synapse Studio Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio-powershell.md | Azure Synapse Studio depends on a set of Web API endpoints to work properly. Thi ## Troubleshooting steps -Right-click on the following link, and select "Save target as": +Open the link and save the opened script file. Don't save the address of the link, as it could change in the future. - [Test-AzureSynapse.ps1](https://go.microsoft.com/fwlink/?linkid=2119734) -Alternatively, you can open the link directly, and save the opened script file. Don't save the address of the link, as it could change in the future. - In file explorer, right-click on the downloaded script file, and select "Run with PowerShell". ![Run downloaded script file with PowerShell](media/troubleshooting-synapse-studio-powershell/run-with-powershell.png) If you're a network administrator and tuning your firewall configuration for Azu For the failed requests, the reason is shown in yellow, such as `NamedResolutionFailure` or `ConnectFailure`. These reasons might help you figure out whether there are misconfigurations with your network environment. ## Next steps+ If the previous steps don't help to resolve your issue, [create a support ticket](../sql-data-warehouse/sql-data-warehouse-get-started-create-support-ticket.md). |
virtual-desktop | Autoscale Create Assign Scaling Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-create-assign-scaling-plan.md | To use a dynamic scaling plan (preview): - You must grant Azure Virtual Desktop access to manage the power state of your session host VMs. You must have the `Microsoft.Authorization/roleAssignments/write` permission on your subscriptions in order to assign the role-based access control (RBAC) role for the Azure Virtual Desktop service principal on those subscriptions. This is part of **User Access Administrator** and **Owner** built in roles. -- You must grant Azure Virtual Desktop access to read session host configuration. During the preview, you will need to assign a custom role with the `Microsoft.DesktopVirtualization/hostPools/activeSessionHostConfigurations/read` permission to the Azure Virtual Desktop service principal.+- Dynamic autoscaling currently requires access to the public Azure Storage endpoint `wvdhpustgr0prod.blob.core.windows.net` to deploy the RDAgent when creating session hosts. Until this is migrated to a [required endpoint for Azure Virtual Desktop](required-fqdn-endpoint.md), session hosts that can't access wvdhpustgr0prod.blob.core.windows.net will fail with a "CustomerVmNoAccessToDeploymentPackageException" error. - If you're using PowerShell to create and assign your scaling plan, you need module [Az.DesktopVirtualization](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/) version 4.2.0 or later. ::: zone-end |
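The role-assignment prerequisites above can be scripted. A hedged sketch with the Azure CLI (the object ID, custom role name, and subscription ID are placeholders; the custom role is one you define yourself containing the `Microsoft.DesktopVirtualization/hostPools/activeSessionHostConfigurations/read` permission):

```azurecli
az role assignment create \
  --assignee "<azure-virtual-desktop-service-principal-object-id>" \
  --role "<custom-role-with-activeSessionHostConfigurations-read>" \
  --scope "/subscriptions/<subscription-id>"
```

Running this requires the `Microsoft.Authorization/roleAssignments/write` permission on the subscription, as noted above.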
virtual-desktop | Disaster Recovery Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery-concepts.md | When you design a disaster recovery plan, you should keep the following three th - Business continuity: how an organization can keep operating during outages of any size. - Disaster recovery: the process of getting back to operation after a full outage. -Azure Virtual Desktop doesn't have any native features for managing disaster recovery scenarios, but you can use many other Azure services for each scenario depending on your requirements, such as [Availability sets](/azure/virtual-machines/availability-set-overview), [availability zones](../availability-zones/az-region.md), Azure Site Recovery, and [Azure Files data redundancy](../storage/files/files-redundancy.md) options for user profiles and data. +Azure Virtual Desktop doesn't have any native features for managing disaster recovery scenarios, but you can use many other Azure services for each scenario depending on your requirements, such as [Availability sets](/azure/virtual-machines/availability-set-overview), [availability zones](../reliability/availability-zones-overview.md), Azure Site Recovery, and [Azure Files data redundancy](../storage/files/files-redundancy.md) options for user profiles and data. You can also distribute session hosts across multiple [Azure regions](../best-practices-availability-paired-regions.md), which provides even more geographical distribution and further reduces outage impact. All these and other Azure features provide a certain level of protection within Azure Virtual Desktop, and you should carefully consider them along with any cost implications. |
virtual-desktop | Whats New Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md | Here's information about the Azure Virtual Desktop Agent. > [!TIP] > The Azure Virtual Desktop Agent is automatically installed when adding session hosts in most scenarios. If you need to install the agent manually, you can download it at [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), together with the steps to install it. -## Version 1.0.10292.500 +## Version 1.0.10292.500 (validation) *Published: November 2024* |
virtual-network | Public Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md | Full details are listed in the table below: | Allocation method| Static | For IPv4: Dynamic or Static; For IPv6: Dynamic.| | Idle Timeout | Have an adjustable inbound originated flow idle timeout of 4-30 minutes, with a default of 4 minutes, and fixed outbound originated flow idle timeout of 4 minutes.|Have an adjustable inbound originated flow idle timeout of 4-30 minutes, with a default of 4 minutes, and fixed outbound originated flow idle timeout of 4 minutes.| | Security | Secure by default model and be closed to inbound traffic when used as a frontend. Allow traffic with [network security group (NSG)](../../virtual-network/network-security-groups-overview.md#network-security-groups) is required (for example, on the NIC of a virtual machine with a Standard SKU Public IP attached).| Open by default. Network security groups are recommended but optional for restricting inbound or outbound traffic.| -| [Availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) | Supported. Standard IPs can be nonzonal, zonal, or zone-redundant. **Zone redundant IPs can only be created in [regions where 3 availability zones](../../availability-zones/az-region.md) are live.** IPs created before availability zones aren't zone redundant. | Not supported. | +| [Availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) | Supported. Standard IPs can be nonzonal, zonal, or zone-redundant. **Zone redundant IPs can only be created in [regions where 3 availability zones](../../reliability/availability-zones-region-support.md) are live.** IPs created before availability zones aren't zone redundant. | Not supported. 
| | [Routing preference](routing-preference-overview.md)| Supported to enable more granular control of how traffic is routed between Azure and the Internet. | Not supported.| | Global tier | Supported via [cross-region load balancers](../../load-balancer/cross-region-overview.md).| Not supported. | Static public IP addresses are commonly used in the following scenarios: > Region availability: Central Canada, Central Poland, Central Israel, Central France, Central Qatar, East Asia, East US 2, East Norway, Italy North, Sweden Central, South Africa North, South Brazil, West Central Germany, West US 2, Central Spain > -Standard SKU Public IPs can be created as non-zonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md). Basic SKU Public IPs do not have any zones and are created as non-zonal. +Standard SKU Public IPs can be created as non-zonal, zonal, or zone-redundant in [regions that support availability zones](../../reliability/availability-zones-region-support.md). Basic SKU Public IPs do not have any zones and are created as non-zonal. A public IP's availability zone can't be changed after the public IP's creation. | Value | Behavior | |
virtual-network | Public Ip Basic Upgrade Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md | This section lists out some key differences between these two SKUs. |||| | **Allocation method** | Static. | For IPv4: Dynamic or Static; For IPv6: Dynamic. | | **Security** | Secure by default model and be closed to inbound traffic when used as a frontend. Allow traffic with [network security group](../network-security-groups-overview.md#network-security-groups) is required (for example, on the NIC of a virtual machine with a Standard SKU public IP attached). | Open by default. Network security groups are recommended but optional for restricting inbound or outbound traffic. |-| **[Availability zones](../../availability-zones/az-overview.md)** | Supported. Standard IPs can be nonzonal, zonal, or zone-redundant. Zone redundant IPs can only be created in [regions where three availability zones](../../availability-zones/az-region.md) are live. IPs created before availability zones aren't zone redundant. | Not supported | +| **[Availability zones](../../reliability/availability-zones-overview.md)** | Supported. Standard IPs can be nonzonal, zonal, or zone-redundant. Zone redundant IPs can only be created in [regions where three availability zones](../../reliability/availability-zones-region-support.md) are live. IPs created before availability zones aren't zone redundant. | Not supported | | **[Routing preference](routing-preference-overview.md)** | Supported to enable more granular control of how traffic is routed between Azure and the Internet. | Not supported. | | **Global tier** | Supported via [cross-region load balancers](../../load-balancer/cross-region-overview.md)| Not supported | | **[Standard Load Balancer Support](../../load-balancer/skus.md)** | Both IPv4 and IPv6 are supported | Not supported | |
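The zone behavior compared above is chosen when the public IP is created. As a sketch of creating a zone-redundant Standard SKU public IP with the Azure CLI (resource and IP names are placeholders; this requires a region with three availability zones):

```azurecli
az network public-ip create \
  --resource-group test-rg \
  --name public-ip-standard \
  --sku Standard \
  --zone 1 2 3
```

Omitting `--zone` creates a nonzonal IP, and passing a single zone number creates a zonal one, matching the three Standard SKU options described above.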
virtual-network | Tutorial Connect Virtual Networks Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-cli.md | - Title: Connect virtual networks with virtual network peering - Azure CLI -description: In this article, you learn how to connect virtual networks with virtual network peering, using the Azure CLI. ---- Previously updated : 04/15/2024---# Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network. ---# Connect virtual networks with virtual network peering using the Azure CLI --You can connect virtual networks to each other with virtual network peering. Once virtual networks are peered, resources in both virtual networks are able to communicate with each other, with the same latency and bandwidth as if the resources were in the same virtual network. --In this article, you learn how to: --* Create two virtual networks --* Connect two virtual networks with a virtual network peering --* Deploy a virtual machine (VM) into each virtual network --* Communicate between VMs ----- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Create virtual networks --Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named **test-rg** in the **eastus** location. --```azurecli-interactive -az group create \ - --name test-rg \ - --location eastus -``` --Create a virtual network with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). The following example creates a virtual network named **vnet-1** with the address prefix **10.0.0.0/16**. 
--```azurecli-interactive -az network vnet create \ - --name vnet-1 \ - --resource-group test-rg \ - --address-prefixes 10.0.0.0/16 \ - --subnet-name subnet-1 \ - --subnet-prefix 10.0.0.0/24 -``` --Create a virtual network named **vnet-2** with the address prefix **10.1.0.0/16**: --```azurecli-interactive -az network vnet create \ - --name vnet-2 \ - --resource-group test-rg \ - --address-prefixes 10.1.0.0/16 \ - --subnet-name subnet-1 \ - --subnet-prefix 10.1.0.0/24 -``` --## Peer virtual networks --Peerings are established between virtual network IDs. Obtain the ID of each virtual network with [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) and store the ID in a variable. --```azurecli-interactive -# Get the id for vnet-1. -vNet1Id=$(az network vnet show \ - --resource-group test-rg \ - --name vnet-1 \ - --query id --out tsv) --# Get the id for vnet-2. -vNet2Id=$(az network vnet show \ - --resource-group test-rg \ - --name vnet-2 \ - --query id \ - --out tsv) -``` --Create a peering from **vnet-1** to **vnet-2** with [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create). If the `--allow-vnet-access` parameter isn't specified, a peering is established, but no communication can flow through it. --```azurecli-interactive -az network vnet peering create \ - --name vnet-1-to-vnet-2 \ - --resource-group test-rg \ - --vnet-name vnet-1 \ - --remote-vnet $vNet2Id \ - --allow-vnet-access -``` --In the output returned after the previous command executes, you see that the **peeringState** is **Initiated**. The peering remains in the **Initiated** state until you create the peering from **vnet-2** to **vnet-1**. Create a peering from **vnet-2** to **vnet-1**. 
--```azurecli-interactive -az network vnet peering create \ - --name vnet-2-to-vnet-1 \ - --resource-group test-rg \ - --vnet-name vnet-2 \ - --remote-vnet $vNet1Id \ - --allow-vnet-access -``` --In the output returned after the previous command executes, you see that the **peeringState** is **Connected**. Azure also changed the peering state of the **vnet-1-to-vnet-2** peering to **Connected**. Confirm that the peering state for the **vnet-1-to-vnet-2** peering changed to **Connected** with [az network vnet peering show](/cli/azure/network/vnet/peering#az-network-vnet-peering-show). --```azurecli-interactive -az network vnet peering show \ - --name vnet-1-to-vnet-2 \ - --resource-group test-rg \ - --vnet-name vnet-1 \ - --query peeringState -``` --Resources in one virtual network can't communicate with resources in the other virtual network until the **peeringState** for the peerings in both virtual networks is **Connected**. --## Create virtual machines --Create a VM in each virtual network so that you can communicate between them in a later step. --### Create the first VM --Create a VM with [az vm create](/cli/azure/vm#az-vm-create). The following example creates a VM named **vm-1** in the **vnet-1** virtual network. If SSH keys don't already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option. The `--no-wait` option creates the VM in the background, so you can continue to the next step. --```azurecli-interactive -az vm create \ - --resource-group test-rg \ - --name vm-1 \ - --image Ubuntu2204 \ - --vnet-name vnet-1 \ - --subnet subnet-1 \ - --generate-ssh-keys \ - --no-wait -``` --### Create the second VM --Create a VM in the **vnet-2** virtual network. --```azurecli-interactive -az vm create \ - --resource-group test-rg \ - --name vm-2 \ - --image Ubuntu2204 \ - --vnet-name vnet-2 \ - --subnet subnet-1 \ - --generate-ssh-keys -``` --The VM takes a few minutes to create. 
After the VM is created, the Azure CLI shows information similar to the following example: --```output -{ - "fqdns": "", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg/providers/Microsoft.Compute/virtualMachines/vm-2", - "location": "eastus", - "macAddress": "00-0D-3A-23-9A-49", - "powerState": "VM running", - "privateIpAddress": "10.1.0.4", - "publicIpAddress": "13.90.242.231", - "resourceGroup": "test-rg" -} -``` --Take note of the **publicIpAddress**. This address is used to access the VM from the internet in a later step. ---## Communicate between VMs --Use the following command to create an SSH session with the **vm-2** VM. Replace `<publicIpAddress>` with the public IP address of your VM. In the previous example, the public IP address is **13.90.242.231**. --```bash -ssh <publicIpAddress> -``` --Ping the VM in *vnet-1*. --```bash -ping 10.0.0.4 -c 4 -``` --You receive four replies. --Close the SSH session to the **vm-2** VM. --## Clean up resources --When no longer needed, use [az group delete](/cli/azure/group#az-group-delete) to remove the resource group and all of the resources it contains. --```azurecli-interactive -az group delete \ - --name test-rg \ - --yes -``` --## Next steps --In this article, you learned how to connect two networks in the same Azure region, with virtual network peering. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region) and in [different Azure subscriptions](create-peering-different-subscriptions.md), as well as create [hub and spoke network designs](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke#virtual-network-peering) with peering. To learn more about virtual network peering, see [Virtual network peering overview](virtual-network-peering-overview.md) and [Manage virtual network peerings](virtual-network-manage-peering.md). 
--You can [connect your own computer to a virtual network](../vpn-gateway/point-to-site-certificate-gateway.md?toc=%2fazure%2fvirtual-network%2ftoc.json) through a VPN, and interact with resources in a virtual network, or in peered virtual networks. For reusable scripts to complete many of the tasks covered in the virtual network articles, see [script samples](cli-samples.md). |
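The `az vm create` output shown in the CLI article above is plain JSON. In practice the CLI can filter it server-side (for example `--query publicIpAddress --output tsv`), but as a minimal local sketch, here is how the address could be pulled out of captured output with `sed`. The sample JSON is abbreviated from the output shown above; a real script would normally prefer `--query` or `jq` over text matching:

```shell
# Sample of the JSON az vm create prints (abbreviated from the article's output).
json='{"privateIpAddress": "10.1.0.4", "publicIpAddress": "13.90.242.231"}'

# Extract the value of publicIpAddress; a real script would use `jq -r .publicIpAddress`.
public_ip=$(printf '%s' "$json" | sed -n 's/.*"publicIpAddress": "\([^"]*\)".*/\1/p')
echo "$public_ip"   # prints 13.90.242.231
```

This address is the one you would pass to `ssh <publicIpAddress>` in the communication step.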
virtual-network | Tutorial Connect Virtual Networks Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-portal.md | - Title: 'Tutorial: Connect virtual networks with VNet peering - Azure portal' -description: In this tutorial, you learn how to connect virtual networks with virtual network peering using the Azure portal. --- Previously updated : 06/17/2024---# Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network. ---# Tutorial: Connect virtual networks with virtual network peering using the Azure portal --You can connect virtual networks to each other with virtual network peering. These virtual networks can be in the same region or different regions (also known as global virtual network peering). Once virtual networks are peered, resources in both virtual networks can communicate with each other over a low-latency, high-bandwidth connection using the Microsoft backbone network. ---In this tutorial, you learn how to: --> [!div class="checklist"] -> * Create virtual networks -> * Connect two virtual networks with a virtual network peering -> * Deploy a virtual machine (VM) into each virtual network -> * Communicate between VMs --## Prerequisites --- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--## Sign in to Azure --Sign in to the [Azure portal](https://portal.azure.com). -- -Repeat the previous steps to create a second virtual network with the following values: -->[!NOTE] ->The second virtual network can be in the same region as the first virtual network or in a different region. You can skip the **Security** tab and the Bastion deployment for the second virtual network. After the networks are peered, you can connect to both virtual machines with the same Bastion deployment. 
--| Setting | Value | -| | | -| Name | **vnet-2** | -| Address space | **10.1.0.0/16** | -| Resource group | **test-rg** | -| Subnet name | **subnet-1** | -| Subnet address range | **10.1.0.0/24** | --<a name="peer-virtual-networks"></a> ---## Create virtual machines --Create a virtual machine in each virtual network to test the communication between them. ---Repeat the previous steps to create a second virtual machine in the second virtual network with the following values: --| Setting | Value | -| | | -| Virtual machine name | **vm-2** | -| Region | **East US 2** or same region as **vnet-2**. | -| Virtual network | Select **vnet-2**. | -| Subnet | Select **subnet-1 (10.1.0.0/24)**. | -| Public IP | **None** | -| Network security group name | **nsg-2** | --Wait for the virtual machines to be created before continuing with the next steps. --## Connect to a virtual machine --Use `ping` to test the communication between the virtual machines. --1. In the portal, search for and select **Virtual machines**. --1. On the **Virtual machines** page, select **vm-1**. --1. In the **Overview** of **vm-1**, select **Connect**. --1. In the **Connect to virtual machine** page, select the **Bastion** tab. --1. Select **Use Bastion**. --1. Enter the username and password you created when you created the VM, and then select **Connect**. --## Communicate between VMs --1. At the bash prompt for **vm-1**, enter `ping -c 4 vm-2`. -- You get a reply similar to the following message: -- ```output - azureuser@vm-1:~$ ping -c 4 vm-2 - PING vm-2.3bnkevn3313ujpr5l1kqop4n4d.cx.internal.cloudapp.net (10.1.0.4) 56(84) bytes of data. - 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=1 ttl=64 time=1.83 ms - 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=2 ttl=64 time=0.987 ms - 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=3 ttl=64 time=0.864 ms - 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=4 ttl=64 time=0.890 ms - ``` --1. 
Close the Bastion connection to **vm-1**. --1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to **vm-2**. --1. At the bash prompt for **vm-2**, enter `ping -c 4 vm-1`. -- You get a reply similar to the following message: -- ```output - azureuser@vm-2:~$ ping -c 4 vm-1 - PING vm-1.3bnkevn3313ujpr5l1kqop4n4d.cx.internal.cloudapp.net (10.0.0.4) 56(84) bytes of data. - 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=1 ttl=64 time=0.695 ms - 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=2 ttl=64 time=0.896 ms - 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=3 ttl=64 time=3.43 ms - 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=4 ttl=64 time=0.780 ms - ``` --1. Close the Bastion connection to **vm-2**. ---## Next steps --In this tutorial, you: --* Created virtual network peering between two virtual networks. --* Tested the communication between two virtual machines over the virtual network peering with `ping`. --To learn more about a virtual network peering: --> [!div class="nextstepaction"] -> [Virtual network peering](virtual-network-peering-overview.md) |
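The portal tutorial above verifies the peering by confirming that all four `ping` probes come back. A small local sketch of scripting that check, counting reply lines in captured output (the sample text is trimmed from the tutorial's output above; the parsing itself is our own illustration, not part of the tutorial):

```shell
# Captured ping output (trimmed from the tutorial's sample).
output='PING vm-2.internal.cloudapp.net (10.1.0.4) 56(84) bytes of data.
64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=1 ttl=64 time=1.83 ms
64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=2 ttl=64 time=0.987 ms
64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=3 ttl=64 time=0.864 ms
64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=4 ttl=64 time=0.890 ms'

# Count successful echo replies; 4 of 4 means traffic flows over the peering.
replies=$(printf '%s\n' "$output" | grep -c 'bytes from')
echo "$replies"   # prints 4
```

The same count could be produced directly with `ping -c 4 vm-2 | grep -c 'bytes from'` from inside the VM.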
virtual-network | Tutorial Connect Virtual Networks Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-powershell.md | - Title: Connect virtual networks with VNet peering - Azure PowerShell -description: In this article, you learn how to connect virtual networks with virtual network peering, using Azure PowerShell. ---- Previously updated : 04/15/2024---# Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network. ---# Connect virtual networks with virtual network peering using PowerShell --You can connect virtual networks to each other with virtual network peering. Once virtual networks are peered, resources in both virtual networks are able to communicate with each other, with the same latency and bandwidth as if the resources were in the same virtual network. --In this article, you learn how to: --* Create two virtual networks --* Connect two virtual networks with a virtual network peering --* Deploy a virtual machine (VM) into each virtual network --* Communicate between VMs --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. --## Create virtual networks --Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. 
Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named **test-rg** in the **eastus** location. --```azurepowershell-interactive -$resourceGroup = @{ - Name = "test-rg" - Location = "EastUS" -} -New-AzResourceGroup @resourceGroup -``` --Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named **vnet-1** with the address prefix **10.0.0.0/16**. --```azurepowershell-interactive -$vnet1 = @{ - ResourceGroupName = "test-rg" - Location = "EastUS" - Name = "vnet-1" - AddressPrefix = "10.0.0.0/16" -} -$virtualNetwork1 = New-AzVirtualNetwork @vnet1 -``` --Create a subnet configuration with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig). The following example creates a subnet configuration with a **10.0.0.0/24** address prefix: --```azurepowershell-interactive -$subConfig = @{ - Name = "subnet-1" - AddressPrefix = "10.0.0.0/24" - VirtualNetwork = $virtualNetwork1 -} -$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subConfig -``` --Write the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork), which creates the subnet: --```azurepowershell-interactive -$virtualNetwork1 | Set-AzVirtualNetwork -``` --Create a virtual network with a **10.1.0.0/16** address prefix and one subnet: --```azurepowershell-interactive -# Create the virtual network. -$vnet2 = @{ - ResourceGroupName = "test-rg" - Location = "EastUS" - Name = "vnet-2" - AddressPrefix = "10.1.0.0/16" -} -$virtualNetwork2 = New-AzVirtualNetwork @vnet2 --# Create the subnet configuration. 
-$subConfig = @{ - Name = "subnet-1" - AddressPrefix = "10.1.0.0/24" - VirtualNetwork = $virtualNetwork2 -} -$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subConfig --# Write the subnet configuration to the virtual network. -$virtualNetwork2 | Set-AzVirtualNetwork -``` --## Peer virtual networks --Create a peering with [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering). The following example peers **vnet-1** to **vnet-2**. --```azurepowershell-interactive -$peerConfig1 = @{ - Name = "vnet-1-to-vnet-2" - VirtualNetwork = $virtualNetwork1 - RemoteVirtualNetworkId = $virtualNetwork2.Id -} -Add-AzVirtualNetworkPeering @peerConfig1 -``` --In the output returned after the previous command executes, you see that the **PeeringState** is **Initiated**. The peering remains in the **Initiated** state until you create the peering from **vnet-2** to **vnet-1**. Create a peering from **vnet-2** to **vnet-1**. --```azurepowershell-interactive -$peerConfig2 = @{ - Name = "vnet-2-to-vnet-1" - VirtualNetwork = $virtualNetwork2 - RemoteVirtualNetworkId = $virtualNetwork1.Id -} -Add-AzVirtualNetworkPeering @peerConfig2 -``` --In the output returned after the previous command executes, you see that the **PeeringState** is **Connected**. Azure also changed the peering state of the **vnet-1-to-vnet-2** peering to **Connected**. Confirm that the peering state for the **vnet-1-to-vnet-2** peering changed to **Connected** with [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering). --```azurepowershell-interactive -$peeringState = @{ - ResourceGroupName = "test-rg" - VirtualNetworkName = "vnet-1" -} -Get-AzVirtualNetworkPeering @peeringState | Select PeeringState -``` --Resources in one virtual network cannot communicate with resources in the other virtual network until the **PeeringState** for the peerings in both virtual networks is **Connected**. 
--## Create virtual machines --Create a VM in each virtual network so that you can communicate between them in a later step. --### Create the first VM --Create a VM with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named **vm-1** in the **vnet-1** virtual network. The `-AsJob` option creates the VM in the background, so you can continue to the next step. When prompted, enter the user name and password for the virtual machine. --```azurepowershell-interactive -$vm1 = @{ - ResourceGroupName = "test-rg" - Location = "EastUS" - VirtualNetworkName = "vnet-1" - SubnetName = "subnet-1" - ImageName = "Win2019Datacenter" - Name = "vm-1" -} -New-AzVm @vm1 -AsJob -``` --### Create the second VM --```azurepowershell-interactive -$vm2 = @{ - ResourceGroupName = "test-rg" - Location = "EastUS" - VirtualNetworkName = "vnet-2" - SubnetName = "subnet-1" - ImageName = "Win2019Datacenter" - Name = "vm-2" -} -New-AzVm @vm2 -``` --The VM takes a few minutes to create. Don't continue with the later steps until Azure creates **vm-2** and returns output to PowerShell. ---## Communicate between VMs --You can connect to a VM's public IP address from the internet. Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to return the public IP address of a VM. The following example returns the public IP address of the **vm-1** VM: --```azurepowershell-interactive -$ipAddress = @{ - ResourceGroupName = "test-rg" - Name = "vm-1" -} -Get-AzPublicIpAddress @ipAddress | Select IpAddress -``` --Use the following command to create a remote desktop session with the **vm-1** VM from your local computer. Replace `<publicIpAddress>` with the IP address returned from the previous command. --``` -mstsc /v:<publicIpAddress> -``` --A Remote Desktop Protocol (.rdp) file is created and opened. 
Enter the user name and password (you may need to select **More choices**, then **Use a different account**, to specify the credentials you entered when you created the VM), and then click **OK**. You may receive a certificate warning during the sign-in process. Click **Yes** or **Continue** to proceed with the connection. --On **vm-1**, enable the Internet Control Message Protocol (ICMP) through the Windows Firewall so you can ping this VM from **vm-2** in a later step, using PowerShell: --```powershell -New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4 -``` --**Though ping is used to communicate between VMs in this article, allowing ICMP through the Windows Firewall for production deployments is not recommended.** --To connect to **vm-2**, enter the following command from a command prompt on **vm-1**: --``` -mstsc /v:10.1.0.4 -``` --You enabled ping on **vm-1**. You can now ping **vm-1** by IP address from a command prompt on **vm-2**. --``` -ping 10.0.0.4 -``` --You receive four replies. Disconnect your RDP sessions to both **vm-1** and **vm-2**. --## Clean up resources --When no longer needed, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains. --```azurepowershell-interactive -Remove-AzResourceGroup -Name test-rg -Force -``` --## Next steps --In this article, you learned how to connect two networks in the same Azure region, with virtual network peering. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region) and in [different Azure subscriptions](create-peering-different-subscriptions.md), as well as create [hub and spoke network designs](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke#virtual-network-peering) with peering. 
To learn more about virtual network peering, see [Virtual network peering overview](virtual-network-peering-overview.md) and [Manage virtual network peerings](virtual-network-manage-peering.md). --You can [connect your own computer to a virtual network](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json) through a VPN, and interact with resources in a virtual network, or in peered virtual networks. For reusable scripts to complete many of the tasks covered in the virtual network articles, see [script samples](powershell-samples.md). |
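Every walkthrough in this set peers **10.0.0.0/16** with **10.1.0.0/16**; virtual network peering requires that the two address spaces not overlap. As a quick local sanity check before creating a peering, the overlap test can be sketched in shell arithmetic (the helper functions `ip2int` and `cidr_overlap` are our own illustration, not part of any Azure tooling):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Two CIDR blocks overlap when either network address falls inside the other block.
cidr_overlap() {
  net1=$(ip2int "${1%/*}"); len1=${1#*/}
  net2=$(ip2int "${2%/*}"); len2=${2#*/}
  mask1=$(( 0xFFFFFFFF << (32 - len1) & 0xFFFFFFFF ))
  mask2=$(( 0xFFFFFFFF << (32 - len2) & 0xFFFFFFFF ))
  if [ $(( net1 & mask2 )) -eq $(( net2 & mask2 )) ] || \
     [ $(( net2 & mask1 )) -eq $(( net1 & mask1 )) ]; then
    echo overlap
  else
    echo disjoint
  fi
}

cidr_overlap 10.0.0.0/16 10.1.0.0/16   # prints disjoint -> safe to peer
cidr_overlap 10.0.0.0/16 10.0.1.0/24   # prints overlap  -> peering would fail
```

The address prefixes used in these articles (10.0.0.0/16 and 10.1.0.0/16) pass this check, which is why the peerings succeed.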
virtual-network | Tutorial Connect Virtual Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks.md | + + Title: 'Tutorial: Connect virtual networks with peering' +description: In this tutorial, you learn how to connect virtual networks with virtual network peering. +++ Last updated : 11/14/2024+++ - template-tutorial + - devx-track-azurecli + - devx-track-azurepowershell + - linux-related-content +content_well_notification: + - AI-contribution +ai-usage: ai-assisted ++# Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network. +++# Tutorial: Connect virtual networks with virtual network peering ++You can connect virtual networks to each other with virtual network peering. These virtual networks can be in the same region or different regions (also known as global virtual network peering). Once virtual networks are peered, resources in both virtual networks can communicate with each other over a low-latency, high-bandwidth connection using Microsoft backbone network. +++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Create virtual networks +> * Connect two virtual networks with a virtual network peering +> * Deploy a virtual machine (VM) into each virtual network +> * Communicate between VMs ++## Prerequisites ++### [Portal](#tab/portal) ++- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++### [PowerShell](#tab/powershell) ++- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +++If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. 
If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. ++### [CLI](#tab/cli) ++++- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ++++### [Portal](#tab/portal) ++ +Repeat the previous steps to create a second virtual network with the following values: ++>[!NOTE] +>The second virtual network can be in the same region as the first virtual network or in a different region. You can skip the **Security** tab and the Bastion deployment for the second virtual network. After the networks are peered, you can connect to both virtual machines with the same Bastion deployment. ++| Setting | Value | +| | | +| Name | **vnet-2** | +| Address space | **10.1.0.0/16** | +| Resource group | **test-rg** | +| Subnet name | **subnet-1** | +| Subnet address range | **10.1.0.0/24** | ++### [PowerShell](#tab/powershell) ++Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named **test-rg** in the **eastus2** location. ++```azurepowershell-interactive +$resourceGroup = @{ + Name = "test-rg" + Location = "EastUS2" +} +New-AzResourceGroup @resourceGroup +``` ++Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named **vnet-1** with the address prefix **10.0.0.0/16**. 
++```azurepowershell-interactive +$vnet1 = @{ + ResourceGroupName = "test-rg" + Location = "EastUS2" + Name = "vnet-1" + AddressPrefix = "10.0.0.0/16" +} +$virtualNetwork1 = New-AzVirtualNetwork @vnet1 +``` ++Create a subnet configuration with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig). The following example creates a subnet configuration with a **10.0.0.0/24** address prefix: ++```azurepowershell-interactive +$subConfig = @{ + Name = "subnet-1" + AddressPrefix = "10.0.0.0/24" + VirtualNetwork = $virtualNetwork1 +} +$subnetConfig1 = Add-AzVirtualNetworkSubnetConfig @subConfig +``` ++Create a subnet configuration for Azure Bastion with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig). The following example creates a subnet configuration with a **10.0.1.0/24** address prefix: ++```azurepowershell-interactive +$subBConfig = @{ + Name = "AzureBastionSubnet" + AddressPrefix = "10.0.1.0/24" + VirtualNetwork = $virtualNetwork1 +} +$subnetConfig2 = Add-AzVirtualNetworkSubnetConfig @subBConfig +``` ++Write the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork), which creates the subnet: ++```azurepowershell-interactive +$virtualNetwork1 | Set-AzVirtualNetwork +``` ++### Create Azure Bastion ++Create a public IP address for the Azure Bastion host with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress). The following example creates a public IP address named *public-ip-bastion* in the *vnet-1* virtual network. ++```azurepowershell-interactive +$publicIpParams = @{ + ResourceGroupName = "test-rg" + Name = "public-ip-bastion" + Location = "EastUS2" + AllocationMethod = "Static" + Sku = "Standard" +} +New-AzPublicIpAddress @publicIpParams +``` ++Create an Azure Bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion). 
The following example creates an Azure Bastion host named *bastion* in the *AzureBastionSubnet* subnet of the *vnet-1* virtual network. Azure Bastion is used to securely connect Azure virtual machines without exposing them to the public internet. ++```azurepowershell-interactive +$bastionParams = @{ + ResourceGroupName = "test-rg" + Name = "bastion" + VirtualNetworkName = "vnet-1" + PublicIpAddressName = "public-ip-bastion" + PublicIpAddressRgName = "test-rg" + VirtualNetworkRgName = "test-rg" +} +New-AzBastion @bastionParams -AsJob +``` ++### Create a second virtual network ++Create a second virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named **vnet-2** with the address prefix **10.1.0.0/16**. ++>[!NOTE] +>The second virtual network can be in the same region as the first virtual network or in a different region. You don't need a Bastion deployment for the second virtual network. After the networks are peered, you can connect to both virtual machines with the same Bastion deployment. ++```azurepowershell-interactive +$vnet2 = @{ + ResourceGroupName = "test-rg" + Location = "EastUS2" + Name = "vnet-2" + AddressPrefix = "10.1.0.0/16" +} +$virtualNetwork2 = New-AzVirtualNetwork @vnet2 +``` ++Create a subnet configuration with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig). 
The following example creates a subnet configuration with a **10.1.0.0/24** address prefix: ++```azurepowershell-interactive +$subConfig = @{ + Name = "subnet-1" + AddressPrefix = "10.1.0.0/24" + VirtualNetwork = $virtualNetwork2 +} +$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subConfig +``` ++Write the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork), which creates the subnet: ++```azurepowershell-interactive +$virtualNetwork2 | Set-AzVirtualNetwork +``` ++### [CLI](#tab/cli) ++Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named **test-rg** in the **eastus2** location. ++```azurecli-interactive +az group create \ + --name test-rg \ + --location eastus2 +``` ++Create a virtual network with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). The following example creates a virtual network named **vnet-1** with the address prefix **10.0.0.0/16**. ++```azurecli-interactive +az network vnet create \ + --name vnet-1 \ + --resource-group test-rg \ + --address-prefixes 10.0.0.0/16 \ + --subnet-name subnet-1 \ + --subnet-prefix 10.0.0.0/24 +``` ++Create the Bastion subnet with [az network vnet subnet create](/cli/azure/network/vnet/subnet). ++```azurecli-interactive +# Create a bastion subnet. +az network vnet subnet create \ + --vnet-name vnet-1 \ + --resource-group test-rg \ + --name AzureBastionSubnet \ + --address-prefix 10.0.1.0/24 +``` ++### Create Azure Bastion ++Create a public IP address for the Azure Bastion host with [az network public-ip create](/cli/azure/network/public-ip). The following example creates a public IP address named *public-ip-bastion* in the *vnet-1* virtual network. 
++```azurecli-interactive +az network public-ip create \ + --resource-group test-rg \ + --name public-ip-bastion \ + --location eastus2 \ + --allocation-method Static \ + --sku Standard +``` ++Create an Azure Bastion host with [az network bastion create](/cli/azure/network/bastion). The following example creates an Azure Bastion host named *bastion* in the *AzureBastionSubnet* subnet of the *vnet-1* virtual network. Azure Bastion is used to securely connect Azure virtual machines without exposing them to the public internet. ++```azurecli-interactive +az network bastion create \ + --resource-group test-rg \ + --name bastion \ + --vnet-name vnet-1 \ + --public-ip-address public-ip-bastion \ + --location eastus2 \ + --no-wait +``` ++### Create a second virtual network ++Create a second virtual network with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). The following example creates a virtual network named **vnet-2** with the address prefix **10.1.0.0/16**. ++>[!NOTE] +>The second virtual network can be in the same region as the first virtual network or in a different region. You don't need a Bastion deployment for the second virtual network. After the virtual networks are peered, you can connect to both virtual machines with the same Bastion deployment. ++```azurecli-interactive +az network vnet create \ + --name vnet-2 \ + --resource-group test-rg \ + --address-prefixes 10.1.0.0/16 \ + --subnet-name subnet-1 \ + --subnet-prefix 10.1.0.0/24 +``` ++++### [Portal](#tab/portal) ++<a name="peer-virtual-networks"></a> +++### [PowerShell](#tab/powershell) ++## Peer virtual networks ++Create a peering with [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering). The following example peers **vnet-1** to **vnet-2**. 
++```azurepowershell-interactive +$peerConfig1 = @{ + Name = "vnet-1-to-vnet-2" + VirtualNetwork = $virtualNetwork1 + RemoteVirtualNetworkId = $virtualNetwork2.Id +} +Add-AzVirtualNetworkPeering @peerConfig1 +``` ++In the output returned after the previous command executes, you see that the **PeeringState** is **Initiated**. The peering remains in the **Initiated** state until you create the peering from **vnet-2** to **vnet-1**. Create a peering from **vnet-2** to **vnet-1**. ++```azurepowershell-interactive +$peerConfig2 = @{ + Name = "vnet-2-to-vnet-1" + VirtualNetwork = $virtualNetwork2 + RemoteVirtualNetworkId = $virtualNetwork1.Id +} +Add-AzVirtualNetworkPeering @peerConfig2 +``` ++In the output returned after the previous command executes, you see that the **PeeringState** is **Connected**. Azure also changed the peering state of the **vnet-1-to-vnet-2** peering to **Connected**. Confirm that the peering state for the **vnet-1-to-vnet-2** peering changed to **Connected** with [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering). ++```azurepowershell-interactive +$peeringState = @{ + ResourceGroupName = "test-rg" + VirtualNetworkName = "vnet-1" +} +Get-AzVirtualNetworkPeering @peeringState | Select PeeringState +``` ++Resources in one virtual network can't communicate with resources in the other virtual network until the **PeeringState** for the peerings in both virtual networks is **Connected**. ++### [CLI](#tab/cli) ++## Peer virtual networks ++Peerings are established between virtual network IDs. Obtain the ID of each virtual network with [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) and store the ID in a variable. ++```azurecli-interactive +# Get the id for vnet-1. +vNet1Id=$(az network vnet show \ + --resource-group test-rg \ + --name vnet-1 \ + --query id --out tsv) ++# Get the id for vnet-2. 
+vNet2Id=$(az network vnet show \ + --resource-group test-rg \ + --name vnet-2 \ + --query id \ + --out tsv) +``` ++Create a peering from **vnet-1** to **vnet-2** with [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create). If the `--allow-vnet-access` parameter isn't specified, a peering is established, but no communication can flow through it. ++```azurecli-interactive +az network vnet peering create \ + --name vnet-1-to-vnet-2 \ + --resource-group test-rg \ + --vnet-name vnet-1 \ + --remote-vnet $vNet2Id \ + --allow-vnet-access +``` ++In the output returned after the previous command executes, you see that the **peeringState** is **Initiated**. The peering remains in the **Initiated** state until you create the peering from **vnet-2** to **vnet-1**. Create a peering from **vnet-2** to **vnet-1**. ++```azurecli-interactive +az network vnet peering create \ + --name vnet-2-to-vnet-1 \ + --resource-group test-rg \ + --vnet-name vnet-2 \ + --remote-vnet $vNet1Id \ + --allow-vnet-access +``` ++In the output returned after the previous command executes, you see that the **peeringState** is **Connected**. Azure also changed the peering state of the **vnet-1-to-vnet-2** peering to **Connected**. Confirm that the peering state for the **vnet-1-to-vnet-2** peering changed to **Connected** with [az network vnet peering show](/cli/azure/network/vnet/peering#az-network-vnet-peering-show). ++```azurecli-interactive +az network vnet peering show \ + --name vnet-1-to-vnet-2 \ + --resource-group test-rg \ + --vnet-name vnet-1 \ + --query peeringState +``` ++Resources in one virtual network can't communicate with resources in the other virtual network until the **peeringState** for the peerings in both virtual networks is **Connected**. ++++## Create virtual machines ++Test the communication between the virtual machines by creating a virtual machine in each virtual network. 
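The peering handshake described above is symmetric: each side reports **Initiated** until the reciprocal peering exists, and then both sides report **Connected**. A minimal sketch of that state logic (illustrative only; `create_peering` and the `peerings` dictionary are hypothetical, not Azure SDK or CLI objects):

```python
# Minimal model of the two-sided peering handshake (illustrative only).
peerings = {}  # (local_vnet, remote_vnet) -> state

def create_peering(local: str, remote: str) -> str:
    """Create one side of a peering; both sides connect once the reciprocal exists."""
    peerings[(local, remote)] = "Initiated"
    if (remote, local) in peerings:  # reciprocal side already created?
        peerings[(local, remote)] = "Connected"
        peerings[(remote, local)] = "Connected"
    return peerings[(local, remote)]

print(create_peering("vnet-1", "vnet-2"))  # Initiated
print(create_peering("vnet-2", "vnet-1"))  # Connected
print(peerings[("vnet-1", "vnet-2")])      # Connected
```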
The virtual machines can communicate with each other over the virtual network peering. ++### [Portal](#tab/portal) +++Repeat the previous steps to create a second virtual machine in the second virtual network with the following values: ++| Setting | Value | +| | | +| Virtual machine name | **vm-2** | +| Region | **East US 2** or same region as **vnet-2**. | +| Virtual network | Select **vnet-2**. | +| Subnet | Select **subnet-1 (10.1.0.0/24)**. | +| Public IP | **None** | +| Network security group name | **nsg-2** | ++### [PowerShell](#tab/powershell) ++### Create the first virtual machine ++Create a VM with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named **vm-1** in the **vnet-1** virtual network. When prompted, enter the username and password for the virtual machine. ++```azurepowershell-interactive +# Create a credential object +$cred = Get-Credential ++# Define the VM parameters +$vmParams = @{ + ResourceGroupName = "test-rg" + Location = "EastUS2" + Name = "vm-1" + ImageName = "Canonical:ubuntu-24_04-lts:server-gen1:latest" + Size = "Standard_DS1_v2" + Credential = $cred + VirtualNetworkName = "vnet-1" + SubnetName = "subnet-1" + PublicIpAddressName = $null # No public IP address +} ++# Create the VM +New-AzVM @vmParams +``` ++### Create the second VM ++```azurepowershell-interactive +# Create a credential object +$cred = Get-Credential ++# Define the VM parameters +$vmParams = @{ + ResourceGroupName = "test-rg" + Location = "EastUS2" + Name = "vm-2" + ImageName = "Canonical:ubuntu-24_04-lts:server-gen1:latest" + Size = "Standard_DS1_v2" + Credential = $cred + VirtualNetworkName = "vnet-2" + SubnetName = "subnet-1" + PublicIpAddressName = $null # No public IP address +} ++# Create the VM +New-AzVM @vmParams +``` ++### [CLI](#tab/cli) ++### Create the first VM ++Create a VM with [az vm create](/cli/azure/vm#az-vm-create). The following example creates a VM named **vm-1** in the **vnet-1** virtual network. 
Because the command specifies password authentication without a password, you're prompted to enter a password for the virtual machine. The `--no-wait` option creates the VM in the background, so you can continue to the next step. ++```azurecli-interactive +az vm create \ + --resource-group test-rg \ + --name vm-1 \ + --image Ubuntu2204 \ + --vnet-name vnet-1 \ + --subnet subnet-1 \ + --admin-username azureuser \ + --authentication-type password \ + --no-wait +``` ++### Create the second VM ++Create a VM in the **vnet-2** virtual network. ++```azurecli-interactive +az vm create \ + --resource-group test-rg \ + --name vm-2 \ + --image Ubuntu2204 \ + --vnet-name vnet-2 \ + --subnet subnet-1 \ + --admin-username azureuser \ + --authentication-type password +``` ++The VM takes a few minutes to create. ++++Wait for the virtual machines to be created before continuing with the next steps. ++## Connect to a virtual machine ++Use `ping` to test the communication between the virtual machines. Sign in to the Azure portal to complete the following steps. ++1. In the portal, search for and select **Virtual machines**. ++1. On the **Virtual machines** page, select **vm-1**. ++1. In the **Overview** of **vm-1**, select **Connect**. ++1. In the **Connect to virtual machine** page, select the **Bastion** tab. ++1. Select **Use Bastion**. ++1. Enter the username and password you created when you created the VM, and then select **Connect**. ++## Communicate between VMs ++1. At the bash prompt for **vm-1**, enter `ping -c 4 10.1.0.4`. ++ You get a reply similar to the following message: ++ ```output + azureuser@vm-1:~$ ping -c 4 10.1.0.4 + PING 10.1.0.4 (10.1.0.4) 56(84) bytes of data. 
+ 64 bytes from 10.1.0.4: icmp_seq=1 ttl=64 time=2.29 ms + 64 bytes from 10.1.0.4: icmp_seq=2 ttl=64 time=1.06 ms + 64 bytes from 10.1.0.4: icmp_seq=3 ttl=64 time=1.30 ms + 64 bytes from 10.1.0.4: icmp_seq=4 ttl=64 time=0.998 ms ++ 10.1.0.4 ping statistics + 4 packets transmitted, 4 received, 0% packet loss, time 3004ms + rtt min/avg/max/mdev = 0.998/1.411/2.292/0.520 ms + ``` ++1. Close the Bastion connection to **vm-1**. ++1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to **vm-2**. ++1. At the bash prompt for **vm-2**, enter `ping -c 4 10.0.0.4`. ++ You get a reply similar to the following message: ++ ```output + azureuser@vm-2:~$ ping -c 4 10.0.0.4 + PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. + 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=1.81 ms + 64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=3.35 ms + 64 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=0.811 ms + 64 bytes from 10.0.0.4: icmp_seq=4 ttl=64 time=1.28 ms + ``` ++1. Close the Bastion connection to **vm-2**. ++### [Portal](#tab/portal) +++### [PowerShell](#tab/powershell) ++When no longer needed, use [Remove-AzResourcegroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains. ++```azurepowershell-interactive +$rgParams = @{ + Name = "test-rg" +} +Remove-AzResourceGroup @rgParams -Force +``` ++### [CLI](#tab/cli) ++When no longer needed, use [az group delete](/cli/azure/group) to remove the resource group and all of the resources it contains. ++```azurecli-interactive +az group delete \ + --name test-rg \ + --yes \ + --no-wait +``` ++++## Next steps ++In this tutorial, you: ++* Created virtual network peering between two virtual networks. ++* Tested the communication between two virtual machines over the virtual network peering with `ping`. 
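As a side note, the `ping` verification used in this tutorial can also be scripted. Assuming the Linux `ping` summary format shown in the outputs above, a small parser can pull out the packet-loss percentage:

```python
import re

# Summary line format as shown in the tutorial's ping output.
summary = "4 packets transmitted, 4 received, 0% packet loss, time 3004ms"

def packet_loss(line: str) -> float:
    """Extract the packet-loss percentage from a ping statistics line."""
    match = re.search(r"([\d.]+)% packet loss", line)
    if match is None:
        raise ValueError("not a ping statistics line")
    return float(match.group(1))

print(packet_loss(summary))  # 0.0
```

A nonzero result here would indicate the peering isn't passing traffic in both directions yet.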
++To learn more about a virtual network peering: ++> [!div class="nextstepaction"] +> [Virtual network peering](virtual-network-peering-overview.md) |
virtual-wan | Virtual Wan Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md | Enabling or disabling the toggle will only affect the following traffic flow: tr :::image type="content" source="./media/virtual-wan-expressroute-portal/expressroute-bowtie-virtual-network-virtual-wan.png" alt-text="Diagram of a standalone virtual network connecting to a virtual hub via ExpressRoute circuit." lightbox="./media/virtual-wan-expressroute-portal/expressroute-bowtie-virtual-network-virtual-wan.png"::: +### Why does connectivity not work when I advertise routes with an ASN of 0 in the AS-Path? ++The Virtual WAN hub drops routes with an ASN of 0 in the AS-Path. To ensure these routes are successfully advertised into Azure, the AS-Path should not include 0. + ### Can hubs be created in different resource groups in Virtual WAN? Yes. This option is currently available via PowerShell only. The Virtual WAN portal requires that the hubs are in the same resource group as the Virtual WAN resource itself. |
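On the Virtual WAN FAQ change above: because the hub drops any route whose AS-Path contains an ASN of 0, it can be useful to screen routes before advertising them. A minimal sketch, assuming AS-Paths are modeled simply as lists of integers (not any real BGP library):

```python
# Routes whose AS-Path contains ASN 0 are dropped by the Virtual WAN hub,
# so filter them out before advertising (illustrative sketch only).
def advertisable(as_path: list[int]) -> bool:
    """Return True if a route's AS-Path is free of ASN 0."""
    return 0 not in as_path

routes = {
    "10.2.0.0/16": [65010, 65020],
    "10.3.0.0/16": [65010, 0, 65030],  # contains ASN 0: would be dropped
}
kept = {prefix for prefix, path in routes.items() if advertisable(path)}
print(kept)  # {'10.2.0.0/16'}
```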
vpn-gateway | About Zone Redundant Vnet Gateways | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-zone-redundant-vnet-gateways.md | Yes, you can use the Azure portal to deploy these SKUs. However, you see these S ### What regions are available for me to use these SKUs? -These SKUs are available in Azure regions that have Azure availability zones. For more information, see [Azure regions with availability zones](../availability-zones/az-region.md#azure-regions-with-availability-zones). +These SKUs are available in Azure regions that have Azure availability zones. For more information, see [Azure regions with availability zones](../reliability/availability-zones-region-support.md). ### Can I change/migrate/upgrade my existing virtual network gateways to zone-redundant or zonal gateways? |
vpn-gateway | Create Gateway Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-gateway-powershell.md | Title: 'Create a virtual network gateway: PowerShell' + Title: Create a virtual network gateway - PowerShell -description: Learn how to create a route-based virtual network gateway for a VPN connection to your on-premises network, or to connect virtual networks. +description: Learn how to create a virtual network gateway for VPN Gateway connections using PowerShell. Previously updated : 07/23/2024 Last updated : 11/19/2024 # Create a VPN gateway using PowerShell -This article helps you create an Azure VPN gateway using PowerShell. A VPN gateway is used when creating a VPN connection to your on-premises network. You can also use a VPN gateway to connect VNets. For more comprehensive information about some of the settings in this article, see [Create a VPN gateway - portal](tutorial-create-gateway-portal.md). +This article helps you create an Azure VPN gateway using PowerShell. A VPN gateway is used when creating a VPN connection to your on-premises network. You can also use a VPN gateway to connect virtual networks. For more comprehensive information about some of the settings in this article, see [Create a VPN gateway - portal](tutorial-create-gateway-portal.md). :::image type="content" source="./media/tutorial-create-gateway-portal/gateway-diagram.png" alt-text="Diagram that shows a virtual network and a VPN gateway." lightbox="./media/tutorial-create-gateway-portal/gateway-diagram-expand.png"::: -A VPN gateway is one part of a connection architecture to help you securely access resources within a virtual network. - * The left side of the diagram shows the virtual network and the VPN gateway that you create by using the steps in this article. * You can later add different types of connections, as shown on the right side of the diagram. 
For example, you can create [site-to-site](tutorial-site-to-site-portal.md) and [point-to-site](point-to-site-about.md) connections. To view different design architectures that you can build, see [VPN gateway design](design.md). -The steps in this article create a virtual network, a subnet, a gateway subnet, and a route-based, zone-redundant active-active VPN gateway (virtual network gateway) using the Generation 2 VpnGw2AZ SKU. If you want to create a VPN gateway using the **Basic** SKU instead, see [Create a Basic SKU VPN gateway](create-gateway-basic-sku-powershell.md). Once the gateway creation completes, you can then create connections. --Active-active gateways differ from active-standby gateways in the following ways: +The steps in this article create a virtual network, a subnet, a gateway subnet, and a route-based, zone-redundant active-active mode VPN gateway (virtual network gateway) using the Generation 2 VpnGw2AZ SKU. Once the gateway is created, you can configure connections. -* Active-active gateways have two Gateway IP configurations and two public IP addresses. -* Active-active gateways have active-active setting enabled. -* The virtual network gateway SKU can't be Basic or Standard. +* If you want to create a VPN gateway using the **Basic** SKU instead, see [Create a Basic SKU VPN gateway](create-gateway-basic-sku-powershell.md). +* We recommend that you create an active-active mode VPN gateway when possible. Active-active mode VPN gateways provide better availability and performance than standard mode VPN gateways. For more information about active-active gateways, see [About active-active mode gateways](about-active-active-gateways.md). +* For information about availability zones and zone redundant gateways, see [What are availability zones](/azure/reliability/availability-zones-overview?toc=%2Fazure%2Fvpn-gateway%2Ftoc.json&tabs=azure-cli#availability-zones)? 
-For more information about active-active gateways, see [Highly Available cross-premises and VNet-to-VNet connectivity](vpn-gateway-highlyavailable.md). -For more information about availability zones and zone redundant gateways, see [What are availability zones](/azure/reliability/availability-zones-overview?toc=%2Fazure%2Fvpn-gateway%2Ftoc.json&tabs=azure-cli#availability-zones)? +> [!NOTE] +> [!INCLUDE [AZ SKU region support note](../../includes/vpn-gateway-az-regions-support-include.md)] ## Before you begin These steps require an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -### Working with Azure PowerShell - ## Create a resource group -Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed. If you're running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command. +Create an Azure resource group using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command. A resource group is a logical container into which Azure resources are deployed and managed. If you're running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command. ```azurepowershell-interactive New-AzResourceGroup -Name TestRG1 -Location EastUS New-AzResourceGroup -Name TestRG1 -Location EastUS ## <a name="vnet"></a>Create a virtual network -Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). 
The following example creates a virtual network named **VNet1** in the **EastUS** location: +If you don't already have a virtual network, create one with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). When you create a virtual network, make sure that the address spaces you specify don't overlap any of the address spaces that you have on your on-premises network. If a duplicate address range exists on both sides of the VPN connection, traffic doesn't route the way you might expect it to. Additionally, if you want to connect this virtual network to another virtual network, the address space can't overlap with the other virtual network. Take care to plan your network configuration accordingly. ++The following example creates a virtual network named **VNet1** in the **EastUS** location: ```azurepowershell-interactive $virtualnetwork = New-AzVirtualNetwork ` $virtualnetwork = New-AzVirtualNetwork ` -AddressPrefix 10.1.0.0/16 ``` -Create a subnet configuration using the [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) cmdlet. +Create a subnet configuration using the [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) cmdlet. The FrontEnd subnet isn't used in this exercise. You can substitute your own subnet name. ```azurepowershell-interactive $subnetConfig = Add-AzVirtualNetworkSubnetConfig `- -Name Frontend ` + -Name FrontEnd ` -AddressPrefix 10.1.0.0/24 ` -VirtualNetwork $virtualnetwork ``` $virtualnetwork | Set-AzVirtualNetwork ## <a name="gwsubnet"></a>Add a gateway subnet -The gateway subnet contains the reserved IP addresses that the virtual network gateway services use. Use the following examples to add a gateway subnet: + Set a variable for your virtual network. 
Set the subnet configuration for the virtual network using the [Set-AzVirtualNet $vnet | Set-AzVirtualNetwork ``` -## <a name="PublicIP"></a>Request a public IP address +## <a name="PublicIP"></a>Request public IP addresses ++A VPN gateway must have a public IP address. When you create a connection to a VPN gateway, this is the IP address that you specify. For active-active mode gateways, each gateway instance has its own public IP address resource. You first request the IP address resource, and then refer to it when creating your virtual network gateway. Additionally, for any gateway SKU ending in *AZ*, you must also specify the Zone setting. This example specifies a zone-redundant configuration because it specifies all three regional zones. -Each VPN gateway must have an allocated public IP address. When you create a connection to a VPN gateway, this is the IP address that you specify. In this exercise, we create an active-active zone-redundant VPN gateway environment. That means that two Standard public IP addresses are required, one for each gateway, and we must also specify the Zone setting. This example specifies a zone-redundant configuration because it specifies all 3 regional zones. +The IP address is assigned to the resource when the VPN gateway is created. The only time the public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway. -Use the following examples to request a public IP address for each gateway. The allocation method must be **Static**. +Use the following examples to request a static public IP address for each gateway instance. 
```azurepowershell-interactive $gw1pip1 = New-AzPublicIpAddress -Name "VNet1GWpip1" -ResourceGroupName "TestRG1" -Location "EastUS" -AllocationMethod Static -Sku Standard -Zone 1,2,3- ``` +``` ++To create an active-active gateway (recommended), request a second public IP address: ```azurepowershell-interactive $gw1pip2 = New-AzPublicIpAddress -Name "VNet1GWpip2" -ResourceGroupName "TestRG1" -Location "EastUS" -AllocationMethod Static -Sku Standard -Zone 1,2,3 $gwipconfig2 = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig2 -SubnetId $ ## <a name="CreateGateway"></a>Create the VPN gateway -Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. Once the gateway is created, you can create a connection between your virtual network and another virtual network. Or, create a connection between your virtual network and an on-premises location. +Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. Once the gateway is created, you can create a connection between your virtual network and your on-premises location. Or, create a connection between your virtual network and another virtual network. -Create a VPN gateway using the [New-AzVirtualNetworkGateway](/powershell/module/az.network/New-azVirtualNetworkGateway) cmdlet. Notice in the examples that both public IP addresses are referenced and the gateway is configured as active-active. In the example, we add the optional `-Debug` switch. +Create a VPN gateway using the [New-AzVirtualNetworkGateway](/powershell/module/az.network/New-azVirtualNetworkGateway) cmdlet. Notice in the examples that both public IP addresses are referenced and the gateway is configured as active-active using the `EnableActiveActiveFeature` switch. In the example, we add the optional `-Debug` switch. If you want to create a gateway using a different SKU, see [About Gateway SKUs](about-gateway-skus.md) to determine the SKU that best fits your configuration requirements. 
```azurepowershell-interactive New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 ` You can view the VPN gateway using the [Get-AzVirtualNetworkGateway](/powershell Get-AzVirtualNetworkGateway -Name Vnet1GW -ResourceGroup TestRG1 ``` -## <a name="viewgwpip"></a>View the public IP addresses +## <a name="viewgwpip"></a>View gateway IP addresses -To view the public IP address for your VPN gateway, use the [Get-AzPublicIpAddress](/powershell/module/az.network/Get-azPublicIpAddress) cmdlet. Example: +Each VPN gateway instance is assigned a public IP address resource. To view the IP address associated with the resource, use the [Get-AzPublicIpAddress](/powershell/module/az.network/Get-azPublicIpAddress) cmdlet. Repeat for each gateway instance. Active-active gateways have a different public IP address assigned to each instance. ```azurepowershell-interactive Get-AzPublicIpAddress -Name VNet1GWpip1 -ResourceGroupName TestRG1 Remove-AzResourceGroup -Name TestRG1 ## Next steps -Once the gateway has finished creating, you can create a connection between your virtual network and another virtual network. Or, create a connection between your virtual network and an on-premises location. +Once the gateway is created, you can configure connections. -* [Create a site-to-site connection](vpn-gateway-create-site-to-site-rm-powershell.md)<br><br> -* [Create a point-to-site connection](vpn-gateway-howto-point-to-site-rm-ps.md)<br><br> -* [Create a connection to another VNet](vpn-gateway-vnet-vnet-rm-ps.md) +* [Create a site-to-site connection](vpn-gateway-create-site-to-site-rm-powershell.md) +* [Create a point-to-site connection](vpn-gateway-howto-point-to-site-rm-ps.md) +* [Create a connection to another virtual network](vpn-gateway-vnet-vnet-rm-ps.md) |
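A planning note for the gateway subnet used in articles like this one: Azure reserves five IP addresses in every subnet, so a /27 gateway subnet leaves 27 usable addresses for the gateway services. Assuming the tutorial's VNet1 address space of 10.1.0.0/16 and a /27 gateway subnet such as 10.1.255.0/27, Python's standard `ipaddress` module can sanity-check the layout (illustrative only):

```python
import ipaddress

# Assumed values from this tutorial: VNet1 uses 10.1.0.0/16, and the
# gateway subnet is a /27 (a /27 or larger is recommended).
vnet = ipaddress.ip_network("10.1.0.0/16")
gateway_subnet = ipaddress.ip_network("10.1.255.0/27")

# The gateway subnet must fall inside the virtual network's address space.
print(gateway_subnet.subnet_of(vnet))  # True

# Azure reserves 5 IP addresses in every subnet, so a /27 leaves 27 usable.
usable = gateway_subnet.num_addresses - 5
print(usable)  # 27
```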
vpn-gateway | Create Routebased Vpn Gateway Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-routebased-vpn-gateway-cli.md | Title: 'Create a route-based virtual network gateway: CLI' + Title: Create a virtual network gateway - CLI -description: Learn how to create a route-based virtual network gateway for a VPN connection to an on-premises network, or to connect virtual networks. +description: Learn how to create a virtual network gateway for VPN Gateway connections using CLI. Previously updated : 03/12/2024 Last updated : 11/18/2024 -# Create a route-based VPN gateway using CLI +# Create a VPN gateway using CLI -This article helps you quickly create a route-based Azure VPN gateway using the Azure CLI. A VPN gateway is used when creating a VPN connection to your on-premises network. You can also use a VPN gateway to connect VNets. --In this article you'll create a VNet, a subnet, a gateway subnet, and a route-based VPN gateway (virtual network gateway). Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. Once the gateway creation has completed, you can then create connections. These steps require an Azure subscription. --A VPN gateway is just one part of a connection architecture to help you securely access resources within a virtual network. +This article helps you create an Azure VPN gateway using Azure CLI. A VPN gateway is used when creating a VPN connection to your on-premises network. You can also use a VPN gateway to connect virtual networks. For more comprehensive information about some of the settings in this article, see [Create a VPN gateway - portal](tutorial-create-gateway-portal.md). :::image type="content" source="./media/tutorial-create-gateway-portal/gateway-diagram.png" alt-text="Diagram that shows a virtual network and a VPN gateway." 
lightbox="./media/tutorial-create-gateway-portal/gateway-diagram-expand.png"::: * The left side of the diagram shows the virtual network and the VPN gateway that you create by using the steps in this article. * You can later add different types of connections, as shown on the right side of the diagram. For example, you can create [site-to-site](tutorial-site-to-site-portal.md) and [point-to-site](point-to-site-about.md) connections. To view different design architectures that you can build, see [VPN gateway design](design.md). +The steps in this article create a virtual network, a subnet, a gateway subnet, and a route-based, zone-redundant active-active mode VPN gateway (virtual network gateway) using the Generation 2 VpnGw2AZ SKU. Once the gateway is created, you can configure connections. ++* If you want to create a VPN gateway using the **Basic** SKU instead, see [Create a Basic SKU VPN gateway](create-gateway-basic-sku-powershell.md). +* We recommend that you create an active-active mode VPN gateway when possible. Active-active mode VPN gateways provide better availability and performance than standard mode VPN gateways. For more information about active-active gateways, see [About active-active mode gateways](about-active-active-gateways.md). +* For information about availability zones and zone redundant gateways, see [What are availability zones](/azure/reliability/availability-zones-overview?toc=%2Fazure%2Fvpn-gateway%2Ftoc.json&tabs=azure-cli#availability-zones)? ++> [!NOTE] +> [!INCLUDE [AZ SKU region support note](../../includes/vpn-gateway-az-regions-support-include.md)] ++## Before you begin ++These steps require an Azure subscription. 
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.+* This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Create a resource group az group create --name TestRG1 --location eastus ## <a name="vnet"></a>Create a virtual network -Create a virtual network using the [az network vnet create](/cli/azure/network/vnet) command. The following example creates a virtual network named **VNet1** in the **EastUS** location: +If you don't already have a virtual network, create one using the [az network vnet create](/cli/azure/network/vnet) command. When you create a virtual network, make sure that the address spaces you specify don't overlap any of the address spaces that you have on your on-premises network. If a duplicate address range exists on both sides of the VPN connection, traffic doesn't route the way you might expect it to. Additionally, if you want to connect this virtual network to another virtual network, the address space can't overlap with the other virtual network. Take care to plan your network configuration accordingly. ++The following example creates a virtual network named 'VNet1' and a subnet, 'FrontEnd'. The FrontEnd subnet isn't used in this exercise. You can substitute your own subnet name. ```azurecli-interactive az network vnet create \ az network vnet create \ -g TestRG1 \ -l eastus \ --address-prefix 10.1.0.0/16 \- --subnet-name Frontend \ + --subnet-name FrontEnd \ --subnet-prefix 10.1.0.0/24 ``` ## <a name="gwsubnet"></a>Add a gateway subnet -The gateway subnet contains the reserved IP addresses that the virtual network gateway services use. 
Use the following examples to add a gateway subnet: ++++Use the following example to add a gateway subnet: ```azurecli-interactive az network vnet subnet create \ az network vnet subnet create \ --address-prefix 10.1.255.0/27  ``` -## <a name="PublicIP"></a>Request a public IP address +## <a name="PublicIP"></a>Request public IP addresses ++A VPN gateway must have a public IP address. When you create a connection to a VPN gateway, this is the IP address that you specify. For active-active mode gateways, each gateway instance has its own public IP address resource. You first request the IP address resource, and then refer to it when creating your virtual network gateway. Additionally, for any gateway SKU ending in *AZ*, you must also specify the Zone setting. This example specifies a zone-redundant configuration because it specifies all three regional zones. ++The IP address is assigned to the resource when the VPN gateway is created. The only time the public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway. -A VPN gateway must have a public IP address. The public IP address is allocated to the VPN gateway that you create for your virtual network. 
Use the following example to request a public IP address using the [az network public-ip create](/cli/azure/network/public-ip) command: +Use the [az network public-ip create](/cli/azure/network/public-ip) command to request a public IP address: ```azurecli-interactive-az network public-ip create \ - -n VNet1GWIP \ - -g TestRG1 \ +az network public-ip create --name VNet1GWpip1 --resource-group TestRG1 --allocation-method Static --sku Standard --version IPv4 --zone 1 2 3 +``` ++To create an active-active gateway (recommended), request a second public IP address: ++```azurecli-interactive +az network public-ip create --name VNet1GWpip2 --resource-group TestRG1 --allocation-method Static --sku Standard --version IPv4 --zone 1 2 3 ``` ## <a name="CreateGateway"></a>Create the VPN gateway -Create the VPN gateway using the [az network vnet-gateway create](/cli/azure/network/vnet-gateway) command. +Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. Once the gateway is created, you can create a connection between your virtual network and your on-premises location. Or, create a connection between your virtual network and another virtual network. ++Create the VPN gateway using the [az network vnet-gateway create](/cli/azure/network/vnet-gateway) command. If you run this command by using the `--no-wait` parameter, you don't see any feedback or output. The `--no-wait` parameter allows the gateway to be created in the background. It doesn't mean that the VPN gateway is created immediately. If you want to create a gateway using a different SKU, see [About Gateway SKUs](about-gateway-skus.md) to determine the SKU that best fits your configuration requirements.
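Because `--no-wait` returns immediately while deployment continues in the background, the usual pattern is to poll the gateway's `provisioningState` (for example, via `az network vnet-gateway show --query provisioningState -o tsv`) until it reports `Succeeded`. A minimal sketch of that loop in Python, where `get_provisioning_state` is a hypothetical stub standing in for the CLI call:

```python
import itertools
import time

# Hypothetical stand-in for shelling out to:
#   az network vnet-gateway show -g TestRG1 -n VNet1GW --query provisioningState -o tsv
_states = itertools.chain(["Updating", "Updating"], itertools.repeat("Succeeded"))

def get_provisioning_state():
    """Return the gateway's current provisioningState (simulated here)."""
    return next(_states)

state = get_provisioning_state()
while state not in ("Succeeded", "Failed"):
    time.sleep(0)  # against the live API, wait a minute or so between polls
    state = get_provisioning_state()

print("gateway provisioning finished:", state)
```

The same states appear in the JSON returned by `az network vnet-gateway show`, so the stub can be replaced with a real subprocess call once the gateway exists.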
+**Active-active mode gateway** ```azurecli-interactive-az network vnet-gateway create \ - -n VNet1GW \ - -l eastus \ - --public-ip-address VNet1GWIP \ - -g TestRG1 \ - --vnet VNet1 \ - --gateway-type Vpn \ - --sku VpnGw2 \ - --vpn-gateway-generation Generation2 \ - --no-wait +az network vnet-gateway create --name VNet1GW --public-ip-addresses VNet1GWpip1 VNet1GWpip2 --resource-group TestRG1 --vnet VNet1 --gateway-type Vpn --vpn-type RouteBased --sku VpnGw2AZ --vpn-gateway-generation Generation2 --no-wait +``` ++**Active-standby mode gateway** ++```azurecli-interactive +az network vnet-gateway create --name VNet1GW --public-ip-addresses VNet1GWpip1 --resource-group TestRG1 --vnet VNet1 --gateway-type Vpn --vpn-type RouteBased --sku VpnGw2AZ --vpn-gateway-generation Generation2 --no-wait ``` A VPN gateway can take 45 minutes or more to create. az network vnet-gateway show \ -g TestRG1 ``` -The response looks similar to this: --```output -{ - "activeActive": false, - "bgpSettings": { - "asn": 65515, - "bgpPeeringAddress": "10.1.255.30", - "bgpPeeringAddresses": [ - { - "customBgpIpAddresses": [], - "defaultBgpIpAddresses": [ - "10.1.255.30" - ], - "ipconfigurationId": "/subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW/ipConfigurations/vnetGatewayConfig0", - "tunnelIpAddresses": [ - "20.228.164.35" - ] - } - ], - "peerWeight": 0 - }, - "disableIPSecReplayProtection": false, - "enableBgp": false, - "enableBgpRouteTranslationForNat": false, - "enablePrivateIpAddress": false, - "etag": "W/\"6c61f8cb-d90f-4796-8697\"", - "gatewayType": "Vpn", - "id": "/subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW", - "ipConfigurations": [ - { - "etag": "W/\"6c61f8cb-d90f-4796-8697\"", - "id": "/subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW/ipConfigurations/vnetGatewayConfig0", - "name": 
"vnetGatewayConfig0", - "privateIPAllocationMethod": "Dynamic", - "provisioningState": "Succeeded", - "publicIPAddress": { - "id": "/subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/publicIPAddresses/VNet1GWIP", - "resourceGroup": "TestRG1" - }, - "resourceGroup": "TestRG1", - "subnet": { - "id": "/subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworks/VNet1/subnets/GatewaySubnet", - "resourceGroup": "TestRG1" - } - } - ], - "location": "eastus", - "name": "VNet1GW", - "natRules": [], - "provisioningState": "Succeeded", - "resourceGroup": "TestRG1", - "resourceGuid": "69c269e3-622c-4123-9231", - "sku": { - "capacity": 2, - "name": "VpnGw2", - "tier": "VpnGw2" - }, - "type": "Microsoft.Network/virtualNetworkGateways", - "vpnGatewayGeneration": "Generation2", - "vpnType": "RouteBased" -} -``` --### View the public IP address +## View gateway IP addresses -To view the public IP address assigned to your gateway, use the following example: +Each VPN gateway instance is assigned a public IP address resource. To view the IP address associated with the resource, use the following command. Repeat for each gateway instance. ```azurecli-interactive-az network public-ip show \ - --name VNet1GWIP \ - --resource-group TestRG1 -``` --The value associated with the **ipAddress** field is the public IP address of your VPN gateway. 
--Example response: --```output -{ - "dnsSettings": null, - "etag": "W/\"69c269e3-622c-4123-9231\"", - "id": "/subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/publicIPAddresses/VNet1GWIP", - "idleTimeoutInMinutes": 4, - "ipAddress": "13.90.195.184", - "ipConfiguration": { - "etag": null, - "id": "/subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW/ipConfigurations/vnetGatewayConfig0", +az network public-ip show -g TestRG1 -n VNet1GWpip1 ``` ## Clean up resources When you no longer need the resources you created, use [az group delete](/cli/azure/group) to delete the resource group. This deletes the resource group and all of the resources it contains. -```azurecli-interactive +```azurecli-interactive az group delete --name TestRG1 --yes ``` ## Next steps -Once the gateway has finished creating, you can create a connection between your virtual network and another VNet. Or, create a connection between your virtual network and an on-premises location. +Once the gateway is created, you can configure connections. -> [!div class="nextstepaction"] -> [Create a site-to-site connection](vpn-gateway-create-site-to-site-rm-powershell.md)<br><br> -> [Create a point-to-site connection](vpn-gateway-howto-point-to-site-rm-ps.md)<br><br> -> [Create a connection to another VNet](vpn-gateway-vnet-vnet-rm-ps.md) +* [Create a site-to-site connection](vpn-gateway-howto-site-to-site-resource-manager-cli.md) +* [Create a connection to another virtual network](vpn-gateway-howto-vnet-vnet-cli.md) +* [Create a point-to-site connection](point-to-site-about.md) |
vpn-gateway | Create Zone Redundant Vnet Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-zone-redundant-vnet-gateway.md | -You can deploy VPN and ExpressRoute gateways in Azure availability zones. This brings resiliency, scalability, and higher availability to virtual network gateways. Deploying gateways in availability zones physically and logically separates gateways within a region, while protecting your on-premises network connectivity to Azure from zone-level failures. For more information, see [About zone-redundant virtual network gateways](about-zone-redundant-vnet-gateways.md), [What are availability zones?](../reliability/availability-zones-overview.md), and [Availability zone service and regional support](../reliability/availability-zones-service-support.md). +You can deploy VPN and ExpressRoute gateways in Azure availability zones. This brings resiliency, scalability, and higher availability to virtual network gateways. Deploying gateways in availability zones physically and logically separates gateways within a region, while protecting your on-premises network connectivity to Azure from zone-level failures. For more information, see [About zone-redundant virtual network gateways](about-zone-redundant-vnet-gateways.md), [What are availability zones?](../reliability/availability-zones-overview.md), and [Azure regions with availability zones](../reliability/availability-zones-region-support.md). ## Azure portal workflow For an ExpressRoute gateway, follow the [ExpressRoute documentation](../expressr ### <a name="variables"></a>1. Declare your variables -Declare the variables that you want to use. Use the following sample, substituting the values for your own when necessary. If you close your PowerShell/Cloud Shell session at any point during the exercise, just copy and paste the values again to redeclare the variables. When specifying location, verify that the region you specify is supported. 
For more information, see [Availability zone service and regional support](../reliability/availability-zones-service-support.md). +Declare the variables that you want to use. Use the following sample, substituting the values for your own when necessary. If you close your PowerShell/Cloud Shell session at any point during the exercise, just copy and paste the values again to redeclare the variables. When specifying location, verify that the region you specify is supported. For more information, see [Azure regions with availability zones](../reliability/availability-zones-region-support.md). ```azurepowershell-interactive $RG1 = "TestRG1" |
vpn-gateway | Gateway Sku Consolidation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-sku-consolidation.md | No. This migration is seamless and there's no expected downtime during migration ### Will there be any performance impact on my gateways with this migration? -Yes. AZ SKUs get the benefits of Zone redundancy for VPN gateways in [zone redundant regions](https://learn.microsoft.com/azure/reliability/availability-zones-service-support). If the region doesn't support zone redundancy, the gateway is regional until the region it's deployed to supports zone redundancy. +Yes. AZ SKUs get the benefits of Zone redundancy for VPN gateways in [Azure regions with availability zones](../reliability/availability-zones-region-support.md). If the region doesn't support zone redundancy, the gateway is regional until the region it's deployed to supports zone redundancy. ### Is VPN Gateway Basic SKU retiring? |
vpn-gateway | Tutorial Create Gateway Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md | This tutorial helps you create and manage a virtual network gateway (VPN gateway * The left side of the diagram shows the virtual network and the VPN gateway that you create by using the steps in this article. * You can later add different types of connections, as shown on the right side of the diagram. For example, you can create [site-to-site](tutorial-site-to-site-portal.md) and [point-to-site](point-to-site-about.md) connections. To view different design architectures that you can build, see [VPN gateway design](design.md). -If you want to learn more about the configuration settings used in this tutorial, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md). For more information about Azure VPN Gateway, see [What is Azure VPN Gateway](vpn-gateway-about-vpngateways.md). These steps create an [active-active mode](about-active-active-gateways.md) [zone-redundant](about-zone-redundant-vnet-gateways.md) gateway. If you want to create a Basic SKU gateway instead, see [Create a Basic SKU VPN gateway](create-gateway-basic-sku-powershell.md). - In this tutorial, you learn how to: > [!div class="checklist"] > * Create a virtual network.-> * Create an active-active mode VPN gateway. +> * Create an active-active mode zone-redundant VPN gateway. > * View the gateway public IP address. > * Resize a VPN gateway (resize SKU). > * Reset a VPN gateway. +* If you want to learn more about the configuration settings used in this tutorial, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md). +* For more information about Azure VPN Gateway, see [What is Azure VPN Gateway](vpn-gateway-about-vpngateways.md). 
+* If you want to create a gateway using the Basic SKU (instead of VpnGw2AZ), see [Create a Basic SKU VPN gateway](create-gateway-basic-sku-powershell.md). +* For more information about active-active mode gateways, see [About active-active mode](about-active-active-gateways.md). +* For more information about zone-redundant gateways, see [About zone-redundant gateways](about-zone-redundant-vnet-gateways.md). ++> [!NOTE] +> [!INCLUDE [AZ SKU region support note](../../includes/vpn-gateway-az-regions-support-include.md)] ## Prerequisites After you create your virtual network, you can optionally configure Azure DDoS P ## Create a gateway subnet -The virtual network gateway requires a specific subnet named **GatewaySubnet**. The gateway subnet is part of the IP address range for your virtual network and contains the IP addresses that the virtual network gateway resources and services use. Specify a gateway subnet that's /27 or larger. [!INCLUDE [Create gateway subnet](../../includes/vpn-gateway-create-gateway-subnet-portal-include.md)] |
vpn-gateway | Tutorial Site To Site Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md | You can configure more settings for your connection, if necessary. Otherwise, sk [!INCLUDE [Verify the connection](../../includes/vpn-gateway-verify-connection-portal-include.md)] -## <a name="connectVM"></a>Connect to a virtual machine -- ## Optional steps ### <a name="reset"></a>Reset a gateway |
vpn-gateway | Vpn Gateway Create Site To Site Rm Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md | Title: 'Connect your on-premises network to an Azure VNet: site-to-site VPN: PowerShell' -description: Learn how to create a site-to-site VPN Gateway connection between your on-premises network and an Azure VNet using PowerShell. + Title: 'Connect your on-premises network to an Azure virtual network: site-to-site VPN: PowerShell' +description: Learn how to create a site-to-site VPN Gateway connection between your on-premises network and an Azure virtual network using PowerShell. -# Create a VNet with a site-to-site VPN connection using PowerShell +# Create a site-to-site VPN connection using PowerShell -This article shows you how to use PowerShell to create a site-to-site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list: --> [!div class="op_single_selector"] -> * [Azure portal](./tutorial-site-to-site-portal.md) -> * [PowerShell](vpn-gateway-create-site-to-site-rm-powershell.md) -> * [CLI](vpn-gateway-howto-site-to-site-resource-manager-cli.md) -> * [Azure portal (classic)](vpn-gateway-howto-site-to-site-classic-portal.md) +This article shows you how to use PowerShell to create a site-to-site VPN gateway connection from your on-premises network to a virtual network (VNet). The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). A site-to-site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. 
This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md). :::image type="content" source="./media/tutorial-site-to-site-portal/diagram.png" alt-text="Diagram of site-to-site VPN Gateway cross-premises connections." lightbox="./media/tutorial-site-to-site-portal/diagram.png"::: -## <a name="before"></a>Before you begin +## Prerequisites -Verify that you have met the following criteria before beginning your configuration: +Verify that your environment meets the following criteria before beginning your configuration: +* Verify that you have a functioning route-based VPN gateway. To create a VPN gateway, see [Create a VPN gateway](create-gateway-powershell.md). * Make sure you have a compatible VPN device and someone who is able to configure it. For more information about compatible VPN devices and device configuration, see [About VPN Devices](vpn-gateway-about-vpn-devices.md). * Verify that you have an externally facing public IPv4 address for your VPN device.-* If you're unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can over lap with the virtual network subnets that you want to connect to. +* If you're unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure routes to your on-premises location. 
None of the subnets of your on-premises network can overlap with the virtual network subnets that you want to connect to. ### Azure PowerShell [!INCLUDE [powershell](../../includes/vpn-gateway-cloud-shell-powershell-about.md)] -### <a name="example"></a>Example values --The examples in this article use the following values. You can use these values to create a test environment, or refer to them to better understand the examples in this article. --``` -#Example values --VnetName = VNet1 -ResourceGroup = TestRG1 -Location = East US -AddressSpace = 10.1.0.0/16 -SubnetName = Frontend -Subnet = 10.1.0.0/24 -GatewaySubnet = 10.1.255.0/27 -LocalNetworkGatewayName = Site1 -LNG Public IP = <On-premises VPN device IP address> -Local Address Prefixes = 10.0.0.0/24, 20.0.0.0/24 -Gateway Name = VNet1GW -PublicIP = VNet1GWPIP -Gateway IP Config = gwipconfig1 -VPNType = RouteBased -GatewayType = Vpn -ConnectionName = VNet1toSite1 -``` --## <a name="VNet"></a>1. Create a virtual network and a gateway subnet --If you don't already have a virtual network, create one. When creating a virtual network, make sure that the address spaces you specify don't overlap any of the address spaces that you have on your on-premises network. --> [!NOTE] -> In order for this VNet to connect to an on-premises location, you need to coordinate with your on-premises network administrator to carve out an IP address range that you can use specifically for this virtual network. If a duplicate address range exists on both sides of the VPN connection, traffic does not route the way you might expect it to. Additionally, if you want to connect this VNet to another VNet, the address space cannot overlap with other VNet. Take care to plan your network configuration accordingly. --### About the gateway subnet ----### <a name="vnet"></a>Create a virtual network and a gateway subnet --This example creates a virtual network and a gateway subnet.
If you already have a virtual network that you need to add a gateway subnet to, see [To add a gateway subnet to a virtual network you have already created](#gatewaysubnet). --Create a resource group: --```azurepowershell-interactive -New-AzResourceGroup -Name TestRG1 -Location 'East US' -``` --Create your virtual network. --1. Set the variables. -- ```azurepowershell-interactive - $subnet1 = New-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.1.255.0/27 - $subnet2 = New-AzVirtualNetworkSubnetConfig -Name 'Frontend' -AddressPrefix 10.1.0.0/24 - ``` --1. Create the VNet. -- ```azurepowershell-interactive - New-AzVirtualNetwork -Name VNet1 -ResourceGroupName TestRG1 ` - -Location 'East US' -AddressPrefix 10.1.0.0/16 -Subnet $subnet1, $subnet2 - ``` --#### <a name="gatewaysubnet"></a>To add a gateway subnet to a virtual network you have already created --Use the steps in this section if you already have a virtual network, but need to add a gateway subnet. --1. Set the variables. -- ```azurepowershell-interactive - $vnet = Get-AzVirtualNetwork -ResourceGroupName TestRG1 -Name VNet1 - ``` --1. Create the gateway subnet. -- ```azurepowershell-interactive - Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.1.255.0/27 -VirtualNetwork $vnet - ``` --1. Set the configuration. -- ```azurepowershell-interactive - Set-AzVirtualNetwork -VirtualNetwork $vnet - ``` +## <a name="localnet"></a>Create a local network gateway -## 2. <a name="localnet"></a>Create the local network gateway --The local network gateway (LNG) typically refers to your on-premises location. It isn't the same as a virtual network gateway. You give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which you will create a connection. You also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. 
The address prefixes you specify are the prefixes located on your on-premises network. If your on-premises network changes, you can easily update the prefixes. +The local network gateway (LNG) typically refers to your on-premises location. It isn't the same as a virtual network gateway. You give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which you'll create a connection. You also specify the IP address prefixes that are routed through the VPN gateway to the VPN device. The address prefixes you specify are the prefixes located on your on-premises network. If your on-premises network changes, you can easily update the prefixes. Select one of the following examples. The values used in the examples are: -* The *GatewayIPAddress* is the IP address of your on-premises VPN device. +* The *GatewayIPAddress* is the IP address of your on-premises VPN device, not your Azure VPN gateway. * The *AddressPrefix* is your on-premises address space. **Single address prefix example** ```azurepowershell-interactive New-AzLocalNetworkGateway -Name Site1 -ResourceGroupName TestRG1 `- -Location 'East US' -GatewayIpAddress '23.99.221.164' -AddressPrefix '10.0.0.0/24' + -Location 'East US' -GatewayIpAddress '[IP address of your on-premises VPN device]' -AddressPrefix '10.0.0.0/24' ``` **Multiple address prefix example** ```azurepowershell-interactive New-AzLocalNetworkGateway -Name Site1 -ResourceGroupName TestRG1 `- -Location 'East US' -GatewayIpAddress '23.99.221.164' -AddressPrefix @('20.0.0.0/24','10.0.0.0/24') + -Location 'East US' -GatewayIpAddress '[IP address of your on-premises VPN device]' -AddressPrefix @('192.168.0.0/24','10.0.0.0/24') ``` -## <a name="PublicIP"></a>3. Request a public IP address --A VPN gateway must have a public IP address. You first request the IP address resource, and then refer to it when creating your virtual network gateway. 
The IP address is dynamically assigned to the resource when the VPN gateway is created. The only time the public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway. --Request a public IP address for your virtual network VPN gateway. --```azurepowershell-interactive -$gwpip= New-AzPublicIpAddress -Name VNet1GWPIP -ResourceGroupName TestRG1 -Location 'East US' -AllocationMethod Static -Sku Standard -``` --## <a name="GatewayIPConfig"></a>4. Create the gateway IP addressing configuration --The gateway configuration defines the subnet (the 'GatewaySubnet') and the public IP address to use. Use the following example to create your gateway configuration: --```azurepowershell-interactive -$vnet = Get-AzVirtualNetwork -Name VNet1 -ResourceGroupName TestRG1 -$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet -$gwipconfig = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig1 -SubnetId $subnet.Id -PublicIpAddressId $gwpip.Id -``` --## <a name="CreateGateway"></a>5. Create the VPN gateway --Create the virtual network VPN gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. The following values are used in the example: --* The *-GatewayType* for a site-to-site configuration is *Vpn*. The gateway type is always specific to the configuration that you're implementing. For example, other gateway configurations might require -GatewayType ExpressRoute. -* The *-VpnType* can be *RouteBased* (referred to as a Dynamic Gateway in some documentation), or *PolicyBased* (referred to as a Static Gateway in some documentation). For more information about VPN gateway types, see [About VPN Gateway](vpn-gateway-about-vpngateways.md). -* Select the Gateway SKU that you want to use. There are configuration limitations for certain SKUs. 
For more information, see [Gateway SKUs](vpn-gateway-about-vpn-gateway-settings.md#gwsku). If you get an error when creating the VPN gateway regarding the -GatewaySku, verify that you have installed the latest version of the PowerShell cmdlets. --```azurepowershell-interactive -New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 ` --Location 'East US' -IpConfigurations $gwipconfig -GatewayType Vpn `--VpnType RouteBased -GatewaySku VpnGw2-``` --## <a name="ConfigureVPNDevice"></a>6. Configure your VPN device +## <a name="ConfigureVPNDevice"></a>Configure your VPN device Site-to-site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following items: -* A shared key. This is the same shared key that you specify when creating your site-to-site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use. -* The public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the public IP address of your virtual network gateway using PowerShell, use the following example. In this example, VNet1GWPIP is the name of the public IP address resource that you created in an earlier step. +* A shared key. You'll use this shared key both when you configure your VPN device, and when you create your site-to-site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use. The important thing is that the key is the same on both sides of the connection. +* The public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the public IP address of your virtual network gateway using PowerShell, use the following example. 
In this example, VNet1GWpip1 is the name of the public IP address resource that you created in an earlier step. ```azurepowershell-interactive- Get-AzPublicIpAddress -Name VNet1GWPIP -ResourceGroupName TestRG1 + Get-AzPublicIpAddress -Name VNet1GWpip1 -ResourceGroupName TestRG1 ``` [!INCLUDE [Configure VPN device](../../includes/vpn-gateway-configure-vpn-device-rm-include.md)] -## <a name="CreateConnection"></a>7. Create the VPN connection +## <a name="CreateConnection"></a>Create the VPN connection -Next, create the site-to-site VPN connection between your virtual network gateway and your VPN device. Be sure to replace the values with your own. The shared key must match the value you used for your VPN device configuration. Notice that the '-ConnectionType' for site-to-site is **IPsec**. +Create a site-to-site VPN connection between your virtual network gateway and your on-premises VPN device. If you're using an active-active mode gateway (recommended), each gateway VM instance has a separate IP address. To properly configure [highly available connectivity](vpn-gateway-highlyavailable.md), you must establish a tunnel between each VM instance and your VPN device. Both tunnels are part of the same connection. ++Be sure to replace the values in the examples with your own. The shared key must match the value you used for your VPN device configuration. Notice that the '-ConnectionType' for site-to-site is **IPsec**. 1. Set the variables. Next, create the site-to-site VPN connection between your virtual network gatewa -ConnectionType IPsec -SharedKey 'abc123' ``` -After a short while, the connection will be established. --## <a name="toverify"></a>8. Verify the VPN connection +## <a name="toverify"></a>Verify the VPN connection There are a few different ways to verify your VPN connection. 
[!INCLUDE [Verify connection](../../includes/vpn-gateway-verify-connection-ps-rm-include.md)] -## <a name="connectVM"></a>To connect to a virtual machine -- ## <a name="modify"></a>To modify IP address prefixes for a local network gateway If the IP address prefixes that you want routed to your on-premises location change, you can modify the local network gateway. When using these examples, modify the values to match your environment. |