Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Spatial Analysis Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-web-app.md | Most of the **Environment Variables** for the IoT Edge Module are already set in } ``` + ### Configure the operation parameters If you are using the sample [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json) which already has all of the required configurations (operations, recorded video file urls and zones etc.), then you can skip to the **Execute the deployment** section. |
ai-services | Sdk Overview V2 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v2-1.md | Here's where to find your Document Intelligence API key in the Azure portal: :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of the keys and endpoint location in the Azure portal."::: + ### [C#/.NET](#tab/csharp) ```csharp |
ai-services | Sdk Overview V3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-0.md | Here's where to find your Document Intelligence API key in the Azure portal: :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of the keys and endpoint location in the Azure portal."::: + ### [C#/.NET](#tab/csharp) ```csharp |
ai-services | Sdk Overview V3 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-1.md | Here's where to find your Document Intelligence API key in the Azure portal: :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of the keys and endpoint location in the Azure portal."::: + ### [C#/.NET](#tab/csharp) ```csharp |
ai-services | Sdk Overview V4 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v4-0.md | Here's where to find your Document Intelligence API key in the Azure portal: :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of the keys and endpoint location in the Azure portal."::: + ### [C#/.NET](#tab/csharp) ```csharp |
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | For more information on Provisioned deployments, see our [Provisioned guidance]( The following models support global batch: the model table gains a `gpt-4o-mini` row (version 2024-07-18, input: text + image) alongside the existing `gpt-4o` (2024-05-13, text + image), `gpt-4` (turbo-2024-04-09, text), and `gpt-4` (0613, text) rows. |
ai-services | Migration Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration-javascript.md | const apiKey = new AzureKeyCredential("your API key"); Authenticating `AzureOpenAI` with an API key involves setting the `AZURE_OPENAI_API_KEY` environment variable or setting the `apiKey` string property in the options object when creating the `AzureOpenAI` client. + ## Constructing the client # [OpenAI JavaScript (new)](#tab/javascript-new) |
ai-services | Use Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md | To enable Microsoft Entra ID for intra-service authentication for your web app, You can enable managed identity for the Azure OpenAI resource and the Azure App Service by navigating to "Identity" and turning on the system assigned managed identity in the Azure portal for each resource. - :::image type="content" source="../media/use-your-data/openai-managed-identity.png" alt-text="Screenshot that shows the application identity configuration in the Azure portal." lightbox="../media/use-your-data/openai-managed-identity.png"::: > [!NOTE] |
ai-services | Text To Speech Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/text-to-speech-quickstart.md | Go to your resource in the Azure portal. The **Endpoint and Keys** can be found :::image type="content" source="media/quickstarts/endpoint.png" alt-text="Screenshot of the overview UI for an Azure OpenAI resource in the Azure portal with the endpoint & access keys location highlighted." lightbox="media/quickstarts/endpoint.png"::: +### Environment variables + Create and assign persistent environment variables for your key and endpoint. -### Environment variables # [Command Line](#tab/command-line) |
ai-services | Fine Tune | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md | pip install "openai==0.28.1" requests tiktoken numpy ### Environment variables +Create and assign persistent environment variables for your key and endpoint. ++ # [Command Line](#tab/command-line) ```CMD |
ai-services | Whisper Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md | Go to your resource in the Azure portal. The **Endpoint and Keys** can be found :::image type="content" source="media/quickstarts/endpoint.png" alt-text="Screenshot of the overview UI for an Azure OpenAI resource in the Azure portal with the endpoint & access keys location circled in red." lightbox="media/quickstarts/endpoint.png"::: +### Environment variables + Create and assign persistent environment variables for your key and endpoint. -### Environment variables # [Command Line](#tab/command-line) |
ai-services | Batch Transcription Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md | -> New pricing is in effect for batch transcription by using [Speech to text REST API v3.2](./migrate-v3-1-to-v3-2.md). For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). +> New pricing is in effect for batch transcription that uses the [speech to text REST API v3.2](./migrate-v3-1-to-v3-2.md). For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). ## Prerequisites You need a standard (S0) Speech resource. Free resources (F0) aren't supported. ::: zone pivot="rest-api" -To create a transcription, use the [Transcriptions_Create](/rest/api/speechtotext/transcriptions/create) operation of the [Speech to text REST API](rest-speech-to-text.md#batch-transcription). Construct the request body according to the following instructions: +To create a batch transcription job, use the [Transcriptions_Create](/rest/api/speechtotext/transcriptions/create) operation of the [speech to text REST API](rest-speech-to-text.md#batch-transcription). Construct the request body according to the following instructions: - You must set either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md). - Set the required `locale` property. This value should match the expected locale of the audio data to transcribe. You can't change the locale later. Call [Transcriptions_Delete](/rest/api/speechtotext/transcriptions/delete) regularly from the service, after you retrieve the results. Alternatively, set the `timeToLive` property to ensure the eventual deletion of the results. > [!TIP]-> You can also try the Batch Transcription API using Python on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/batch/python/python-client/main.py). +> You can also try the Batch Transcription API using Python, C#, or Node.js on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch). ::: zone-end regularly from the service, after you retrieve the results. Alternatively, set t To create a transcription, use the `spx batch transcription create` command. Construct the request parameters according to the following instructions: -- Set the required `content` parameter. You can specify a semi-colon delimited list of individual files or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).+- Set the required `content` parameter. You can specify a comma delimited list of individual files or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md). - Set the required `language` property. This value should match the expected locale of the audio data to transcribe. You can't change the locale later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response. - Set the required `name` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response. Here's an example Speech CLI command that creates a transcription job: ```azurecli-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav +spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav,https://crbn.us/whatstheweatherlike.wav ``` You should receive a response body in the following format: curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content- ::: zone pivot="speech-cli" ```azurecli-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf" +spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav,https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf" ``` ::: zone-end To use a Whisper model for batch transcription, you need to set the `model` prop > [!IMPORTANT] > For Whisper models, you should always use [version 3.2](./migrate-v3-1-to-v3-2.md) of the speech to text API. -Whisper models by batch transcription are supported in the Australia East, Central US, East US, North Central US, South Central US, Southeast Asia, and West Europe regions. +Batch transcription using Whisper models is supported in the Australia East, Central US, East US, North Central US, South Central US, Southeast Asia, and West Europe regions. ::: zone pivot="rest-api" You can make a [Models_ListBaseModels](/rest/api/speechtotext/models/list-base-models) request to get available base models for all locales. The `displayName` property of a Whisper model contains "Whisper" as shown in thi }, ``` -You set the full model URI as shown in this example for the `eastus` region. Replace `YourSubscriptionKey` with your Speech resource key. Replace `eastus` if you're using a different region. - ::: zone pivot="rest-api" +You set the full model URI as shown in this example for the `eastus` region. Replace `YourSubscriptionKey` with your Speech resource key. Replace `eastus` if you're using a different region. + ```azurecli-interactive curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{ "contentUrls": [ curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content- ::: zone pivot="speech-cli" +You set the full model URI as shown in this example for the `eastus` region. Replace `eastus` if you're using a different region. + ```azurecli-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/e418c4a9-9937-4db7-b2c9-8afbff72d950" --api-version v3.2 +spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav,https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/e418c4a9-9937-4db7-b2c9-8afbff72d950" --api-version v3.2 ``` ::: zone-end |
ai-services | How To Recognize Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-recognize-speech.md | keywords: speech to text, speech to text software [!INCLUDE [CLI include](includes/how-to/recognize-speech/cli.md)] ::: zone-end -## Next steps +## Related content * [Try the speech to text quickstart](get-started-speech-to-text.md) * [Improve recognition accuracy with custom speech](custom-speech-overview.md) |
ai-services | What Is Custom Text To Speech Avatar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/what-is-custom-text-to-speech-avatar.md | The neural text to speech avatar models are trained using deep neural networks b The custom text to speech avatar can work with a prebuilt neural voice or custom neural voice as the avatar's voice. For more information, see [Avatar voice and language](./what-is-text-to-speech-avatar.md#avatar-voice-and-language). -[Custom neural voice](../custom-neural-voice.md) and custom text to speech avatar are separate features. You can use them independently or together. If you plan to also use [custom neural voice](../custom-neural-voice.md) with a text to speech avatar, you need to deploy or [copy](../professional-voice-train-voice.md#copy-your-voice-model-to-another-project) your custom neural voice model to one of the [avatar supported regions](./what-is-text-to-speech-avatar.md#available-locations). +[Custom neural voice](../custom-neural-voice.md) and custom text to speech avatar are separate features. You can use them independently or together. If you choose to use them together, you need to apply for [custom neural voice](https://aka.ms/customneural) and [custom text to speech avatar](https://aka.ms/customneural) separately, and you will be charged separately for custom neural voice and custom text to speech avatar. For more details, see the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). Additionally, if you plan to use [custom neural voice](../custom-neural-voice.md) with a text to speech avatar, you need to deploy or [copy](../professional-voice-train-voice.md#copy-your-voice-model-to-another-project) your custom neural voice model to one of the [avatar supported regions](./what-is-text-to-speech-avatar.md#available-locations). ## Next steps |
ai-studio | Fine Tune Model Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/fine-tune-model-llama.md | The [Meta Llama family of large language models (LLMs)](./deploy-models-llama.md The following models are available in Azure Marketplace for Llama 3.1 when fine-tuning as a service with pay-as-you-go billing: -- `Meta-Llama-3.1-80B-Instruct` (preview)+- `Meta-Llama-3.1-70B-Instruct` (preview) - `Meta-LLama-3.1-8b-Instruct` (preview) Fine-tuning of Llama 3.1 models is currently supported in projects located in West US 3. Different model types require a different format of training data. # [Chat Completion](#tab/chatcompletion) -The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `Llama-3-80B-chat` the fine-tuning dataset must be formatted in the conversational format that is used by the Chat completions API. +The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `Meta-Llama-3.1-70B-Instruct` the fine-tuning dataset must be formatted in the conversational format that is used by the Chat completions API. ### Example file format To fine-tune a LLama 3.1 model: 1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **West US 3** region. 1. On the fine-tune wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.-1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Meta-Llama-3-70B) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**. +1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Meta-Llama-3.1-70B-Instruct) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**. > [!NOTE]- > Subscribing a project to a particular Azure Marketplace offering (in this case, Meta-Llama-3-70B) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites). + > Subscribing a project to a particular Azure Marketplace offering (in this case, Meta-Llama-3.1-70B-Instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites). 1. Once you sign up the project for the particular Azure Marketplace offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**. |
api-management | Api Management Howto Aad B2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md | |
api-management | Api Management Howto Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md | For an overview of options to secure the developer portal, see [Secure access to [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] - [!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)] |
api-management | Authentication Managed Identity Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-managed-identity-policy.md | Both system-assigned identity and any of the multiple user-assigned identities c ```xml <authentication-managed-identity resource="https://database.windows.net/"/> <!--Azure SQL--> ```+```xml +<authentication-managed-identity resource="https://signalr.azure.com"/> <!--Azure SignalR--> +``` ```xml <authentication-managed-identity resource="AD_application_id"/> <!--Application (client) ID of your own Azure AD Application--> |
app-service | App Service App Service Environment Control Inbound Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-control-inbound-traffic.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | App Service App Service Environment Create Ilb Ase Resourcemanager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-create-ilb-ase-resourcemanager.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | App Service App Service Environment Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-intro.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | App Service App Service Environment Layered Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-layered-security.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | App Service App Service Environment Network Architecture Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-architecture-overview.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | App Service App Service Environment Network Configuration Expressroute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-configuration-expressroute.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | App Service App Service Environment Securely Connecting To Backend Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-securely-connecting-to-backend-resources.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | App Service Environment Auto Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-environment-auto-scale.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | App Service Web Configure An App Service Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-configure-an-app-service-environment.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | App Service Web Scale A Web App In An App Service Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-scale-a-web-app-in-an-app-service-environment.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/certificates.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Create External Ase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-external-ase.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Create From Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-from-template.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Create Ilb Ase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Firewall Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/firewall-integration.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Forced Tunnel Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/forced-tunnel-support.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. 
Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | How To Create From Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-create-from-template.md | -> This article is about App Service Environment v3, which is used with isolated v2 App Service plans. +> This article is about App Service Environment v3, which is used with Isolated v2 App Service plans. ## Overview |
app-service | Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/intro.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. 
Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Management Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/management-addresses.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. 
Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Network Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/network-info.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. 
Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Overview Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview-certificates.md | -> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans +> This article is about the App Service Environment v3, which is used with Isolated v2 App Service plans. > The App Service Environment is a deployment of the Azure App Service that runs within your Azure virtual network. It can be deployed with an internet accessible application endpoint or an application endpoint that is in your virtual network. If you deploy the App Service Environment with an internet accessible endpoint, that deployment is called an External App Service Environment. If you deploy the App Service Environment with an endpoint in your virtual network, that deployment is called an ILB App Service Environment. You can learn more about the ILB App Service Environment from the [Create and use an ILB App Service Environment](./creation.md) document. |
app-service | Upgrade To Asev3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md | Last updated 6/12/2024 # Upgrade to App Service Environment v3 > [!IMPORTANT]-> If you're currently using App Service Environment v1 or v2, you must migrate your workloads to [App Service Environment v3](overview.md). [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Failure to migrate by that date will result in loss of the environments, running applications, and all application data. +> If you're currently using App Service Environment v1 or v2, you must migrate your workloads to [App Service Environment v3](overview.md). [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). After that date, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. >->At this time, the recommendation for all App Service Environment v1 and v2 users is to migrate to App Service Environment v3. If you'd like to explore the [public multi-tenant offering of App Service](../../app-service/overview.md), you can do so once your migration to App Service Environment v3 is complete. There are feature differences and functionality gaps between App Service Environment v3 and the public multi-tenant offering of App Service. Due to these differences, and with the retirement of App Service Environment v1 and v2 on 31 August 2024, we recommend migrating to App Service Environment v3. 
-> -> As of [29 January 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/), you can no longer create new App Service Environment v1 and v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > Use the following decision tree to determine which migration path is right for y :::image type="content" source="./media/migration/migration-path-decision-tree.png" alt-text="Screenshot of the decision tree for helping decide which App Service Environment upgrade option to use." lightbox="./media/migration/migration-path-decision-tree-expanded.png"::: +### Post-retirement date activities ++After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. 
Additionally, since these products will be retired, after the official retirement on 31 August 2024, Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production. ++You must complete migration to App Service Environment v3 as soon as possible or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. + ### Cost saving opportunities after upgrading to App Service Environment v3 The App Service plan SKUs available for App Service Environment v3 run on the Isolated v2 (Iv2) tier. The number of cores and amount of RAM are effectively doubled per corresponding tier compared to the Isolated tier. When you migrate, your App Service plans are converted to the corresponding tier. For example, your I2 instances are converted to I2v2. While I2 has two cores and 7-GB RAM, I2v2 has four cores and 16-GB RAM. If you expect your capacity requirements to stay the same, you're over-provisioned and paying for compute and memory you're not using. For this scenario, you can scale down your I2v2 instance to I1v2 and end up with a similar number of cores and amount of RAM to what you had previously. |
app-service | Using An Ase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using-an-ase.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. 
Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Version Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md | -> App Service Environment v1 and v2 [will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). After that date, those versions will no longer be supported and any remaining App Service Environment v1 and v2s and the applications running on them will be deleted. +> This article includes information about App Service Environment v1 and v2. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. 
>-> As of 29 January 2024, you can no longer create new App Service Environment v1 or v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Zone Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/zone-redundancy.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/v2/App-Service-Environment-v1v2-Retirement-Update). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. >-> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. +> After 31 August 2024, decommissioning of the App Service Environment v1 and v2 hardware will begin, and this may affect the availability and performance of your apps and data. 
Service Level Agreement (SLA) and Service Credits will no longer apply for App Service Environment v1 and v2 workloads that continue to be in production after 31 August 2024. +> +> You must complete migration to App Service Environment v3 before 31 August 2024 or your apps and resources may be deleted. We will attempt to auto-migrate any remaining App Service Environment v1 and v2 on a best-effort basis using the [in-place migration feature](migrate.md), but Microsoft makes no claim or guarantees about application availability after auto-migration. You may need to perform manual configuration to complete the migration and to optimize your App Service plan SKU choice to meet your needs. If auto-migration is not feasible, your resources and associated app data will be deleted. We strongly urge you to act now to avoid either of these extreme scenarios. > > For the most up-to-date information on the App Service Environment v1/v2 retirement, see the [App Service Environment v1 and v2 retirement update](https://github.com/Azure/app-service-announcements/issues/469). > |
app-service | Quickstart Dotnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md | If you already installed Visual Studio 2022: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). - <a href="https://www.visualstudio.com/downloads" target="_blank">Visual Studio Code</a>. - The <a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack" target="_blank">Azure Tools</a> extension.-- <a href="https://dotnet.microsoft.com/download/dotnet/7.0" target="_blank">The latest .NET 8.0 SDK.</a>+- <a href="https://dotnet.microsoft.com/download/dotnet/8.0" target="_blank">The latest .NET 8.0 SDK.</a> - **(Optional)** To try GitHub Copilot, a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor). A 30-day free trial is available. :::zone-end If you already installed Visual Studio 2022: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). - The <a href="/powershell/azure/install-az-ps" target="_blank">Azure PowerShell</a>.-- <a href="https://dotnet.microsoft.com/download/dotnet/7.0" target="_blank">The latest .NET 8.0 SDK.</a>+- <a href="https://dotnet.microsoft.com/download/dotnet/8.0" target="_blank">The latest .NET 8.0 SDK.</a> :::zone-end If you already installed Visual Studio 2022: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). - The [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd)-- [The latest .NET 8.0 SDK.](https://dotnet.microsoft.com/download/dotnet/7.0)+- [The latest .NET 8.0 SDK.](https://dotnet.microsoft.com/download/dotnet/8.0) :::zone-end |
app-service | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md | ms.devlang: python -# Quickstart: Deploy a Python (Django or Flask) web app to Azure App Service +# Quickstart: Deploy a Python (Django, Flask, or FastAPI) web app to Azure App Service [!INCLUDE [regionalization-note](./includes/regionalization-note.md)] |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md | Arc resource bridge communicates outbound securely to Azure Arc over TCP port 44 > [!NOTE] > The URLs listed here are required for Arc resource bridge only. Other Arc products (such as Arc-enabled VMware vSphere) may have additional required URLs. For details, see [Azure Arc network requirements](../network-requirements-consolidated.md#azure-arc-enabled-vmware-vsphere).--## Designated IPs used by Arc resource bridge --When Arc resource bridge is deployed, there are designated IPs used exclusively by the appliance VM for the Kubernetes pods and services. These IPs can only be used for Arc resource bridge and can't be used by any other service. If another service already uses an IP address within these ranges, please submit a support ticket. ---| Service|Designated Arc resource bridge IPs| -| -- | -- | -|Arc resource bridge Kubernetes pods |10.244.0.0/16 | -| Arc resource bridge Kubernetes services| 10.96.0.0/12 | +> ## SSL proxy configuration |
azure-cache-for-redis | Cache Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md | Select **Data persistence** to enable, disable, or configure data persistence fo For more information, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md). > [!IMPORTANT]-> Redis data persistence is only available for Premium caches. +> Redis data persistence is for Premium caches, Enterprise caches (Preview), and Enterprise Flash caches (Preview). ### Identity |
azure-maps | How To Dev Guide Py Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md | The Azure Maps Python SDK can be integrated with Python applications and librari - [Azure Maps account]. - [Subscription key] or other form of [Authentication with Azure Maps].-- Python on 3.7 or later. It's recommended to use the [latest release]. For more information, see [Azure SDK for Python version support policy].+- Python on 3.8 or later. It's recommended to use the [latest release]. For more information, see [Azure SDK for Python version support policy]. > [!TIP] > You can create an Azure Maps account programmatically, Here's an example using the Azure CLI: pip install azure-maps-search --pre ### Azure Maps services -Azure Maps Python SDK supports Python version 3.7 or later. For more information on future Python versions, see [Azure SDK for Python version support policy]. +Azure Maps Python SDK supports Python version 3.8 or later. For more information on future Python versions, see [Azure SDK for Python version support policy]. | Service name  | PyPi package  | Samples  | |-|-|--| maps_search_client = MapsSearchClient( ) ``` -## Fuzzy Search an Entity +## Geocode an address -The following code snippet demonstrates how, in a simple console application, to import the `Azure.Maps.Search` package and perform a fuzzy search on “Starbucks” near Seattle. This example uses subscription key credentials to authenticate MapsSearchClient. In `demo.py`: +The following code snippet demonstrates how, in a simple console application, to obtain longitude and latitude coordinates for a given address. This example uses subscription key credentials to authenticate MapsSearchClient. 
In `demo.py`: ```Python-import os -from azure.core.credentials import AzureKeyCredential -from azure.maps.search import MapsSearchClient +import os -def fuzzy_search(): -    # Use Azure Maps subscription key authentication - subscription_key = os.getenv("SUBSCRIPTION_KEY") -    maps_search_client = MapsSearchClient( -        credential=AzureKeyCredential(subscription_key) -    ) -    result = maps_search_client.fuzzy_search( -        query="Starbucks", -        coordinates=(47.61010, -122.34255) -    ) - -    # Print the search results - if len(result.results) > 0: - print("Starbucks search result nearby Seattle:") - for result_item in result.results: - print(f"* {result_item.address.street_number } {result_item.address.street_name }") - print(f" {result_item.address.municipality } {result_item.address.country_code } {result_item.address.postal_code }") - print(f" Coordinate: {result_item.position.lat}, {result_item.position.lon}") --if __name__ == '__main__': -    fuzzy_search() -``` +from azure.core.exceptions import HttpResponseError -This sample code instantiates `AzureKeyCredential` with the Azure Maps subscription key, then uses it to instantiate the `MapsSearchClient` object. The methods provided by `MapsSearchClient` forward the request to the Azure Maps REST endpoints. In the end, the program iterates through the results and prints the address and coordinates for each result. 
+subscription_key = os.getenv("AZURE_SUBSCRIPTION_KEY", "your subscription key") -After finishing the program, run `python demo.py` from the project folder in PowerShell: +def geocode(): + from azure.core.credentials import AzureKeyCredential + from azure.maps.search import MapsSearchClient -```powershell -python demo.py -``` + maps_search_client = MapsSearchClient(credential=AzureKeyCredential(subscription_key)) + try: + result = maps_search_client.get_geocoding(query="15127 NE 24th Street, Redmond, WA 98052") + if result.get('features', False): + coordinates = result['features'][0]['geometry']['coordinates'] + longitude = coordinates[0] + latitude = coordinates[1] ++ print(longitude, latitude) + else: + print("No results") -You should see a list of Starbucks address and coordinate results: --```text -* 1912 Pike Place - Seattle US 98101 - Coordinate: 47.61016, -122.34248 -* 2118 Westlake Avenue - Seattle US 98121 - Coordinate: 47.61731, -122.33782 -* 2601 Elliott Avenue - Seattle US 98121 - Coordinate: 47.61426, -122.35261 -* 1730 Howell Street - Seattle US 98101 - Coordinate: 47.61716, -122.3298 -* 220 1st Avenue South - Seattle US 98104 - Coordinate: 47.60027, -122.3338 -* 400 Occidental Avenue South - Seattle US 98104 - Coordinate: 47.5991, -122.33278 -* 1600 East Olive Way - Seattle US 98102 - Coordinate: 47.61948, -122.32505 -* 500 Mercer Street - Seattle US 98109 - Coordinate: 47.62501, -122.34687 -* 505 5Th Ave S - Seattle US 98104 - Coordinate: 47.59768, -122.32849 -* 425 Queen Anne Avenue North - Seattle US 98109 - Coordinate: 47.62301, -122.3571 + except HttpResponseError as exception: + if exception.error is not None: + print(f"Error Code: {exception.error.code}") + print(f"Message: {exception.error.message}") ++if __name__ == '__main__': + geocode() ``` -## Search an Address +This sample code instantiates `AzureKeyCredential` with the Azure Maps subscription key, then uses it to instantiate the `MapsSearchClient` object. 
The methods provided by `MapsSearchClient` forward the request to the Azure Maps REST endpoints. In the end, the program iterates through the results and prints the coordinates for each result. + -Call the `SearchAddress` method to get the coordinate of an address. Modify the Main program from the sample as follows: +## Batch geocode addresses ++This sample demonstrates how to perform batch search address: ```Python import os-from azure.core.credentials import AzureKeyCredential -from azure.maps.search import MapsSearchClient -def search_address(): - subscription_key = os.getenv("SUBSCRIPTION_KEY") +from azure.core.exceptions import HttpResponseError -    maps_search_client = MapsSearchClient( - credential=AzureKeyCredential(subscription_key) - ) +subscription_key = os.getenv("AZURE_SUBSCRIPTION_KEY", "your subscription key") -  result = maps_search_client.search_address( - query="1301 Alaskan Way, Seattle, WA 98101, US" - ) - - # Print reuslts if any - if len(result.results) > 0: -    print(f"Coordinate: {result.results[0].position.lat}, {result.results[0].position.lon}") - else: - print("No address found") +def geocode_batch(): + from azure.core.credentials import AzureKeyCredential + from azure.maps.search import MapsSearchClient -if __name__ == '__main__': -    search_address() -``` + maps_search_client = MapsSearchClient(credential=AzureKeyCredential(subscription_key)) + try: + result = maps_search_client.get_geocoding_batch({ + "batchItems": [ + {"query": "400 Broad St, Seattle, WA 98109"}, + {"query": "15127 NE 24th Street, Redmond, WA 98052"}, + ], + },) -The `SearchAddress` method returns results ordered by confidence score and prints the coordinates of the first result. 
+ if not result.get('batchItems', False): + print("No batchItems in geocoding") + return -## Batch reverse search + for item in result['batchItems']: + if not item.get('features', False): + print(f"No features in item: {item}") + continue -Azure Maps Search also provides some batch query methods. These methods return long-running operations (LRO) objects. The requests might not return all the results immediately, so users can choose to wait until completion or query the result periodically. The following examples demonstrate how to call the batched reverse search method. + coordinates = item['features'][0]['geometry']['coordinates'] + longitude, latitude = coordinates + print(longitude, latitude) -Since these return LRO objects, you need the `asyncio` method included in the `aiohttp` package: + except HttpResponseError as exception: + if exception.error is not None: + print(f"Error Code: {exception.error.code}") + print(f"Message: {exception.error.message}") -```powershell -pip install aiohttp +if __name__ == '__main__': + geocode_batch() ``` -```Python -import asyncio ++## Make a Reverse Address Search to translate coordinate location to street address ++You can translate coordinates into human-readable street addresses. This process is also called reverse geocoding. This is often used for applications that consume GPS feeds and want to discover addresses at specific coordinate points. 
++```python import os-from azure.core.credentials import AzureKeyCredential -from azure.maps.search.aio import MapsSearchClient -async def begin_reverse_search_address_batch(): - subscription_key = os.getenv("SUBSCRIPTION_KEY") +from azure.core.exceptions import HttpResponseError -    maps_search_client = MapsSearchClient(AzureKeyCredential(subscription_key)) +subscription_key = os.getenv("AZURE_SUBSCRIPTION_KEY", "your subscription key") ++def reverse_geocode(): + from azure.core.credentials import AzureKeyCredential + from azure.maps.search import MapsSearchClient ++ maps_search_client = MapsSearchClient(credential=AzureKeyCredential(subscription_key)) + try: + result = maps_search_client.get_reverse_geocoding(coordinates=[-122.138679, 47.630356]) + if result.get('features', False): + props = result['features'][0].get('properties', {}) + if props and props.get('address', False): + print(props['address'].get('formattedAddress', 'No formatted address found')) + else: + print("Address is None") + else: + print("No features available") + except HttpResponseError as exception: + if exception.error is not None: + print(f"Error Code: {exception.error.code}") + print(f"Message: {exception.error.message}") -    async with maps_search_client: -        result = await maps_search_client.begin_reverse_search_address_batch( -            search_queries = [ -                "148.858561,2.294911", -                "47.639765,-122.127896&radius=5000", -                "47.61559,-122.33817&radius=5000", -            ] -        ) -    print(f"Batch_id: {result.batch_id}") if __name__ == '__main__':- # Special handle for Windows platform - if os.name == 'nt': -    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) -    asyncio.run(begin_reverse_search_address_batch()) + reverse_geocode() ``` -In the above example, three queries are passed to the batched reverse search request. 
To get the LRO results, the request creates a batch request with a batch ID as result that can be used to fetch batch response later. The LRO results are cached on the server side for 14 days. -The following example demonstrates the process of calling the batch ID and retrieving the operation results of the batch request: +## Batch request for reverse geocoding ++This sample demonstrates how to perform reverse search by given coordinates in batch. ```python-import asyncio import os from azure.core.credentials import AzureKeyCredential-from azure.maps.search.aio import MapsSearchClient --async def begin_reverse_search_address_batch(): - subscription_key = os.getenv("SUBSCRIPTION_KEY") --    maps_search_client = MapsSearchClient(credential=AzureKeyCredential(subscription_key)) --    async with maps_search_client: -        result = await maps_search_client.begin_reverse_search_address_batch( -            search_queries = [ -                "148.858561,2.294911", -                "47.639765,-122.127896&radius=5000", -                "47.61559,-122.33817&radius=5000", -            ] -        ) -    return result --async def begin_reverse_search_address_batch_with_id(batch_id): -    subscription_key = os.getenv("SUBSCRIPTION_KEY") -    maps_search_client = MapsSearchClient(credential=AzureKeyCredential(subscription_key)) -    async with maps_search_client: -        result = await maps_search_client.begin_reverse_search_address_batch( -            batch_id=batch_id, -        ) --    responses = result._polling_method._initial_response.context.get('deserialized_data') -    summary = responses['summary'] -- # Print Batch results -    idx = 1 - print(f"Total Batch Requests: {summary['totalRequests']}, Total Successful Results: {summary['successfulRequests']}") - for items in responses.get('batchItems'): - if items['statusCode'] == 200: - print(f"Request {idx} result:") - for address in items['response']['addresses']: - print(f" {address['address']['freeformAddress']}") +from 
azure.core.exceptions import HttpResponseError +from azure.maps.search import MapsSearchClient ++subscription_key = os.getenv("AZURE_SUBSCRIPTION_KEY", "your subscription key") ++def reverse_geocode_batch(): + maps_search_client = MapsSearchClient(credential=AzureKeyCredential(subscription_key)) + try: + result = maps_search_client.get_reverse_geocoding_batch({ + "batchItems": [ + {"coordinates": [-122.349309, 47.620498]}, + {"coordinates": [-122.138679, 47.630356]}, + ], + },) ++ if result.get('batchItems', False): + for idx, item in enumerate(result['batchItems']): + features = item['features'] + if features: + props = features[0].get('properties', {}) + if props and props.get('address', False): + print( + props['address'].get('formattedAddress', f'No formatted address for item {idx + 1} found')) + else: + print(f"Address {idx + 1} is None") + else: + print(f"No features available for item {idx + 1}") else:- print(f"Error in request {idx}: {items['response']['error']['message']}") - idx += 1 --async def main(): -    result = await begin_reverse_search_address_batch() -    await begin_reverse_search_address_batch_with_id(result.batch_id) --if __name__ == '__main__': - # Special handle for Windows platform - if os.name == 'nt': - asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) -    asyncio.run(main()) + print("No batch items found") + except HttpResponseError as exception: + if exception.error is not None: + print(f"Error Code: {exception.error.code}") + print(f"Message: {exception.error.message}") +++if __name__ == '__main__': + reverse_geocode_batch() +``` +++## Get polygons for a given location ++This sample demonstrates how to search polygons. 
++```python +import os ++from azure.core.exceptions import HttpResponseError +from azure.maps.search import Resolution +from azure.maps.search import BoundaryResultType +++subscription_key = os.getenv("AZURE_SUBSCRIPTION_KEY", "your subscription key") ++def get_polygon(): + from azure.core.credentials import AzureKeyCredential + from azure.maps.search import MapsSearchClient ++ maps_search_client = MapsSearchClient(credential=AzureKeyCredential(subscription_key)) + try: + result = maps_search_client.get_polygon( + coordinates=[-122.204141, 47.61256], + result_type=BoundaryResultType.LOCALITY, + resolution=Resolution.SMALL, + ) ++ if not result.get('geometry', False): + print("No geometry found") + return ++ print(result["geometry"]) + except HttpResponseError as exception: + if exception.error is not None: + print(f"Error Code: {exception.error.code}") + print(f"Message: {exception.error.message}") ++if __name__ == '__main__': + get_polygon() ``` ++## Using V1 SDKs for Search and Render ++To use Search V1 and Render V1 SDK, please refer to Search V1 SDK [package](https://pypi.org/project/azure-maps-search/1.0.0b2/) page and Render V1 SDK [package](https://pypi.org/project/azure-maps-render/1.0.0b2/) for more information. ++ ## Additional information The [Azure Maps Search package client library] in the *Azure SDK for Python Preview* documentation. |
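The geocoding samples above all index into GeoJSON-style responses, where a position is ordered `[longitude, latitude]`, the reverse of the everyday "lat, lon" convention. A standalone sketch of that extraction step, run against a hypothetical response dict shaped like the samples (no Azure SDK required):

```python
def first_coordinates(result: dict):
    """Return (longitude, latitude) of the first feature, or None if empty."""
    features = result.get("features") or []
    if not features:
        return None
    # GeoJSON positions are [longitude, latitude]
    lon, lat = features[0]["geometry"]["coordinates"][:2]
    return lon, lat

# Hypothetical response shaped like the geocoding samples above
response = {
    "features": [
        {"geometry": {"type": "Point", "coordinates": [-122.138679, 47.630356]}}
    ]
}
print(first_coordinates(response))  # (-122.138679, 47.630356)
```

Guarding on an empty `features` list, as the samples do with `result.get('features', False)`, avoids an `IndexError` when a query returns no matches.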
azure-netapp-files | Azacsnap Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md | -Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer and review how to [get started](azacsnap-get-started.md). +Download the latest release of the binary for [Linux](https://aka.ms/azacsnap-linux) or [Windows](https://aka.ms/azacsnap-windows) and review how to [get started](azacsnap-get-started.md). For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page. |
azure-relay | Relay Hybrid Connections Dotnet Api Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-dotnet-api-overview.md | Last updated 08/10/2023 # Azure Relay Hybrid Connections .NET Standard API overview This article summarizes some of the key Azure Relay Hybrid Connections .NET Standard [client APIs](/dotnet/api/microsoft.azure.relay).++> [!NOTE] +> The sample code in this article uses a connection string to authenticate to an Azure Relay namespace. We recommend that you use Microsoft Entra ID authentication in production environments, rather than using connection strings or shared access signatures, which can be more easily compromised. For detailed information and sample code for using the Microsoft Entra ID authentication, see [Authenticate and authorize an application with Microsoft Entra ID to access Azure Relay entities](authenticate-application.md) and [Authenticate a managed identity with Microsoft Entra ID to access Azure Relay resources](authenticate-managed-identity.md). ## Relay Connection String Builder class |
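For context on the connection strings the note above cautions against: an Azure Relay connection string is a semicolon-delimited list of `Key=Value` pairs (`Endpoint`, `SharedAccessKeyName`, `SharedAccessKey`, and optionally `EntityPath`), which the .NET `RelayConnectionStringBuilder` class parses for you. A language-neutral sketch of that parsing in Python, with placeholder values rather than real credentials:

```python
def parse_connection_string(conn_str: str) -> dict:
    """Split a 'Key=Value;Key=Value' connection string into a dict."""
    parts = (p for p in conn_str.strip().split(";") if p)
    # Split on the first '=' only, since values may themselves contain '='
    return dict(p.split("=", 1) for p in parts)

# Placeholder values; a real SharedAccessKey is a base64-encoded secret
example = (
    "Endpoint=sb://contoso.servicebus.windows.net/;"
    "SharedAccessKeyName=RootManageSharedAccessKey;"
    "SharedAccessKey=REDACTED"
)
props = parse_connection_string(example)
print(props["Endpoint"])  # sb://contoso.servicebus.windows.net/
```

Because every consumer of such a string holds the full shared key, the note's recommendation to prefer Microsoft Entra ID authentication in production applies regardless of language.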
azure-resource-manager | Bicep Core Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-core-diagnostics.md | If you need more information about a particular diagnostic code, select the **Fe | Code | Level | Description | ||-|-_--|-| BCP001 | Error | The following token is not recognized: "{token}". | -| BCP002 | Error | The multi-line comment at this location is not terminated. Terminate it with the */ character sequence. | -| BCP003 | Error | The string at this location is not terminated. Terminate the string with a single quote character. | -| BCP004 | Error | The string at this location is not terminated due to an unexpected new line character. | -| BCP005 | Error | The string at this location is not terminated. Complete the escape sequence and terminate the string with a single unescaped quote character. | -| BCP006 | Error | The specified escape sequence is not recognized. Only the following escape sequences are allowed: {ToQuotedString(escapeSequences)}. | -| BCP007 | Error | This declaration type is not recognized. Specify a metadata, parameter, variable, resource, or output declaration. | +| BCP001 | Error | The following token isn't recognized: "{token}". | +| BCP002 | Error | The multi-line comment at this location isn't terminated. Terminate it with the */ character sequence. | +| BCP003 | Error | The string at this location isn't terminated. Terminate the string with a single quote character. | +| BCP004 | Error | The string at this location isn't terminated due to an unexpected new line character. | +| BCP005 | Error | The string at this location isn't terminated. Complete the escape sequence and terminate the string with a single unescaped quote character. | +| BCP006 | Error | The specified escape sequence isn't recognized. Only the following escape sequences are allowed: {ToQuotedString(escapeSequences)}. | +| BCP007 | Error | This declaration type isn't recognized. 
Specify a metadata, parameter, variable, resource, or output declaration. | | BCP008 | Error | Expected the "=" token, or a newline at this location. | | BCP009 | Error | Expected a literal value, an array, an object, a parenthesized expression, or a function call at this location. | | BCP010 | Error | Expected a valid 64-bit signed integer. | If you need more information about a particular diagnostic code, select the **Fe | BCP015 | Error | Expected a variable identifier at this location. | | BCP016 | Error | Expected an output identifier at this location. | | BCP017 | Error | Expected a resource identifier at this location. |-| BCP018 | Error | Expected the "{character}" character at this location. | +| <a id='BCP018' />[BCP018](./diagnostics/bcp018.md) | Error | Expected the \<character> character at this location. | | BCP019 | Error | Expected a new line character at this location. | | BCP020 | Error | Expected a function or property name at this location. | | BCP021 | Error | Expected a numeric literal at this location. | If you need more information about a particular diagnostic code, select the **Fe | BCP025 | Error | The property "{property}" is declared multiple times in this object. Remove or rename the duplicate properties. | | BCP026 | Error | The output expects a value of type "{expectedType}" but the provided value is of type "{actualType}". | | BCP028 | Error | Identifier "{identifier}" is declared multiple times. Remove or rename the duplicates. |-| BCP029 | Error | The resource type is not valid. Specify a valid resource type of format "\<types>@\<apiVersion>". | -| BCP030 | Error | The output type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. | -| BCP031 | Error | The parameter type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. | +| BCP029 | Error | The resource type isn't valid. Specify a valid resource type of format "\<types>@\<apiVersion>". 
| +| BCP030 | Error | The output type isn't valid. Specify one of the following types: {ToQuotedString(validTypes)}. | +| BCP031 | Error | The parameter type isn't valid. Specify one of the following types: {ToQuotedString(validTypes)}. | | BCP032 | Error | The value must be a compile-time constant. | | <a id='BCP033' />[BCP033](./diagnostics/bcp033.md) | Error/Warning | Expected a value of type \<data-type> but the provided value is of type \<data-type>. | | BCP034 | Error/Warning | The enclosing array expected an item of type "{expectedType}", but the provided item was of type "{actualType}". | | <a id='BCP035' />[BCP035](./diagnostics/bcp035.md) | Error/Warning | The specified \<data-type> declaration is missing the following required properties: \<property-name>. | | <a id='BCP036' />[BCP036](./diagnostics/bcp036.md) | Error/Warning | The property \<property-name> expected a value of type \<data-type> but the provided value is of type \<data-type>. |-| <a id='BCP037' />[BCP037](./diagnostics/bcp037.md) | Error/Warning | The property \<property-name> is not allowed on objects of type \<type-definition>. | -| <a id='BCP040' />[BCP040](./diagnostics/bcp040.md) | Error/Warning | String interpolation is not supported for keys on objects of type \<type-definition>. | -| BCP041 | Error | Values of type "{valueType}" cannot be assigned to a variable. | -| BCP043 | Error | This is not a valid expression. | -| BCP044 | Error | Cannot apply operator "{operatorName}" to operand of type "{type}". | -| BCP045 | Error | Cannot apply operator "{operatorName}" to operands of type "{type1}" and "{type2}".{(additionalInfo is null ? string.Empty : " " + additionalInfo)} | +| <a id='BCP037' />[BCP037](./diagnostics/bcp037.md) | Error/Warning | The property \<property-name> isn't allowed on objects of type \<type-definition>. | +| <a id='BCP040' />[BCP040](./diagnostics/bcp040.md) | Error/Warning | String interpolation isn't supported for keys on objects of type \<type-definition>. 
| +| BCP041 | Error | Values of type "{valueType}" can't be assigned to a variable. | +| BCP043 | Error | This isn't a valid expression. | +| BCP044 | Error | Can't apply operator "{operatorName}" to operand of type "{type}". | +| BCP045 | Error | Can't apply operator "{operatorName}" to operands of type "{type1}" and "{type2}".{(additionalInfo is null? string.Empty : " " + additionalInfo)} | | BCP046 | Error | Expected a value of type "{type}". | | BCP047 | Error | String interpolation is unsupported for specifying the resource type. |-| BCP048 | Error | Cannot resolve function overload. For details, see the documentation. | +| BCP048 | Error | Can't resolve function overload. For details, see the documentation. | | BCP049 | Error | The array index must be of type "{LanguageConstants.String}" or "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". | | BCP050 | Error | The specified path is empty. | | BCP051 | Error | The specified path begins with "/". Files must be referenced using relative paths. |-| <a id='BCP052' />[BCP052](./diagnostics/bcp052.md) | Error/Warning | The type \<type-name> does not contain property \<property-name>. | -| <a id='BCP053' />[BCP053](./diagnostics/bcp053.md) | Error/Warning | The type \<type-name> does not contain property \<property-name>. Available properties include \<property-names>. | -| BCP054 | Error | The type "{type}" does not contain any properties. | -| BCP055 | Error | Cannot access properties of type "{wrongType}". An "{LanguageConstants.Object}" type is required. | +| <a id='BCP052' />[BCP052](./diagnostics/bcp052.md) | Error/Warning | The type \<type-name> doesn't contain property \<property-name>. | +| <a id='BCP053' />[BCP053](./diagnostics/bcp053.md) | Error/Warning | The type \<type-name> doesn't contain property \<property-name>. Available properties include \<property-names>. | +| BCP054 | Error | The type "{type}" doesn't contain any properties. 
| +| <a id='BCP055' />[BCP055](./diagnostics/bcp055.md) | Error | Can't access properties of type "{wrongType}". An "{LanguageConstants.Object}" type is required. | | BCP056 | Error | The reference to name "{name}" is ambiguous because it exists in namespaces {ToQuotedString(namespaces)}. The reference must be fully qualified. |-| BCP057 | Error | The name "{name}" does not exist in the current context. | -| BCP059 | Error | The name "{name}" is not a function. | -| BCP060 | Error | The "variables" function is not supported. Directly reference variables by their symbolic names. | -| BCP061 | Error | The "parameters" function is not supported. Directly reference parameters by their symbolic names. | -| BCP062 | Error | The referenced declaration with name "{name}" is not valid. | -| BCP063 | Error | The name "{name}" is not a parameter, variable, resource or module. | +| <a id='BCP057' />[BCP057](./diagnostics/bcp057.md) | Error | The name \<name> doesn't exist in the current context. | +| BCP059 | Error | The name "{name}" isn't a function. | +| BCP060 | Error | The "variables" function isn't supported. Directly reference variables by their symbolic names. | +| BCP061 | Error | The "parameters" function isn't supported. Directly reference parameters by their symbolic names. | +| <a id='BCP062' />[BCP062](./diagnostics/bcp062.md) | Error | The referenced declaration with name \<type-name> isn't valid. | +| BCP063 | Error | The name "{name}" isn't a parameter, variable, resource, or module. | | BCP064 | Error | Found unexpected tokens in interpolated expression. |-| BCP065 | Error | Function "{functionName}" is not valid at this location. It can only be used as a parameter default value. | -| BCP066 | Error | Function "{functionName}" is not valid at this location. It can only be used in resource declarations. | -| BCP067 | Error | Cannot call functions on type "{wrongType}". An "{LanguageConstants.Object}" type is required. 
| +| BCP065 | Error | Function "{functionName}" isn't valid at this location. It can only be used as a parameter default value. | +| BCP066 | Error | Function "{functionName}" isn't valid at this location. It can only be used in resource declarations. | +| BCP067 | Error | Can't call functions on type "{wrongType}". An "{LanguageConstants.Object}" type is required. | | BCP068 | Error | Expected a resource type string. Specify a valid resource type of format "\<types>@\<apiVersion>". |-| BCP069 | Error | The function "{function}" is not supported. Use the "{@operator}" operator instead. | -| BCP070 | Error | Argument of type "{argumentType}" is not assignable to parameter of type "{parameterType}". | +| BCP069 | Error | The function "{function}" isn't supported. Use the "{@operator}" operator instead. | +| BCP070 | Error | Argument of type "{argumentType}" isn't assignable to parameter of type "{parameterType}". | | BCP071 | Error | Expected {expected}, but got {argumentCount}. |-| <a id='BCP072' />[BCP072](./diagnostics/bcp072.md) | Error | This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values. | -| <a id='BCP073' />[BCP073](./diagnostics/bcp073.md) | Error/Warning | The property \<property-name> is read-only. Expressions cannot be assigned to read-only properties. | +| <a id='BCP072' />[BCP072](./diagnostics/bcp072.md) | Error | This symbol can't be referenced here. Only other parameters can be referenced in parameter default values. | +| <a id='BCP073' />[BCP073](./diagnostics/bcp073.md) | Error/Warning | The property \<property-name> is read-only. Expressions can't be assigned to read-only properties. | | BCP074 | Error | Indexing over arrays requires an index of type "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". | | BCP075 | Error | Indexing over objects requires an index of type "{LanguageConstants.String}" but the provided index was of type "{wrongType}". 
|
| BCP076 | Error | Can't index over expression of type "{wrongType}". Arrays or objects are required. |
| <a id='BCP077' />[BCP077](./diagnostics/bcp077.md) | Error/Warning | The property \<property-name> on type \<type-name> is write-only. Write-only properties can't be accessed. |
| <a id='BCP078' />[BCP078](./diagnostics/bcp078.md) | Error/Warning | The property \<property-name> requires a value of type \<type-name>, but none was supplied. |
| BCP079 | Error | This expression is referencing its own declaration, which isn't allowed. |
| BCP080 | Error | The expression is involved in a cycle ("{string.Join("\" -> \"", cycle)}"). |
| BCP081 | Warning | Resource type "{resourceTypeReference.FormatName()}" doesn't have types available. Bicep is unable to validate resource properties prior to deployment, but this won't block the resource from being deployed. |
| BCP082 | Error | The name "{name}" doesn't exist in the current context. Did you mean "{suggestedName}"? |
| <a id='BCP083' />[BCP083](./diagnostics/bcp083.md) | Error/Warning | The type \<type-definition> doesn't contain property \<property-name>. Did you mean \<property-name>? |
| BCP084 | Error | The symbolic name "{name}" is reserved. Use a different symbolic name. Reserved namespaces are {ToQuotedString(namespaces.OrderBy(ns => ns))}. |
| BCP085 | Error | The specified file path contains one or more invalid path characters. The following aren't permitted: {ToQuotedString(forbiddenChars.OrderBy(x => x).Select(x => x.ToString()))}. |
| BCP086 | Error | The specified file path ends with an invalid character. The following aren't permitted: {ToQuotedString(forbiddenPathTerminatorChars.OrderBy(x => x).Select(x => x.ToString()))}. |
| BCP087 | Error | Array and object literals aren't allowed here. |
| <a id='BCP088' />[BCP088](./diagnostics/bcp088.md) | Error/Warning | The property \<property-name> expected a value of type \<type-name> but the provided value is of type \<type-name>. Did you mean \<type-name>? |
| <a id='BCP089' />[BCP089](./diagnostics/bcp089.md) | Error/Warning | The property \<property-name> isn't allowed on objects of type \<resource-type>. Did you mean \<property-name>? |
| BCP090 | Error | This module declaration is missing a file path reference. |
| BCP091 | Error | An error occurred reading file. {failureMessage} |
| BCP092 | Error | String interpolation isn't supported in file paths. |
| BCP093 | Error | File path "{filePath}" couldn't be resolved relative to "{parentPath}". |
| BCP094 | Error | This module references itself, which isn't allowed. |
| BCP095 | Error | The file is involved in a cycle ("{string.Join("\" -> \"", cycle)}"). |
| BCP096 | Error | Expected a module identifier at this location. |
| BCP097 | Error | Expected a module path string. This should be a relative path to another bicep file, e.g. 'myModule.bicep' or '../parent/myModule.bicep' |
| BCP098 | Error | The specified file path contains a "\" character. Use "/" instead as the directory separator character. |
| BCP099 | Error | The "{LanguageConstants.ParameterAllowedPropertyName}" array must contain one or more items. |
| BCP100 | Error | The function "if" isn't supported. Use the "?:\" (ternary conditional) operator instead, e.g. condition ? ValueIfTrue : ValueIfFalse |
| BCP101 | Error | The "createArray" function isn't supported. Construct an array literal using []. |
| BCP102 | Error | The "createObject" function isn't supported. Construct an object literal using {}. |
| BCP103 | Error | The following token isn't recognized: "{token}". Strings are defined using single quotes in bicep. |
| BCP104 | Error | The referenced module has errors. |
| BCP105 | Error | Unable to load file from URI "{fileUri}". |
| BCP106 | Error | Expected a new line character at this location. Commas aren't used as separator delimiters. |
| BCP107 | Error | The function "{name}" doesn't exist in namespace "{namespaceType.Name}". |
| BCP108 | Error | The function "{name}" doesn't exist in namespace "{namespaceType.Name}". Did you mean "{suggestedName}"? |
| BCP109 | Error | The type "{type}" doesn't contain function "{name}". |
| BCP110 | Error | The type "{type}" doesn't contain function "{name}". Did you mean "{suggestedName}"? |
| BCP111 | Error | The specified file path contains invalid control code characters. |
| BCP112 | Error | The "{LanguageConstants.TargetScopeKeyword}" can't be declared multiple times in one file. |
| BCP113 | Warning | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeTenant}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include tenant: tenant(), named management group: managementGroup(\<name>), named subscription: subscription(\<subId>), or named resource group in a named subscription: resourceGroup(\<subId>, \<name>). |
| BCP114 | Warning | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeManagementGroup}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current management group: managementGroup(), named management group: managementGroup(\<name>), named subscription: subscription(\<subId>), tenant: tenant(), or named resource group in a named subscription: resourceGroup(\<subId>, \<name>). |
| BCP115 | Warning | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeSubscription}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current subscription: subscription(), named subscription: subscription(\<subId>), named resource group in same subscription: resourceGroup(\<name>), named resource group in different subscription: resourceGroup(\<subId>, \<name>), or tenant: tenant(). |
| BCP116 | Warning | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeResourceGroup}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current resource group: resourceGroup(), named resource group in same subscription: resourceGroup(\<name>), named resource group in a different subscription: resourceGroup(\<subId>, \<name>), current subscription: subscription(), named subscription: subscription(\<subId>) or tenant: tenant(). |
| BCP117 | Error | An empty indexer isn't allowed. Specify a valid expression. |
| BCP118 | Error | Expected the "{" character, the "[" character, or the "if" keyword at this location. |
| BCP119 | Warning | Unsupported scope for extension resource deployment. Expected a resource reference. |
| BCP120 | Error | This expression is being used in an assignment to the "{propertyName}" property of the "{objectTypeName}" type, which requires a value that can be calculated at the start of the deployment. |
| BCP122 | Error | Modules: {ToQuotedString(moduleNames)} are defined with this same name and this same scope in a file. Rename them or split into different modules. |
| BCP123 | Error | Expected a namespace or decorator name at this location. |
| BCP124 | Error | The decorator "{decoratorName}" can only be attached to targets of type "{attachableType}", but the target has type "{targetType}". |
| BCP125 | Error | Function "{functionName}" can't be used as a parameter decorator. |
| BCP126 | Error | Function "{functionName}" can't be used as a variable decorator. |
| BCP127 | Error | Function "{functionName}" can't be used as a resource decorator. |
| BCP128 | Error | Function "{functionName}" can't be used as a module decorator. |
| BCP129 | Error | Function "{functionName}" can't be used as an output decorator. |
| BCP130 | Error | Decorators aren't allowed here. |
| BCP132 | Error | Expected a declaration after the decorator. |
| BCP133 | Error | The unicode escape sequence isn't valid. Valid unicode escape sequences range from \\u{0} to \\u{10FFFF}. |
| BCP134 | Warning | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} isn't valid for this module. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. |
| BCP135 | Warning | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} isn't valid for this resource type. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. |
| BCP136 | Error | Expected a loop item variable identifier at this location. |
| BCP137 | Error | Loop expected an expression of type "{LanguageConstants.Array}" but the provided value is of type "{actualType}". |
| BCP138 | Error | For-expressions aren't supported in this context. For-expressions may be used as values of resource, module, variable, and output declarations, or values of resource and module properties. |
| BCP139 | Warning | A resource's scope must match the scope of the Bicep file for it to be deployable. You must use modules to deploy resources to a different scope. |
| BCP140 | Error | The multi-line string at this location isn't terminated. Terminate it with "'''. |
| BCP141 | Error | The expression can't be used as a decorator as it isn't callable. |
| BCP142 | Error | Property value for-expressions can't be nested. |
| BCP143 | Error | For-expressions can't be used with properties whose names are also expressions. |
| BCP144 | Error | Directly referencing a resource or module collection isn't currently supported here. Apply an array indexer to the expression. |
| BCP145 | Error | Output "{identifier}" is declared multiple times. Remove or rename the duplicates. |
| BCP147 | Error | Expected a parameter declaration after the decorator. |
| BCP148 | Error | Expected a variable declaration after the decorator. |
| BCP149 | Error | Expected a resource declaration after the decorator. |
| BCP150 | Error | Expected a module declaration after the decorator. |
| BCP151 | Error | Expected an output declaration after the decorator. |
| BCP152 | Error | Function "{functionName}" can't be used as a decorator. |
| BCP153 | Error | Expected a resource or module declaration after the decorator. |
| BCP154 | Error | Expected a batch size of at least {limit} but the specified value was "{value}". |
| BCP155 | Error | The decorator "{decoratorName}" can only be attached to resource or module collections. |
| BCP156 | Error | The resource type segment "{typeSegment}" is invalid. Nested resources must specify a single type segment, and optionally can specify an API version using the format "\<type>@\<apiVersion>". |
| BCP157 | Error | The resource type can't be determined due to an error in the containing resource. |
| BCP158 | Error | Can't access nested resources of type "{wrongType}". A resource type is required. |
| BCP159 | Error | The resource "{resourceName}" doesn't contain a nested resource named "{identifierName}". Known nested resources are: {ToQuotedString(nestedResourceNames)}. |
| BCP160 | Error | A nested resource can't appear inside of a resource with a for-expression. |
| BCP162 | Error | Expected a loop item variable identifier or "(" at this location. |
| BCP164 | Error | A child resource's scope is computed based on the scope of its ancestor resource. This means that using the "scope" property on a child resource is unsupported. |
| BCP165 | Error | A resource's computed scope must match that of the Bicep file for it to be deployable. This resource's scope is computed from the "scope" property value assigned to ancestor resource "{ancestorIdentifier}". You must use modules to deploy resources to a different scope. |
| BCP168 | Error | Length must not be a negative value. |
| BCP169 | Error | Expected resource name to contain {expectedSlashCount} "/" character(s). The number of name segments must match the number of segments in the resource type. |
| BCP170 | Error | Expected resource name to not contain any "/" characters. Child resources with a parent resource reference (via the parent property or via nesting) must not contain a fully-qualified name. |
| BCP171 | Error | Resource type "{resourceType}" isn't a valid child resource of parent "{parentResourceType}". |
| BCP172 | Error | The resource type can't be validated due to an error in parent resource "{resourceName}". |
| BCP173 | Error | The property "{property}" can't be used in an existing resource declaration. |
| BCP174 | Warning | Type validation isn't available for resource types declared containing a "/providers/" segment. Instead use the "scope" property. |
| BCP176 | Error | Values of the "any" type aren't allowed here. |
| BCP177 | Error | This expression is being used in the if-condition expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
| BCP178 | Error | This expression is being used in the for-expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
| BCP179 | Warning | Unique resource or deployment name is required when looping. The loop item variable "{itemVariableName}" or the index variable "{indexVariableName}" must be referenced in at least one of the value expressions of the following properties in the loop body: {ToQuotedString(expectedVariantProperties)} |
| BCP180 | Error | Function "{functionName}" isn't valid at this location. It can only be used when directly assigning to a module parameter with a secure decorator. |
| BCP181 | Error | This expression is being used in an argument of the function "{functionName}", which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
| BCP182 | Error | This expression is being used in the for-body of the variable "{variableName}", which requires values that can be calculated at the start of the deployment.{variableDependencyChainClause}{violatingPropertyNameClause}{accessiblePropertiesClause} |
| BCP183 | Error | The value of the module "params" property must be an object literal. |
| BCP184 | Error | File '{filePath}' exceeded maximum size of {maxSize} {unit}. |
| BCP185 | Warning | Encoding mismatch. File was loaded with '{detectedEncoding}' encoding. |
| BCP186 | Error | Unable to parse literal JSON value. Ensure that it's well-formed. |
| BCP187 | Warning | The property "{property}" doesn't exist in the resource or type definition, although it might still be valid.{TypeInaccuracyClause} |
| BCP188 | Error | The referenced ARM template has errors. See [https://aka.ms/arm-template](https://aka.ms/arm-template) for information on how to diagnose and fix the template. |
| BCP189 | Error | (allowedSchemes.Contains(ArtifactReferenceSchemes.Local, StringComparer.Ordinal), allowedSchemes.Any(scheme => !string.Equals(scheme, ArtifactReferenceSchemes.Local, StringComparison.Ordinal))) switch { (false, false) => "Module references aren't supported in this context.", (false, true) => $"The specified module reference scheme \"{badScheme}\" isn't recognized. Specify a module reference using one of the following schemes: {FormatSchemes()}", (true, false) => $"The specified module reference scheme \"{badScheme}\" isn't recognized. Specify a path to a local module file.", (true, true) => $"The specified module reference scheme \"{badScheme}\" isn't recognized. Specify a path to a local module file or a module reference using one of the following schemes: {FormatSchemes()}"} |
| BCP190 | Error | The artifact with reference "{artifactRef}" hasn't been restored. |
| BCP191 | Error | Unable to restore the artifact with reference "{artifactRef}". |
| <a id='BCP192' />[BCP192](./diagnostics/bcp192.md) | Error | Unable to restore the artifact with reference \<reference>: \<error-message>. |
| BCP193 | Error | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} Specify a reference in the format of "{ArtifactReferenceSchemes.Oci}:\<artifact-uri>:\<tag>", or "{ArtifactReferenceSchemes.Oci}/\<module-alias>:\<module-name-or-path>:\<tag>". |
| BCP194 | Error | {BuildInvalidTemplateSpecReferenceClause(aliasName, badRef)} Specify a reference in the format of "{ArtifactReferenceSchemes.TemplateSpecs}:\<subscription-ID>/\<resource-group-name>/\<template-spec-name>:\<version>", or "{ArtifactReferenceSchemes.TemplateSpecs}/\<module-alias>:\<template-spec-name>:\<version>". |
| BCP195 | Error | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} The artifact path segment "{badSegment}" isn't valid. Each artifact name path segment must be a lowercase alphanumeric string optionally separated by a ".", "_", or \"-\"." |
| BCP196 | Error | The module tag or digest is missing. |
| BCP197 | Error | The tag "{badTag}" exceeds the maximum length of {maxLength} characters. |
| BCP198 | Error | The tag "{badTag}" isn't valid. Valid characters are alphanumeric, ".", "_", or "-" but the tag can't begin with ".", "_", or "-". |
| BCP199 | Error | Module path "{badRepository}" exceeds the maximum length of {maxLength} characters. |
| BCP200 | Error | The registry "{badRegistry}" exceeds the maximum length of {maxLength} characters. |
| BCP201 | Error | Expected a provider specification string with a valid format at this location. Valid formats are "br:\<providerRegistryHost>/\<providerRepositoryPath>@\<providerVersion>" or "br/\<providerAlias>:\<providerName>@\<providerVersion>". |
| BCP202 | Error | Expected a provider alias name at this location. |
| BCP203 | Error | Using provider statements requires enabling EXPERIMENTAL feature "Extensibility". |
| BCP204 | Error | Provider namespace "{identifier}" isn't recognized. |
| BCP205 | Error | Provider namespace "{identifier}" doesn't support configuration. |
| BCP206 | Error | Provider namespace "{identifier}" requires configuration, but none was provided. |
| BCP207 | Error | Namespace "{identifier}" is declared multiple times. Remove the duplicates. |
| BCP208 | Error | The specified namespace "{badNamespace}" isn't recognized. Specify a resource reference using one of the following namespaces: {ToQuotedString(allowedNamespaces)}. |
| BCP209 | Error | Failed to find resource type "{resourceType}" in namespace "{@namespace}". |
| BCP210 | Error | Resource type belonging to namespace "{childNamespace}" can't have a parent resource type belonging to different namespace "{parentNamespace}". |
| BCP211 | Error | The module alias name "{aliasName}" is invalid. Valid characters are alphanumeric, "_", or "-". |
| BCP212 | Error | The Template Spec module alias name "{aliasName}" doesn't exist in the {BuildBicepConfigurationClause(configFileUri)}. |
| BCP213 | Error | The OCI artifact module alias name "{aliasName}" doesn't exist in the {BuildBicepConfigurationClause(configFileUri)}. |
| BCP214 | Error | The Template Spec module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "subscription" property can't be null or undefined. |
| BCP215 | Error | The Template Spec module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "resourceGroup" property can't be null or undefined. |
| BCP216 | Error | The OCI artifact module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "registry" property can't be null or undefined. |
| BCP217 | Error | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The subscription ID "{subscriptionId}" isn't a GUID. |
| BCP218 | Error | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The resource group name "{resourceGroupName}" exceeds the maximum length of {maximumLength} characters. |
| BCP219 | Error | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The resource group name "{resourceGroupName}" is invalid. Valid characters are alphanumeric, unicode characters, ".", "_", "-", "(", or ")", but the resource group name can't end with ".". |
| BCP220 | Error | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec name "{templateSpecName}" exceeds the maximum length of {maximumLength} characters. |
| BCP221 | Error | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec name "{templateSpecName}" is invalid. Valid characters are alphanumeric, ".", "_", "-", "(", or ")", but the Template Spec name can't end with ".". |
| BCP222 | Error | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec version "{templateSpecVersion}" exceeds the maximum length of {maximumLength} characters. |
| BCP223 | Error | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec version "{templateSpecVersion}" is invalid. Valid characters are alphanumeric, ".", "_", "-", "(", or ")", but the Template Spec name can't end with ".". |
| BCP224 | Error | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} The digest "{badDigest}" isn't valid. The valid format is a string "sha256:" followed by exactly 64 lowercase hexadecimal digits. |
| BCP225 | Warning | The discriminator property "{propertyName}" value can't be determined at compilation time. Type checking for this object is disabled. |
| BCP226 | Error | Expected at least one diagnostic code at this location. Valid format is "#disable-next-line diagnosticCode1 diagnosticCode2 ...". |
| BCP227 | Error | The type "{resourceType}" can't be used as a parameter or output type. Extensibility types are currently not supported as parameters or outputs. |
| BCP229 | Error | The parameter "{parameterName}" can't be used as a resource scope or parent. Resources passed as parameters can't be used as a scope or parent of a resource. |
| BCP230 | Warning | The referenced module uses resource type "{resourceTypeReference.FormatName()}" which doesn't have types available. Bicep is unable to validate resource properties prior to deployment, but this won't block the resource from being deployed. |
| BCP231 | Error | Using resource-typed parameters and outputs requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ResourceTypedParamsAndOutputs)}". |
| BCP232 | Error | Unable to delete the module with reference "{moduleRef}" from cache. |
| BCP233 | Error | Unable to delete the module with reference "{moduleRef}" from cache: {message} |
| BCP234 | Warning | The ARM function "{armFunctionName}" failed when invoked on the value [{literalValue}]: {message} |
| BCP235 | Error | Specified JSONPath doesn't exist in the given file or is invalid. |
| BCP236 | Error | Expected a new line or comma character at this location. |
| BCP237 | Error | Expected a comma character at this location. |
| BCP238 | Error | Unexpected new line character after a comma. |
| BCP239 | Error | Identifier "{name}" is a reserved Bicep symbol name and can't be used in this context. |
| BCP240 | Error | The "parent" property only permits direct references to resources. Expressions aren't supported. |
| BCP241 | Warning | The "{functionName}" function is deprecated and will be removed in a future release of Bicep. Add a comment to https://github.com/Azure/bicep/issues/2017 if you believe this will impact your workflow. |
| BCP242 | Error | Lambda functions may only be specified directly as function arguments. |
| BCP243 | Error | Parentheses must contain exactly one expression. |
| BCP244 | Error | {minArgCount == maxArgCount ? $"Expected lambda expression of type "{lambdaType}" with {minArgCount} arguments but received {actualArgCount} arguments." : $"Expected lambda expression of type "{lambdaType}" with between {minArgCount} and {maxArgCount} arguments but received {actualArgCount} arguments."} |
| BCP245 | Warning | Resource type "{resourceTypeReference.FormatName()}" can only be used with the 'existing' keyword. |
| BCP246 | Warning | Resource type "{resourceTypeReference.FormatName()}" can only be used with the 'existing' keyword at the requested scope. Permitted scopes for deployment: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(writableScopes))}. |
| BCP247 | Error | Using lambda variables inside resource or module array access isn't currently supported. Found the following lambda variable(s) being accessed: {ToQuotedString(variableNames)}. |
| BCP248 | Error | Using lambda variables inside the "{functionName}" function isn't currently supported. Found the following lambda variable(s) being accessed: {ToQuotedString(variableNames)}. |
| BCP249 | Error | Expected loop variable block to consist of exactly 2 elements (item variable and index variable), but found {actualCount}. |
| BCP250 | Error | Parameter "{identifier}" is assigned multiple times. Remove or rename the duplicates. |
| BCP256 | Error | The using declaration is missing a bicep template file path reference. |
| BCP260 | Error | The parameter "{identifier}" expects a value of type "{expectedType}" but the provided value is of type "{actualType}". |
| BCP261 | Error | A using declaration must be present in this parameters file. |
| BCP262 | Error | More than one using declaration is present. |
| BCP263 | Error | The file specified in the using declaration path doesn't exist. |
| BCP264 | Error | Resource type "{resourceTypeName}" is declared in multiple imported namespaces ({ToQuotedStringWithCaseInsensitiveOrdering(namespaces)}), and must be fully-qualified. |
| BCP265 | Error | The name "{name}" isn't a function. Did you mean "{knownFunctionNamespace}.{knownFunctionName}"? |
| BCP266 | Error | Expected a metadata identifier at this location. |
| BCP267 | Error | Expected a metadata declaration after the decorator. |
| BCP268 | Error | Invalid identifier: "{name}". Metadata identifiers starting with '_' are reserved. Use a different identifier. |
| BCP269 | Error | Function "{functionName}" can't be used as a metadata decorator. |
| BCP271 | Error | Failed to parse the contents of the Bicep configuration file "{configurationPath}" as valid JSON: {parsingErrorMessage.TrimEnd('.')}. |
| BCP272 | Error | Couldn't load the Bicep configuration file "{configurationPath}": {loadErrorMessage.TrimEnd('.')}. |
| BCP273 | Error | Failed to parse the contents of the Bicep configuration file "{configurationPath}": {parsingErrorMessage.TrimEnd('.')}. |
| BCP274 | Warning | Error scanning "{directoryPath}" for bicep configuration: {scanErrorMessage.TrimEnd('.')}. |
| BCP275 | Error | Unable to open file at path "{directoryPath}". Found a directory instead. |
| BCP276 | Error | A using declaration can only reference a Bicep file. |
| BCP277 | Error | A module declaration can only reference a Bicep File, an ARM template, a registry reference or a template spec reference. |
| BCP278 | Error | This parameters file references itself, which isn't allowed. |
| BCP279 | Error | Expected a type at this location. Specify a valid type expression or one of the following types: {ToQuotedString(LanguageConstants.DeclarationTypes.Keys)}.
| +| BCP285 | Error | The type expression couldn't be reduced to a literal value. | +| BCP286 | Error | This union member is invalid because it can't be assigned to the '{keystoneType}' type. | | BCP287 | Error | '{symbolName}' refers to a value but is being used as a type here. |-| BCP288 | Error | '{symbolName}' refers to a type but is being used as a value here. | -| BCP289 | Error | The type definition is not valid. | +| <a id='BCP288' />[BCP288](./diagnostics/bcp288.md) | Error | \<name> refers to a type but is being used as a value here. | +| BCP289 | Error | The type definition isn't valid. | | BCP290 | Error | Expected a parameter or type declaration after the decorator. | | BCP291 | Error | Expected a parameter or output declaration after the decorator. | | BCP292 | Error | Expected a parameter, output, or type declaration after the decorator. | | BCP293 | Error | All members of a union type declaration must be literal values. |-| BCP294 | Error | Type unions must be reducible to a single ARM type (such as 'string', 'int', or 'bool'). | +| <a id='BCP294' />[BCP294](./diagnostics/bcp294.md) | Error | Type unions must be reducible to a single ARM type (such as 'string', 'int', or 'bool'). | | BCP295 | Error | The '{decoratorName}' decorator may not be used on targets of a union or literal type. The allowed values for this parameter or type definition will be derived from the union or literal type automatically. | | BCP296 | Error | Property names on types must be compile-time constant values. |-| BCP297 | Error | Function "{functionName}" cannot be used as a type decorator. | -| BCP298 | Error | This type definition includes itself as required component, which creates a constraint that cannot be fulfilled. | +| BCP297 | Error | Function "{functionName}" can't be used as a type decorator. | +| BCP298 | Error | This type definition includes itself as required component, which creates a constraint that can't be fulfilled. 
| | BCP299 | Error | This type definition includes itself as a required component via a cycle ("{string.Join("\" -> \"", cycle)}"). |-| BCP300 | Error | Expected a type literal at this location. Please specify a concrete value or a reference to a literal type. | +| BCP300 | Error | Expected a type literal at this location. Specify a concrete value or a reference to a literal type. | | BCP301 | Error | The type name "{reservedName}" is reserved and may not be attached to a user-defined type. |-| BCP302 | Error | The name "{name}" is not a valid type. Please specify one of the following types: {ToQuotedString(validTypes)}. | +| <a id='BCP302' />[BCP302](./diagnostics/bcp302.md) | Error | The name \<type-name> isn't a valid type. Specify one of the following types: \<type-names>. | | BCP303 | Error | String interpolation is unsupported for specifying the provider. | | BCP304 | Error | Invalid provider specifier string. Specify a valid provider of format "\<providerName>@\<providerVersion>". | | BCP305 | Error | Expected the "with" keyword, "as" keyword, or a new line character at this location. | | BCP306 | Error | The name "{name}" refers to a namespace, not to a type. |-| BCP307 | Error | The expression cannot be evaluated, because the identifier properties of the referenced existing resource including {ToQuotedString(runtimePropertyNames.OrderBy(x => x))} cannot be calculated at the start of the deployment. In this situation, {accessiblePropertyNamesClause}{accessibleFunctionNamesClause}. | +| BCP307 | Error | The expression can't be evaluated, because the identifier properties of the referenced existing resource including {ToQuotedString(runtimePropertyNames.OrderBy(x => x))} can't be calculated at the start of the deployment. In this situation, {accessiblePropertyNamesClause}{accessibleFunctionNamesClause}. | | BCP308 | Error | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a user-defined type. 
|-| BCP309 | Error | Values of type "{flattenInputType.Name}" cannot be flattened because "{incompatibleType.Name}" is not an array type. | -| BCP311 | Error | The provided index value of "{indexSought}" is not valid for type "{typeName}". Indexes for this type must be between 0 and {tupleLength - 1}. | +| BCP309 | Error | Values of type "{flattenInputType.Name}" can't be flattened because "{incompatibleType.Name}" isn't an array type. | +| BCP311 | Error | The provided index value of "{indexSought}" isn't valid for type "{typeName}". Indexes for this type must be between 0 and {tupleLength - 1}. | | BCP315 | Error | An object type may have at most one additional properties declaration. | | BCP316 | Error | The "{LanguageConstants.ParameterSealedPropertyName}" decorator may not be used on object types with an explicit additional properties type declaration. | | BCP317 | Error | Expected an identifier, a string, or an asterisk at this location. |-| BCP318 | Warning | The value of type "{possiblyNullType}" may be null at the start of the deployment, which would cause this access expression (and the overall deployment with it) to fail. If you do not know whether the value will be null and the template would handle a null value for the overall expression, use a `.?` (safe dereference) operator to short-circuit the access expression if the base expression's value is null: {accessExpression.AsSafeAccess().ToString()}. If you know the value will not be null, use a non-null assertion operator to inform the compiler that the value will not be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. | -| BCP319 | Error | The type at "{errorSource}" could not be resolved by the ARM JSON template engine. Original error message: "{message}" | -| BCP320 | Error | The properties of module output resources cannot be accessed directly. To use the properties of this resource, pass it as a resource-typed parameter to another module and access the parameter's properties therein. 
| -| BCP321 | Warning | Expected a value of type "{expectedType}" but the provided value is of type "{actualType}". If you know the value will not be null, use a non-null assertion operator to inform the compiler that the value will not be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. | +| BCP318 | Warning | The value of type "{possiblyNullType}" may be null at the start of the deployment, which would cause this access expression (and the overall deployment with it) to fail. If you don't know whether the value will be null and the template would handle a null value for the overall expression, use a `.?` (safe dereference) operator to short-circuit the access expression if the base expression's value is null: {accessExpression.AsSafeAccess().ToString()}. If you know the value won't be null, use a non-null assertion operator to inform the compiler that the value won't be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. | +| BCP319 | Error | The type at "{errorSource}" couldn't be resolved by the ARM JSON template engine. Original error message: "{message}" | +| BCP320 | Error | The properties of module output resources can't be accessed directly. To use the properties of this resource, pass it as a resource-typed parameter to another module and access the parameter's properties therein. | +| BCP321 | Warning | Expected a value of type "{expectedType}" but the provided value is of type "{actualType}". If you know the value won't be null, use a non-null assertion operator to inform the compiler that the value won't be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. | | BCP322 | Error | The `.?` (safe dereference) operator may not be used on instance function invocations. | | BCP323 | Error | The `[?]` (safe dereference) operator may not be used on resource or module collections. | | BCP325 | Error | Expected a type identifier at this location. |-| BCP326 | Error | Nullable-typed parameters may not be assigned default values. 
They have an implicit default of 'null' that cannot be overridden. | +| BCP326 | Error | Nullable-typed parameters may not be assigned default values. They have an implicit default of 'null' that can't be overridden. | | <a id='BCP327' />[BCP327](./diagnostics/bcp327.md) | Error/Warning | The provided value (which will always be greater than or equal to \<value>) is too large to assign to a target for which the maximum allowable value is \<max-value>. | | <a id='BCP328' />[BCP328](./diagnostics/bcp328.md) | Error/Warning | The provided value (which will always be less than or equal to \<value>) is too small to assign to a target for which the minimum allowable value is \<max-value>. | | BCP329 | Warning | The provided value can be as small as {sourceMin} and may be too small to assign to a target with a configured minimum of {targetMin}. | If you need more information about a particular diagnostic code, select the **Fe | <a id='BCP333' />[BCP333](./diagnostics/bcp333.md) | Error/Warning | The provided value (whose length will always be less than or equal to \<string-length>) is too short to assign to a target for which the minimum allowable length is \<min-length>. | | BCP334 | Warning | The provided value can have a length as small as {sourceMinLength} and may be too short to assign to a target with a configured minimum length of {targetMinLength}. | | BCP335 | Warning | The provided value can have a length as large as {sourceMaxLength} and may be too long to assign to a target with a configured maximum length of {targetMaxLength}. |-| BCP337 | Error | This declaration type is not valid for a Bicep Parameters file. Specify a "{LanguageConstants.UsingKeyword}", "{LanguageConstants.ParameterKeyword}" or "{LanguageConstants.VariableKeyword}" declaration. | +| BCP337 | Error | This declaration type isn't valid for a Bicep Parameters file. 
Specify a "{LanguageConstants.UsingKeyword}", "{LanguageConstants.ParameterKeyword}" or "{LanguageConstants.VariableKeyword}" declaration. | | <a id='BCP338' />[BCP338](./diagnostics/bcp338.md) | Error | Failed to evaluate parameter \<parameter-name>: \<error-message>` | | BCP339 | Error | The provided array index value of "{indexSought}" is not valid. Array index should be greater than or equal to 0. | | BCP340 | Error | Unable to parse literal YAML value. Please ensure that it is well-formed. | | BCP341 | Error | This expression is being used inside a function declaration, which requires a value that can be calculated at the start of the deployment. {variableDependencyChainClause}{accessiblePropertiesClause} |-| BCP342 | Error | User-defined types are not supported in user-defined function parameters or outputs. | +| BCP342 | Error | User-defined types aren't supported in user-defined function parameters or outputs. | | BCP344 | Error | Expected an assert identifier at this location. | | BCP345 | Error | A test declaration can only reference a Bicep File | | BCP346 | Error | Expected a test identifier at this location. | | BCP347 | Error | Expected a test path string at this location. | | BCP348 | Error | Using a test declaration statement requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.TestFramework)}". | | BCP349 | Error | Using an assert declaration requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.Assertions)}". |-| BCP350 | Error | Value of type "{valueType}" cannot be assigned to an assert. Asserts can take values of type 'bool' only. | -| BCP351 | Error | Function "{functionName}" is not valid at this location. It can only be used when directly assigning to a parameter. | +| BCP350 | Error | Value of type "{valueType}" can't be assigned to an assert. Asserts can take values of type 'bool' only. | +| BCP351 | Error | Function "{functionName}" isn't valid at this location. 
It can only be used when directly assigning to a parameter. | | BCP352 | Error | Failed to evaluate variable "{name}": {message} |-| BCP353 | Error | The {itemTypePluralName} {ToQuotedString(itemNames)} differ only in casing. The ARM deployments engine is not case sensitive and will not be able to distinguish between them. | +| BCP353 | Error | The {itemTypePluralName} {ToQuotedString(itemNames)} differ only in casing. The ARM deployments engine isn't case sensitive and won't be able to distinguish between them. | | BCP354 | Error | Expected left brace ('{') or asterisk ('*') character at this location. | | BCP355 | Error | Expected the name of an exported symbol at this location. | | BCP356 | Error | Expected a valid namespace identifier at this location. | | BCP358 | Error | This declaration is missing a template file path reference. |-| BCP360 | Error | The '{symbolName}' symbol was not found in (or was not exported by) the imported template. | +| BCP360 | Error | The '{symbolName}' symbol wasn't found in (or wasn't exported by) the imported template. | | BCP361 | Error | The "@export()" decorator must target a top-level statement. | | BCP362 | Error | This symbol is imported multiple times under the names {string.Join(", ", importedAs.Select(identifier => $"'{identifier}'"))}. | | BCP363 | Error | The "{LanguageConstants.TypeDiscriminatorDecoratorName}" decorator can only be applied to object-only union types with unique member types. | If you need more information about a particular diagnostic code, select the **Fe | BCP365 | Error | The value "{discriminatorPropertyValue}" for discriminator property "{discriminatorPropertyName}" is duplicated across multiple union member types. The value must be unique across all union member types. | | BCP366 | Error | The discriminator property name must be "{acceptablePropertyName}" on all union member types. | | BCP367 | Error | The "{featureName}" feature is temporarily disabled. 
|-| BCP368 | Error | The value of the "{targetName}" parameter cannot be known until the template deployment has started because it uses a reference to a secret value in Azure Key Vault. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. | -| BCP369 | Error | The value of the "{targetName}" parameter cannot be known until the template deployment has started because it uses the default value defined in the template. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. | +| BCP368 | Error | The value of the "{targetName}" parameter can't be known until the template deployment has started because it uses a reference to a secret value in Azure Key Vault. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. | +| BCP369 | Error | The value of the "{targetName}" parameter can't be known until the template deployment has started because it uses the default value defined in the template. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. | | BCP372 | Error | The "@export()" decorator may not be applied to variables that refer to parameters, modules, or resource, either directly or indirectly. The target of this decorator contains direct or transitive references to the following unexportable symbols: {ToQuotedString(nonExportableSymbols)}. | | BCP373 | Error | Unable to import the symbol named "{name}": {message} |-| BCP374 | Error | The imported model cannot be loaded with a wildcard because it contains the following duplicated exports: {ToQuotedString(ambiguousExportNames)}. 
| +| BCP374 | Error | The imported model can't be loaded with a wildcard because it contains the following duplicated exports: {ToQuotedString(ambiguousExportNames)}. | | BCP375 | Error | An import list item that identifies its target with a quoted string must include an 'as \<alias>' clause. |-| BCP376 | Error | The "{name}" symbol cannot be imported because imports of kind {exportMetadataKind} are not supported in files of kind {sourceFileKind}. | +| BCP376 | Error | The "{name}" symbol can't be imported because imports of kind {exportMetadataKind} aren't supported in files of kind {sourceFileKind}. | | BCP377 | Error | The provider alias name "{aliasName}" is invalid. Valid characters are alphanumeric, "_", or "-". |-| BCP378 | Error | The OCI artifact provider alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "registry" property cannot be null or undefined. | -| BCP379 | Error | The OCI artifact provider alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. | -| BCP380 | Error | Artifacts of type: "{artifactType}" are not supported. | -| BCP381 | Warning | Declaring provider namespaces with the "import" keyword has been deprecated. Please use the "provider" keyword instead. | -| BCP383 | Error | The "{typeName}" type is not parameterizable. | +| BCP378 | Error | The OCI artifact provider alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "registry" property can't be null or undefined. | +| BCP379 | Error | The OCI artifact provider alias name "{aliasName}" doesn't exist in the {BuildBicepConfigurationClause(configFileUri)}. | +| BCP380 | Error | Artifacts of type: "{artifactType}" aren't supported. | +| BCP381 | Warning | Declaring provider namespaces with the "import" keyword has been deprecated. Use the "provider" keyword instead. | +| BCP383 | Error | The "{typeName}" type isn't parameterizable. 
| | BCP384 | Error | The "{typeName}" type requires {requiredArgumentCount} argument(s). | | BCP385 | Error | Using resource-derived types requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ResourceDerivedTypes)}". | | BCP386 | Error | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a resource-derived type. | | BCP387 | Error | Indexing into a type requires an integer greater than or equal to 0. |-| BCP388 | Error | Cannot access elements of type "{wrongType}" by index. A tuple type is required. | -| BCP389 | Error | The type "{wrongType}" does not declare an additional properties type. | +| BCP388 | Error | Can't access elements of type "{wrongType}" by index. A tuple type is required. | +| BCP389 | Error | The type "{wrongType}" doesn't declare an additional properties type. | | BCP390 | Error | The array item type access operator ('[*]') can only be used with typed arrays. | | BCP391 | Error | Type member access is only supported on a reference to a named type. |-| BCP392 | Warning | "The supplied resource type identifier "{resourceTypeIdentifier}" was not recognized as a valid resource type name." | -| BCP393 | Warning | "The type pointer segment "{unrecognizedSegment}" was not recognized. Supported pointer segments are: "properties", "items", "prefixItems", and "additionalProperties"." | -| BCP394 | Error | Resource-derived type expressions must derefence a property within the resource body. Using the entire resource body type is not permitted. | -| BCP395 | Error | Declaring provider namespaces using the '\<providerName>@\<version>' expression has been deprecated. Please use an identifier instead. | +| BCP392 | Warning | "The supplied resource type identifier "{resourceTypeIdentifier}" wasn't recognized as a valid resource type name." | +| BCP393 | Warning | "The type pointer segment "{unrecognizedSegment}" wasn't recognized. 
Supported pointer segments are: "properties", "items", "prefixItems", and "additionalProperties"." | +| BCP394 | Error | Resource-derived type expressions must dereference a property within the resource body. Using the entire resource body type isn't permitted. | +| BCP395 | Error | Declaring provider namespaces using the '\<providerName>@\<version>' expression has been deprecated. Use an identifier instead. | | BCP396 | Error | The referenced provider types artifact has been published with malformed content. |-| BCP397 | Error | "Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It is referenced in the "{RootConfiguration.ImplicitProvidersConfigurationKey}" section, but is missing corresponding configuration in the "{RootConfiguration.ProvidersConfigurationKey}" section." | -| BCP398 | Error | "Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It is configured as built-in in the "{RootConfiguration.ProvidersConfigurationKey}" section, but no built-in provider exists." | +| BCP397 | Error | "Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It's referenced in the "{RootConfiguration.ImplicitProvidersConfigurationKey}" section, but is missing corresponding configuration in the "{RootConfiguration.ProvidersConfigurationKey}" section." | +| BCP398 | Error | "Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It's configured as built-in in the "{RootConfiguration.ProvidersConfigurationKey}" section, but no built-in provider exists." | +| BCP399 | Error | Fetching az types from the registry requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.DynamicTypeLoading)}". | +| BCP400 | Error | Fetching types from the registry requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ProviderRegistry)}". 
|-| BCP401 | Error | The spread operator \"{spread.Ellipsis.Text}\" is not permitted in this location. | +| <a id='BCP401' />[BCP401](./diagnostics/bcp401.md) | Error | The spread operator "..." isn't permitted in this location. | | BCP402 | Error | The spread operator \"{spread.Ellipsis.Text}\" can only be used in this context for an expression assignable to type \"{requiredType}\". | | BCP403 | Error/Warning | The enclosing array expects elements of type \"{expectedType}\", but the array being spread contains elements of incompatible type \"{actualType}\". | | BCP404 | Error | The \"{LanguageConstants.ExtendsKeyword}\" declaration is missing a bicepparam file path reference"). | | BCP405 | Error | More than one \"{LanguageConstants.ExtendsKeyword}\" declaration are present") |-| BCP406 | Error | The \"{LanguageConstants.ExtendsKeyword}\" keyword is not supported" | +| BCP406 | Error | The \"{LanguageConstants.ExtendsKeyword}\" keyword isn't supported" | ## Next steps |
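The safe-dereference and non-null-assertion guidance in the BCP318 entry above can be sketched with a short example. This is a hypothetical snippet, not taken from the article; the `settings` parameter and `sku` property names are illustrative only:

```bicep
@description('A hypothetical optional configuration object.')
param settings object?

// Direct access (settings.sku) raises BCP318 because settings may be
// null at the start of the deployment.

// Safe dereference (.?): the whole expression is null when settings is null,
// so a coalesce (??) can supply a fallback.
output skuName string = settings.?sku ?? 'Standard_LRS'

// Non-null assertion (!): use only when you know settings won't be null.
output skuNameAsserted string = settings!.sku
```

The choice between the two operators mirrors the diagnostic text: `.?` when the template should tolerate a null value, `!` when you can assert it never is.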
azure-resource-manager | Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/data-types.md | type oneOfSeveralObjects = {foo: 'bar'} | {fizz: 'buzz'} | {snap: 'crackle'} type mixedTypeArray = ('fizz' | 42 | {an: 'object'} | null)[] ``` +Type unions must be reducible to a single ARM type, such as 'string', 'int', or 'bool'. Otherwise, you get the [BCP294](./diagnostics/bcp294.md) error code. For example: ++```bicep +type foo = 'a' | 1 +``` + Any type expression can be used as a sub-type in a union type declaration (between `|` characters). For example, the following examples are all valid: ```bicep |
azure-resource-manager | Bcp018 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp018.md | + + Title: BCP018 +description: Error - Expected the <character> character at this location. ++ Last updated : 08/08/2024+++# Bicep error code - BCP018 ++This error occurs when a character, such as a bracket, is missing. ++## Error description ++`Expected the <character> character at this location.` ++## Solution ++Add the missing character. ++## Examples ++The following example raises the error because the code is missing a _}_. ++```bicep +output tennisBall object = { + type: 'tennis' + color: 'yellow' +``` ++You can fix the error by adding the missing _}_. ++```bicep +output tennisBall object = { + type: 'tennis' + color: 'yellow' +} +``` ++The following example raises the error because the code is missing a _]_. ++```bicep +output colors array = [ + 'red' + 'blue' + 'white' +``` ++You can fix the error by adding the missing _]_. ++```bicep +output colors array = [ + 'red' + 'blue' + 'white' +] +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp053 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp053.md | This error/warning occurs when you reference a property that isn't defined in th ## Solution -Reference the correct property name +Reference the correct property name. ## Examples |
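As a hedged illustration of the BCP053 diagnostic described above (a hypothetical snippet, not taken from the article; the resource and property names are illustrative), referencing a property that the resource type doesn't define raises the diagnostic, and referencing the correct property name resolves it:

```bicep
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' existing = {
  name: 'examplestorage'
}

// Raises BCP053: the storage account type has no top-level
// 'primaryEndpoints' property; it's nested under 'properties'.
// output endpoints object = storageAccount.primaryEndpoints

// Referencing the correct property name resolves the diagnostic.
output endpoints object = storageAccount.properties.primaryEndpoints
```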
azure-resource-manager | Bcp055 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp055.md | + + Title: BCP055 +description: Error - Cannot access properties of type <type-name>. A <type-name> type is required. ++ Last updated : 08/07/2024+++# Bicep error code - BCP055 ++This error occurs when you reference a nonexistent property of a type. ++## Error description ++`Cannot access properties of type <type-name>. A <type-name> type is required.` ++## Examples ++The following example raises the error because _string.bar_ isn't defined: ++```bicep +type foo = string.bar +``` ++You can fix the error by removing the reference: ++```bicep +type foo = string +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp057 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp057.md | + + Title: BCP057 +description: Error - The name <name> doesn't exist in the current context. ++ Last updated : 08/08/2024+++# Bicep error code - BCP057 ++This error occurs when the referenced name doesn't exist, either because of a typo or because it hasn't been declared. ++## Error description ++`The name <name> does not exist in the current context.` ++## Solution ++Fix the typo or declare the name. ++## Examples ++The following example raises the error because _bar_ has never been declared: ++```bicep +var foo = bar +``` ++The following example raises the error because _bar1_ is a typo: ++```bicep +var bar = 'white' +var foo = bar1 +``` ++You can fix the error by declaring _bar_, or fix the typo. ++```bicep +var bar = 'white' +var foo = bar +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp062 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp062.md | + + Title: BCP062 +description: Error - The referenced declaration with name <type-name> is not valid. ++ Last updated : 08/08/2024+++# Bicep error code - BCP062 ++This error occurs when the referenced declaration has an error. ++## Error description ++`The referenced declaration with name <type-name> is not valid.` ++## Examples ++The following example raises the error because the referenced [user-defined data type](../user-defined-data-types.md) has an error: ++```bicep +type ball = object.bar ++output tennisBall ball = { + name: 'tennis' + color: 'yellow' +} +``` ++You can fix the error by fixing the _ball_ definition: ++```bicep +type ball = object ++output tennisBall ball = { + name: 'tennis' + color: 'yellow' +} +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp089 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp089.md | This error/warning occurs when a property name seems to be a typo. ## Error/warning description -`The property <property-name> is not allowed on objects of type <resource-type>. Did you mean <property-name>?` +`The property <property-name> is not allowed on objects of type <resource-type/type-definition>. Did you mean <property-name>?` ++## Solution ++Fix the typo. ## Examples resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' existing } ``` +The following example raises the error because the property name _color1_ looks like a typo. ++```bicep +type ball = { + name: string + color: string +} ++output tennisBall ball = { + name: 'tennis' + color1: 'yellow' +} +``` ++You can fix the error by correcting the typo: ++```bicep +type ball = { + name: string + color: string +} ++output tennisBall ball = { + name: 'tennis' + color: 'yellow' +} +``` + ## Next steps For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp192 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp192.md | + + Title: BCP192 +description: Error - Unable to restore the artifact with reference <reference>. ++ Last updated : 08/08/2024+++# Bicep error code - BCP192 ++This error occurs when Bicep can't copy the external module to the local cache, for example, because of an incorrect module reference. For more information about using modules in Bicep and Bicep restore, see [Bicep modules](../modules.md). ++## Error description ++`Unable to restore the artifact with reference <reference>: <error-message>.` ++## Solution ++Fix the module reference. ++## Examples ++The following example raises the error because the public module version doesn't exist: ++```bicep +module storage 'br/public:avm/res/storage/storage-account:0.1.0' = { + name: 'myStorage' + params: { + name: 'store${resourceGroup().name}' + } +} +``` ++The following example raises the error because there's a typo in the reference: ++```bicep +module storage 'br/public:avm/res/storage/storage-account1:0.11.1' = { + name: 'myStorage' + params: { + name: 'store${resourceGroup().name}' + } +} +``` ++You can fix the error by correcting the AVM reference and the version: ++```bicep +module storage 'br/public:avm/res/storage/storage-account:0.11.1' = { + name: 'myStorage' + params: { + name: 'store${resourceGroup().name}' + } +} +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp288 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp288.md | + + Title: BCP288 +description: Error - <name> refers to a type but is being used as a value here. ++ Last updated : 08/08/2024+++# Bicep error code - BCP288 ++This error occurs when the name specified is a type, but it's being used as a value. ++## Error description ++`<name> refers to a type but is being used as a value here.` ++## Solution ++Use the name of a value, not a type. ++## Examples ++The following example raises the error because _bar_ is the name of a [user-defined data type](../user-defined-data-types.md), not a value: ++```bicep +type bar = 'white' +var foo = bar +``` ++You can fix the error by declaring _bar_ as a variable so that _foo_ references a value: ++```bicep +var bar = 'white' +var foo = bar +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp294 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp294.md | + + Title: BCP294 +description: Error - Type unions must be reducible to a single ARM type (such as 'string', 'int', or 'bool'). ++ Last updated : 08/08/2024+++# Bicep error code - BCP294 ++This error occurs when you use values of different [data types](../data-types.md) in a [union type](../data-types.md#union-types) definition. ++## Error description ++`Type unions must be reducible to a single ARM type (such as 'string', 'int', or 'bool').` ++## Examples ++The following example raises the error because there are different types used in the union type: ++```bicep +type foo = 'a' | 1 +``` ++You can fix the error by using a single data type for the union type definition: ++```bicep +type foo = 'a' | 'b' +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md). |
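The rule behind BCP294 above — every member of a union type must reduce to a single type — can be pictured with a small Python check (an analogy for the rule, not the Bicep compiler; `union_arm_type` is a hypothetical helper):

```python
def union_arm_type(members):
    """Return the single type shared by all union members, or raise like BCP294."""
    kinds = {type(m).__name__ for m in members}
    if len(kinds) != 1:
        raise TypeError("union members mix types: " + ", ".join(sorted(kinds)))
    return kinds.pop()

print(union_arm_type(["a", "b"]))  # str -> valid, like 'a' | 'b'

try:
    union_arm_type(["a", 1])       # mixes str and int, like 'a' | 1
except TypeError as err:
    print("error:", err)
```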
azure-resource-manager | Bcp302 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp302.md | + + Title: BCP302 +description: Error - The name <type-name> is not a valid type. ++ Last updated : 08/08/2024+++# Bicep error code - BCP302 ++This error occurs when you use an invalid [data type](../data-types.md) or [user-defined data type](../user-defined-data-types.md). ++## Error description ++`The name <type-name> is not a valid type. Please specify one of the following types: <type-names>.` ++## Solutions ++Use the correct data type or user-defined data type. ++## Examples ++The following example raises the error because `balla` looks like a typo: ++```bicep +type ball = { + name: string + color: string +} ++output tennisBall balla = { + name: 'tennis' + color: 'yellow' +} +``` ++You can fix the error by correcting the typo: ++```bicep +type ball = { + name: string + color: string +} ++output tennisBall ball = { + name: 'tennis' + color: 'yellow' +} +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp327 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp327.md | The following example raises the error because `13` is greater than maximum allo @minValue(1) @maxValue(12) param month int = 13- ``` You can fix the error by assigning a value within the permitted range: You can fix the error by assigning a value within the permitted range: @minValue(1) @maxValue(12) param month int = 12- ``` ## Next steps |
azure-resource-manager | Bcp328 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp328.md | The following example raises the error because `0` is less than minimum allowabl @minValue(1) @maxValue(12) param month int = 0- ``` You can fix the error by assigning a value within the permitted range: |
azure-resource-manager | Bcp401 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp401.md | + + Title: BCP401 +description: Error - The spread operator "..." is not permitted in this location. ++ Last updated : 08/08/2024+++# Bicep error code - BCP401 ++This error occurs when you use the [`spread`](../operator-spread.md) operator to define a resource body. The operator gets converted to a function during compilation, and that isn't permitted at this location in the compiled JSON. ++## Error description ++`The spread operator "..." is not permitted in this location.` ++## Examples ++The following example raises the error because the `spread` operator is used to define the resource body: ++```bicep +param location string = resourceGroup().location +param addressPrefix string = '10.0.0.0/24' + +resource vnet 'Microsoft.Network/virtualNetworks@2024-01-01' = { + name: 'vnetName' + location: location + + ...(addressPrefix != '' ? { + properties: { + addressSpace: { + addressPrefixes: [ + addressPrefix + ] + } + } + } : {}) +} +``` ++You can fix the error by applying the operator at a lower level within the resource body: ++```bicep +param location string = resourceGroup().location +param addressPrefix string = '10.0.0.0/24' + +resource vnet 'Microsoft.Network/virtualNetworks@2024-01-01' = { + name: 'vnetName' + location: location + + properties: { + addressSpace: { + ...(addressPrefix != '' ? { + addressPrefixes: [ + addressPrefix + ] + } : {}) + } + } +} +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md). |
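The BCP401 fix above moves the conditional spread from the top of the resource body down into the nested `addressSpace` object. The merge semantics are easy to see with dictionary unpacking in Python (an analogy for the pattern, not Bicep itself):

```python
address_prefix = "10.0.0.0/24"

# Conditionally merge optional keys at a lower level of the object,
# not at the top level of the resource body.
vnet = {
    "name": "vnetName",
    "properties": {
        "addressSpace": {
            **({"addressPrefixes": [address_prefix]} if address_prefix else {}),
        },
    },
}
print(vnet["properties"]["addressSpace"])  # {'addressPrefixes': ['10.0.0.0/24']}
```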
batch | Quick Create Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-portal.md | Title: 'Quickstart: Use the Azure portal to create a Batch account and run a job' description: Follow this quickstart to use the Azure portal to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool. Previously updated : 06/13/2024 Last updated : 07/30/2024+ |
communication-services | Spotlight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/spotlight.md | zone_pivot_groups: acs-plat-web-ios-android-windows # Spotlight states-In this article, you learn how to implement Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability allows users in the call or meeting to pin and unpin videos for everyone. +In this article, you learn how to implement Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability allows users in the call or meeting to pin and unpin videos for everyone. The maximum limit of pinned videos is seven. + Since the video stream resolution of a participant is increased when spotlighted, it should be noted that the settings done on [Video Constraints](../../concepts/voice-video-calling/video-constraints.md) also apply to spotlight. ## Prerequisites Since the video stream resolution of a participant is increased when spotlighted - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) +## Support +The following tables define support for Spotlight in Azure Communication Services. ++### Identities & call types +The following table shows support for call and identity types. ++|Identities | Teams meeting | Room | 1:1 call | Group call | 1:1 Teams interop call | Group Teams interop call | +|--|||-|||--| +|Communication Services user | ✔️ | ✔️ | | ✔️ | | ✔️ | +|Microsoft 365 user | ✔️ | ✔️ | | ✔️ | | ✔️ | ++### Operations +The following table shows support for individual APIs in Calling SDK to individual identity types. 
++|Operations | Communication Services user | Microsoft 365 user | +|--||-| +| startSpotlight | ✔️ [1] | ✔️ [1] | +| stopSpotlight | ✔️ | ✔️ | +| stopAllSpotlight | ✔️ [1] | ✔️ [1] | +| getSpotlightedParticipants | ✔️ | ✔️ | ++[1] In Teams meeting scenarios, these APIs are only available to users with role organizer, co-organizer, or presenter. ++### SDKs +The following table shows support for Spotlight feature in individual Azure Communication Services SDKs. ++| Platforms | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows | +||--|--|--|--|-|--|| +|Is Supported | ✔️ | ✔️ | ✔️ | | ✔️ | | ✔️ | + ::: zone pivot="platform-web" [!INCLUDE [Spotlight Client-side JavaScript](./includes/spotlight/spotlight-web.md)] ::: zone-end |
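The operations table above boils down to a small amount of client-side state: a capped, ordered set of spotlighted participants. A toy Python model of that state (hypothetical — the real Calling SDKs expose these operations on the call's spotlight feature, and the service enforces the limit of seven pinned videos):

```python
class SpotlightState:
    """Toy model of spotlight state; pinned videos are capped at seven."""

    MAX_SPOTLIGHTS = 7

    def __init__(self):
        self._spotlighted = []

    def start_spotlight(self, participant_id):
        if participant_id in self._spotlighted:
            return
        if len(self._spotlighted) >= self.MAX_SPOTLIGHTS:
            raise RuntimeError("spotlight limit of 7 reached")
        self._spotlighted.append(participant_id)

    def stop_spotlight(self, participant_id):
        if participant_id in self._spotlighted:
            self._spotlighted.remove(participant_id)

    def stop_all_spotlight(self):
        self._spotlighted.clear()

    def get_spotlighted_participants(self):
        return list(self._spotlighted)

state = SpotlightState()
for pid in ["alice", "bob"]:
    state.start_spotlight(pid)
print(state.get_spotlighted_participants())  # ['alice', 'bob']
```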
container-apps | Java Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-metrics.md | Use the following steps to view metrics visualizations for your container app. :::image type="content" source="media/java-metrics/azure-container-apps-java-metrics-visualization.png" alt-text="Screenshot of Java metrics visualization." lightbox="media/java-metrics/azure-container-apps-java-metrics-visualization.png"::: -You can see Java metric names on Azure Monitor, but the data sets report as empty unless you use the `--enable-java-metrics` parameter to enable Java metrics. +You can see Java metric names on Azure Monitor, but the data sets show as empty unless the feature is enabled. Refer to the [Configuration](#configuration) section for how to enable it. ## Next steps |
container-registry | Container Registry Tasks Reference Yaml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-reference-yaml.md | Task properties typically appear at the top of an `acr-task.yaml` file, and are | Property | Type | Optional | Description | Override supported | Default value | | -- | - | -- | -- | | - |-| `version` | string | Yes | The version of the `acr-task.yaml` file as parsed by the ACR Tasks service. While ACR Tasks strives to maintain backward compatibility, this value allows ACR Tasks to maintain compatibility within a defined version. If unspecified, defaults to the latest version. | No | None | +| `version` | string | Yes | The version of the `acr-task.yaml` file as parsed by the ACR Tasks service. While ACR Tasks strives to maintain backward compatibility, this value allows ACR Tasks to maintain compatibility within a defined version. If unspecified, defaults to `v1.0.0`. | N/A | `v1.0.0` | | `stepTimeout` | int (seconds) | Yes | The maximum number of seconds a step can run. If the `stepTimeout` property is specified on a task, it sets the default `timeout` property of all the steps. If the `timeout` property is specified on a step, it overrides the `stepTimeout` property provided by the task.<br/><br/>The sum of the step timeout values for a task should equal the value of the task's run `timeout` property (for example, set by passing `--timeout` to the `az acr task create` command). If the tasks's run `timeout` value is smaller, it takes priority. | Yes | 600 (10 minutes) | | `workingDirectory` | string | Yes | The working directory of the container during runtime. If the property is specified on a task, it sets the default `workingDirectory` property of all the steps. If specified on a step, it overrides the property provided by the task. | Yes | `c:\workspace` in Windows or `/workspace` in Linux | | `env` | [string, string, ...] 
| Yes | Array of strings in `key=value` format that define the environment variables for the task. If the property is specified on a task, it sets the default `env` property of all the steps. If specified on a step, it overrides any environment variables inherited from the task. | Yes | None | |
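The `stepTimeout` precedence described in the table above — a step-level `timeout` overrides the task-level `stepTimeout`, which overrides the 600-second default — can be sketched as a small resolver (illustrative Python, not ACR Tasks code):

```python
def effective_timeout(task_step_timeout=None, step_timeout=None, default=600):
    """Resolve a step's timeout: step 'timeout' > task 'stepTimeout' > default."""
    if step_timeout is not None:
        return step_timeout
    if task_step_timeout is not None:
        return task_step_timeout
    return default

print(effective_timeout())                       # 600 -> the 10-minute default
print(effective_timeout(task_step_timeout=300))  # 300 -> task-level stepTimeout
print(effective_timeout(300, step_timeout=120))  # 120 -> step-level timeout wins
```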
hdinsight-aks | Sdk Cluster Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/sdk-cluster-creation.md | Click on the Run button. There are extensive ways supported to customize and manage cluster using .NET SDK. Review the following documentation: - [Azure Resource Manager HDInsight Containers](/dotnet/api/overview/azure/resourcemanager.hdinsight.containers-readme) -- [Azure.ResourceManager.HDInsight.Containers GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/hdinsight/Azure.ResourceManager.HDInsight.Containers) +- [Azure.ResourceManager.HDInsight.Containers GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/hdinsight/Azure.ResourceManager.HDInsight) |
hdinsight | Log Analytics Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md | Creating new clusters with classic Azure Monitor integration is not available af ## Release and support timeline -* Classic Azure Monitoring integration isn't unavailable after October 15, 2021. You can't enable classic Azure Monitoring integration after that date. +* Classic Azure Monitoring integration isn't available after October 15, 2021. You can't enable classic Azure Monitoring integration after that date. * Classic Azure Monitoring integration ingestion will not be working after August 31, 2024. * HDInsight clusters with Azure Monitor integration (preview) will not be supported beyond February 1, 2025. * Existing Azure Monitor integration(preview) will continue to work, until January 31, 2025. There will be limited support for the Azure Monitor integration(preview). |
hdinsight | Share Hive Metastore With Synapse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/share-hive-metastore-with-synapse.md | description: Learn how to share existing Azure HDInsight external Hive Metastore keywords: external Hive metastore,share,Synapse Previously updated : 05/22/2024 Last updated : 08/16/2024 # Share Hive Metastore with Synapse Spark Pool (Preview) |
openshift | Azure Redhat Openshift Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md | Azure Red Hat OpenShift receives improvements on an ongoing basis. To stay up to ## Updates - August 2024 -You can now create up to 20 IP addresses per Azure Red Hat OpenShift cluster load balancer. This feature was previously in preview but is now generally available. See [Configure multiple IP addresses per cluster load balancer](howto-multiple-ips.md) for details. Azure Red Hat OpenShift 4.x has a 250 pod-per-node limit and a 250 compute node limit. +You can now create up to 20 IP addresses per Azure Red Hat OpenShift cluster load balancer. This feature was previously in preview but is now generally available. See [Configure multiple IP addresses per cluster load balancer](howto-multiple-ips.md) for details. Azure Red Hat OpenShift 4.x has a 250 pod-per-node limit and a 250 compute node limit. For instructions on adding large clusters, see [Deploy a large Azure Red Hat OpenShift cluster](howto-large-clusters.md). -There's a change in the order of actions performed by Site Reliability Engineers of Azure RedHat OpenShift. To maintain the health of a cluster, a timely action is necessary if control plane resources are over-utilized. Now the control plane is resized proactively to maintain cluster health. After the resize of the control plane, a notification is sent out to you with the details of the changes made to the control plane. Make sure you have the quota available in your subscription for Site Reliability Engineers to perform this action. +There's a change in the order of actions performed by Site Reliability Engineers of Azure RedHat OpenShift. To maintain the health of a cluster, a timely action is necessary if control plane resources are over-utilized. Now the control plane is resized proactively to maintain cluster health. 
After the resize of the control plane, a notification is sent out to you with the details of the changes made to the control plane. Make sure you have the quota available in your subscription for Site Reliability Engineers to perform the cluster resize action. ## Version 4.14 - May 2024 |
openshift | Howto Large Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-large-clusters.md | + + Title: Deploy a large Azure Red Hat OpenShift cluster +description: Discover how to deploy a large Azure Red Hat OpenShift cluster. ++++ Last updated : 08/15/2024++# Deploy a large Azure Red Hat OpenShift cluster ++This article provides the steps and best practices for deploying large scale Azure Red Hat OpenShift clusters of up to 250 nodes. For clusters of that size, a combination of larger control plane nodes and dedicated infrastructure nodes is recommended to ensure the cluster functions properly. ++> [!CAUTION] +> Before deleting a large cluster, scale the cluster down to 120 nodes or fewer. +> ++## Deploy a cluster ++For clusters with over 101 nodes, use the following [virtual machine instance types](support-policies-v4.md#supported-virtual-machine-sizes) (or similar, newer generation instance types) for the control plane nodes: ++- Standard_D32s_v3 +- Standard_D32s_v4 +- Standard_D32s_v5 ++Following is a sample Azure CLI command that deploys a cluster with Standard_D32s_v5 as the control plane node size: ++```azurecli +az aro create \ --resource-group $RESOURCEGROUP \ --name $CLUSTER \ --vnet aro-vnet \ --master-subnet master-subnet \ --worker-subnet worker-subnet \ --master-vm-size Standard_D32s_v5 +``` ++## Deploy infrastructure nodes for the cluster ++For clusters with over 101 nodes, infrastructure nodes are required to separate cluster workloads (such as Prometheus) from other workloads and minimize contention. + +> [!NOTE] +> It's recommended that you deploy three (3) infrastructure nodes per cluster for redundancy and scalability needs. +> ++The following instance types are recommended for infrastructure nodes: ++- Standard_E16as_v5 +- Standard_E16s_v5 ++For instructions on configuring infrastructure nodes, see [Deploy infrastructure nodes in an Azure Red Hat OpenShift cluster](howto-infrastructure-nodes.md). 
++## Add IP addresses to the cluster ++A maximum of 20 IP addresses can be added to a load balancer. One (1) IP address is needed per 65 nodes, so a cluster with 250 nodes requires a minimum of four (4) IP addresses. ++To add IP addresses to the load balancer using Azure CLI, run the following command: ++`az aro update -n [clustername] -g [resourcegroup] --lb-ip-count 20` ++To add IP addresses through a REST API call, run the following command: ++```azurecli +az rest --method patch --url https://management.azure.com/subscriptions/fe16a035-e540-4ab7-80d9-373fa9a3d6ae/resourceGroups/shared-cluster/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/shared-cluster?api-version=2023-07-01-preview --body '{"properties": {"networkProfile": {"loadBalancerProfile": {"managedOutboundIps": {"count": 5}}}}}' --headers "Content-Type=application/json" +``` + |
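The sizing rule above — one IP address per 65 nodes — is a ceiling division. A small check of the arithmetic (illustrative Python; `min_outbound_ips` is a hypothetical helper, not part of any SDK):

```python
import math

NODES_PER_IP = 65  # one outbound IP address serves up to 65 nodes

def min_outbound_ips(node_count):
    """Minimum load balancer IP addresses for a cluster of the given size."""
    return math.ceil(node_count / NODES_PER_IP)

print(min_outbound_ips(250))  # 4, matching the guidance above
```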
role-based-access-control | Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/storage.md | Azure service: [Storage](/azure/storage/) ## Microsoft.StorageCache -File caching for high-performance computing (HPC). +File caching and Lustre file system capabilities for high-performance computing (HPC). -Azure service: [Azure HPC Cache](/azure/hpc-cache/) +Azure > [!div class="mx-tableFixed"] > | Action | Description | > | | |-> | Microsoft.StorageCache/register/action | Registers the subscription for the storage cache resource provider and enables creation of Azure HPC Cache resources | +> | Microsoft.StorageCache/register/action | Registers the subscription for the storage cache resource provider and enables creation of Azure HPC Cache and Azure Managed Lustre resources | > | Microsoft.StorageCache/preflight/action | | > | Microsoft.StorageCache/checkAmlFSSubnets/action | Validates the subnets for Amlfilesystem | > | Microsoft.StorageCache/getRequiredAmlFSSubnetsSize/action | Calculate the number of ips needed |-> | Microsoft.StorageCache/unregister/action | Azure HPC Cache resource provider | +> | Microsoft.StorageCache/unregister/action | Azure HPC Cache and Azure Managed Lustre resource provider | > | Microsoft.StorageCache/amlFilesystems/read | Gets the properties of an amlfilesystem | > | Microsoft.StorageCache/amlFilesystems/write | Creates a new amlfilesystem, or updates an existing one | > | Microsoft.StorageCache/amlFilesystems/delete | Deletes the amlfilesystem instance | Azure service: [Storage](/azure/storage/) ## Next steps -- [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types)+- [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types) |
sap | Sap Hana High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md | -[1944799]:https://launchpad.support.sap.com/#/notes/1944799 [1928533]:https://launchpad.support.sap.com/#/notes/1928533 [2015553]:https://launchpad.support.sap.com/#/notes/2015553 [2178632]:https://launchpad.support.sap.com/#/notes/2178632 Before you begin, read the following SAP Notes and papers: - The required SAP kernel versions for Windows and Linux on Microsoft Azure. - SAP Note [2015553] lists the prerequisites for SAP-supported SAP software deployments in Azure. - SAP Note [2205917] has recommended OS settings for SUSE Linux Enterprise Server 12 (SLES 12) for SAP Applications.-- SAP Note [2684254] has recommended OS settings for SUSE Linux Enterprise Server 15 (SLES 15) for SAP Applications.+- SAP Note [2684254] has recommended OS settings for SUSE Linux Enterprise Server 15 (SLES 15) for SAP Applications. - SAP Note [2235581] has SAP HANA supported Operating systems - SAP Note [2178632] has detailed information about all the monitoring metrics that are reported for SAP in Azure. - SAP Note [2191498] has the required SAP host agent version for Linux in Azure. sudo crm configure primitive rsc_SAPHana_<HANA SID>_HDB<instance number> ocf:sus params SID="<HANA SID>" InstanceNumber="<instance number>" PREFER_SITE_TAKEOVER="true" \ DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false" +# Run the following command if the cluster nodes are running on SLES 12 SP05. sudo crm configure ms msl_SAPHana_<HANA SID>_HDB<instance number> rsc_SAPHana_<HANA SID>_HDB<instance number> \ meta notify="true" clone-max="2" clone-node-max="1" \ target-role="Started" interleave="true" +# Run the following command if the cluster nodes are running on SLES 15 SP03 or later. 
+sudo crm configure clone msl_SAPHana_<HANA SID>_HDB<instance number> rsc_SAPHana_<HANA SID>_HDB<instance number> \ + meta notify="true" clone-max="2" clone-node-max="1" \ + target-role="Started" interleave="true" promotable="true" + sudo crm resource meta msl_SAPHana_<HANA SID>_HDB<instance number> set priority 100 sudo crm configure primitive rsc_ip_<HANA SID>_HDB<instance number> ocf:heartbeat:IPaddr2 \ |
search | Index Add Suggesters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-suggesters.md | POST /indexes/myxboxgames/docs/autocomplete?search&api-version=2024-07-01 ## Sample code -+ [Add search to a web site (JavaScript)](tutorial-javascript-search-query-integration.md#azure-function-suggestions-from-the-catalog) uses an open source Suggestions package for partial term completion in the client app. ++ [Add search to a web site (C#)](tutorial-csharp-search-query-integration.md) uses an open source Suggestions package for partial term completion in the client app. ## Next steps |
search | Samples Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-dotnet.md | Code samples from the Azure AI Search team demonstrate features and workflows. A | [create-mvc-app](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/create-mvc-app) | [Tutorial: Add search to an ASP.NET Core (MVC) app](tutorial-csharp-create-mvc-app.md) | While most samples are console applications, this MVC sample uses a web page to front the sample Hotels index, demonstrating basic search, pagination, and other server-side behaviors.| | [quickstart](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/quickstart/v11) | [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md) | Covers the basic workflow for creating, loading, and querying a search index in C# using sample data. | | [quickstart-semantic-search](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/quickstart-semantic-search/) | [Quickstart: Semantic ranking using the Azure SDKs](search-get-started-semantic.md) | Shows the index schema and query request for invoking semantic ranking. 
|-| [search-website](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4) | [Tutorial: Add search to web apps](tutorial-csharp-overview.md) | Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.| +| [search-website](https://github.com/Azure-Samples/azure-search-static-web-app) | [Tutorial: Add search to web apps](tutorial-csharp-overview.md) | Demonstrates an end-to-end search app that includes bulk upload using the push APIs and a rich client for hosting the app and handling search requests.| | [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/tutorial-ai-enrichment) | [Tutorial: AI-generated searchable content from Azure blobs](cognitive-search-tutorial-blob-dotnet.md) | Shows how to configure an indexer and skillset. | | [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). | Merges content from two data sources into one search index. | [Optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing) | [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md).| Demonstrates optimization techniques for pushing data into a search index. | |
search | Samples Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-javascript.md | Learn about the JavaScript code samples that demonstrate the functionality and w | Target | Link | |--|| | Package download | [www.npmjs.com/package/@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents) |-| API reference | [@azure/search-documents](/javascript/api/@azure/search-documents/) | +| API reference | [@azure/search-documents](/javascript/api/@azure/search-documents/) | | API test cases | [github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/test](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/test) |-| Source code | [github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents) | +| Source code | [github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents) | ## SDK samples Code samples from the Azure SDK development team demonstrate API usage. You can ||-| | [indexes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [search indexes](search-what-is-an-index.md). This sample category also includes a service statistic sample. | | [dataSourceConnections (for indexers)](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/javascript/dataSourceConnectionOperations.js) | Demonstrates how to create, update, get, list, and delete indexer data sources, required for indexer-based indexing of [supported Azure data sources](search-indexer-overview.md#supported-data-sources). 
|-| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md).| -| [skillSet](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached indexers, and that perform AI-based enrichment during indexing. | -| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). | -| [VectorSearch](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v12-bet). | +| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md).| +| [skillSet](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached indexers, and that perform AI-based enrichment during indexing. | +| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). | +| [VectorSearch](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v12-bet). | ### TypeScript samples Code samples from the Azure SDK development team demonstrate API usage. 
You can ||-| | [indexes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/typescript/src) | Demonstrates how to create, update, get, list, and delete [search indexes](search-what-is-an-index.md). This sample category also includes a service statistic sample. | | [dataSourceConnections (for indexers)](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/dataSourceConnectionOperations.ts) | Demonstrates how to create, update, get, list, and delete indexer data sources, required for indexer-based indexing of [supported Azure data sources](search-indexer-overview.md#supported-data-sources). |-| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/typescript/src) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md).| -| [skillSet](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/skillSetOperations.ts) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached indexers, and that perform AI-based enrichment during indexing. | -| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/synonymMapOperations.ts) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). | -| [VectorSearch](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v12/typescript/src/vectorSearch.ts) | Demonstrates how to index vectors and send a [vector query](vector-search-how-to-query.md). 
| +| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/typescript/src) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md).| +| [skillSet](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/skillSetOperations.ts) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached to indexers, and that perform AI-based enrichment during indexing. | +| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/synonymMapOperations.ts) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). | +| [VectorSearch](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v12/typescript/src/vectorSearch.ts) | Demonstrates how to index vectors and send a [vector query](vector-search-how-to-query.md). | ## Doc samples Code samples from the Azure AI Search team demonstrate features and workflows. M | Samples | Article | ||| | [quickstart](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/quickstart) | Source code for the JavaScript portion of [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md). Covers the basic workflow for creating, loading, and querying a search index using sample data. |-| [search-website](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-javascript-overview.md). 
Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.| -+| [bulk-insert](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/bulk-insert) | Source code for the JavaScript example of how to [use the push APIs](search-how-to-load-search-index.md) to upload and index documents. | +| [azure-functions](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/azure-function) | Source code for the JavaScript example of an Azure function that sends queries to a search service. You can substitute this JavaScript version of the `api` code used in the [Add search to web sites](tutorial-csharp-overview.md) C# sample. | > [!TIP] > Try the [Samples browser](/samples/browse/?languages=javascript&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language. The following samples are also published by the Azure AI Search team, but aren't | Samples | Description | ||-|-| [azure-search-vector-sample.js](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript/readme.md) | Vector search sample using the Azure SDK for JavaScript | -| [azure-search-react-template](https://github.com/dereklegenzoff/azure-search-react-template) | React template for Azure AI Search (github.com) | +| [azure-search-vector-sample.js](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript/readme.md) | Vector search sample using the Azure SDK for JavaScript | |
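The VectorSearch samples above index vectors and score them against a query vector. As a conceptual aside (not taken from the samples, and sketched in plain Python rather than the SDK), the ranking step amounts to nearest-neighbor scoring with a similarity measure such as cosine similarity; the document IDs and vectors below are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / mag

def vector_query(query_vec, indexed_docs, k=2):
    # Rank stored vectors by similarity to the query; return the top-k doc IDs.
    ranked = sorted(indexed_docs,
                    key=lambda d: cosine_similarity(query_vec, indexed_docs[d]),
                    reverse=True)
    return ranked[:k]

docs = {"doc1": [1.0, 0.0], "doc2": [0.9, 0.1], "doc3": [0.0, 1.0]}
print(vector_query([1.0, 0.05], docs))  # nearest neighbors first
```

The real service performs this at scale with approximate nearest-neighbor indexes rather than the brute-force scan shown here.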
search | Samples Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-python.md | Code samples from the Azure AI Search team demonstrate features and workflows. M ||| | [quickstart](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart) | Source code for the Python portion of [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md). This article covers the basic workflow for creating, loading, and querying a search index using sample data. | | [quickstart-semantic-search](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Semantic-Search) | Source code for the Python portion of [Quickstart: Semantic ranking using the Azure SDKs](search-get-started-semantic.md). It shows the index schema and query request for invoking semantic ranking. |-| [search-website-functions-v4](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-python-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.| -<!-- | [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Tutorial-AI-Enrichment) | Source code for [Tutorial: Use Python and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob-python.md). This article shows how to create a blob indexer with a cognitive skillset, where the skillset creates and transforms raw content to make it searchable or consumable. | --> +| [bulk-insert](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/bulk-insert) | Source code for the Python example of how to [use the push APIs](search-how-to-load-search-index.md) to upload and index documents. 
| +| [azure-functions](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/azure-function) | Source code for the Python example of an Azure function that sends queries to a search service. You can substitute this Python version of the `api` code used in the [Add search to web sites](tutorial-csharp-overview.md) C# sample. | ## Demos |
search | Search Add Autocomplete Suggestions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-add-autocomplete-suggestions.md | Although you could write this code natively, it's easier to use functions from e + [XDSoft Autocomplete plug-in](https://xdsoft.net/jqplugins/autocomplete/) appears in the Autocomplete code snippet. -+ [suggestions](https://www.npmjs.com/package/suggestions) appears in the [JavaScript tutorial](tutorial-javascript-overview.md) and code sample. ++ [suggestions](https://www.npmjs.com/package/suggestions) appears in the [Add search to web sites tutorial](tutorial-csharp-overview.md) and code sample. Use these libraries in the client to create a search box supporting both suggestions and autocomplete. Inputs collected in the search box can then be paired with suggestions and autocomplete actions on the search service. The Autocomplete function takes the search term input. The method creates an [Au The following tutorial demonstrates a search-as-you-type experience. > [!div class="nextstepaction"]-> [Add search to a web site (JavaScript)](tutorial-javascript-search-query-integration.md#azure-function-suggestions-from-the-catalog) +> [Add search to a web site (C#)](tutorial-csharp-overview.md) |
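Conceptually, the suggestions feature described above narrows candidate results as each keystroke arrives. A toy Python sketch of suggest-as-you-type prefix matching (the catalog titles are invented for illustration; the real service matches against fields registered with a suggester in the index):

```python
def suggest(prefix, titles, top=5):
    # Case-insensitive prefix match over titles, truncated to `top` hits.
    p = prefix.lower()
    return [t for t in titles if t.lower().startswith(p)][:top]

catalog = ["Code Complete", "Clean Code", "Coders at Work", "The Pragmatic Programmer"]
print(suggest("co", catalog))
```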
search | Search Faceted Navigation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-faceted-navigation.md | If you build the list of facets dynamically based on untrusted user input, valid We recommend the following samples for faceted navigation. The samples also include filters, suggestions, and autocomplete. These samples use React for the presentation layer. * [C#: Add search to web apps](tutorial-csharp-overview.md)-* [Python: Add search to web apps](tutorial-python-overview.md) -* [JavaScript: Add search to web apps](tutorial-javascript-overview.md) |
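Faceted navigation, as covered in the article above, is driven by per-field value counts that the service returns alongside results. A minimal Python sketch of that tallying, including a list-valued (collection) field such as authors (the sample books are invented for illustration):

```python
from collections import Counter

def facet_counts(docs, field):
    # Tally distinct values, flattening list-valued (collection) fields.
    counts = Counter()
    for doc in docs:
        value = doc.get(field)
        if isinstance(value, list):
            counts.update(value)
        elif value is not None:
            counts[value] += 1
    return dict(counts)

books = [
    {"title": "A", "language_code": "eng", "authors": ["Austen"]},
    {"title": "B", "language_code": "eng", "authors": ["Bronte"]},
    {"title": "C", "language_code": "fre", "authors": ["Austen"]},
]
print(facet_counts(books, "language_code"))
print(facet_counts(books, "authors"))
```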
search | Search Get Started Semantic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-semantic.md | You can find and manage resources in the portal, using the **All resources** or In this quickstart, you learned how to invoke semantic ranking on an existing index. We recommend trying semantic ranking on your own indexes as a next step. However, if you want to continue with demos, visit the following link. > [!div class="nextstepaction"]-> [Tutorial: Add search to web apps](tutorial-python-overview.md) +> [Tutorial: Add search to web apps](tutorial-csharp-overview.md) |
search | Search What Is Azure Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md | An end-to-end exploration of core search features can be accomplished in four st 1. [**Create a search service**](search-create-service-portal.md) in the Azure portal. -1. [**Start with Import data wizard**](search-get-started-portal.md). Choose a built-in sample or a supported data source to create, load, and query an index in minutes. +1. [**Start with Import data wizard**](search-get-started-portal.md). Choose a built-in sample or a supported data source to create, load, and query an index in minutes. 1. [**Finish with Search Explorer**](search-explorer.md), using a portal client to query the search index you just created. |
search | Tutorial Csharp Create Load Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-load-index.md | -# Step 2 - Create and load Search Index with .NET +# Step 2 - Create and load the search index Continue to build your search-enabled website by following these steps:-* Create a search resource -* Create a new index -* Import data with .NET using the sample script and Azure SDK [Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/). -## Create an Azure AI Search resource +- Create a new index +- Load data +The program uses [Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/) in the Azure SDK for .NET: -## Prepare the bulk import script for Search +- [NuGet package Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/) +- [Reference Documentation](/dotnet/api/overview/azure/search) ++Before you start, make sure you have room on your search service for a new index. The free tier limit is three indexes. The Basic tier limit is 15. -The script uses the Azure SDK for Azure AI Search: +## Prepare the bulk import script for Search -* [NuGet package Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/) -* [Reference Documentation](/dotnet/api/overview/azure/search) +1. In Visual Studio Code, open the `Program.cs` file in the subdirectory, `azure-search-static-web-app/bulk-insert`, replace the following variables with your own values to authenticate with the Azure Search SDK. -1. 
In Visual Studio Code, open the `Program.cs` file in the subdirectory, `search-website-functions-v4/bulk-insert`, replace the following variables with your own values to authenticate with the Azure Search SDK: + - YOUR-SEARCH-SERVICE-NAME (not the full URL) + - YOUR-SEARCH-ADMIN-API-KEY (see [Find API keys](search-security-api-keys.md#find-existing-keys)) - * YOUR-SEARCH-RESOURCE-NAME - * YOUR-SEARCH-ADMIN-KEY + :::code language="csharp" source="~/azure-search-static-web-app/bulk-insert/Program.cs" ::: - :::code language="csharp" source="~/azure-search-dotnet-samples/search-website-functions-v4/bulk-insert/Program.cs" ::: +1. Open an integrated terminal in Visual Studio Code for the project directory's subdirectory, `azure-search-static-web-app/bulk-insert`. -1. Open an integrated terminal in Visual Studio Code for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, then run the following command to install the dependencies. +1. Run the following command to install the dependencies. ```bash dotnet restore The script uses the Azure SDK for Azure AI Search: ## Run the bulk import script for Search -1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, to run the following bash command to run the `Program.cs` script: +1. Still in the same subdirectory (`azure-search-static-web-app/bulk-insert`), run the program: ```bash dotnet run ``` -1. As the code runs, the console displays progress. -1. When the upload is complete, the last statement printed to the console is "Finished bulk inserting book data". +1. As the code runs, the console displays progress. You should see the following output. 
++ ```bash + Creating (or updating) search index + Status: 201, Value: Azure.Search.Documents.Indexes.Models.SearchIndex + Download data file + Reading and parsing raw CSV data + Uploading bulk book data + Finished bulk inserting book data + ``` ++## Review the new search index ++Once the upload completes, the search index is ready to use. Review your new index in Azure portal. ++1. In Azure portal, [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). ++1. On the left, select **Search Management > Indexes**, and then select the good-books index. ++ :::image type="content" source="media/tutorial-csharp-create-load-index/azure-portal-indexes-page.png" lightbox="media/tutorial-csharp-create-load-index/azure-portal-indexes-page.png" alt-text="Expandable screenshot of Azure portal showing the index." border="true"::: -## Review the new Search Index +1. By default, the index opens in the **Search Explorer** tab. Select **Search** to return documents from the index. + :::image type="content" source="media/tutorial-csharp-create-load-index/azure-portal-search-explorer.png" lightbox="media/tutorial-csharp-create-load-index/azure-portal-search-explorer.png" alt-text="Expandable screenshot of Azure portal showing search results" border="true"::: ## Rollback bulk import file changes +Use the following git command in the Visual Studio Code integrated terminal at the `bulk-insert` directory to roll back the changes to the `Program.cs` file. They aren't needed to continue the tutorial and you shouldn't save or push your API keys or search service name to your repo. -## Copy your Search resource name +```git +git checkout . +``` ## Next steps |
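The bulk-insert console output above shows the program reading and parsing raw CSV data before uploading. A hedged Python sketch of that parse step (the column names `book_id`, `title`, and `authors`, and the semicolon author separator, are illustrative assumptions, not the sample's actual schema):

```python
import csv
import io

def parse_books_csv(raw_csv):
    # Turn CSV rows into search documents; the index key must be a string.
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [
        {"id": str(row["book_id"]),
         "title": row["title"],
         "authors": row["authors"].split(";")}
        for row in reader
    ]

raw = "book_id,title,authors\n1,Good Book,Jane Doe;John Roe\n2,Better Book,Ann Poe\n"
docs = parse_books_csv(raw)
print(len(docs), docs[0]["authors"])
```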
search | Tutorial Csharp Deploy Static Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-deploy-static-web-app.md | Title: "Deploy search app (.NET tutorial)" -description: Deploy search-enabled website with .NET apis to Azure Static web app. +description: Deploy search-enabled website with .NET APIs to Azure Static web app. Previously updated : 04/25/2024 Last updated : 08/16/2024 - devx-track-csharp - devx-track-dotnet ms.devlang: csharp # Step 3 - Deploy the search-enabled .NET website +Deploy the search-enabled website as an Azure Static Web Apps site. This deployment includes both the React app for the web pages, and the Function app for search operations. ++The static web app pulls the information and files for deployment from GitHub using your fork of the azure-search-static-web-app repository. ++## Create a Static Web App in Visual Studio Code ++1. In Visual Studio Code, make sure you're at the repository root, and not the bulk-insert folder (for example, `azure-search-static-web-app`). ++1. Select **Azure** from the Activity Bar, then open **Resources** from the side bar. ++1. Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**. If you don't see this option, verify that you have the Azure Functions extension for Visual Studio Code. ++ :::image type="content" source="media/tutorial-csharp-static-web-app/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Screenshot of Visual Studio Code, with the Azure Static Web Apps explorer showing the option to create an advanced static web app."::: ++1. If you see a pop-up window asking you to commit your changes, don't do this. The secrets from the bulk import step shouldn't be committed to the repository. ++ To roll back the changes, in Visual Studio Code select the Source Control icon in the Activity bar, then select each changed file in the Changes list and select the **Discard changes** icon. ++1. 
Follow the prompts to create the static web app: ++ |Prompt|Enter| + |--|--| + |Select a resource group for new resources. | Create a new resource group for the static app.| + |Enter the name for the new Static Web App. | Give your static app a name, such as `my-demo-static-web-app`. | + |Select a SKU | Select the free SKU for this tutorial.| + |Select a location for new resources. | Choose a region near you. | + |Choose build preset to configure default project structure. |Select **Custom**. | + |Select the location of your client application code | `client`<br><br>This is the path, from the root of the repository, to your static web app. | + |Enter the path of your build output... | `build`<br><br>This is the path, from your static web app, to your generated files.| ++ If you get an error about an incorrect region, make sure the resource group and static web app resource are in one of the supported regions listed in the error response. ++1. When the static web app is created, a GitHub workflow YML file is also created locally and on GitHub in your fork. This workflow executes in your fork, building and deploying the static web app and functions. ++ Check the status of static web app deployment using any of these approaches: ++ * Select **Open Actions in GitHub** from the Notifications. This opens a browser window pointed to your forked repo. + * Select the **Actions** tab in your forked repository. You should see a list of all workflows on your fork. + * Select the **Azure: Activity Log** in Visual Studio Code. You should see a message similar to the following screenshot. ++ :::image type="content" source="media/tutorial-csharp-static-web-app/visual-studio-code-azure-activity-log.png" alt-text="Screenshot of the Activity Log in Visual Studio Code." border="true"::: ++## Get the Azure AI Search query key in Visual Studio Code ++While you might be tempted to reuse your search admin key for query purposes, doing so doesn't follow the principle of least privilege. 
The Azure Function should use the query key to conform to least privilege. ++1. In Visual Studio Code, open a new terminal window. ++1. Get the query API key with this Azure CLI command: ++ ```azurecli + az search query-key list --resource-group YOUR-SEARCH-SERVICE-RESOURCE-GROUP --service-name YOUR-SEARCH-SERVICE-NAME + ``` ++1. Keep this query key to use in the next section. The query key authorizes read access to a search index. ++## Add environment variables in Azure portal ++The Azure Function app won't return search data until the search secrets are in settings. ++1. Select **Azure** from the Activity Bar. ++1. Right-click on your Static Web Apps resource then select **Open in Portal**. ++ :::image type="content" source="media/tutorial-csharp-static-web-app/open-static-web-app-in-azure-portal.png" alt-text="Screenshot of Visual Studio Code showing Azure Static Web Apps explorer with the Open in Portal option shown."::: ++1. Select **Environment variables** then select **+ Add application setting**. ++ :::image type="content" source="media/tutorial-csharp-static-web-app/add-new-application-setting-to-static-web-app-in-portal.png" alt-text="Screenshot of the static web app's environment variables page in the Azure portal."::: ++1. Add each of the following settings: ++ |Setting|Your Search resource value| + |--|--| + |SearchApiKey|Your search query key| + |SearchServiceName|Your search resource name| + |SearchIndexName|`good-books`| + |SearchFacets|`authors*,language_code`| ++ Azure AI Search requires different syntax for filtering collections than it does for strings. Add a `*` after a field name to denote that the field is of type `Collection(Edm.String)`. This allows the Azure Function to add filters correctly to queries. ++1. Check your settings to make sure they look like the following screenshot. 
++ :::image type="content" source="media/tutorial-csharp-static-web-app/save-new-application-setting-to-static-web-app-in-portal.png" alt-text="Screenshot of browser showing Azure portal with the button to save the settings for your app."::: ++1. Return to Visual Studio Code. ++1. Refresh your static web app to see the application settings and functions. ++ :::image type="content" source="media/tutorial-csharp-static-web-app/visual-studio-code-extension-fresh-resource-2.png" alt-text="Screenshot of Visual Studio Code showing the Azure Static Web Apps explorer with the new application settings." border="true"::: ++If you don't see the application settings, revisit the steps for updating and relaunching the GitHub workflow. ++## Use search in your static web app ++1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon. ++1. In the Side bar, **right-click on your Azure subscription** under the `Static Web Apps` area and find the static web app you created for this tutorial. ++1. Right-click the static web app name and select **Browse site**. ++ :::image type="content" source="media/tutorial-csharp-static-web-app/visual-studio-code-browse-static-web-app.png" alt-text="Screenshot of Visual Studio Code showing the Azure Static Web Apps explorer showing the **Browse site** option."::: ++1. Select **Open** in the pop-up dialog. ++1. In the website search bar, enter a search query such as `code`, so the suggest feature suggests book titles. Select a suggestion or continue entering your own query. Press enter when you've completed your search query. ++1. Review the results then select one of the books to see more details. 
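The `SearchFacets` setting above (`authors*,language_code`) encodes which fields are collections, because Azure AI Search filters a `Collection(Edm.String)` field with the OData `any(...)` form rather than a plain `eq` comparison. A Python sketch of how a function might turn that setting plus the user's facet selections into an OData filter string (the helper name and shape are assumptions, not the sample's code):

```python
def build_filter(facets_setting, selections):
    # A trailing '*' in the setting marks a Collection(Edm.String) field,
    # which needs the any(...) OData form instead of a plain eq comparison.
    collection_fields = {f[:-1] for f in facets_setting.split(",") if f.endswith("*")}
    clauses = []
    for field, value in selections.items():
        if field in collection_fields:
            clauses.append(f"{field}/any(t: t eq '{value}')")
        else:
            clauses.append(f"{field} eq '{value}'")
    return " and ".join(clauses)

print(build_filter("authors*,language_code",
                   {"authors": "Jane Austen", "language_code": "eng"}))
```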
++## Troubleshooting ++If the web app didn't deploy or work, use the following list to determine and fix the issue: ++* **Did the deployment succeed?** ++ In order to determine if your deployment succeeded, you need to go to _your_ fork of the sample repo and review the success or failure of the GitHub action. There should be only one action and it should have static web app settings for the `app_location`, `api_location`, and `output_location`. If the action didn't deploy successfully, dive into the action logs and look for the last failure. ++* **Does the client (front-end) application work?** ++ You should be able to get to your web app and it should successfully display. If the deployment succeeded but the website doesn't display, this might be an issue with how the static web app is configured for rebuilding the app once it's on Azure. ++* **Does the API (serverless back-end) application work?** ++ You should be able to interact with the client app, searching for books and filtering. If the form doesn't return any values, open the browser's developer tools, and determine if the HTTP calls to the API were successful. If the calls weren't successful, the most likely reason is that the static web app configurations for the API endpoint name and search query key are incorrect. ++ If the path to the Azure function code (`api_location`) isn't correct in the YML file, the application loads but won't call any of the functions that provide integration with Azure AI Search. Revisit the deployment section to make sure paths are correct. ++## Clean up resources ++To clean up the resources created in this tutorial, delete the resource group or individual resources. ++1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon. ++1. In the Side bar, **right-click on your Azure subscription** under the `Static Web Apps` area and find the app you created for this tutorial. ++1. 
Right-click the app name then select **Delete**. ++1. If you no longer want the GitHub fork of the sample, remember to delete that on GitHub. Go to your fork's **Settings** then delete the repository. ++1. To delete Azure AI Search, [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) and select **Delete** at the top of the page. ## Next steps |
search | Tutorial Csharp Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-overview.md | -# Step 1 - Overview of adding search to a website with .NET +# Step 1 - Overview of adding search to a static web app with .NET This tutorial builds a website to search through a catalog of books and then deploys the website to an Azure static web app. ## What does the sample do? +This sample website provides access to a catalog of 10,000 books. You can search the catalog by entering text in the search bar. While you enter text, the website uses the search index's [suggestion feature](search-add-autocomplete-suggestions.md) to autocomplete the text. Once the query finishes, the list of books is displayed with a portion of the details. You can select a book to see all of the book's details that are stored in the search index. +++The search experience includes: ++- [Search](search-query-create.md) – provides search functionality for the application. +- [Suggest](search-add-autocomplete-suggestions.md) – provides suggestions as the user is typing in the search bar. +- [Facets and filters](search-faceted-navigation.md) – provides a faceted navigation structure that filters by author or language. +- [Paginated results](search-pagination-page-layout.md) – provides paging controls for scrolling through results. +- [Document Lookup](search-query-overview.md#document-look-up) – looks up a document by ID to retrieve all of its contents for the details page. ## How is the sample organized? -The [sample code](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4) includes the following folders: +The [sample code](https://github.com/Azure-Samples/azure-search-static-web-app) includes the following components: |App|Purpose|GitHub<br>Repository<br>Location| |--|--|--|-|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. 
|[/search-website-functions-v4/client](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4/client)| -|Server|Azure .NET Function app (business layer) - calls the Azure AI Search API using .NET SDK |[/search-website-functions-v4/api](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4/api)| -|Bulk insert|.NET file to create the index and add documents to it.|[/search-website-functions-v4/bulk-insert](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4/bulk-insert)| +|client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/azure-search-static-web-app/client](https://github.com/Azure-Samples/azure-search-static-web-app/tree/main/client)| +|api|Azure .NET Function app (business layer) - calls the Azure AI Search API using .NET SDK |[/azure-search-static-web-app/api](https://github.com/Azure-Samples/azure-search-static-web-app/tree/main/api)| +|bulk insert|.NET project to create the index and add documents to it.|[/azure-search-static-web-app/bulk-insert](https://github.com/Azure-Samples/azure-search-static-web-app/tree/main/bulk-insert)| ## Set up your development environment -Install the following software for your local development environment. +Create services and install the following software for your local development environment. +- [Azure AI Search](search-create-service-portal.md), any region or tier - [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0) or later - [Git](https://git-scm.com/downloads)-- [Visual Studio Code](https://code.visualstudio.com/) and the following extensions- - [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) - - Use the integrated terminal for command line operations. 
-- Optional:- - This tutorial doesn't run the Azure Function API locally but if you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash#install-the-azure-functions-core-tools). +- [Visual Studio Code](https://code.visualstudio.com/) +- [C# Dev Tools extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csdevkit) +- [Azure Static Web App extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) ++This tutorial doesn't run the Azure Function API locally but if you intend to run it locally, install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash#install-the-azure-functions-core-tools). ## Fork and clone the search sample with git Forking the sample repository is critical to be able to deploy the Static Web App. The web apps determine the build actions and deployment content based on your own GitHub fork location. Code execution in the Static Web App is remote, with Azure Static Web Apps reading from the code in your forked sample. -1. On GitHub, fork the [sample repository](https://github.com/Azure-Samples/azure-search-dotnet-samples). +1. On GitHub, fork the [azure-search-static-web-app repository](https://github.com/Azure-Samples/azure-search-static-web-app). - Complete the fork process in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App. + Complete the [fork process](https://docs.github.com/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo) in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App. 1. At a Bash terminal, download your forked sample application to your local computer. Replace `YOUR-GITHUB-ALIAS` with your GitHub alias. 
```bash- git clone https://github.com/YOUR-GITHUB-ALIAS/azure-search-dotnet-samples + git clone https://github.com/YOUR-GITHUB-ALIAS/azure-search-static-web-app.git ``` 1. At the same Bash terminal, go into your forked repository for this website search example: ```bash- cd azure-search-dotnet-samples + cd azure-search-static-web-app ``` 1. Use the Visual Studio Code command, `code .` to open your forked repository. The remaining tasks are accomplished from Visual Studio Code, unless specified. Forking the sample repository is critical to be able to deploy the Static Web Ap code . ``` -## Create a resource group for your Azure resources -- ## Next steps -* [Create a Search Index and load with documents](tutorial-csharp-create-load-index.md) -* [Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md) +- [Create an index and load it with documents](tutorial-csharp-create-load-index.md) +- [Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md) |
search | Tutorial Csharp Search Query Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-search-query-integration.md | ms.devlang: csharp # Step 4 - Explore the .NET search code -In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your web app, this article explains what you need to know. --The application is available: -* [Sample](https://github.com/azure-samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4) -* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books) +In the previous lessons, you added search to a static web app. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your web app, this article explains what you need to know. ## Azure SDK Azure.Search.Documents The Function app uses the Azure SDK for Azure AI Search: * NuGet: [Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/) * Reference Documentation: [Client Library](/dotnet/api/overview/azure/search) -The Function app authenticates through the SDK to the cloud-based Azure AI Search API using your resource name, resource key, and index name. The secrets are stored in the Static Web App settings and pulled in to the Function as environment variables. +The function app authenticates through the SDK to the cloud-based Azure AI Search API using your resource name, resource key, and index name. The secrets are stored in the static web app settings and pulled in to the function as environment variables. 
## Configure secrets in a local.settings.json file ## Azure Function: Search the catalog -The `Search` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/search-website-functions-v4/api/Search.cs) takes a search term and searches across the documents in the Search Index, returning a list of matches. +The [Search API](https://github.com/Azure-Samples/azure-search-static-web-app/blob/main/api/Search.cs) takes a search term and searches across the documents in the search index, returning a list of matches. -The Azure Function pulls in the Search configuration information, and fulfills the query. +The Azure function pulls in the search configuration information, and fulfills the query. ## Client: Search from the catalog Call the Azure Function in the React client with the following code. ## Azure Function: Suggestions from the catalog -The `Suggest` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/search-website-functions-v4/api/Suggest.cs) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches. +The [Suggest API](https://github.com/Azure-Samples/azure-search-static-web-app/blob/main/api/Suggest.cs) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches. -The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/search-website-functions-v4/bulk-insert/BookSearchIndex.cs) used during bulk upload. +The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-static-web-app/blob/main/bulk-insert/BookSearchIndex.cs) used during bulk upload. 
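The essential query contract — a search term in, a page of matching documents out — comes down to a small options object passed to the SDK's search call. The helper below is a hypothetical sketch in JavaScript; the page size and option names are assumptions rather than the sample's exact values:

```javascript
// Build the options a search handler might pass to the SDK's search()
// call. The page size and skip/top arithmetic are illustrative
// assumptions, not the sample's exact values.
const RESULTS_PER_PAGE = 8;

function buildSearchOptions(term, page = 1) {
  if (!term || !term.trim()) throw new Error("A search term is required");
  return {
    searchText: term.trim(),
    top: RESULTS_PER_PAGE,                  // page size
    skip: (page - 1) * RESULTS_PER_PAGE,    // offset into the result set
    includeTotalCount: true,                // needed to render a pager
  };
}
```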
## Client: Suggestions from the catalog The Suggest function API is called in the React app at `\client\src\components\SearchBar\SearchBar.js` as part of component initialization: ## Azure Function: Get specific document -The `Lookup` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/search-website-functions-v4/api/Lookup.cs) takes an ID and returns the document object from the Search Index. +The [Document Lookup API](https://github.com/Azure-Samples/azure-search-static-web-app/blob/main/api/Lookup.cs) takes an ID and returns the document object from the search index. ## Client: Get specific document This function API is called in the React app at `\client\src\pages\Details\Detail.js` as part of component initialization: ## C# models to support function app The following models are used to support the functions in this app. ## Next steps +To continue learning about Azure AI Search development, try this next tutorial about indexing: + * [Index Azure SQL data](search-indexer-tutorial.md) |
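The lookup contract is even simpler: an ID in, one document out, with a failure for unknown IDs. The sketch below illustrates that contract with a purely hypothetical in-memory store standing in for the search index; the real function retrieves the document through the search SDK:

```javascript
// In-memory stand-in for document lookup by key. Purely illustrative:
// the real Lookup function fetches the document from the search index
// via the SDK. Keys are coerced to strings, as route parameters arrive
// as strings.
function makeLookup(documents) {
  const byId = new Map(documents.map((doc) => [String(doc.id), doc]));
  return function lookup(id) {
    const doc = byId.get(String(id));
    if (!doc) throw new Error(`Document not found: ${id}`);
    return doc;
  };
}
```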
search | Tutorial Javascript Create Load Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md | - Title: "Load an index (JavaScript tutorial)"- -description: Create index and import CSV data into Search index with JavaScript using the npm SDK @azure/search-documents. ----- Previously updated : 07/22/2024-- - devx-track-js - - devx-track-azurecli - - devx-track-azurepowershell - - ignite-2023 ---# 2 - Create and load Search Index with JavaScript --Continue to build your search-enabled website by following these steps: --* Create a search resource -* Create a new index -* Import data with JavaScript using the [bulk_insert_books script](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/main/search-website-functions-v4/bulk-insert/bulk_insert_books.js) and Azure SDK [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents). --## Create an Azure AI Search resource ---## Prepare the bulk import script for Search --The ESM script uses the Azure SDK for Azure AI Search: --* [npm package @azure/search-documents](https://www.npmjs.com/package/@azure/search-documents) -* [Reference Documentation](/javascript/api/overview/azure/search-documents-readme) --1. In Visual Studio Code, open the `bulk_insert_books.js` file in the subdirectory, `search-website-functions-v4/bulk-insert`, replace the following variables with your own values to authenticate with the Azure Search SDK: -- * YOUR-SEARCH-RESOURCE-NAME - * YOUR-SEARCH-ADMIN-KEY -- :::code language="javascript" source="~/azure-search-javascript-samples/search-website-functions-v4/bulk-insert/bulk_insert_books.js" ::: --1. Open an integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, and run the following command to install the dependencies. -- ```bash - npm install - ``` --## Run the bulk import script for Search --1. 
Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, to run the `bulk_insert_books.js` script: -- ```bash - npm start - ``` --1. As the code runs, the console displays progress. -1. When the upload is complete, the last statement printed to the console is "done". --## Review the new search index ---## Rollback bulk import file changes ---## Copy your Search resource name ---## Next steps --[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md) |
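Bulk import scripts like `bulk_insert_books.js` typically upload documents in batches rather than one request per document. A hedged sketch of the chunking step (the 1,000-document batch size reflects a common Azure AI Search per-request limit, but treat the exact number as an assumption):

```javascript
// Split a large document array into batches for upload. Azure AI Search
// accepts a limited number of documents per indexing request (commonly
// cited as 1,000 — treat the exact limit as an assumption).
function chunkDocuments(documents, batchSize = 1000) {
  if (batchSize < 1) throw new Error("batchSize must be at least 1");
  const batches = [];
  for (let i = 0; i < documents.length; i += batchSize) {
    batches.push(documents.slice(i, i + batchSize));
  }
  return batches;
}
```

Each batch would then be sent with one SDK upload call, so a 2,500-book catalog becomes three requests instead of 2,500.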
search | Tutorial Javascript Deploy Static Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-deploy-static-web-app.md | - Title: "Deploy search app (JavaScript tutorial)"- -description: Deploy search-enabled website to Azure Static Web Apps. ----- Previously updated : 07/22/2024-- - devx-track-js - - ignite-2023 ---# 3 - Deploy the search-enabled website ---## Next steps --[Explore the JavaScript search code](tutorial-javascript-search-query-integration.md) |
search | Tutorial Javascript Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-overview.md | - Title: "Add search to web sites (JavaScript tutorial)"- -description: Technical overview and setup for adding search to a website and deploying to an Azure Static Web Apps. ----- Previously updated : 07/22/2024-- - devx-track-js - - ignite-2023 ---# 1 - Overview of adding search to a website --In this Azure AI Search tutorial, create a web app that searches through a catalog of books, and then deploy the website to an Azure Static Web Apps resource. --This tutorial is for JavaScript developers who want to create a frontend client app that includes search interactions like faceted navigation, typeahead, and pagination. It also demonstrates the `@azure/search-documents` library in the Azure SDK for JavaScript for calls to Azure AI Search for indexing and query workflows on the backend. --## What does the sample do? ---## How is the sample organized? --The [sample code](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/search-website-functions-v4) includes the following components: --|App|Purpose|GitHub<br>Repository<br>Location| -|--|--|--| -|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. 
|[/search-website-functions-v4/client](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/search-website-functions-v4/client)| -|Server|Azure Function app (business layer) - calls the Azure AI Search API using JavaScript SDK |[/search-website-functions-v4/api](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/search-website-functions-v4/api)| -|Bulk insert|JavaScript file to create the index and add documents to it.|[/search-website-functions-v4/bulk-insert](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/search-website-functions-v4/bulk-insert)| --## Set up your development environment --Install the following software in your local development environment. --- [Node.js LTS](https://nodejs.org/en/download)- - Select latest runtime and version from this [list of supported language versions](../azure-functions/functions-versions.md?pivots=programming-language-javascript&tabs=azure-cli%2clinux%2cin-process%2cv4#languages). - - If you have a different version of Node.js installed on your local computer, consider using [Node Version Manager](https://github.com/nvm-sh/nvm) (`nvm`) or a Docker container. -- [Git](https://git-scm.com/downloads)-- [Visual Studio Code](https://code.visualstudio.com/) and the following extensions- - [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) - - Use the integrated terminal for command line operations. --- Optional:- - This tutorial doesn't run the Azure Function API locally. If you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash) globally with the following bash command: - - ```bash - npm install -g azure-functions-core-tools@4 - ``` --## Fork and clone the search sample with git --Forking the sample repository is critical to be able to deploy the Static Web App. 
The static web app determines the build actions and deployment content based on your own GitHub fork location. Code execution in the Static Web App is remote, with the static web app reading from the code in your forked sample. --1. On GitHub, [fork the sample repository](https://github.com/Azure-Samples/azure-search-javascript-samples/fork). -- Complete the fork process in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App. --1. At a bash terminal, download your forked sample application to your local computer. -- Replace `YOUR-GITHUB-ALIAS` with your GitHub alias. -- ```bash - git clone https://github.com/YOUR-GITHUB-ALIAS/azure-search-javascript-samples - ``` --1. At the same bash terminal, go into your forked repository for this website search example: -- ```bash - cd azure-search-javascript-samples - ``` --1. Use the Visual Studio Code command, `code .` to open your forked repository. The remaining tasks are accomplished from Visual Studio Code, unless specified. -- ```bash - code . - ``` --## Create a resource group for your Azure resources ---## Next steps --[Create a Search Index and load with documents](tutorial-javascript-create-load-index.md) |
search | Tutorial Javascript Search Query Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-search-query-integration.md | - Title: "Explore code (JavaScript tutorial)"- -description: Understand the JavaScript SDK Search integration queries used in the Search-enabled website with this cheat sheet. ----- Previously updated : 07/22/2024-- - devx-track-js - - ignite-2023 ---# 4 - Explore the JavaScript search code --In the previous lessons, you added search to a static web app. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your JavaScript app, this article explains what you need to know. --The source code is available in the [azure-search-javascript-samples](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/search-website-functions-v4) GitHub repository. --## Azure SDK @azure/search-documents --The Function app uses the Azure SDK for Azure AI Search: --* NPM: [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents) -* Reference Documentation: [Client Library](/javascript/api/overview/azure/search-documents-readme) --The Function app authenticates through the SDK to the cloud-based Azure AI Search API using your resource name, [API key](search-security-api-keys.md), and index name. The secrets are stored in the static web app settings and pulled in to the function as environment variables. --## Configure secrets in a configuration file ---## Azure Function: Search the catalog --The [Search API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/main/search-website-functions-v4/api/src/functions/search.js) takes a search term and searches across the documents in the search index, returning a list of matches. --The Azure Function pulls in the search configuration information, and fulfills the query. 
---## Client: Search from the catalog --Call the Azure Function in the React client with the following code. ---## Client: Facets from the catalog --This React component includes the search textbox and the [**facets**](search-faceted-navigation.md) associated with the search results. Facets need to be thought out and designed as part of the search schema when the search data is loaded. Then the facets are used in the search query, along with the search text, to provide the faceted navigation experience. ---## Client: Pagination from the catalog --When the search results expand beyond a trivial few (8), the `@mui/material/TablePagination` component provides **pagination** across the results. ---When the user changes the page, that value is sent to the parent `Search.js` page from the `handleChangePage` function. The function sends a new request to the search API for the same query and the new page. The API response updates the facets, results, and pager components. --## Azure Function: Suggestions from the catalog --The [Suggest API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/main/search-website-functions-v4/api/src/functions/suggest.js) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches. --The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/main/search-website-functions-v4/bulk-insert/good-books-index.json) used during bulk upload. ---## Client: Suggestions from the catalog --The Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization: ---This React component uses the `@mui/material/Autocomplete` component to provide a search textbox, which also supports displaying suggestions (using the `renderInput` function). 
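The page-change flow described above — same query, new page — reduces to recomputing the `skip` offset and clamping the requested page to the available range. A hypothetical sketch (the page size of 8 mirrors the text; the helper name is illustrative):

```javascript
// Turn a pager event into the next search request. The same query is
// re-sent with an updated skip value; the requested page is clamped to
// the range implied by the result count. Names are illustrative.
const PAGE_SIZE = 8;

function pageToRequest(query, page, totalCount) {
  const totalPages = Math.max(1, Math.ceil(totalCount / PAGE_SIZE));
  const clamped = Math.min(Math.max(1, page), totalPages);
  return { query, skip: (clamped - 1) * PAGE_SIZE, top: PAGE_SIZE, totalPages };
}
```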
Autocomplete starts after the first several characters are entered. As each new character is entered, it's sent as a query to the search engine. The results are displayed as a short list of suggestions. --This autocomplete functionality is a common feature but this specific implementation has an additional use case. The customer can enter text and select from the suggestions _or_ submit their entered text. The input from the suggestion list as well as the input from the textbox must be tracked for changes, which impact how the form is rendered and what is sent to the **search** API when the form is submitted. --If your use case for search allows your user to select only from the suggestions, that will reduce the scope of complexity of the control but limit the user experience. --## Azure Function: Get specific document --The [Lookup API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/main/search-website-functions-v4/api/src/functions/lookup.js) takes an ID and returns the document object from the search index. ---## Client: Get specific document --This function API is called in the React app at `\src\pages\Details\Detail.js` as part of component initialization: ---If your client app can use pregenerated content, this page is a good candidate for autogeneration because the content is static, pulled directly from the search index. --## Next steps --In this tutorial series, you learned how to create and load a search index in JavaScript, and you built a web app that provides a search experience that includes a search bar, faceted navigation and filters, suggestions, pagination, and document lookup. --As a next step, you can extend this sample in several directions: --* Add [autocomplete](search-add-autocomplete-suggestions.md) for more typeahead. -* Add or modify [facets](search-faceted-navigation.md) and [filters](search-filters.md). 
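Because the form can be submitted from either the suggestion list or the raw textbox, both inputs have to be tracked, and new keystrokes must invalidate a stale selection. A minimal reducer-style sketch of that state handling (the action names are assumptions, not the sample's actual handlers):

```javascript
// Track both the free-typed text and any selected suggestion, since the
// form can be submitted from either. Action names are illustrative
// assumptions, not the sample's handlers.
function searchBarReducer(state, action) {
  switch (action.type) {
    case "type":
      // New keystrokes invalidate any previously selected suggestion.
      return { typed: action.text, selected: null };
    case "select":
      return { ...state, selected: action.suggestion };
    case "submit":
      // Prefer the explicit selection; fall back to the typed text.
      return { ...state, submitted: state.selected ?? state.typed };
    default:
      return state;
  }
}
```

Restricting submission to suggestions only would remove the fallback branch, simplifying the control at the cost of the free-text experience, as the text above notes.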
-* Change the authentication and authorization model, using [Microsoft Entra ID](search-security-rbac.md) instead of [key-based authentication](search-security-api-keys.md). -* Change the [indexing methodology](search-what-is-data-import.md). Instead of pushing JSON to a search index, preload a blob container with the good-books dataset and [set up a blob indexer](search-howto-indexing-azure-blob-storage.md) to ingest the data. Knowing how to work with indexers gives you more options for data ingestion and [content enrichment](cognitive-search-concept-intro.md) during indexing. |
search | Tutorial Python Create Load Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-create-load-index.md | - Title: "Load an index (Python tutorial)"- -description: Create index and import CSV data into Search index with Python using the PYPI package SDK azure-search-documents. ----- Previously updated : 04/25/2024-- - devx-track-python - - devx-track-azurecli - - ignite-2023 ---# 2 - Create and load Search Index with Python --Continue to build your search-enabled website by following these steps: -* Create a search resource -* Create a new index -* Import data with Python using the [sample script](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/search-website-functions-v4/bulk-upload/bulk-upload.py) and Azure SDK [azure-search-documents](https://pypi.org/project/azure-search-documents/). --## Create an Azure AI Search resource ---## Prepare the bulk import script for Search --The script uses the Azure SDK for Azure AI Search: --* [PYPI package azure-search-documents](https://pypi.org/project/azure-search-documents/) -* [Reference Documentation](/python/api/azure-search-documents) --1. In Visual Studio Code, open the `bulk-upload.py` file in the subdirectory, `search-website-functions-v4/bulk-upload`, and replace the following variables with your own values to authenticate with the Azure Search SDK: -- * YOUR-SEARCH-SERVICE-NAME - * YOUR-SEARCH-SERVICE-ADMIN-API-KEY -- :::code language="python" source="~/azure-search-python-samples/search-website-functions-v4/bulk-upload/bulk-upload.py" ::: --1. Open an integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-upload`, and run the following command to install the dependencies. 
-- # [macOS/Linux](#tab/linux-install) - - ```bash - python3 -m pip install -r requirements.txt - ``` - - # [Windows](#tab/windows-install) -- ```bash - py -m pip install -r requirements.txt - ``` --## Run the bulk import script for Search --1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-upload`, to run the `bulk-upload.py` script: -- # [macOS/Linux](#tab/linux-run) - - ```bash - python3 bulk-upload.py - ``` - - # [Windows](#tab/windows-run) -- ```bash - py bulk-upload.py - ``` ---1. As the code runs, the console displays progress. -1. When the upload is complete, the last statement printed to the console is "Done! Upload complete". --## Review the new Search Index ---## Rollback bulk import file changes ---## Copy your Search resource name ---## Next steps --[Deploy your Static Web App](tutorial-python-deploy-static-web-app.md) |
search | Tutorial Python Deploy Static Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-deploy-static-web-app.md | - Title: "Deploy search app (Python tutorial)"- -description: Deploy search-enabled Python website to Azure Static web app. ----- Previously updated : 04/25/2024-- - devx-track-python - - ignite-2023 ---# 3 - Deploy the search-enabled Python website ---## Next steps --* [Understand Search integration for the search-enabled website](tutorial-python-search-query-integration.md) |
search | Tutorial Python Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-overview.md | - Title: "Add search to web sites (Python tutorial)"- -description: Technical overview and setup for adding search to a website with Python and deploying to Azure Static Web App. ----- Previously updated : 04/25/2024-- - devx-track-python - - ignite-2023 ---# 1 - Overview of adding search to a website with Python --This tutorial builds a website to search through a catalog of books then deploys the website to an Azure Static Web App. --## What does the sample do? ---## How is the sample organized? --The [sample code](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4) includes the following: --|App|Purpose|GitHub<br>Repository<br>Location| -|--|--|--| -|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website-functions-v4/client](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4/client)| -|Server|Azure Function app (business layer) - calls the Azure AI Search API using Python SDK |[/search-website-functions-v4/api](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4/api)| -|Bulk insert|Python file to create the index and add documents to it.|[/search-website-functions-v4/bulk-upload](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4/bulk-upload)| --## Set up your development environment --Install the following for your local development environment. 
--- [Python 3.9](https://www.python.org/downloads/)-- [Git](https://git-scm.com/downloads)-- [Visual Studio Code](https://code.visualstudio.com/) and the following extensions- - [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) - - Use the integrated terminal for command line operations. -- Optional:- - This tutorial doesn't run the Azure Function API locally but if you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash). --## Fork and clone the search sample with git --Forking the sample repository is critical to be able to deploy the static web app. The web apps determine the build actions and deployment content based on your own GitHub fork location. Code execution in the Static Web App is remote, with Azure static web apps reading from the code in your forked sample. --1. On GitHub, fork the [sample repository](https://github.com/Azure-Samples/azure-search-python-samples). -- Complete the fork process in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App. --1. At a bash terminal, download the sample application to your local computer. -- Replace `YOUR-GITHUB-ALIAS` with your GitHub alias. -- ```bash - git clone https://github.com/YOUR-GITHUB-ALIAS/azure-search-python-samples.git - ``` --1. In Visual Studio Code, open your local folder of the cloned repository. The remaining tasks are accomplished from Visual Studio Code, unless specified. --## Create a resource group for your Azure resources ---## Next steps --* [Create a Search Index and load with documents](tutorial-python-create-load-index.md) -* [Deploy your Static Web App](tutorial-python-deploy-static-web-app.md) |
search | Tutorial Python Search Query Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-search-query-integration.md | - Title: "Explore code (Python tutorial)"- -description: Understand the Python SDK Search integration queries used in the Search-enabled website with this cheat sheet. ----- Previously updated : 04/25/2024-- - devx-track-python - - ignite-2023 ---# 4 - Explore the Python search code --In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your Python app, this article explains what you need to know. --The application is available: -* [Sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4) -* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books) --## Azure SDK azure-search-documents --The Function app uses the Azure SDK for Azure AI Search: --* [PYPI package azure-search-documents](https://pypi.org/project/azure-search-documents/) -* [Reference Documentation](/python/api/azure-search-documents) --The Function app authenticates through the SDK to the cloud-based Azure AI Search API using your resource name, API key, and index name. The secrets are stored in the Static Web App settings and pulled in to the Function as environment variables. --## Configure secrets in a configuration file --The Azure Function app settings environment variables are pulled in from a file, `__init__.py`, shared between the three API functions. ---## Azure Function: Search the catalog --The Search [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/search-website-functions-v4/api/search.py) takes a search term and searches across the documents in the Search Index, returning a list of matches. --The Azure Function pulls in the search configuration information, and fulfills the query. 
---## Client: Search from the catalog --Call the Azure Function in the React client with the following code. ---## Azure Function: Suggestions from the catalog --The `Suggest` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/search-website-functions-v4/api/suggest.py) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches. --The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/search-website-functions-v4/bulk-upload/good-books-index.json) used during bulk upload. ---## Client: Suggestions from the catalog --The Suggest function API is called in the React app at `client\src\components\SearchBar\SearchBar.js` as part of component initialization: ---## Azure Function: Get specific document --The `Lookup` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/search-website-functions-v4/api/lookup.py) takes an ID and returns the document object from the Search Index. ---## Client: Get specific document --This function API is called in the React app at `client\src\pages\Details\Detail.js` as part of component initialization: ---## Next steps --* [Index Azure SQL data](search-indexer-tutorial.md) |
sentinel | Connect Mdti Data Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-mdti-data-connector.md | Title: Enable data connector for Microsoft's threat intelligence-+ +keywords: premium, TI, STIX objects, relationships, threat actor, watchlist, license description: Learn how to ingest Microsoft's threat intelligence into your Sentinel workspace to generate high fidelity alerts and incidents. Previously updated : 3/14/2024 Last updated : 8/16/2024 appliesto: - Microsoft Sentinel in the Azure portal-Bring high fidelity indicators of compromise (IOC) generated by Microsoft Defender Threat Intelligence (MDTI) into your Microsoft Sentinel workspace. The MDTI data connector ingests these IOCs with a simple one-click setup. Then monitor, alert and hunt based on the threat intelligence in the same way you utilize other feeds. +Bring public, open source and high fidelity indicators of compromise (IOC) generated by Microsoft Defender Threat Intelligence (MDTI) into your Microsoft Sentinel workspace with the MDTI data connectors. With a simple one-click setup, use the TI from the standard and premium MDTI data connectors to monitor, alert and hunt. > [!IMPORTANT]-> The Microsoft Defender Threat Intelligence data connector is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> The Microsoft Defender Threat Intelligence data connector and the Premium Microsoft Defender Threat Intelligence data connector are currently in PREVIEW. 
See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)] +For more information about the benefits of the standard and premium MDTI data connectors, see [Understand threat intelligence](understand-threat-intelligence.md#add-threat-indicators-to-microsoft-sentinel-with-the-microsoft-defender-threat-intelligence-data-connector). + ## Prerequisites - In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.-- To configure this data connector, you must have read and write permissions to the Microsoft Sentinel workspace.+- To configure these data connectors, you must have read and write permissions to the Microsoft Sentinel workspace. ## Install the Threat Intelligence solution in Microsoft Sentinel -To import threat indicators into Microsoft Sentinel from MDTI, follow these steps: +To import threat indicators into Microsoft Sentinel from standard and premium MDTI, follow these steps: 1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**. For more information about how to manage the solution components, see [Discover 1. Find and select the Microsoft Defender Threat Intelligence data connector > **Open connector page** button. 
- :::image type="content" source="media/mdti-data-connector-config.png"::: + :::image type="content" source="media/mdti-data-connector/premium-microsoft-defender-threat-intelligence-data-connector-config.png"::: 1. Enable the feed by selecting the **Connect** button. - :::image type="content" source="media/mdti-data-connector-connect.png"::: + :::image type="content" source="media/mdti-data-connector/microsoft-defender-threat-intelligence-data-connector-connect.png"::: 1. When MDTI indicators start populating the Microsoft Sentinel workspace, the connector status displays **Connected**. At this point, the ingested indicators are now available for use in the *TI map...* analytics rules. For more information, see [Use threat indicators in analytics rules](use-threat-indicators-in-analytics-rules.md). -You can find the new indicators in the **Threat intelligence** blade or directly in **Logs** by querying the **ThreatIntelligenceIndicator** table. For more information, see [Work with threat indicators](work-with-threat-indicators.md). +Find the new indicators in the **Threat intelligence** blade or directly in **Logs** by querying the **ThreatIntelligenceIndicator** table. For more information, see [Work with threat indicators](work-with-threat-indicators.md). ## Related content In this document, you learned how to connect Microsoft Sentinel to Microsoft's threat intelligence feed with the MDTI data connector. To learn more about Microsoft Defender Threat Intelligence, see the following articles. - Learn about [What is Microsoft Defender Threat Intelligence?](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti).-- Get started with the MDTI community portal [MDTI portal](https://ti.defender.microsoft.com).+- Get started with the [MDTI portal](/defender/threat-intelligence/learn-how-to-access-microsoft-defender-threat-intelligence-and-make-customizations-in-your-portal). 
- Use MDTI in analytics [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md). |
sentinel | Understand Threat Intelligence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/understand-threat-intelligence.md | Threat Intelligence also provides useful context within other Microsoft Sentinel Just like all the other event data in Microsoft Sentinel, threat indicators are imported using data connectors. Here are the data connectors in Microsoft Sentinel provided specifically for threat indicators. -- **Microsoft Defender Threat Intelligence data connector** to ingest Microsoft's threat indicators +- **Microsoft Defender Threat Intelligence data connector** to ingest Microsoft's threat indicators +- **Premium Microsoft Defender Threat Intelligence data connector** to ingest MDTI's premium intelligence feed - **Threat Intelligence - TAXII** for industry-standard STIX/TAXII feeds - **Threat Intelligence upload indicators API** for integrated and curated TI feeds using a REST API to connect - **Threat Intelligence Platform data connector** also connects TI feeds using a REST API, but is on the path for deprecation Also, see this catalog of [threat intelligence integrations](threat-intelligence ### Add threat indicators to Microsoft Sentinel with the Microsoft Defender Threat Intelligence data connector -Bring high fidelity indicators of compromise (IOC) generated by Microsoft Defender Threat Intelligence (MDTI) into your Microsoft Sentinel workspace. The MDTI data connector ingests these IOCs with a simple one-click setup. Then monitor, alert and hunt based on the threat intelligence in the same way you utilize other feeds. +Bring public, open source and high fidelity indicators of compromise (IOC) generated by Microsoft Defender Threat Intelligence (MDTI) into your Microsoft Sentinel workspace with the MDTI data connectors. With a simple one-click setup, use the TI from the standard and premium MDTI data connectors to monitor, alert and hunt. 
-For more information on MDTI data connector, see [Enable MDTI data connector](connect-mdti-data-connector.md). +The freely available MDTI threat analytics rule gives you a taste of what the premium MDTI data connector provides. However, with matching analytics, only indicators that match the rule are actually ingested into your environment. The premium MDTI data connector brings the premium TI and allows analytics for more data sources with greater flexibility and understanding of that threat intelligence. Here's a table showing what to expect when you license and enable the premium MDTI data connector. ++| Free | Premium | +|-|-| +| Public indicators of compromise (IOCs) | | +| Open-source intelligence (OSINT) | | +| | Microsoft IOCs | +| | Microsoft-enriched OSINT | ++For more information, see the following articles: +- To learn how to get a premium license and explore all the differences between the standard and premium versions, see the [Microsoft Defender Threat Intelligence product page](https://www.microsoft.com/security/business/siem-and-xdr/microsoft-defender-threat-intelligence). +- To learn more about the free MDTI experience, see [Introducing MDTI free experience for Microsoft Defender XDR](https://techcommunity.microsoft.com/t5/microsoft-defender-threat/introducing-mdti-free-experience-for-microsoft-defender-xdr/ba-p/3976635). +- To learn how to enable the MDTI and the PMDTI data connectors, see [Enable MDTI data connector](connect-mdti-data-connector.md). +- To learn about matching analytics, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md). ### Add threat indicators to Microsoft Sentinel with the Threat Intelligence Upload Indicators API data connector |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | The listed features were released in the last three months. For information abou ## August 2024 +- [Premium Microsoft Defender Threat Intelligence data connector (Preview)](#premium-microsoft-defender-threat-intelligence-data-connector-preview) - [Unified AMA-based connectors for syslog ingestion](#unified-ama-based-connectors-for-syslog-ingestion) - [Better visibility for Windows security events](#better-visibility-for-windows-security-events) - [New Auxiliary logs retention plan (Preview)](#new-auxiliary-logs-retention-plan-preview) - [Create summary rules for large sets of data (Preview)](#create-summary-rules-in-microsoft-sentinel-for-large-sets-of-data-preview) +### Premium Microsoft Defender Threat Intelligence data connector (Preview) ++Your premium license for Microsoft Defender Threat Intelligence (MDTI) now unlocks the ability to ingest all premium indicators directly into your workspace. The premium MDTI data connector adds more to your hunting and research capabilities within Microsoft Sentinel. ++For more information, see [Understand threat intelligence](understand-threat-intelligence.md#add-threat-indicators-to-microsoft-sentinel-with-the-microsoft-defender-threat-intelligence-data-connector). + ### Unified AMA-based connectors for syslog ingestion With the impending retirement of the Log Analytics Agent, Microsoft Sentinel has consolidated the collection and ingestion of syslog, CEF, and custom-format log messages into three multi-purpose data connectors based on the Azure Monitor Agent (AMA): |
service-fabric | Service Fabric Common Questions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-common-questions.md | There are many commonly asked questions about what Service Fabric can do and how ### How do I roll back my Service Fabric cluster certificate? -Rolling back any upgrade to your application requires health failure detection prior to your Service Fabric cluster quorum committing the change; committed changes can only be rolled forward. Escalation engineers, through Customer Support Services, may be required to recover your cluster, if an unmonitored breaking certificate change has been introduced. [Service Fabric's application upgrade](./service-fabric-application-upgrade.md) applies [Application upgrade parameters](./service-fabric-application-upgrade-parameters.md), and delivers a zero-downtime upgrade promise. Following our recommended application upgrade monitored mode, automatic progress through update domains is based upon health checks passing, rolling back automatically if updating a default service fails. +Rolling back any upgrade to your application requires health failure detection before your Service Fabric cluster quorum commits the change; committed changes can only be rolled forward. Escalation engineers, through Customer Support Services, may be required to recover your cluster, if something introduces an unmonitored breaking certificate change. [Service Fabric's application upgrade](./service-fabric-application-upgrade.md) applies [Application upgrade parameters](./service-fabric-application-upgrade-parameters.md), and delivers a zero-downtime upgrade promise. Following our recommended application upgrade monitored mode, automatic progress through update domains is based upon health checks passing, rolling back automatically if updating a default service fails. 
-If your cluster is still leveraging the classic Certificate Thumbprint property in your Resource Manager template, it's recommended you [Change cluster from certificate thumbprint to common name](./service-fabric-cluster-change-cert-thumbprint-to-cn.md), to leverage modern secrets management features. +If your cluster is still using the classic Certificate Thumbprint property in your Resource Manager template, we recommend you [Change cluster from certificate thumbprint to common name](./service-fabric-cluster-change-cert-thumbprint-to-cn.md), to apply modern secrets management features. ### Can I create a cluster that spans multiple Azure regions or my own datacenters? Yes. The core Service Fabric clustering technology can be used to combine machines running anywhere in the world, so long as they have network connectivity to each other. However, building and running such a cluster can be complicated. -If you are interested in this scenario, we encourage you to get in contact either through the [Service Fabric GitHub Issues List](https://github.com/azure/service-fabric-issues) or through your support representative in order to obtain additional guidance. The Service Fabric team is working to provide additional clarity, guidance, and recommendations for this scenario. +If you're interested in this scenario, we encourage you to get in contact either through the [Service Fabric GitHub Issues List](https://github.com/azure/service-fabric-issues) or through your support representative in order to obtain additional guidance. The Service Fabric team is working to provide additional clarity, guidance, and recommendations for this scenario. Some things to consider: -1. The Service Fabric cluster resource in Azure is regional today, as are the virtual machine scale sets that the cluster is built on. This means that in the event of a regional failure you may lose the ability to manage the cluster via the Azure Resource Manager or the Azure portal. 
This can happen even though the cluster remains running and you'd be able to interact with it directly. In addition, Azure today does not offer the ability to have a single virtual network that is usable across regions. This means that a multi-region cluster in Azure requires either [Public IP Addresses for each VM in the virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine) or [Azure VPN Gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md). These networking choices have different impacts on costs, performance, and to some degree application design, so careful analysis and planning is required before standing up such an environment. -2. The maintenance, management, and monitoring of these machines can become complicated, especially when spanned across _types_ of environments, such as between different cloud providers or between on-premises resources and Azure. Care must be taken to ensure that upgrades, monitoring, management, and diagnostics are understood for both the cluster and the applications before running production workloads in such an environment. If you already have experience solving these problems in Azure or within your own datacenters, then it is likely that those same solutions can be applied when building out or running your Service Fabric cluster. +1. The Service Fabric cluster resource in Azure is regional today, as are the virtual machine scale sets that the cluster is built on. This means that in the event of a regional failure you may lose the ability to manage the cluster via the Azure Resource Manager or the Azure portal. This can happen even though the cluster remains running and you'd be able to interact with it directly. In addition, Azure today doesn't offer the ability to have a single virtual network that is usable across regions. 
This means that a multi-region cluster in Azure requires either [Public IP Addresses for each VM in the virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine) or [Azure VPN Gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md). These networking choices have different impacts on costs, performance, and to some degree application design, so careful analysis and planning is required before standing up such an environment. +2. The maintenance, management, and monitoring of these machines can become complicated, especially when spanned across _types_ of environments, such as between different cloud providers or between on-premises resources and Azure. Care must be taken to ensure that upgrades, monitoring, management, and diagnostics are understood for both the cluster and the applications before running production workloads in such an environment. If you already have experience solving these problems in Azure or within your own datacenters, then it's likely that those same solutions can be applied when building out or running your Service Fabric cluster. ### Do Service Fabric nodes automatically receive OS updates? For clusters that are NOT run in Azure, we have [provided an application](servic **Short answer** - No. -**Long Answer** - Although the large virtual machine scale sets allow you to scale a virtual machine scale set up to 1000 VM instances, it does so by the use of Placement Groups (PGs). Fault domains (FDs) and upgrade domains (UDs) are only consistent within a placement group Service fabric uses FDs and UDs to make placement decisions of your service replicas/Service instances. Since the FDs and UDs are comparable only within a placement group, SF cannot use it. 
For example, if VM1 in PG1 has a topology of FD=0 and VM9 in PG2 has a topology of FD=4, it does not mean that VM1 and VM9 are on two different Hardware Racks, hence SF cannot use the FD values in this case to make placement decisions. +**Long Answer** - Although the large virtual machine scale sets allow you to scale a virtual machine scale set up to 1000 VM instances, it does so by the use of Placement Groups (PGs). Fault domains (FDs) and upgrade domains (UDs) are only consistent within a placement group. Service Fabric uses FDs and UDs to make placement decisions of your service replicas/Service instances. Since the FDs and UDs are comparable only within a placement group, SF can't use it. For example, if VM1 in PG1 has a topology of FD=0 and VM9 in PG2 has a topology of FD=4, it doesn't mean that VM1 and VM9 are on two different Hardware Racks, hence SF can't use the FD values in this case to make placement decisions. There are other issues with large virtual machine scale sets currently, like the lack of level-4 Load balancing support. Refer to [details on Large scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md) We require a production cluster to have at least five nodes because of the follo 2. We always place one replica of a service per node, so cluster size is the upper limit for the number of replicas a service (actually a partition) can have. 3. Since a cluster upgrade will bring down at least one node, we want to have a buffer of at least one node, therefore, we want a production cluster to have at least two nodes *in addition* to the bare minimum. The bare minimum is the quorum size of a system service as explained below. -We want the cluster to be available in the face of simultaneous failure of two nodes. For a Service Fabric cluster to be available, the system services must be available. 
Stateful system services like naming service and failover manager service, that track what services have been deployed to the cluster and where they're currently hosted, depend on strong consistency. That strong consistency, in turn, depends on the ability to acquire a *quorum* for any given update to the state of those services, where a quorum represents a strict majority of the replicas (N/2 +1) for a given service. Thus if we want to be resilient against simultaneous loss of two nodes (thus simultaneous loss of two replicas of a system service), we must have ClusterSize - QuorumSize >= 2, which forces the minimum size to be five. To see that, consider the cluster has N nodes and there are N replicas of a system service -- one on each node. The quorum size for a system service is (N/2 + 1). The above inequality looks like N - (N/2 + 1) >= 2. There are two cases to consider: when N is even and when N is odd. If N is even, say N = 2\*m where m >= 1, the inequality looks like 2\*m - (2\*m/2 + 1) >= 2 or m >= 3. The minimum for N is 6 and that is achieved when m = 3. On the other hand, if N is odd, say N = 2\*m+1 where m >= 1, the inequality looks like 2\*m+1 - ( (2\*m+1)/2 + 1 ) >= 2 or 2\*m+1 - (m+1) >= 2 or m >= 2. The minimum for N is 5 and that is achieved when m = 2. Therefore, among all values of N that satisfy the inequality ClusterSize - QuorumSize >= 2, the minimum is 5. +We want the cluster to be available in the face of simultaneous failure of two nodes. For a Service Fabric cluster to be available, the system services must be available. Stateful system services like naming service and failover manager service, that track what services have been deployed to the cluster and where they're currently hosted, depend on strong consistency. That strong consistency, in turn, depends on the ability to acquire a *quorum* for any given update to the state of those services, where a quorum represents a strict majority of the replicas (N/2 +1) for a given service. 
Thus, if we want to be resilient against simultaneous loss of two nodes (simultaneous loss of two replicas of a system service), we must have ClusterSize - QuorumSize >= 2, which forces the minimum size to be five. -Note, in the above argument we have assumed that every node has a replica of a system service, thus the quorum size is computed based on the number of nodes in the cluster. However, by changing *TargetReplicaSetSize* we could make the quorum size less than (N/2+1) which might give the impression that we could have a cluster smaller than 5 nodes and still have 2 extra nodes above the quorum size. For example, in a 4 node cluster, if we set the TargetReplicaSetSize to 3, the quorum size based on TargetReplicaSetSize is (3/2 + 1) or 2, thus we have ClusterSize - QuorumSize = 4-2 >= 2. However, we cannot guarantee that the system service will be at or above quorum if we lose any pair of nodes simultaneously, it could be that the two nodes we lost were hosting two replicas, so the system service will go into quorum loss (having only a single replica left) and will become unavailable. +Note, in the above argument we have assumed that every node has a replica of a system service; thus, the quorum size is computed based on the number of nodes in the cluster. However, by changing *TargetReplicaSetSize* we could make the quorum size less than (N/2+1) which might give the impression that we could have a cluster smaller than 5 nodes and still have 2 extra nodes above the quorum size. For example, in a 4 node cluster, if we set the TargetReplicaSetSize to 3, the quorum size based on TargetReplicaSetSize is (3/2 + 1) or 2, thus we have ClusterSize - QuorumSize = 4-2 >= 2. However, we can't guarantee that the system service will be at or above quorum if we lose any pair of nodes simultaneously, it could be that the two nodes we lost were hosting two replicas, so the system service goes into quorum loss (having only a single replica left) and will become unavailable. 
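The minimum-size arithmetic above can be sanity-checked with a short script. This is an illustrative sketch (not from the article), using integer division for the strict-majority quorum:

```python
# Strict majority of N replicas: floor(N/2) + 1, per the quorum
# definition in the article.
def quorum(n: int) -> int:
    return n // 2 + 1

# Smallest cluster size that leaves a two-node buffer above quorum:
# ClusterSize - QuorumSize >= 2.
min_n = next(n for n in range(1, 100) if n - quorum(n) >= 2)
print(min_n)  # 5

# The even/odd cases worked out in the text: the smallest even and
# odd sizes satisfying the inequality are 6 and 5 respectively.
evens = [n for n in range(2, 101, 2) if n - quorum(n) >= 2]
odds = [n for n in range(1, 101, 2) if n - quorum(n) >= 2]
print(evens[0], odds[0])  # 6 5
```

This reproduces the article's conclusion that five nodes is the minimum for resiliency against simultaneous loss of two nodes.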
With that background, let's examine some possible cluster configurations: -**One node**: this option does not provide high availability since the loss of the single node for any reason means the loss of the entire cluster. +**One node**: this option doesn't provide high availability since the loss of the single node for any reason means the loss of the entire cluster. -**Two nodes**: a quorum for a service deployed across two nodes (N = 2) is 2 (2/2 + 1 = 2). When a single replica is lost, it is impossible to create a quorum. Since performing a service upgrade requires temporarily taking down a replica, this is not a useful configuration. +**Two nodes**: a quorum for a service deployed across two nodes (N = 2) is 2 (2/2 + 1 = 2). When a single replica is lost, it's impossible to create a quorum. Since performing a service upgrade requires temporarily taking down a replica, this isn't a useful configuration. **Three nodes**: with three nodes (N=3), the requirement to create a quorum is still two nodes (3/2 + 1 = 2). This means that you can lose an individual node and still maintain quorum, but simultaneous failure of two nodes will drive the system services into quorum loss and will cause the cluster to become unavailable. For production workloads, you must be resilient to simultaneous failure of at le ### Can I turn off my cluster at night/weekends to save costs? -In general, no. Service Fabric stores state on local, ephemeral disks, meaning that if the virtual machine is moved to a different host, the data does not move with it. In normal operation, that is not a problem as the new node is brought up to date by other nodes. However, if you stop all nodes and restart them later, there is a significant possibility that most of the nodes start on new hosts and make the system unable to recover. +In general, no. Service Fabric stores state on local, ephemeral disks, meaning that if the virtual machine is moved to a different host, the data doesn't move with it. 
In normal operation, that isn't a problem as the new node is brought up to date by other nodes. However, if you stop all nodes and restart them later, there is a significant possibility that most of the nodes start on new hosts and make the system unable to recover. -If you would like to create clusters for testing your application before it is deployed, we recommend that you dynamically create those clusters as part of your [continuous integration/continuous deployment pipeline](service-fabric-tutorial-deploy-app-with-cicd-vsts.md). +If you would like to create clusters for testing your application before it's deployed, we recommend that you dynamically create those clusters as part of your [continuous integration/continuous deployment pipeline](service-fabric-tutorial-deploy-app-with-cicd-vsts.md). ### How do I upgrade my Operating System (for example from Windows Server 2012 to Windows Server 2016)? -While we're working on an improved experience, today, you are responsible for the upgrade. You must upgrade the OS image on the virtual machines of the cluster one VM at a time. +While we're working on an improved experience, today, you're responsible for the upgrade. You must upgrade the OS image on the virtual machines of the cluster one VM at a time. ### Can I encrypt attached data disks in a cluster node type (virtual machine scale set)? Yes. For more information, see [Create a cluster with attached data disks](../virtual-machine-scale-sets/virtual-machine-scale-sets-attached-disks.md#create-a-service-fabric-cluster-with-attached-data-disks) and [Azure Disk Encryption for Virtual Machine Scale Sets](../virtual-machine-scale-sets/disk-encryption-overview.md). Reliable collections are typically [partitioned](service-fabric-concepts-partiti ### What's the best way to query data across my actors? -Actors are designed to be independent units of state and compute, so it is not recommended to perform broad queries of actor state at runtime. 
If you have a need to query across the full set of actor state, you should consider either: +Actors are designed to be independent units of state and compute, so it isn't recommended to perform broad queries of actor state at runtime. If you have a need to query across the full set of actor state, you should consider either: - Replacing your actor services with stateful reliable services, so that the number of network requests to gather all data goes from the number of actors to the number of partitions in your service. - Designing your actors to periodically push their state to an external store for easier querying. As above, this approach is only viable if the queries you're performing are not required for your runtime behavior. Keeping in mind that each object must be stored three times (one primary and two
If load isn't even, you must report load so that the Resource Manager can pack smaller replicas together and allow larger replicas to consume more memory on an individual node. - That the reliable service in question is the only one storing state in the cluster. Since you can deploy multiple services to a cluster, you need to be mindful of the resources that each needs to run and manage its state. -- That the cluster itself is not growing or shrinking. If you add more machines, Service Fabric will rebalance your replicas to leverage the additional capacity until the number of machines surpasses the number of partitions in your service, since an individual replica cannot span machines. By contrast, if you reduce the size of the cluster by removing machines, your replicas are packed more tightly and have less overall capacity.+- That the cluster itself isn't growing or shrinking. If you add more machines, Service Fabric will rebalance your replicas to leverage the additional capacity until the number of machines surpasses the number of partitions in your service, since an individual replica can't span machines. By contrast, if you reduce the size of the cluster by removing machines, your replicas are packed more tightly and have less overall capacity. ### How much data can I store in an actor? As with reliable services, the amount of data that you can store in an actor ser ### Where does Azure Service Fabric Resource Provider store customer data? -Azure Service Fabric Resource Provider doesn't move or store customer data out of the region it is deployed in. +Azure Service Fabric Resource Provider doesn't move or store customer data out of the region it's deployed in. ## Other questions |
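The even-distribution assumption in the Service Fabric capacity discussion above can be made concrete with a small sketch. The partition and node counts below are assumptions chosen to reproduce the 10-primary/20-secondary example, not values from the article:

```python
# Hypothetical workload: P partitions with a target replica set size
# of 3 (one primary plus two secondaries), spread evenly over N nodes.
partitions = 100            # assumed value for illustration
nodes = 10                  # assumed value for illustration
target_replica_set_size = 3

# Uniform balancing places partitions/nodes primaries on each node,
# and the remaining replicas as secondaries.
primaries_per_node = partitions // nodes
secondaries_per_node = (target_replica_set_size - 1) * partitions // nodes
print(primaries_per_node, secondaries_per_node)  # 10 20
```

If per-partition load is uneven, these counts alone are misleading, which is why the article recommends reporting load metrics to the Cluster Resource Manager.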
storage | Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md | The following table compares Azure Storage services and shows example scenarios | **Azure Blobs** | Allows unstructured data to be stored and accessed at a massive scale in block blobs.<br/><br/>Also supports [Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) for enterprise big data analytics solutions. | You want your application to support streaming and random access scenarios.<br/><br/>You want to be able to access application data from anywhere.<br/><br/>You want to build an enterprise data lake on Azure and perform big data analytics. | | **Azure Elastic SAN** | Azure Elastic SAN is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability. | You want large scale storage that is interoperable with multiple types of compute resources (such as SQL, MariaDB, Azure virtual machines, and Azure Kubernetes Services) accessed via the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.| | **Azure Disks** | Allows data to be persistently stored and accessed from an attached virtual hard disk. | You want to "lift and shift" applications that use native file system APIs to read and write data to persistent disks.<br/><br/>You want to store data that isn't required to be accessed from outside the virtual machine to which the disk is attached. |-| **Azure Container Storage** (preview) | Azure Container Storage (preview) is a volume management, deployment, and orchestration service that integrates with Kubernetes and is built natively for containers. | You want to dynamically and automatically provision persistent volumes to store data for stateful applications running on Kubernetes clusters. 
| +| **Azure Container Storage**| Azure Container Storage is a volume management, deployment, and orchestration service that integrates with Kubernetes and is built natively for containers. | You want to dynamically and automatically provision persistent volumes to store data for stateful applications running on Kubernetes clusters. | | **Azure Queues** | Allows for asynchronous message queueing between application components. | You want to decouple application components and use asynchronous messaging to communicate between them.<br><br>For guidance around when to use Queue Storage versus Service Bus queues, see [Storage queues and Service Bus queues - compared and contrasted](../../service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md). | | **Azure Tables** | Allows you to store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. | You want to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. <br/><br/>For guidance around when to use Table Storage versus Azure Cosmos DB for Table, see [Developing with Azure Cosmos DB for Table and Azure Table Storage](/azure/cosmos-db/table-support). | |
virtual-machines | Auto Shutdown Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/auto-shutdown-vm.md | Sign in to the [Azure portal](https://portal.azure.com/). 2. In the virtual machine's detail page, select "Auto-shutdown" under the **Operations** section. 3. In the "Auto-shutdown" configuration screen, toggle the switch to "On." 4. Set the time you want the virtual machine to shut down.-5. Select "Save" to save the auto-shutdown configuration. +5. If you want to receive notification before shutdown, select "Yes" in the "Send notification before shutdown" option and provide details in "Email Address" or "Webhook URL" as per your choice. +6. Select "Save" to save the auto-shutdown configuration. ### [Azure CLI](#tab/azure-cli) |
virtual-machines | Azure Compute Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md | There are limits, per subscription, for deploying resources using Azure Compute - 1,000 image definitions, per subscription, per region - 10,000 image versions, per subscription, per region - 100 replicas per image version; however, 50 replicas should be sufficient for most use cases-- Any disk attached to the image must be less than or equal to 1 TB in size+- Any disk attached to the image must be less than or equal to 2 TB in size - Resource move isn't supported for Azure compute gallery resources For more information, see [Check resource usage against limits](../networking/check-usage-against-limits.md) for examples on how to check your current usage. |
virtual-machines | Disks Copy Incremental Snapshot Across Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-copy-incremental-snapshot-across-regions.md | This article covers copying an incremental snapshot from one region to another. - If you use the REST API, you must use version 2020-12-01 or newer of the Azure Compute REST API. - You can only copy one incremental snapshot of a particular disk at a time. - Snapshots must be copied in the order they were created.+- Only incremental snapshots can be copied across regions. Full snapshots can't be copied across regions. + ## Managed copy |
virtual-machines | Enable Nvme Interface | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md | For more information about enabling the NVMe interface on virtual machines creat | Almalinux 8.x (currently 8.7) | almalinux: almalinux:8-gen2: latest | | Almalinux 9.x (currently 9.1) | almalinux: almalinux:9-gen2: latest | | Debian 11 | Debian: debian-11:11-gen2: latest |+| Debian 12 | Debian: debian-12:12-gen2: latest | RHEL 7.9 | RedHat: RHEL:79-gen2: latest | | RHEL 8.6 | RedHat: RHEL:86-gen2: latest | | RHEL 8.7 | RedHat: RHEL:87-gen2: latest | |
virtual-machines | Disks Upload Vhd To Managed Disk Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md | description: Learn how to upload a VHD to an Azure managed disk and copy a manag Previously updated : 10/17/2023 Last updated : 08/15/2024 If you'd prefer to upload disks through a GUI, you can do so using Azure Storage - If you intend to upload a VHD from on-premises: A fixed size VHD that [has been prepared for Azure](../windows/prepare-for-upload-vhd-image.md), stored locally. - Or, a managed disk in Azure, if you intend to perform a copy action. -To upload your VHD to Azure, you'll need to create an empty managed disk that is configured for this upload process. Before you create one, there's some additional information you should know about these disks. +To upload your VHD to Azure, you need to create an empty managed disk that is configured for this upload process. Before you create one, there's some additional information you should know about these disks. This kind of managed disk has two unique states: ## Create an empty managed disk -Before you can create an empty standard HDD for uploading, you'll need the file size of the VHD you want to upload, in bytes. To get that, you can use either `wc -c <yourFileName>.vhd` or `ls -al <yourFileName>.vhd`. This value is used when specifying the **--upload-size-bytes** parameter. +Before you can create an empty standard HDD for uploading, you need the file size of the VHD you want to upload, in bytes. To get that, you can use either `wc -c <yourFileName>.vhd` or `ls -al <yourFileName>.vhd`. This value is used when specifying the **--upload-size-bytes** parameter. 
Create an empty standard HDD for uploading by specifying both the **--for-upload** parameter and the **--upload-size-bytes** parameter in a [disk create](/cli/azure/disk#az-disk-create) cmdlet: If you would like to upload a different disk type, replace **standard_lrs** with ### (Optional) Grant access to the disk -If you're using Microsoft Entra ID to secure uploads, you'll need to [assign RBAC permissions](../../role-based-access-control/role-assignments-cli.md) to grant access to the disk and generate a writeable SAS. +If you're using Microsoft Entra ID to secure uploads, you need to [assign RBAC permissions](../../role-based-access-control/role-assignments-cli.md) to grant access to the disk and generate a writeable SAS. ```azurecli az role assignment create --assignee "{assignee}" \ ### Generate writeable SAS -Now that you've created an empty managed disk that is configured for the upload process, you can upload a VHD to it. To upload a VHD to the disk, you'll need a writeable SAS, so that you can reference it as the destination for your upload. +Now that you've created an empty managed disk that is configured for the upload process, you can upload a VHD to it. To upload a VHD to the disk, you need a writeable SAS, so that you can reference it as the destination for your upload. -To generate a writable SAS of your empty managed disk, replace `<yourdiskname>`and `<yourresourcegroupname>`, then use the following command: ++To generate a writable SAS of your empty managed disk, replace `<yourdiskname>` and `<yourresourcegroupname>`, then use the following command: ```azurecli az disk grant-access -n <yourdiskname> -g <yourresourcegroupname> --access-level Write --duration-in-seconds 86400 Now that you have a SAS for your empty managed disk, you can use it to set your Use AzCopy v10 to upload your local VHD or VHDX file to a managed disk by specifying the SAS URI you generated. 
-This upload has the same throughput as the equivalent [standard HDD](../disks-types.md#standard-hdds). For example, if you have a size that equates to S4, you will have a throughput of up to 60 MiB/s. But, if you have a size that equates to S70, you will have a throughput of up to 500 MiB/s. +This upload has the same throughput as the equivalent [standard HDD](../disks-types.md#standard-hdds). For example, if you have a size that equates to S4, you'll have a throughput of up to 60 MiB/s. But, if you have a size that equates to S70, you'll have a throughput of up to 500 MiB/s. ```bash AzCopy.exe copy "c:\somewhere\mydisk.vhd" "sas-URI" --blob-type PageBlob ``` -After the upload is complete, and you no longer need to write any more data to the disk, revoke the SAS. Revoking the SAS will change the state of the managed disk and allow you to attach the disk to a VM. +After the upload is complete, and you no longer need to write any more data to the disk, revoke the SAS. Revoking the SAS changes the state of the managed disk and allows you to attach the disk to a VM. Replace `<yourdiskname>` and `<yourresourcegroupname>`, then use the following command to make the disk usable: az disk revoke-access -n <yourdiskname> -g <yourresourcegroupname> Direct upload also simplifies the process of copying a managed disk. You can either copy within the same region or cross-region (to another region). -The following script will do this for you, the process is similar to the steps described earlier, with some differences since you're working with an existing disk. +The following script does this for you. The process is similar to the steps described earlier, with some differences since you're working with an existing disk. > [!IMPORTANT] > You need to add an offset of 512 when you're providing the disk size in bytes of a managed disk from Azure. This is because Azure omits the footer when returning the disk size. The copy will fail if you don't do this. 
The following script already does this for you. az disk revoke-access -n $targetDiskName -g $targetRG Now that you've successfully uploaded a VHD to a managed disk, you can attach the disk as a [data disk to an existing VM](add-disk.md) or [attach the disk to a VM as an OS disk](upload-vhd.md#create-the-vm), to create a new VM. -If you've additional questions, see the [uploading a managed disk](../faq-for-disks.yml#uploading-to-a-managed-disk) section in the FAQ. +If you have more questions, see the [uploading a managed disk](../faq-for-disks.yml#uploading-to-a-managed-disk) section in the FAQ. |
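The two size rules in this row, reading the local VHD size with `wc -c` for **--upload-size-bytes** and adding the 512-byte footer offset when copying an existing Azure disk, can be sketched as plain shell. The file name and sizes below are illustrative, a real VHD would be far larger:

```shell
#!/usr/bin/env bash
# Sketch of the two size calculations described above (file name illustrative).
set -eu

# Simulate a small fixed-size VHD locally.
vhd=./mydisk.vhd
head -c 1048576 /dev/zero > "$vhd"

# 1) Local upload: --upload-size-bytes is simply the file size in bytes.
upload_size_bytes=$(wc -c < "$vhd")
echo "use --upload-size-bytes $upload_size_bytes"

# 2) Disk-to-disk copy: Azure omits the 512-byte footer when reporting the
#    size of an existing managed disk, so add 512 to the reported value.
reported_size_bytes=$upload_size_bytes   # stand-in for the value Azure returns
copy_size_bytes=$((reported_size_bytes + 512))
echo "use --upload-size-bytes $copy_size_bytes for the copy"

rm -f "$vhd"
```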
virtual-machines | Download Vhd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/download-vhd.md | Your snapshot will be created shortly, and can then be used to download or creat To download the VHD file, you need to generate a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md?toc=/azure/virtual-machines/windows/toc.json) URL. When the URL is generated, an expiration time is assigned to the URL. + # [Portal](#tab/azure-portal) 1. On the menu of the page for the VM, select **Disks**. |
virtual-machines | Tutorial Lemp Stack | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-lemp-stack.md | Results: The following example creates a VM named `$MY_VM_NAME` and creates SSH keys if they don't already exist in a default key location. The command also sets `$MY_VM_USERNAME` as an administrator user name. -To improve the security of Linux virtual machines in Azure, you can integrate with Azure Active Directory authentication. Now you can use Azure AD as a core authentication platform. You can also SSH into the Linux VM by using Azure AD and OpenSSH certificate-based authentication. This functionality allows organizations to manage access to VMs with Azure role-based access control and Conditional Access policies. +To improve the security of Linux virtual machines in Azure, you can integrate with Microsoft Entra ID authentication. Now you can use Microsoft Entra ID as a core authentication platform. You can also SSH into the Linux VM by using Microsoft Entra ID and OpenSSH certificate-based authentication. This functionality allows organizations to manage access to VMs with Azure role-based access control and Conditional Access policies. Create a VM with the [az vm create](/cli/azure/vm#az-vm-create) command. done ``` <!---## Assign Azure AD RBAC for Azure AD login for Linux Virtual Machine +## Assign Microsoft Entra ID RBAC for Microsoft Entra ID login for Linux Virtual Machine The below command uses [az role assignment create](https://learn.microsoft.com/cli/azure/role/assignment#az-role-assignment-create) to assign the `Virtual Machine Administrator Login` role to the VM for your current Azure user. ```bash export MY_RESOURCE_GROUP_ID=$(az group show --resource-group $MY_RESOURCE_GROUP_NAME --query id -o tsv) Results: <!-- ## Export the SSH configuration for use with SSH clients that support OpenSSH-Login to Azure Linux VMs with Azure AD supports exporting the OpenSSH certificate and configuration. 
That means you can use any SSH clients that support OpenSSH-based certificates to sign in through Azure AD. The following example exports the configuration for all IP addresses assigned to the VM: +Login to Azure Linux VMs with Microsoft Entra ID supports exporting the OpenSSH certificate and configuration. That means you can use any SSH client that supports OpenSSH-based certificates to sign in through Microsoft Entra ID. The following example exports the configuration for all IP addresses assigned to the VM: ```bash az ssh config --file ~/.ssh/azure-config --name $MY_VM_NAME --resource-group $MY_RESOURCE_GROUP_NAME ``` --> -## Enable Azure AD login for a Linux Virtual Machine in Azure +## Enable Microsoft Entra ID login for a Linux Virtual Machine in Azure -The following installs the extension to enable Azure AD login for a Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines. +The following command installs the extension to enable Microsoft Entra ID login for a Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines. ```bash az vm extension set \ |
virtual-machines | Setup Infiniband | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/setup-infiniband.md | + + Title: Set up InfiniBand on HPC VMs - Azure Virtual Machines | Microsoft Docs +description: Learn how to set up InfiniBand on Azure HPC VMs. +++ Last updated : 08/05/2024++++++# Set up InfiniBand ++> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md). ++**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets ++> [!TIP] +> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload. ++This article shares some information on RDMA-capable instances to be used over an InfiniBand (IB) network. ++## RDMA-capable instances ++Most of the HPC VM sizes feature a network interface for remote direct memory access (RDMA) connectivity. Selected [N-series](./nc-series.md) sizes designated with 'r' are also RDMA-capable. This interface is in addition to the standard Azure Ethernet network interface available in the other VM sizes. ++This secondary interface allows the RDMA-capable instances to communicate over an InfiniBand (IB) network, operating at HDR rates for HBv4, HBv3, HBv2, EDR rates for HB, HC, HX, NDv2, and FDR rates for H16r, H16mr, and other RDMA-capable N-series virtual machines. These RDMA capabilities can boost the scalability and performance of Message Passing Interface (MPI) based applications. ++> [!NOTE] +> **SR-IOV support**: In Azure HPC, currently there are two classes of VMs depending on whether they are SR-IOV enabled for InfiniBand. 
Currently, almost all the newer generation, RDMA-capable or InfiniBand enabled VMs on Azure are SR-IOV enabled except for H16r, H16mr, and NC24r. +> RDMA is only enabled over the InfiniBand (IB) network and is supported for all RDMA-capable VMs. +> IP over IB is only supported on the SR-IOV enabled VMs. +> RDMA is not enabled over the Ethernet network. ++- **Operating System** - Linux distributions such as CentOS, RHEL, AlmaLinux, Ubuntu, SUSE are commonly used. Windows Server 2016 and newer versions are supported on all the HPC series VMs. Note that [Windows Server 2012 R2 isn't supported on HBv2 and later, because those VM sizes have more than 64 (virtual or physical) cores](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows). See [VM Images](./workloads/hpc/configure.md) for a list of supported Linux VM images on the Azure Marketplace and how they can be configured appropriately. The respective VM size pages also list the supported software stack. ++- **InfiniBand and Drivers** - On InfiniBand enabled VMs, the appropriate drivers are required to enable RDMA. See [enabling InfiniBand](./workloads/hpc/enable-infiniband.md) to learn about VM extensions or manual installation of InfiniBand drivers. ++- **MPI** - The SR-IOV enabled VM sizes on Azure allow almost any flavor of MPI to be used with Mellanox OFED. See [Setup MPI for HPC](./workloads/hpc/setup-mpi.md) for more details on setting up MPI on HPC VMs on Azure. ++ > [!NOTE] + > **RDMA network address space**: The RDMA network in Azure reserves the address space 172.16.0.0/16. To run MPI applications on instances deployed in an Azure virtual network, make sure that the virtual network address space does not overlap the RDMA network. 
++## Cluster configuration options ++Azure provides several options to create clusters of HPC VMs that can communicate using the RDMA network, including: ++- **Virtual machines** - Deploy the RDMA-capable HPC VMs in the same scale set or availability set (when you use the Azure Resource Manager deployment model). If you use the classic deployment model, deploy the VMs in the same cloud service. ++- **Virtual machine scale sets** - In a virtual machine scale set, ensure that you limit the deployment to a single placement group for InfiniBand communication within the scale set. For example, in a Resource Manager template, set the `singlePlacementGroup` property to `true`. Note that the maximum scale set size that can be spun up with `singlePlacementGroup=true` is capped at 100 VMs by default. If your HPC job scale needs are higher than 100 VMs in a single tenant, you may request an increase by [opening an online customer support request](../azure-portal/supportability/how-to-create-azure-support-request.md) at no charge. The limit on the number of VMs in a single scale set can be increased to 300. Note that when you deploy VMs using availability sets, the maximum is 200 VMs per availability set. ++ > [!NOTE] + > **MPI among virtual machines**: If RDMA (e.g. using MPI communication) is required between virtual machines (VMs), ensure that the VMs are in the same virtual machine scale set or availability set. ++- **Azure CycleCloud** - Create an HPC cluster using [Azure CycleCloud](/azure/cyclecloud/) to run MPI jobs. ++- **Azure Batch** - Create an [Azure Batch](../batch/index.yml) pool to run MPI workloads. To use compute-intensive instances when running MPI applications with Azure Batch, see [Use multi-instance tasks to run Message Passing Interface (MPI) applications in Azure Batch](../batch/batch-mpi.md). 
++- **Microsoft HPC Pack** - [HPC Pack](/powershell/high-performance-computing/overview) includes a runtime environment for MS-MPI that uses the Azure RDMA network when deployed on RDMA-capable Linux VMs. For example deployments, see [Set up a Linux RDMA cluster with HPC Pack to run MPI applications](/powershell/high-performance-computing/hpcpack-linux-openfoam). ++## Deployment considerations ++- **Azure subscription** - To deploy more than a few compute-intensive instances, consider a pay-as-you-go subscription or other purchase options. If you're using an [Azure free account](https://azure.microsoft.com/free/), you can use only a limited number of Azure compute cores. ++- **Pricing and availability** - Check [VM pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) and [availability](https://azure.microsoft.com/global-infrastructure/services/) by Azure regions. ++- **Cores quota** - You might need to increase the cores quota in your Azure subscription from the default value. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the H-series. To request a quota increase, [open an online customer support request](../azure-portal/supportability/how-to-create-azure-support-request.md) at no charge. (Default limits may vary depending on your subscription category.) ++ > [!NOTE] + > Contact Azure Support if you have large-scale capacity needs. Azure quotas are credit limits, not capacity guarantees. Regardless of your quota, you are only charged for cores that you use. + +- **Virtual network** - An Azure [virtual network](../virtual-network/index.yml) is not required to use the compute-intensive instances. However, for many deployments you need at least a cloud-based Azure virtual network, or a site-to-site connection if you need to access on-premises resources. When needed, create a new virtual network to deploy the instances. 
Adding compute-intensive VMs to a virtual network in an affinity group is not supported. ++- **Resizing** - Because of their specialized hardware, you can only resize compute-intensive instances within the same size family (H-series or N-series). For example, you can only resize an H-series VM from one H-series size to another. For certain VMs, there are additional considerations around InfiniBand driver support and NVMe disks. +++## Next steps ++- Learn more about [configuring your VMs](./workloads/hpc/configure.md), [enabling InfiniBand](./workloads/hpc/enable-infiniband.md), [setting up MPI](./workloads/hpc/setup-mpi.md) and optimizing HPC applications for Azure at [HPC Workloads](./workloads/hpc/overview.md). +- Review the [HBv3-series overview](hbv3-series-overview.md) and [HC-series overview](hc-series-overview.md). +- Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute). +- For a higher-level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/). |
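The RDMA note in the row above says Azure reserves 172.16.0.0/16 for the RDMA network and that your virtual network address space must not overlap it. That check can be done mechanically; the helper functions below are illustrative, not part of the article or of any Azure tooling:

```shell
#!/usr/bin/env bash
# Illustrative overlap check between a planned virtual network prefix and the
# 172.16.0.0/16 range that Azure reserves for the RDMA network.
set -eu

ip_to_int() {
  local a b c d
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Two CIDR blocks overlap when their network addresses agree under the
# shorter of the two prefix masks.
cidrs_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local min=$(( len1 < len2 ? len1 : len2 ))
  local mask=$(( (0xFFFFFFFF << (32 - min)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$net1") & mask )) -eq $(( $(ip_to_int "$net2") & mask )) ]
}

rdma=172.16.0.0/16
for vnet in 10.0.0.0/16 172.16.4.0/24; do
  if cidrs_overlap "$vnet" "$rdma"; then
    echo "$vnet overlaps the RDMA network; pick another range"
  else
    echo "$vnet doesn't overlap the RDMA network"
  fi
done
```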
virtual-machines | Dc Family | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dc-family.md | -> 'DC' family VMs are specialized for confidential computing scenarios. If your workload doesn't require confidential compute and you're looking for general purpose VMs with similar specs, consider the [the standard D-family size series](./d-family.md). +> 'DC' family VMs are for confidential computing scenarios. If your workload doesn't require confidential compute and you're looking for general purpose VMs with similar specs, consider the [standard D-family size series](./d-family.md). [!INCLUDE [dc-family-summary](./includes/dc-family-summary.md)] -### DCsv2-series --[View the full DCsv2-series page](./dcsv2-series.md). ----### DCsv3 and DCdsv3-series -#### [DCsv3-series](#tab/dcsv3) --[View the full DCsv3-series page](./dcsv3-series.md). ---#### [DCdsv3-series](#tab/dcdsv3) --[View the full DCdsv3-series page](./dcdsv3-series.md). --- ### DCasv5 and DCadsv5-series #### [DCasv5-series](#tab/dcasv5) [!INCLUDE [dcasv5-series-summary](./includes/dcasv5-series-summary.md)]+### DCsv3 and DCdsv3-series +#### [DCsv3-series](#tab/dcsv3) ++[View the full DCsv3-series page](./dcsv3-series.md). +++#### [DCdsv3-series](#tab/dcdsv3) ++[View the full DCdsv3-series page](./dcdsv3-series.md). ++++### DCsv2-series ++[View the full DCsv2-series page](./dcsv2-series.md). ++++ ### Previous-generation DC family series For older sizes, see [previous generation sizes](../previous-gen-sizes-list.md#general-purpose-previous-gen-sizes). |
virtual-machines | Ec Family | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/ec-family.md | -> 'EC' family VMs are specialized for confidential computing scenarios. If your workload doesn't require confidential compute and you're looking for memory-optimized VMs with similar specifications, consider the [standard E-family size series](./e-family.md). +> 'EC' family VMs are for confidential computing scenarios. If your workload doesn't require confidential compute and you're looking for memory-optimized VMs with similar specifications, consider the [standard E-family size series](./e-family.md). [!INCLUDE [ec-family-summary](./includes/ec-family-summary.md)] -### Ecasv5 and Ecadsv5-series +### ECasv5 and ECadsv5-series [!INCLUDE [ecasv5-ecadsv5-series-summary](./includes/ecasv5-ecadsv5-series-summary.md)] [View the full Ecasv5 and Ecadsv5-series page](../../ecasv5-ecadsv5-series.md). [!INCLUDE [ecasv5-ecadsv5-series-specs](./includes/ecasv5-ecadsv5-series-specs.md)] --### Ecasccv5 and Ecadsccv5-series --[View the full Ecasccv5 and Ecadsccv5-series page](../../ecasccv5-ecadsccv5-series.md). ----### Ecesv5 and Ecedsv5-series ++### ECesv5 and ECedsv5-series [!INCLUDE [ecesv5-ecedsv5-series-summary](./includes/ecesv5-ecedsv5-series-summary.md)] [View the full Ecesv5 and Ecedsv5-series page](../../ecesv5-ecedsv5-series.md). [!INCLUDE [ecesv5-ecedsv5-series-specs](./includes/ecesv5-ecedsv5-series-specs.md)] ++### ECas_ccv5 and ECads_ccv5-series ++[View the full Ecasccv5 and Ecadsccv5-series page](../../ecasccv5-ecadsccv5-series.md). + ### Previous-generation EC family series For older sizes, see [previous generation sizes](../previous-gen-sizes-list.md#memory-optimized-previous-gen-sizes). |
virtual-machines | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/overview.md | List of memory optimized VM sizes with links to each series' family page section |-||| | [E-family](./memory-optimized/e-family.md) | Relational databases <br> Medium to large caches <br> In-memory analytics |[Epsv6 and Epdsv6-series](./memory-optimized/e-family.md#epsv6-and-epdsv6-series)<br> [Easv6 and Eadsv6-series](./memory-optimized/e-family.md#easv6-and-eadsv6-series)<br> [Ev5 and Esv5-series](./memory-optimized/e-family.md#ev5-and-esv5-series)<br> [Edv5 and Edsv5-series](./memory-optimized/e-family.md#edv5-and-edsv5-series)<br> [Easv5 and Eadsv5-series](./memory-optimized/e-family.md#easv5-and-eadsv5-series)<br> [Epsv5 and Epdsv5-series](./memory-optimized/e-family.md#epsv5-and-epdsv5-series)<br> [Previous-gen families](./previous-gen-sizes-list.md#memory-optimized-previous-gen-sizes) | | [Eb-family](./memory-optimized/e-family.md) | E-family with High remote storage performance | [Ebdsv5 and Ebsv5-series](./memory-optimized/eb-family.md#ebdsv5-and-ebsv5-series) |-| [EC-family](./memory-optimized/ec-family.md) | E-family with confidential computing | [ECasv5 and ECadsv5-series](./memory-optimized/ec-family.md#ecasv5-and-ecadsv5-series)<br> [ECas_cc_v5 and ECads_cc_v5-series](./memory-optimized/ec-family.md#ecasccv5-and-ecadsccv5-series)<br> [ECesv5 and ECedsv5-series](./memory-optimized/ec-family.md#ecesv5-and-ecedsv5-series) | +| [EC-family](./memory-optimized/ec-family.md) | E-family with confidential computing | [ECasv5 and ECadsv5-series](./memory-optimized/ec-family.md#ecasv5-and-ecadsv5-series)<br> [ECas_cc_v5 and ECads_cc_v5-series](./memory-optimized/ec-family.md#ecas_ccv5-and-ecads_ccv5-series)<br> [ECesv5 and ECedsv5-series](./memory-optimized/ec-family.md#ecesv5-and-ecedsv5-series) | | [M-family](./memory-optimized/m-family.md) | Extremely large databases <br> Large amounts of memory | [Msv3 and 
Mdsv3-series](./memory-optimized/m-family.md#msv3-and-mdsv3-series)<br> [Mv2-series](./memory-optimized/m-family.md#mv2-series)<br> [Msv2 and Mdsv2-series](./memory-optimized/m-family.md#msv2-and-mdsv2-series) | | Other families | Older generation memory optimized sizes | [Previous-gen families](./previous-gen-sizes-list.md#memory-optimized-previous-gen-sizes) | |
virtual-machines | Disks Upload Vhd To Managed Disk Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md | Title: Upload a VHD to Azure or copy a disk across regions - Azure PowerShell description: Learn how to upload a VHD to an Azure managed disk and copy a managed disk across regions, using Azure PowerShell, via direct upload. Previously updated : 10/17/2023 Last updated : 08/15/2024 linux If you would like to upload a different disk type, replace **Standard_LRS** with Now that you've created an empty managed disk that is configured for the upload process, you can upload a VHD to it. To upload a VHD to the disk, you'll need a writeable SAS, so that you can reference it as the destination for your upload. -To generate a writable SAS of your empty managed disk, replace `<yourdiskname>`and `<yourresourcegroupname>`, then use the following commands: ++To generate a writable SAS of your empty managed disk, replace `<yourdiskname>` and `<yourresourcegroupname>`, then use the following commands: ```powershell $diskSas = Grant-AzDiskAccess -ResourceGroupName '<yourresourcegroupname>' -DiskName '<yourdiskname>' -DurationInSecond 86400 -Access 'Write' |
virtual-machines | Download Vhd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/download-vhd.md | Your snapshot will be created shortly, and can then be used to download or creat To download the VHD file, you need to generate a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md?toc=/azure/virtual-machines/windows/toc.json) URL. When the URL is generated, an expiration time is assigned to the URL. + # [Portal](#tab/azure-portal) 1. On the page for the VM, click **Disks** in the left menu. |
virtual-network | Tutorial Restrict Network Access To Resources Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-cli.md | - Title: Restrict network access to PaaS resources - Azure CLI -description: This article teaches you how to use the Azure CLI to restrict network access to Azure resources like Azure Storage and Azure SQL Database with virtual network service endpoints. --- Previously updated : 08/11/2024---# Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account. ---# Restrict network access to PaaS resources with virtual network service endpoints using the Azure CLI --Virtual network service endpoints enable you to limit network access to some Azure service resources to a virtual network subnet. You can also remove internet access to the resources. Service endpoints provide direct connection from your virtual network to supported Azure services, allowing you to use your virtual network's private address space to access the Azure services. Traffic destined to Azure resources through service endpoints always stays on the Microsoft Azure backbone network. In this article, you learn how to: --* Create a virtual network with one subnet -* Add a subnet and enable a service endpoint -* Create an Azure resource and allow network access to it from only a subnet -* Deploy a virtual machine (VM) to each subnet -* Confirm access to a resource from a subnet -* Confirm access is denied to a resource from a subnet and the internet ----- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Create a virtual network --Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [az group create](/cli/azure/group). 
The following example creates a resource group named *test-rg* in the *eastus* location. --```azurecli-interactive -az group create \ - --name test-rg \ - --location eastus -``` --Create a virtual network with one subnet with [az network vnet create](/cli/azure/network/vnet). --```azurecli-interactive -az network vnet create \ - --name vnet-1 \ - --resource-group test-rg \ - --address-prefix 10.0.0.0/16 \ - --subnet-name subnet-public \ - --subnet-prefix 10.0.0.0/24 -``` --## Enable a service endpoint --You can enable service endpoints only for services that support service endpoints. View service endpoint-enabled services available in an Azure location with [az network vnet list-endpoint-services](/cli/azure/network/vnet). The following example returns a list of service-endpoint-enabled services available in the *eastus* region. The list of services returned will grow over time, as more Azure services become service endpoint enabled. --```azurecli-interactive -az network vnet list-endpoint-services \ - --location eastus \ - --out table -``` --Create another subnet in the virtual network with [az network vnet subnet create](/cli/azure/network/vnet/subnet). In this example, a service endpoint for `Microsoft.Storage` is created for the subnet: --```azurecli-interactive -az network vnet subnet create \ - --vnet-name vnet-1 \ - --resource-group test-rg \ - --name subnet-private \ - --address-prefix 10.0.1.0/24 \ - --service-endpoints Microsoft.Storage -``` --## Restrict network access for a subnet --Create a network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named *nsg-private*. --```azurecli-interactive -az network nsg create \ - --resource-group test-rg \ - --name nsg-private -``` --Associate the network security group to the *subnet-private* subnet with [az network vnet subnet update](/cli/azure/network/vnet/subnet). 
The following example associates the *nsg-private* network security group to the *subnet-private* subnet: --```azurecli-interactive -az network vnet subnet update \ - --vnet-name vnet-1 \ - --name subnet-private \ - --resource-group test-rg \ - --network-security-group nsg-private -``` --Create security rules with [az network nsg rule create](/cli/azure/network/nsg/rule). The rule that follows allows outbound access to the public IP addresses assigned to the Azure Storage service: --```azurecli-interactive -az network nsg rule create \ - --resource-group test-rg \ - --nsg-name nsg-private \ - --name Allow-Storage-All \ - --access Allow \ - --protocol "*" \ - --direction Outbound \ - --priority 100 \ - --source-address-prefix "VirtualNetwork" \ - --source-port-range "*" \ - --destination-address-prefix "Storage" \ - --destination-port-range "*" -``` --Each network security group contains several [default security rules](./network-security-groups-overview.md#default-security-rules). The rule that follows overrides a default security rule that allows outbound access to all public IP addresses. The `destination-address-prefix "Internet"` option denies outbound access to all public IP addresses. The previous rule overrides this rule, due to its higher priority, which allows access to the public IP addresses of Azure Storage. --```azurecli-interactive -az network nsg rule create \ - --resource-group test-rg \ - --nsg-name nsg-private \ - --name Deny-Internet-All \ - --access Deny \ - --protocol "*" \ - --direction Outbound \ - --priority 110 \ - --source-address-prefix "VirtualNetwork" \ - --source-port-range "*" \ - --destination-address-prefix "Internet" \ - --destination-port-range "*" -``` --The following rule allows SSH traffic inbound to the subnet from anywhere. The rule overrides a default security rule that denies all inbound traffic from the internet. SSH is allowed to the subnet so that connectivity can be tested in a later step. 
--```azurecli-interactive -az network nsg rule create \ - --resource-group test-rg \ - --nsg-name nsg-private \ - --name Allow-SSH-All \ - --access Allow \ - --protocol Tcp \ - --direction Inbound \ - --priority 120 \ - --source-address-prefix "*" \ - --source-port-range "*" \ - --destination-address-prefix "VirtualNetwork" \ - --destination-port-range "22" -``` --## Restrict network access to a resource --The steps necessary to restrict network access to resources created through Azure services enabled for service endpoints vary across services. See the documentation for individual services for specific steps for each service. The remainder of this article includes steps to restrict network access for an Azure Storage account, as an example. --### Create a storage account --Create an Azure storage account with [az storage account create](/cli/azure/storage/account). Replace `<replace-with-your-unique-storage-account-name>` with a name that is unique across all Azure locations, between 3-24 characters in length, using only numbers and lower-case letters. --```azurecli-interactive -storageAcctName="<replace-with-your-unique-storage-account-name>" --az storage account create \ - --name $storageAcctName \ - --resource-group test-rg \ - --sku Standard_LRS \ - --kind StorageV2 -``` --After the storage account is created, retrieve the connection string for the storage account into a variable with [az storage account show-connection-string](/cli/azure/storage/account). The connection string is used to create a file share in a later step. --For the purposes of this tutorial, the connection string is used to connect to the storage account. Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this procedure requires a high degree of trust in the application, and carries risks that aren't present in other flows.
You should only use this flow when other more secure flows, such as managed identities, aren't viable. --For more information about connecting to a storage account using a managed identity, see [Use a managed identity to access Azure Storage](/entra/identity/managed-identities-azure-resources/tutorial-linux-managed-identities-vm-access?pivots=identity-linux-mi-vm-access-storage). --```azurecli-interactive -saConnectionString=$(az storage account show-connection-string \ - --name $storageAcctName \ - --resource-group test-rg \ - --query 'connectionString' \ - --out tsv) -``` --<a name="account-key"></a>View the contents of the variable and note the value for **AccountKey** returned in the output, because it's used in a later step. --```azurecli-interactive -echo $saConnectionString -``` --### Create a file share in the storage account --Create a file share in the storage account with [az storage share create](/cli/azure/storage/share). In a later step, this file share is mounted to confirm network access to it. --```azurecli-interactive -az storage share create \ - --name file-share \ - --quota 2048 \ - --connection-string $saConnectionString -``` --### Deny all network access to a storage account --By default, storage accounts accept network connections from clients in any network. To limit access to selected networks, change the default action to *Deny* with [az storage account update](/cli/azure/storage/account). Once network access is denied, the storage account isn't accessible from any network. --```azurecli-interactive -az storage account update \ - --name $storageAcctName \ - --resource-group test-rg \ - --default-action Deny -``` --### Enable network access from a subnet --Allow network access to the storage account from the *subnet-private* subnet with [az storage account network-rule add](/cli/azure/storage/account/network-rule).
--```azurecli-interactive -az storage account network-rule add \ - --resource-group test-rg \ - --account-name $storageAcctName \ - --vnet-name vnet-1 \ - --subnet subnet-private -``` -## Create virtual machines --To test network access to a storage account, deploy a VM to each subnet. --### Create the first virtual machine --Create a VM in the *subnet-public* subnet with [az vm create](/cli/azure/vm). If SSH keys don't already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option. --```azurecli-interactive -az vm create \ - --resource-group test-rg \ - --name vm-public \ - --image Ubuntu2204 \ - --vnet-name vnet-1 \ - --subnet subnet-public \ - --admin-username azureuser \ - --generate-ssh-keys -``` --The VM takes a few minutes to create. After the VM is created, the Azure CLI shows information similar to the following example: --```azurecli -{ - "fqdns": "", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg/providers/Microsoft.Compute/virtualMachines/vm-public", - "location": "eastus", - "macAddress": "00-0D-3A-23-9A-49", - "powerState": "VM running", - "privateIpAddress": "10.0.0.4", - "publicIpAddress": "203.0.113.24", - "resourceGroup": "test-rg" -} -``` --### Create the second virtual machine --```azurecli-interactive -az vm create \ - --resource-group test-rg \ - --name vm-private \ - --image Ubuntu2204 \ - --vnet-name vnet-1 \ - --subnet subnet-private \ - --admin-username azureuser \ - --generate-ssh-keys -``` --The VM takes a few minutes to create. After creation, take note of the **publicIpAddress** in the output returned. This address is used to access the VM from the internet in a later step. --## Confirm access to storage account --SSH into the *vm-private* VM. 
--Run the following command to store the IP address of the VM as an environment variable: --```bash -export IP_ADDRESS=$(az vm show --show-details --resource-group test-rg --name vm-private --query publicIps --output tsv) -``` --```bash -ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS -``` --Create a folder for a mount point: --```bash -sudo mkdir /mnt/file-share -``` --Mount the Azure file share to the directory you created. Before running the following command, replace `<storage-account-name>` with the account name and `<storage-account-key>` with the key you retrieved in [Create a storage account](#create-a-storage-account). --```bash -sudo mount --types cifs //<storage-account-name>.file.core.windows.net/file-share /mnt/file-share --options vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino -``` --You receive the `user@vm-private:~$` prompt. The Azure file share is successfully mounted to */mnt/file-share*. --Confirm that the VM has no outbound connectivity to any other public IP addresses: --```bash -ping bing.com -c 4 -``` --You receive no replies, because the network security group associated to the *subnet-private* subnet doesn't allow outbound access to public IP addresses other than the addresses assigned to the Azure Storage service. --Exit the SSH session to the *vm-private* VM. --## Confirm access is denied to storage account --SSH into the *vm-public* VM. --Run the following command to store the IP address of the VM as an environment variable: --```bash -export IP_ADDRESS=$(az vm show --show-details --resource-group test-rg --name vm-public --query publicIps --output tsv) -``` --```bash -ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS -``` --Create a directory for a mount point: --```bash -sudo mkdir /mnt/file-share -``` --Attempt to mount the Azure file share to the directory you created. This article assumes you deployed the latest version of Ubuntu.
If you're using earlier versions of Ubuntu, see [Mount on Linux](../storage/files/storage-how-to-use-files-linux.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for more instructions about mounting file shares. Before running the following command, replace `<storage-account-name>` with the account name and `<storage-account-key>` with the key you retrieved in [Create a storage account](#create-a-storage-account): --```bash -sudo mount --types cifs //<storage-account-name>.file.core.windows.net/file-share /mnt/file-share --options vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino -``` --Access is denied, and you receive a `mount error(13): Permission denied` error, because the *vm-public* VM is deployed within the *subnet-public* subnet. The *subnet-public* subnet doesn't have a service endpoint enabled for Azure Storage, and the storage account only allows network access from the *subnet-private* subnet, not the *subnet-public* subnet. --Exit the SSH session to the *vm-public* VM. --From your computer, attempt to view the shares in your storage account with [az storage share list](/cli/azure/storage/share). Replace `<account-name>` and `<account-key>` with the storage account name and key from [Create a storage account](#create-a-storage-account): --```azurecli-interactive -az storage share list \ - --account-name <account-name> \ - --account-key <account-key> -``` --Access is denied and you receive a **This request isn't authorized to perform this operation** error, because your computer isn't in the *subnet-private* subnet of the *vnet-1* virtual network. --## Clean up resources --When no longer needed, use [az group delete](/cli/azure) to remove the resource group and all of the resources it contains. --```azurecli-interactive -az group delete \ - --name test-rg \ - --yes \ - --no-wait -``` --## Next steps --In this article, you enabled a service endpoint for a virtual network subnet.
You learned that service endpoints can be enabled for resources deployed with multiple Azure services. You created an Azure Storage account and limited network access to the storage account to only resources within a virtual network subnet. To learn more about service endpoints, see [Service endpoints overview](virtual-network-service-endpoints-overview.md) and [Manage subnets](virtual-network-manage-subnet.md). --If you have multiple virtual networks in your account, you might want to connect two virtual networks together so the resources within each virtual network can communicate with each other. To learn how, see [Connect virtual networks](tutorial-connect-virtual-networks-cli.md). |
virtual-network | Tutorial Restrict Network Access To Resources Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-powershell.md | - Title: Restrict network access to PaaS resources - Azure PowerShell -description: In this article, you learn how to limit and restrict network access to Azure resources, such as Azure Storage and Azure SQL Database, with virtual network service endpoints using Azure PowerShell. ----- Previously updated : 03/14/2018---# Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account. ---# Restrict network access to PaaS resources with virtual network service endpoints using PowerShell --Virtual network service endpoints enable you to limit network access to some Azure service resources to a virtual network subnet. You can also remove internet access to the resources. Service endpoints provide direct connection from your virtual network to supported Azure services, allowing you to use your virtual network's private address space to access the Azure services. Traffic destined to Azure resources through service endpoints always stays on the Microsoft Azure backbone network. In this article, you learn how to: --* Create a virtual network with one subnet -* Add a subnet and enable a service endpoint -* Create an Azure resource and allow network access to it from only a subnet -* Deploy a virtual machine (VM) to each subnet -* Confirm access to a resource from a subnet -* Confirm access is denied to a resource from a subnet and the internet --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. 
If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. --## Create a virtual network --Before creating a virtual network, you have to create a resource group for the virtual network and all other resources created in this article. Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *myResourceGroup*: --```azurepowershell-interactive -New-AzResourceGroup -ResourceGroupName myResourceGroup -Location EastUS -``` --Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named *myVirtualNetwork* with the address prefix *10.0.0.0/16*. --```azurepowershell-interactive -$virtualNetwork = New-AzVirtualNetwork ` - -ResourceGroupName myResourceGroup ` - -Location EastUS ` - -Name myVirtualNetwork ` - -AddressPrefix 10.0.0.0/16 -``` --Create a subnet configuration with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig). The following example creates a subnet configuration for a subnet named *Public*: --```azurepowershell-interactive -$subnetConfigPublic = Add-AzVirtualNetworkSubnetConfig ` - -Name Public ` - -AddressPrefix 10.0.0.0/24 ` - -VirtualNetwork $virtualNetwork -``` --Create the subnet in the virtual network by writing the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork): --```azurepowershell-interactive -$virtualNetwork | Set-AzVirtualNetwork -``` --## Enable a service endpoint --You can enable service endpoints only for services that support service endpoints.
View service endpoint-enabled services available in an Azure location with [Get-AzVirtualNetworkAvailableEndpointService](/powershell/module/az.network/get-azvirtualnetworkavailableendpointservice). The following example returns a list of service-endpoint-enabled services available in the *eastus* region. The list of services returned will grow over time as more Azure services become service endpoint enabled. --```azurepowershell-interactive -Get-AzVirtualNetworkAvailableEndpointService -Location eastus | Select Name -``` --Create an additional subnet in the virtual network. In this example, a subnet named *Private* is created with a service endpoint for *Microsoft.Storage*: --```azurepowershell-interactive -$subnetConfigPrivate = Add-AzVirtualNetworkSubnetConfig ` - -Name Private ` - -AddressPrefix 10.0.1.0/24 ` - -VirtualNetwork $virtualNetwork ` - -ServiceEndpoint Microsoft.Storage --$virtualNetwork | Set-AzVirtualNetwork -``` --## Restrict network access for a subnet --Create network security group security rules with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig). The following rule allows outbound access to the public IP addresses assigned to the Azure Storage service: --```azurepowershell-interactive -$rule1 = New-AzNetworkSecurityRuleConfig ` - -Name Allow-Storage-All ` - -Access Allow ` - -DestinationAddressPrefix Storage ` - -DestinationPortRange * ` - -Direction Outbound ` - -Priority 100 ` - -Protocol * ` - -SourceAddressPrefix VirtualNetwork ` - -SourcePortRange * -``` --The following rule denies access to all public IP addresses. The previous rule overrides this rule, due to its higher priority, which allows access to the public IP addresses of Azure Storage. 
--```azurepowershell-interactive -$rule2 = New-AzNetworkSecurityRuleConfig ` - -Name Deny-Internet-All ` - -Access Deny ` - -DestinationAddressPrefix Internet ` - -DestinationPortRange * ` - -Direction Outbound ` - -Priority 110 ` - -Protocol * ` - -SourceAddressPrefix VirtualNetwork ` - -SourcePortRange * -``` --The following rule allows Remote Desktop Protocol (RDP) traffic inbound to the subnet from anywhere. Remote desktop connections are allowed to the subnet, so that you can confirm network access to a resource in a later step. --```azurepowershell-interactive -$rule3 = New-AzNetworkSecurityRuleConfig ` - -Name Allow-RDP-All ` - -Access Allow ` - -DestinationAddressPrefix VirtualNetwork ` - -DestinationPortRange 3389 ` - -Direction Inbound ` - -Priority 120 ` - -Protocol * ` - -SourceAddressPrefix * ` - -SourcePortRange * -``` --Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). The following example creates a network security group named *myNsgPrivate*. --```azurepowershell-interactive -$nsg = New-AzNetworkSecurityGroup ` - -ResourceGroupName myResourceGroup ` - -Location EastUS ` - -Name myNsgPrivate ` - -SecurityRules $rule1,$rule2,$rule3 -``` --Associate the network security group to the *Private* subnet with [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) and then write the subnet configuration to the virtual network. 
The following example associates the *myNsgPrivate* network security group to the *Private* subnet: --```azurepowershell-interactive -Set-AzVirtualNetworkSubnetConfig ` - -VirtualNetwork $VirtualNetwork ` - -Name Private ` - -AddressPrefix 10.0.1.0/24 ` - -ServiceEndpoint Microsoft.Storage ` - -NetworkSecurityGroup $nsg --$virtualNetwork | Set-AzVirtualNetwork -``` --## Restrict network access to a resource --The steps necessary to restrict network access to resources created through Azure services enabled for service endpoints vary across services. See the documentation for individual services for specific steps for each service. The remainder of this article includes steps to restrict network access for an Azure Storage account, as an example. --### Create a storage account --Create an Azure storage account with [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount). Replace `<replace-with-your-unique-storage-account-name>` with a name that is unique across all Azure locations, between 3-24 characters in length, using only numbers and lower-case letters. --```azurepowershell-interactive -$storageAcctName = '<replace-with-your-unique-storage-account-name>' --New-AzStorageAccount ` - -Location EastUS ` - -Name $storageAcctName ` - -ResourceGroupName myResourceGroup ` - -SkuName Standard_LRS ` - -Kind StorageV2 -``` --After the storage account is created, retrieve the key for the storage account into a variable with [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey): --```azurepowershell-interactive -$storageAcctKey = (Get-AzStorageAccountKey ` - -ResourceGroupName myResourceGroup ` - -AccountName $storageAcctName).Value[0] -``` --The key is used to create a file share in a later step. Enter `$storageAcctKey` and note the value, as you'll also need to manually enter it in a later step when you map the file share to a drive in a VM.
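`Get-AzStorageAccountKey` returns two keys, *key1* and *key2*; `.Value[0]` in the previous command selects the first. As an optional sketch (assuming the same resource group and `$storageAcctName` used above), you can instead select a key by name, which can be handy while rotating *key1*:

```azurepowershell-interactive
# Optional sketch, not a required tutorial step: select key2 by name
# instead of taking the first element of the key array.
$keys = Get-AzStorageAccountKey `
  -ResourceGroupName myResourceGroup `
  -AccountName $storageAcctName
$storageAcctKey = ($keys | Where-Object { $_.KeyName -eq 'key2' }).Value
```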
--### Create a file share in the storage account --Create a context for your storage account and key with [New-AzStorageContext](/powershell/module/az.storage/new-AzStoragecontext). The context encapsulates the storage account name and account key: --```azurepowershell-interactive -$storageContext = New-AzStorageContext $storageAcctName $storageAcctKey -``` --Create a file share with [New-AzStorageShare](/powershell/module/az.storage/new-azstorageshare): --```azurepowershell-interactive -$share = New-AzStorageShare my-file-share -Context $storageContext -``` --### Deny all network access to a storage account --By default, storage accounts accept network connections from clients in any network. To limit access to selected networks, change the default action to *Deny* with [Update-AzStorageAccountNetworkRuleSet](/powershell/module/az.storage/update-azstorageaccountnetworkruleset). Once network access is denied, the storage account is not accessible from any network. --```azurepowershell-interactive -Update-AzStorageAccountNetworkRuleSet ` - -ResourceGroupName "myresourcegroup" ` - -Name $storageAcctName ` - -DefaultAction Deny -``` --### Enable network access from a subnet --Retrieve the created virtual network with [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) and then retrieve the private subnet object into a variable with [Get-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/get-azvirtualnetworksubnetconfig): --```azurepowershell-interactive -$privateSubnet = Get-AzVirtualNetwork ` - -ResourceGroupName "myResourceGroup" ` - -Name "myVirtualNetwork" ` - | Get-AzVirtualNetworkSubnetConfig ` - -Name "Private" -``` --Allow network access to the storage account from the *Private* subnet with [Add-AzStorageAccountNetworkRule](/powershell/module/az.storage/add-azstorageaccountnetworkrule).
--```azurepowershell-interactive -Add-AzStorageAccountNetworkRule ` - -ResourceGroupName "myresourcegroup" ` - -Name $storageAcctName ` - -VirtualNetworkResourceId $privateSubnet.Id -``` --## Create virtual machines --To test network access to a storage account, deploy a VM to each subnet. --### Create the first virtual machine --Create a virtual machine in the *Public* subnet with [New-AzVM](/powershell/module/az.compute/new-azvm). When running the command that follows, you are prompted for credentials. The values that you enter are configured as the user name and password for the VM. The `-AsJob` option creates the VM in the background, so that you can continue to the next step. --```azurepowershell-interactive -New-AzVm ` - -ResourceGroupName "myResourceGroup" ` - -Location "East US" ` - -VirtualNetworkName "myVirtualNetwork" ` - -SubnetName "Public" ` - -Name "myVmPublic" ` - -AsJob -``` --Output similar to the following example output is returned: --```powershell -Id Name PSJobTypeName State HasMoreData Location Command - - -- -- -- - -1 Long Running... AzureLongRun... Running True localhost New-AzVM -``` --### Create the second virtual machine --Create a virtual machine in the *Private* subnet: --```azurepowershell-interactive -New-AzVm ` - -ResourceGroupName "myResourceGroup" ` - -Location "East US" ` - -VirtualNetworkName "myVirtualNetwork" ` - -SubnetName "Private" ` - -Name "myVmPrivate" -``` --It takes a few minutes for Azure to create the VM. Do not continue to the next step until Azure finishes creating the VM and returns output to PowerShell. --## Confirm access to storage account --Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to return the public IP address of a VM. 
The following example returns the public IP address of the *myVmPrivate* VM: --```azurepowershell-interactive -Get-AzPublicIpAddress ` - -Name myVmPrivate ` - -ResourceGroupName myResourceGroup ` - | Select IpAddress -``` --Replace `<publicIpAddress>` in the following command with the public IP address returned from the previous command, and then enter the command: --```powershell -mstsc /v:<publicIpAddress> -``` --A Remote Desktop Protocol (.rdp) file is created and downloaded to your computer. Open the downloaded .rdp file. If prompted, select **Connect**. Enter the user name and password you specified when creating the VM. You may need to select **More choices**, then **Use a different account**, to specify the credentials you entered when you created the VM. Select **OK**. You may receive a certificate warning during the sign-in process. If you receive the warning, select **Yes** or **Continue**, to proceed with the connection. --On the *myVmPrivate* VM, map the Azure file share to drive Z using PowerShell. Before running the commands that follow, replace `<storage-account-key>` and `<storage-account-name>` with the values you supplied or retrieved in [Create a storage account](#create-a-storage-account). --```powershell -$acctKey = ConvertTo-SecureString -String "<storage-account-key>" -AsPlainText -Force -$credential = New-Object System.Management.Automation.PSCredential -ArgumentList "Azure\<storage-account-name>", $acctKey -New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account-name>.file.core.windows.net\my-file-share" -Credential $credential -``` --PowerShell returns output similar to the following example output: --```powershell -Name Used (GB) Free (GB) Provider Root -- -- --Z FileSystem \\vnt.file.core.windows.net\my-f... -``` --The Azure file share is successfully mapped to the Z drive.
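To double-check that the mapped drive is usable and not just listed, you can write a small file to it and read it back. This is an optional sketch, not a required tutorial step:

```powershell
# Optional sketch: confirm the Z: mapping accepts writes and reads.
Set-Content -Path Z:\smoke-test.txt -Value 'written from myVmPrivate'
Get-Content -Path Z:\smoke-test.txt
Remove-Item -Path Z:\smoke-test.txt
```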
--Confirm that the VM has no outbound connectivity to any other public IP addresses: --```powershell -ping bing.com -``` --You receive no replies, because the network security group associated to the *Private* subnet does not allow outbound access to public IP addresses other than the addresses assigned to the Azure Storage service. --Close the remote desktop session to the *myVmPrivate* VM. --## Confirm access is denied to storage account --Get the public IP address of the *myVmPublic* VM: --```azurepowershell-interactive -Get-AzPublicIpAddress ` - -Name myVmPublic ` - -ResourceGroupName myResourceGroup ` - | Select IpAddress -``` --Replace `<publicIpAddress>` in the following command with the public IP address returned from the previous command, and then enter the command: --```powershell -mstsc /v:<publicIpAddress> -``` --On the *myVmPublic* VM, attempt to map the Azure file share to drive Z. Before running the commands that follow, replace `<storage-account-key>` and `<storage-account-name>` with the values you supplied or retrieved in [Create a storage account](#create-a-storage-account). --```powershell -$acctKey = ConvertTo-SecureString -String "<storage-account-key>" -AsPlainText -Force -$credential = New-Object System.Management.Automation.PSCredential -ArgumentList "Azure\<storage-account-name>", $acctKey -New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account-name>.file.core.windows.net\my-file-share" -Credential $credential -``` --Access to the share is denied, and you receive a `New-PSDrive : Access is denied` error. Access is denied because the *myVmPublic* VM is deployed in the *Public* subnet. The *Public* subnet does not have a service endpoint enabled for Azure Storage, and the storage account only allows network access from the *Private* subnet, not the *Public* subnet. --Close the remote desktop session to the *myVmPublic* VM.
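The different behavior of the two VMs comes entirely from the storage account's firewall configuration. As an optional sketch (assuming a session where `$storageAcctName` is still set), `Get-AzStorageAccountNetworkRuleSet` shows the default action and the virtual network rules that allow only the *Private* subnet:

```azurepowershell-interactive
# Optional sketch: display the storage account's network rule set.
# Expect DefaultAction to be Deny, with one virtual network rule
# referencing the Private subnet.
$ruleSet = Get-AzStorageAccountNetworkRuleSet `
  -ResourceGroupName "myresourcegroup" `
  -Name $storageAcctName
$ruleSet.DefaultAction
$ruleSet.VirtualNetworkRules
```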
--From your computer, attempt to view the file shares in the storage account with the following command: --```azurepowershell-interactive -Get-AzStorageFile ` - -ShareName my-file-share ` - -Context $storageContext -``` --Access is denied, and you receive a *Get-AzStorageFile : The remote server returned an error: (403) Forbidden. HTTP Status Code: 403 - HTTP Error Message: This request is not authorized to perform this operation* error, because your computer is not in the *Private* subnet of the *myVirtualNetwork* virtual network. --## Clean up resources --When no longer needed, you can use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains: --```azurepowershell-interactive -Remove-AzResourceGroup -Name myResourceGroup -Force -``` --## Next steps --In this article, you enabled a service endpoint for a virtual network subnet. You learned that service endpoints can be enabled for resources deployed with multiple Azure services. You created an Azure Storage account and limited network access to the storage account to only resources within a virtual network subnet. To learn more about service endpoints, see [Service endpoints overview](virtual-network-service-endpoints-overview.md) and [Manage subnets](virtual-network-manage-subnet.md). --If you have multiple virtual networks in your account, you may want to connect two virtual networks together so the resources within each virtual network can communicate with each other. To learn how, see [Connect virtual networks](tutorial-connect-virtual-networks-powershell.md). |
virtual-network | Tutorial Restrict Network Access To Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources.md | + - template-tutorial + - devx-track-azurecli + - devx-track-azurepowershell +content_well_notification: + - AI-contribution +ai-usage: ai-assisted + # Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account. In this tutorial, you learn how to: > * Confirm access to a resource from a subnet > * Confirm access is denied to a resource from a subnet and the internet -This tutorial uses the Azure portal. You can also complete it using the [Azure CLI](tutorial-restrict-network-access-to-resources-cli.md) or [PowerShell](tutorial-restrict-network-access-to-resources-powershell.md). - ## Prerequisites +### [Portal](#tab/portal) + - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -## Sign in to Azure +### [PowerShell](#tab/powershell) -Sign in to the [Azure portal](https://portal.azure.com). +If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. ++### [CLI](#tab/cli) ++++- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. 
++ ## Enable a service endpoint +### [Portal](#tab/portal) ++ Service endpoints are enabled per service, per subnet. 1. In the search box at the top of the portal page, search for **Virtual network**. Select **Virtual networks** in the search results. Service endpoints are enabled per service, per subnet. 1. Select **+ Subnet**. -1. On the **Add subnet** page, enter or select the following information: +1. On the **Add subnet** page, enter, or select the following information: | Setting | Value | | | | Service endpoints are enabled per service, per subnet. > [!CAUTION] > Before enabling a service endpoint for an existing subnet that has resources in it, see [Change subnet settings](virtual-network-manage-subnet.md#change-subnet-settings). +### [PowerShell](#tab/powershell) ++## Create a virtual network ++1. Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *test-rg*: ++ ```azurepowershell-interactive + $rg = @{ + ResourceGroupName = "test-rg" + Location = "westus2" + } + New-AzResourceGroup @rg + ``` ++1. Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named *vnet-1* with the address prefix *10.0.0.0/16*. ++ ```azurepowershell-interactive + $vnet = @{ + ResourceGroupName = "test-rg" + Location = "westus2" + Name = "vnet-1" + AddressPrefix = "10.0.0.0/16" + } + $virtualNetwork = New-AzVirtualNetwork @vnet + ``` ++1. Create a subnet configuration with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig). 
The following example creates a subnet configuration for a subnet named *subnet-public*: ++ ```azurepowershell-interactive + $subpub = @{ + Name = "subnet-public" + AddressPrefix = "10.0.0.0/24" + VirtualNetwork = $virtualNetwork + } + $subnetConfigPublic = Add-AzVirtualNetworkSubnetConfig @subpub + ``` ++1. Create the subnet in the virtual network by writing the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork): ++ ```azurepowershell-interactive + $virtualNetwork | Set-AzVirtualNetwork + ``` ++1. Create another subnet in the virtual network. In this example, a subnet named *subnet-private* is created with a service endpoint for *Microsoft.Storage*: ++ ```azurepowershell-interactive + $subpriv = @{ + Name = "subnet-private" + AddressPrefix = "10.0.2.0/24" + VirtualNetwork = $virtualNetwork + ServiceEndpoint = "Microsoft.Storage" + } + $subnetConfigPrivate = Add-AzVirtualNetworkSubnetConfig @subpriv ++ $virtualNetwork | Set-AzVirtualNetwork + ``` ++## Deploy Azure Bastion ++Azure Bastion uses your browser to connect to VMs in your virtual network over Secure Shell (SSH) or Remote Desktop Protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Bastion, see [What is Azure Bastion?](/azure/bastion/bastion-overview). ++ [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)] ++1. Configure a Bastion subnet for your virtual network. This subnet is reserved exclusively for Bastion resources and must be named **AzureBastionSubnet**. ++ ```azurepowershell-interactive + $subnet = @{ + Name = 'AzureBastionSubnet' + VirtualNetwork = $virtualNetwork + AddressPrefix = '10.0.1.0/26' + } + $subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet + ``` ++1. Set the configuration: ++ ```azurepowershell-interactive + $virtualNetwork | Set-AzVirtualNetwork + ``` ++1. 
Create a public IP address for Bastion. The Bastion host uses the public IP to access SSH and RDP over port 443. ++ ```azurepowershell-interactive + $ip = @{ + ResourceGroupName = 'test-rg' + Name = 'public-ip' + Location = 'westus2' + AllocationMethod = 'Static' + Sku = 'Standard' + Zone = 1,2,3 + } + New-AzPublicIpAddress @ip + ``` ++1. Use the [New-AzBastion](/powershell/module/az.network/new-azbastion) command to create a new standard Bastion host in **AzureBastionSubnet**: ++ ```azurepowershell-interactive + $bastion = @{ + Name = 'bastion' + ResourceGroupName = 'test-rg' + PublicIpAddressRgName = 'test-rg' + PublicIpAddressName = 'public-ip' + VirtualNetworkRgName = 'test-rg' + VirtualNetworkName = 'vnet-1' + Sku = 'Basic' + } + New-AzBastion @bastion -AsJob + ``` ++ It takes about 10 minutes to deploy the Bastion resources. You can create VMs in the next section while Bastion deploys to your virtual network. ++### [CLI](#tab/cli) ++## Create a virtual network ++1. Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *test-rg* in the *westus2* location. ++ ```azurecli-interactive + az group create \ + --name test-rg \ + --location westus2 + ``` ++1. Create a virtual network with one subnet with [az network vnet create](/cli/azure/network/vnet). ++ ```azurecli-interactive + az network vnet create \ + --name vnet-1 \ + --resource-group test-rg \ + --address-prefix 10.0.0.0/16 \ + --subnet-name subnet-public \ + --subnet-prefix 10.0.0.0/24 + ``` ++1. You can enable service endpoints only for services that support service endpoints. View service endpoint-enabled services available in an Azure location with [az network vnet list-endpoint-services](/cli/azure/network/vnet). 
The following example returns a list of service-endpoint-enabled services available in the *westus2* region. The list of services returned will grow over time, as more Azure services become service endpoint enabled. ++ ```azurecli-interactive + az network vnet list-endpoint-services \ + --location westus2 \ + --out table + ``` ++1. Create another subnet in the virtual network with [az network vnet subnet create](/cli/azure/network/vnet/subnet). In this example, a service endpoint for `Microsoft.Storage` is created for the subnet: ++ ```azurecli-interactive + az network vnet subnet create \ + --vnet-name vnet-1 \ + --resource-group test-rg \ + --name subnet-private \ + --address-prefix 10.0.1.0/24 \ + --service-endpoints Microsoft.Storage + ``` +++ ## Restrict network access for a subnet +### [Portal](#tab/portal) + By default, all virtual machine instances in a subnet can communicate with any resources. You can limit communication to and from all resources in a subnet by creating a network security group, and associating it to the subnet. 1. In the search box at the top of the portal page, search for **Network security group**. Select **Network security groups** in the search results. 1. In **Network security groups**, select **+ Create**. -1. In the **Basics** tab of **Create network security group**, enter or select the following information: +1. In the **Basics** tab of **Create network security group**, enter, or select the following information: | Setting | Value | | - | -- | By default, all virtual machine instances in a subnet can communicate with any r 1. Select **Review + create**, then select **Create**. -### Create outbound NSG rules +### [PowerShell](#tab/powershell) ++1. Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). The following example creates a network security group named *nsg-private*. 
++ ```azurepowershell-interactive + $nsgpriv = @{ + ResourceGroupName = 'test-rg' + Location = 'westus2' + Name = 'nsg-private' + } + $nsg = New-AzNetworkSecurityGroup @nsgpriv + ``` ++### [CLI](#tab/cli) ++Create a network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named *nsg-private*. ++```azurecli-interactive +az network nsg create \ + --resource-group test-rg \ + --name nsg-private +``` ++++### Create outbound Network Security Group (NSG) rules ++### [Portal](#tab/portal) 1. In the search box at the top of the portal page, search for **Network security group**. Select **Network security groups** in the search results. By default, all virtual machine instances in a subnet can communicate with any r | Destination | Select **Service Tag**. | | Destination service tag | Select **Storage**. | | Service | Leave default of **Custom**. |- | Destination port ranges | Enter **445**. </br> SMB protocol is used to connect to a file share created in a later step. | + | Destination port ranges | Enter **445**. | | Protocol | Select **Any**. | | Action | Select **Allow**. | | Priority | Leave the default of **100**. | By default, all virtual machine instances in a subnet can communicate with any r 1. Select **Add**. -### Associate the network security group to a subnet - 1. In the search box at the top of the portal page, search for **Network security group**. Select **Network security groups** in the search results. 1. Select **nsg-storage**. By default, all virtual machine instances in a subnet can communicate with any r 1. Select **OK**. +### [PowerShell](#tab/powershell) ++1. Create network security group security rules with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig). 
The following rule allows outbound access to the public IP addresses assigned to the Azure Storage service: ++ ```azurepowershell-interactive + $r1 = @{ + Name = "Allow-Storage-All" + Access = "Allow" + DestinationAddressPrefix = "Storage" + DestinationPortRange = "*" + Direction = "Outbound" + Priority = 100 + Protocol = "*" + SourceAddressPrefix = "VirtualNetwork" + SourcePortRange = "*" + } ++ $rule1 = New-AzNetworkSecurityRuleConfig @r1 + ``` ++1. The following rule denies access to all public IP addresses. Because the previous rule has a higher priority, it overrides this rule and allows access to the public IP addresses of Azure Storage. ++ ```azurepowershell-interactive + $r2 = @{ + Name = "Deny-Internet-All" + Access = "Deny" + DestinationAddressPrefix = "Internet" + DestinationPortRange = "*" + Direction = "Outbound" + Priority = 110 + Protocol = "*" + SourceAddressPrefix = "VirtualNetwork" + SourcePortRange = "*" + } + $rule2 = New-AzNetworkSecurityRuleConfig @r2 + ``` ++1. Use [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) to retrieve the network security group object into a variable. Add the rules to the object's security rules collection, and then write the updated group back with [Set-AzNetworkSecurityGroup](/powershell/module/az.network/set-aznetworksecuritygroup). ++ ```azurepowershell-interactive + # Retrieve the existing network security group + $nsgpriv = @{ + ResourceGroupName = 'test-rg' + Name = 'nsg-private' + } + $nsg = Get-AzNetworkSecurityGroup @nsgpriv ++ # Add the new rules to the security group + $nsg.SecurityRules += $rule1 + $nsg.SecurityRules += $rule2 ++ # Update the network security group with the new rules + Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg + ``` ++1. Associate the network security group to the *subnet-private* subnet with [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) and then write the subnet configuration to the virtual network.
The following example associates the *nsg-private* network security group to the *subnet-private* subnet: ++ ```azurepowershell-interactive + $subnet = @{ + VirtualNetwork = $VirtualNetwork + Name = "subnet-private" + AddressPrefix = "10.0.2.0/24" + ServiceEndpoint = "Microsoft.Storage" + NetworkSecurityGroup = $nsg + } + Set-AzVirtualNetworkSubnetConfig @subnet ++ $virtualNetwork | Set-AzVirtualNetwork + ``` ++### [CLI](#tab/cli) ++1. Create security rules with [az network nsg rule create](/cli/azure/network/nsg/rule). The following rule allows outbound access to the public IP addresses assigned to the Azure Storage service: ++ ```azurecli-interactive + az network nsg rule create \ + --resource-group test-rg \ + --nsg-name nsg-private \ + --name Allow-Storage-All \ + --access Allow \ + --protocol "*" \ + --direction Outbound \ + --priority 100 \ + --source-address-prefix "VirtualNetwork" \ + --source-port-range "*" \ + --destination-address-prefix "Storage" \ + --destination-port-range "*" + ``` ++1. Each network security group contains several [default security rules](./network-security-groups-overview.md#default-security-rules). The rule that follows overrides a default security rule that allows outbound access to all public IP addresses. The `destination-address-prefix "Internet"` option denies outbound access to all public IP addresses. The previous rule overrides this rule, due to its higher priority, which allows access to the public IP addresses of Azure Storage. ++ ```azurecli-interactive + az network nsg rule create \ + --resource-group test-rg \ + --nsg-name nsg-private \ + --name Deny-Internet-All \ + --access Deny \ + --protocol "*" \ + --direction Outbound \ + --priority 110 \ + --source-address-prefix "VirtualNetwork" \ + --source-port-range "*" \ + --destination-address-prefix "Internet" \ + --destination-port-range "*" + ``` ++1. The following rule allows SSH traffic inbound to the subnet from anywhere. 
The rule overrides a default security rule that denies all inbound traffic from the internet. SSH is allowed to the subnet so that connectivity can be tested in a later step. ++ ```azurecli-interactive + az network nsg rule create \ + --resource-group test-rg \ + --nsg-name nsg-private \ + --name Allow-SSH-All \ + --access Allow \ + --protocol Tcp \ + --direction Inbound \ + --priority 120 \ + --source-address-prefix "*" \ + --source-port-range "*" \ + --destination-address-prefix "VirtualNetwork" \ + --destination-port-range "22" + ``` ++1. Associate the network security group to the *subnet-private* subnet with [az network vnet subnet update](/cli/azure/network/vnet/subnet). The following example associates the *nsg-private* network security group to the *subnet-private* subnet: ++ ```azurecli-interactive + az network vnet subnet update \ + --vnet-name vnet-1 \ + --name subnet-private \ + --resource-group test-rg \ + --network-security-group nsg-private + ``` +++ ## Restrict network access to a resource +### [Portal](#tab/portal) + The steps required to restrict network access to resources created through Azure services, which are enabled for service endpoints vary across services. See the documentation for individual services for specific steps for each service. The rest of this tutorial includes steps to restrict network access for an Azure Storage account, as an example. [!INCLUDE [create-storage-account.md](~/reusable-content/ce-skilling/azure/includes/create-storage-account.md)] +### [PowerShell](#tab/powershell) ++1. Create an Azure storage account with [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount). Replace `<replace-with-your-unique-storage-account-name>` with a name that is unique across all Azure locations, between 3-24 characters in length, using only numbers and lower-case letters. 
++ ```azurepowershell-interactive + $storageAcctName = '<replace-with-your-unique-storage-account-name>' ++ $storage = @{ + Location = 'westus2' + Name = $storageAcctName + ResourceGroupName = 'test-rg' + SkuName = 'Standard_LRS' + Kind = 'StorageV2' + } + New-AzStorageAccount @storage + ``` ++1. After the storage account is created, retrieve the key for the storage account into a variable with [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey): ++ ```azurepowershell-interactive + $storagekey = @{ + ResourceGroupName = 'test-rg' + AccountName = $storageAcctName + } + $storageAcctKey = (Get-AzStorageAccountKey @storagekey).Value[0] + ``` ++ For the purposes of this tutorial, the storage account key is used to connect to the storage account. Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this procedure requires a high degree of trust in the application, and carries risks that aren't present in other flows. You should only use this flow when other more secure flows, such as managed identities, aren't viable. ++ For more information about connecting to a storage account using a managed identity, see [Use a managed identity to access Azure Storage](/entra/identity/managed-identities-azure-resources/tutorial-linux-managed-identities-vm-access?pivots=identity-linux-mi-vm-access-storage). ++ The key is used to create a file share in a later step. Enter `$storageAcctKey` and note the value. You manually enter it in a later step when you map the file share to a drive in a virtual machine. ++### [CLI](#tab/cli) ++The steps necessary to restrict network access to resources created through Azure services enabled for service endpoints vary across services. See the documentation for individual services for specific steps for each service. The remainder of this article includes steps to restrict network access for an Azure Storage account, as an example.
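The account-name constraint repeated in these steps — unique across Azure, between 3-24 characters in length, using only numbers and lower-case letters — can be checked locally before calling Azure. The following Python sketch is illustrative only (the helper name is ours, not an Azure SDK function); it validates the format but can't confirm global uniqueness:

```python
import re

# Format rule from the steps above: 3-24 characters, lowercase letters and digits.
# Illustrative helper only -- Azure still has to confirm global uniqueness.
def is_valid_storage_account_name(name):
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

print(is_valid_storage_account_name("storage8675"))   # True
print(is_valid_storage_account_name("Storage-8675"))  # False (uppercase, hyphen)
```

Here *storage8675* is the example account name that appears later in this tutorial.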
++### Create a storage account ++1. Create an Azure storage account with [az storage account create](/cli/azure/storage/account). Replace `<replace-with-your-unique-storage-account-name>` with a name that is unique across all Azure locations, between 3-24 characters in length, using only numbers and lower-case letters. ++ ```azurecli-interactive + storageAcctName="<replace-with-your-unique-storage-account-name>" ++ az storage account create \ + --name $storageAcctName \ + --resource-group test-rg \ + --sku Standard_LRS \ + --kind StorageV2 + ``` ++1. After the storage account is created, retrieve the connection string for the storage account into a variable with [az storage account show-connection-string](/cli/azure/storage/account). The connection string is used to create a file share in a later step. ++ For the purposes of this tutorial, the connection string is used to connect to the storage account. Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this procedure requires a high degree of trust in the application, and carries risks that aren't present in other flows. You should only use this flow when other more secure flows, such as managed identities, aren't viable. ++ For more information about connecting to a storage account using a managed identity, see [Use a managed identity to access Azure Storage](/entra/identity/managed-identities-azure-resources/tutorial-linux-managed-identities-vm-access?pivots=identity-linux-mi-vm-access-storage). ++ ```azurecli-interactive + saConnectionString=$(az storage account show-connection-string \ + --name $storageAcctName \ + --resource-group test-rg \ + --query 'connectionString' \ + --out tsv) + ``` +++ ### Create a file share in the storage account +### [Portal](#tab/portal) + 1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results. 1. 
In **Storage accounts**, select the storage account you created in the previous step. The steps required to restrict network access to resources created through Azure 1. Select **Review + create**, then select **Create**. -### Restrict network access to a subnet +### [PowerShell](#tab/powershell) ++1. Create a context for your storage account and key with [New-AzStorageContext](/powershell/module/az.storage/new-AzStoragecontext). The context encapsulates the storage account name and account key: ++ ```azurepowershell-interactive + $storagecontext = @{ + StorageAccountName = $storageAcctName + StorageAccountKey = $storageAcctKey + } + $storageContext = New-AzStorageContext @storagecontext + ``` ++1. Create a file share with [New-AzStorageShare](/powershell/module/az.storage/new-azstorageshare): ++ ```azurepowershell-interactive + $fs = @{ + Name = "file-share" + Context = $storageContext + } + $share = New-AzStorageShare @fs + ``` ++### [CLI](#tab/cli) ++1. Create a file share in the storage account with [az storage share create](/cli/azure/storage/share). In a later step, this file share is mounted to confirm network access to it. ++ ```azurecli-interactive + az storage share create \ + --name file-share \ + --quota 2048 \ + --connection-string $saConnectionString + ``` ++++## Restrict network access to a subnet ++### [Portal](#tab/portal) By default, storage accounts accept network connections from clients in any network, including the internet. You can restrict network access from the internet, and all other subnets in all virtual networks (except the **subnet-private** subnet in the **vnet-1** virtual network). To restrict network access to a subnet: :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/restrict-network-access-save.png" alt-text="Screenshot of storage account screen and confirmation of subnet restriction."::: -## Create virtual machines +### [PowerShell](#tab/powershell) ++1. 
By default, storage accounts accept network connections from clients in any network. To limit access to selected networks, change the default action to *Deny* with [Update-AzStorageAccountNetworkRuleSet](/powershell/module/az.storage/update-azstorageaccountnetworkruleset). Once network access is denied, the storage account isn't accessible from any network. ++ ```azurepowershell-interactive + $storagerule = @{ + ResourceGroupName = "test-rg" + Name = $storageAcctName + DefaultAction = "Deny" + } + Update-AzStorageAccountNetworkRuleSet @storagerule + ``` ++1. Retrieve the created virtual network with [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) and then retrieve the private subnet object into a variable with [Get-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/get-azvirtualnetworksubnetconfig): ++ ```azurepowershell-interactive + $subnetpriv = @{ + ResourceGroupName = "test-rg" + Name = "vnet-1" + } + $privateSubnet = Get-AzVirtualNetwork @subnetpriv | Get-AzVirtualNetworkSubnetConfig -Name "subnet-private" + ``` ++1. Allow network access to the storage account from the *subnet-private* subnet with [Add-AzStorageAccountNetworkRule](/powershell/module/az.storage/add-azstorageaccountnetworkrule). ++ ```azurepowershell-interactive + $storagenetrule = @{ + ResourceGroupName = "test-rg" + Name = $storageAcctName + VirtualNetworkResourceId = $privateSubnet.Id + } + Add-AzStorageAccountNetworkRule @storagenetrule + ``` ++### [CLI](#tab/cli) ++1. By default, storage accounts accept network connections from clients in any network. To limit access to selected networks, change the default action to *Deny* with [az storage account update](/cli/azure/storage/account). Once network access is denied, the storage account isn't accessible from any network. ++ ```azurecli-interactive + az storage account update \ + --name $storageAcctName \ + --resource-group test-rg \ + --default-action Deny + ``` ++1. 
Allow network access to the storage account from the *subnet-private* subnet with [az storage account network-rule add](/cli/azure/storage/account/network-rule). ++ ```azurecli-interactive + az storage account network-rule add \ + --resource-group test-rg \ + --account-name $storageAcctName \ + --vnet-name vnet-1 \ + --subnet subnet-private + ``` ++++## Deploy virtual machines to subnets ++### [Portal](#tab/portal) To test network access to a storage account, deploy a virtual machine to each subnet. To test network access to a storage account, deploy a virtual machine to each su ### Create the second virtual machine -1. Repeat the steps in the previous section to create a second virtual machine. Replace the following values in **Create a virtual machine**: +1. Create a second virtual machine repeating the steps in the previous section. Replace the following values in **Create a virtual machine**: | Setting | Value | | - | -- | To test network access to a storage account, deploy a virtual machine to each su > [!WARNING] > Do not continue to the next step until the deployment is completed. +### [PowerShell](#tab/powershell) ++### Create the first virtual machine ++Create a virtual machine in the *subnet-public* subnet with [New-AzVM](/powershell/module/az.compute/new-azvm). When running the command that follows, you're prompted for credentials. The values that you enter are configured as the user name and password for the VM. 
++```azurepowershell-interactive +$vm1 = @{ + ResourceGroupName = "test-rg" + Location = "westus2" + VirtualNetworkName = "vnet-1" + SubnetName = "subnet-public" + Name = "vm-public" + PublicIpAddressName = $null +} +New-AzVm @vm1 +``` ++### Create the second virtual machine ++Create a virtual machine in the *subnet-private* subnet: ++```azurepowershell-interactive +$vm2 = @{ + ResourceGroupName = "test-rg" + Location = "westus2" + VirtualNetworkName = "vnet-1" + SubnetName = "subnet-private" + Name = "vm-private" + PublicIpAddressName = $null +} +New-AzVm @vm2 +``` ++It takes a few minutes for Azure to create the VM. Don't continue to the next step until Azure finishes creating the VM and returns output to PowerShell. ++### [CLI](#tab/cli) ++To test network access to a storage account, deploy a VM to each subnet. ++### Create the first virtual machine ++Create a VM in the *subnet-public* subnet with [az vm create](/cli/azure/vm). If SSH keys don't already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option. ++```azurecli-interactive +az vm create \ + --resource-group test-rg \ + --name vm-public \ + --image Ubuntu2204 \ + --vnet-name vnet-1 \ + --subnet subnet-public \ + --admin-username azureuser \ + --generate-ssh-keys +``` ++The VM takes a few minutes to create. 
After the VM is created, the Azure CLI shows information similar to the following example: ++```output +{ + "fqdns": "", + "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg/providers/Microsoft.Compute/virtualMachines/vm-public", + "location": "westus2", + "macAddress": "00-0D-3A-23-9A-49", + "powerState": "VM running", + "privateIpAddress": "10.0.0.4", + "publicIpAddress": "203.0.113.24", + "resourceGroup": "test-rg" +} +``` ++### Create the second virtual machine ++```azurecli-interactive +az vm create \ + --resource-group test-rg \ + --name vm-private \ + --image Ubuntu2204 \ + --vnet-name vnet-1 \ + --subnet subnet-private \ + --admin-username azureuser \ + --generate-ssh-keys +``` ++The VM takes a few minutes to create. +++ ## Confirm access to storage account +### [Portal](#tab/portal) + The virtual machine you created earlier that is assigned to the **subnet-private** subnet is used to confirm access to the storage account. The virtual machine you created in the previous section that is assigned to the **subnet-1** subnet is used to confirm that access to the storage account is blocked. ### Get storage account access key The virtual machine you created earlier that is assigned to the **subnet-private 1. In **Security + networking**, select **Access keys**. -1. Copy the value of **key1**. You may need to select the **Show** button to display the key. +1. Copy the value of **key1**. You might need to select the **Show** button to display the key. :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/storage-account-access-key.png" alt-text="Screenshot of storage account access key."::: The virtual machine you created earlier that is assigned to the **subnet-private 1. Close the Bastion connection to **vm-private**. +### [PowerShell](#tab/powershell) ++The virtual machine you created earlier that is assigned to the **subnet-private** subnet is used to confirm access to the storage account. 
The virtual machine you created in the previous section that is assigned to the **subnet-1** subnet is used to confirm that access to the storage account is blocked. ++### Get storage account access key ++1. Sign-in to the [Azure portal](https://portal.azure.com/). ++1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results. ++1. In **Storage accounts**, select your storage account. ++1. In **Security + networking**, select **Access keys**. ++1. Copy the value of **key1**. You might need to select the **Show** button to display the key. ++ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/storage-account-access-key.png" alt-text="Screenshot of storage account access key."::: ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **vm-private**. ++1. Select **Connect** then **Connect via Bastion** in **Overview**. ++1. Enter the username and password you specified when creating the virtual machine. Select **Connect**. ++1. Open Windows PowerShell. Use the following script to map the Azure file share to drive Z. ++ * Replace `<storage-account-key>` with the key you copied in the previous step. ++ * Replace `<storage-account-name>` with the name of your storage account. In this example, it's **storage8675**. 
++ ```powershell + $key = @{ + String = "<storage-account-key>" + } + $acctKey = ConvertTo-SecureString @key -AsPlainText -Force + + $cred = @{ + ArgumentList = "Azure\<storage-account-name>", $acctKey + } + $credential = New-Object System.Management.Automation.PSCredential @cred ++ $map = @{ + Name = "Z" + PSProvider = "FileSystem" + Root = "\\<storage-account-name>.file.core.windows.net\file-share" + Credential = $credential + } + New-PSDrive @map + ``` ++ PowerShell returns output similar to the following example output: ++ ```output + Name Used (GB) Free (GB) Provider Root + - -- - + Z FileSystem \\storage8675.file.core.windows.net\f... + ``` ++ The Azure file share successfully mapped to the Z drive. ++1. Confirm that the VM has no outbound connectivity to any other public IP addresses: ++ ```powershell + ping bing.com + ``` ++ You receive no replies, because the network security group associated to the *Private* subnet doesn't allow outbound access to public IP addresses other than the addresses assigned to the Azure Storage service. ++1. Close the Bastion connection to **vm-private**. ++### [CLI](#tab/cli) ++SSH into the *vm-private* VM. ++1. Run the following command to store the IP address of the VM as an environment variable: ++ ```bash + export IP_ADDRESS=$(az vm show --show-details --resource-group test-rg --name vm-private --query publicIps --output tsv) + + ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS + ``` ++1. Create a folder for a mount point: ++ ```bash + sudo mkdir /mnt/file-share + ``` ++1. Mount the Azure file share to the directory you created. Before running the following command, replace `<storage-account-name>` with the account name and `<storage-account-key>` with the key you retrieved in [Create a storage account](#create-a-storage-account). 
++ ```bash + sudo mount --types cifs //<storage-account-name>.file.core.windows.net/file-share /mnt/file-share --options vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino + ``` ++ You receive the `user@vm-private:~$` prompt. The Azure file share successfully mounted to */mnt/file-share*. ++1. Confirm that the VM has no outbound connectivity to any other public IP addresses: ++ ```bash + ping bing.com -c 4 + ``` ++ You receive no replies, because the network security group associated to the *subnet-private* subnet doesn't allow outbound access to public IP addresses other than the addresses assigned to the Azure Storage service. ++1. Exit the SSH session to the *vm-private* VM. +++ ## Confirm access is denied to storage account +### [Portal](#tab/portal) + ### From vm-1 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
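Every outcome verified in this section follows from one rule set configured earlier: the storage firewall's default action is *Deny*, and a single virtual network rule allows *subnet-private*. As a rough illustration (a hypothetical helper, not an Azure API), the decision reduces to:

```python
# Rough model of the storage firewall decision exercised in these steps.
# Hypothetical helper, not an Azure API: a request is allowed if it arrives
# from a subnet with a virtual network rule; otherwise the default action applies.
def storage_access_allowed(source_subnet, allowed_subnets, default_action="Deny"):
    if source_subnet in allowed_subnets:
        return True
    return default_action == "Allow"

allowed = {"subnet-private"}
print(storage_access_allowed("subnet-private", allowed))  # True  (mount succeeds)
print(storage_access_allowed("subnet-public", allowed))   # False (permission denied)
print(storage_access_allowed(None, allowed))              # False (local machine: 403)
```

The three calls mirror the three checks that follow: the VM in *subnet-private*, the VM outside it, and your local machine.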
The virtual machine you created earlier that is assigned to the **subnet-private >[!NOTE] > The access is denied because your computer isn't in the **subnet-private** subnet of the **vnet-1** virtual network. +### [PowerShell](#tab/powershell) ++### From vm-1 ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **vm-1**. ++1. Select **Bastion** in **Operations**. ++1. Enter the username and password you specified when creating the virtual machine. Select **Connect**. ++1. Repeat the previous command to attempt to map the drive to the file share in the storage account. You might need to copy the storage account access key again for this procedure: ++ ```powershell + $key = @{ + String = "<storage-account-key>" + } + $acctKey = ConvertTo-SecureString @key -AsPlainText -Force + + $cred = @{ + ArgumentList = "Azure\<storage-account-name>", $acctKey + } + $credential = New-Object System.Management.Automation.PSCredential @cred ++ $map = @{ + Name = "Z" + PSProvider = "FileSystem" + Root = "\\<storage-account-name>.file.core.windows.net\file-share" + Credential = $credential + } + New-PSDrive @map + ``` + +1. You should receive the following error message: ++ ```output + New-PSDrive : Access is denied + At line:1 char:5 + + New-PSDrive @map + + ~~~~~~~~~~~~~~~~ + + CategoryInfo : InvalidOperation: (Z:PSDriveInfo) [New-PSDrive], Win32Exception + + FullyQualifiedErrorId : CouldNotMapNetworkDrive,Microsoft.PowerShell.Commands.NewPSDriveCommand + ``` ++1. Close the Bastion connection to **vm-1**. ++1. From your computer, attempt to view the file shares in the storage account with the following command: ++ ```powershell-interactive + $storage = @{ + ShareName = "file-share" + Context = $storageContext + } + Get-AzStorageFile @storage + ``` ++ Access is denied. You receive an output similar to the following example. 
++ ```output + Get-AzStorageFile : The remote server returned an error: (403) Forbidden. HTTP Status Code: 403 - HTTP Error Message: This request isn't authorized to perform this operation + ``` + Your computer isn't in the *subnet-private* subnet of the *vnet-1* virtual network. ++### [CLI](#tab/cli) ++SSH into the *vm-public* VM. ++1. Run the following command to store the IP address of the VM as an environment variable: ++ ```bash + export IP_ADDRESS=$(az vm show --show-details --resource-group test-rg --name vm-public --query publicIps --output tsv) + + ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS + ``` ++1. Create a directory for a mount point: ++ ```bash + sudo mkdir /mnt/file-share + ``` ++1. Attempt to mount the Azure file share to the directory you created. This article assumes you deployed the latest version of Ubuntu. If you're using earlier versions of Ubuntu, see [Mount on Linux](../storage/files/storage-how-to-use-files-linux.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for more instructions about mounting file shares. Before running the following command, replace `<storage-account-name>` with the account name and `<storage-account-key>` with the key you retrieved in [Create a storage account](#create-a-storage-account): ++ ```bash + sudo mount --types cifs //<storage-account-name>.file.core.windows.net/file-share /mnt/file-share --options vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino + ``` ++ Access is denied, and you receive a `mount error(13): Permission denied` error, because the *vm-public* VM is deployed within the *subnet-public* subnet. The *subnet-public* subnet doesn't have a service endpoint enabled for Azure Storage, and the storage account only allows network access from the *subnet-private* subnet, not the *subnet-public* subnet. ++1. Exit the SSH session to the *vm-public* VM. ++1. 
From your computer, attempt to view the shares in your storage account with [az storage share list](/cli/azure/storage/share). Replace `<account-name>` and `<account-key>` with the storage account name and key from [Create a storage account](#create-a-storage-account): ++ ```azurecli-interactive + az storage share list \ + --account-name <account-name> \ + --account-key <account-key> + ``` ++ Access is denied and you receive a **This request isn't authorized to perform this operation** error, because your computer isn't in the *subnet-private* subnet of the *vnet-1* virtual network. ++++## Clean up resources ++### [Portal](#tab/portal) + [!INCLUDE [portal-clean-up.md](~/reusable-content/ce-skilling/azure/includes/portal-clean-up.md)] +### [PowerShell](#tab/powershell) ++When no longer needed, you can use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains: ++```azurepowershell-interactive +$cleanup = @{ + Name = "test-rg" +} +Remove-AzResourceGroup @cleanup -Force +``` ++### [CLI](#tab/cli) ++When no longer needed, use [az group delete](/cli/azure/group) to remove the resource group and all of the resources it contains. ++```azurecli-interactive +az group delete \ + --name test-rg \ + --yes \ + --no-wait +``` +++ ## Next steps In this tutorial: To learn more about service endpoints, see [Service endpoints overview](virtual-network-service-endpoints-overview.md) and [Manage subnets](virtual-network-manage-subnet.md). -If you have multiple virtual networks in your account, you may want to establish connectivity between them so that resources can communicate with each other. To learn how to connect virtual networks, advance to the next tutorial. +If you have multiple virtual networks in your account, you might want to establish connectivity between them so that resources can communicate with each other. 
To learn how to connect virtual networks, advance to the next tutorial. > [!div class="nextstepaction"] > [Connect virtual networks](./tutorial-connect-virtual-networks-portal.md) |
vpn-gateway | Point To Site Entra Register Custom App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-entra-register-custom-app.md | When you configure a custom audience app ID, you can use any of the supported va This article provides high-level steps. The screenshots to register an application might be slightly different, depending on the way you access the user interface, but the settings are the same. For more information, see [Quickstart: Register an application](/entr#entra-id). +If you're using a custom audience app ID to configure or restrict access based on users and groups, see [Scenario: Configure P2S access based on users and groups - Microsoft Entra ID authentication](point-to-site-entra-users-access.md). The scenario article outlines the workflow and steps to assign permissions. + ## Prerequisites * This article assumes that you already have a Microsoft Entra tenant and the permissions to create an Enterprise Application, typically the [Cloud Application Administrator role](/entra/identity/role-based-access-control/permissions-reference#cloud-application-administrator) or higher. For more information, see [Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant) and [Assign user roles with Microsoft Entra ID](/entra/fundamentals/users-assign-role-azure-portal). |
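The CIFS mount step in the service-endpoint tutorial above is easy to get wrong: the SMB path and the option string must match the storage account name exactly. A small shell helper like the following can assemble the `mount` command and print it for review before you run it. This is a minimal sketch, not part of the tutorial: `mystorageacct` is a hypothetical account name, and the key is left as a placeholder for you to substitute.

```shell
# Sketch: assemble the CIFS mount command for an Azure file share.
# "mystorageacct" is a hypothetical placeholder; substitute your real
# storage account name and key before using the printed command.
STORAGE_ACCOUNT="mystorageacct"
STORAGE_KEY="<storage-account-key>"
SHARE_NAME="file-share"
MOUNT_POINT="/mnt/file-share"

# Azure file shares are exposed at //<account>.file.core.windows.net/<share>.
SMB_PATH="//${STORAGE_ACCOUNT}.file.core.windows.net/${SHARE_NAME}"

# Mount options mirror the tutorial: SMB 3.0, account name as username,
# account key as password, world-writable modes, and server-side inode numbers.
OPTIONS="vers=3.0,username=${STORAGE_ACCOUNT},password=${STORAGE_KEY},dir_mode=0777,file_mode=0777,serverino"

# Print the command instead of running it, so it can be reviewed first.
echo "sudo mount --types cifs ${SMB_PATH} ${MOUNT_POINT} --options ${OPTIONS}"
```

Printing the command before executing it also makes the service-endpoint failure mode easier to diagnose: if the path and options are correct but the mount still fails with `Permission denied`, the block is coming from the storage account's network rules, not from the command syntax.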