Updates from: 07/03/2024 01:11:14
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Concept Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-model-customization.md
The following table describes the limits on the scale of your custom model proje
| Max # training images | 1,000,000 | 200,000 |
| Max # evaluation images | 100,000 | 100,000 |
| Min # training images per category | 2 | 2 |
-| Max # tags per image | multiclass: 1 | NA |
-| Max # regions per image | NA | 1,000 |
+| Max # tags per image | 1 | N/A |
+| Max # regions per image | N/A | 1,000 |
| Max # categories | 2,500 | 1,000 |
| Min # categories | 2 | 1 |
| Max image size (Training) | 20 MB | 20 MB |
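These limits lend themselves to a quick pre-upload check. A minimal sketch (the function name and data layout are hypothetical, not part of the service API; it applies the classification-column limits of 2 training images per category and 20 MB per image):

```python
# Illustrative only: validate a classification dataset against the limits above.
MIN_IMAGES_PER_CATEGORY = 2
MAX_IMAGE_SIZE_MB = 20

def find_violations(images_per_category, image_sizes_mb):
    """Return human-readable descriptions of any limit violations."""
    problems = []
    for category, count in images_per_category.items():
        if count < MIN_IMAGES_PER_CATEGORY:
            problems.append(f"category '{category}' has only {count} image(s)")
    for size in image_sizes_mb:
        if size > MAX_IMAGE_SIZE_MB:
            problems.append(f"image of {size} MB exceeds the {MAX_IMAGE_SIZE_MB} MB limit")
    return problems

print(find_violations({"cat": 5, "dog": 1}, [3.2, 25.0]))
```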
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/overview.md
This documentation contains the following article types:
The following are a few scenarios in which a software developer or team would require a content moderation service:
+- User prompts submitted to a generative AI service.
+- Content produced by generative AI models.
- Online marketplaces that moderate product catalogs and other user-generated content.
- Gaming companies that moderate user-generated game artifacts and chat rooms.
- Social messaging platforms that moderate images and text added by their users.
ai-services Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/install-run.md
The host is an x64-based computer that runs the Docker container. It can be a com
* [Azure Container Instances](../../../container-instances/index.yml).
* A [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
+> [!NOTE]
+>
+> The Studio container can't be deployed and run in Azure Kubernetes Service. It's supported only on a local machine.
+ ### Container requirements and recommendations

#### Required supporting containers
ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/glossary.md
A model is an object that's trained to do a certain task, in this case conversat
## Overfitting
-Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
+Overfitting happens when the model is fixated on the specific examples and isn't able to generalize well.
## Precision

Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
Measures the model's ability to predict actual positive classes. It's the ratio
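As a worked illustration of these two metrics (not part of the glossary itself), both can be computed from counts of true positives, false positives, and false negatives:

```python
# Illustrative only: precision and recall from raw prediction counts.
def precision(tp: int, fp: int) -> float:
    # Correctly identified positives out of all identified positives.
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Correctly identified positives out of all actual positives.
    return tp / (tp + fn)

# Example: 8 true positives, 2 false positives, 4 false negatives.
print(precision(8, 2))  # 0.8
print(recall(8, 4))     # ~0.667
```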
A regular expression entity represents a regular expression. Regular expression entities are exact matches.

## Schema
-Schema is defined as the combination of intents and entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want think about which intents and entities should be included in your project
+Schema is defined as the combination of intents and entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents and entities should be included in your project.
## Training data

Training data is the set of information that is needed to train a model.

## Utterance
-An utterance is user input that is short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model and the model predicts on new utterance at runtime
+An utterance is user input that is short text representative of a sentence in a conversation. It's a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
## Next steps
ai-services Migrate Qnamaker To Question Answering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/migrate-qnamaker-to-question-answering.md
Last updated 12/19/2023
-# Migrate from QnA Maker to custom question qnswering
+# Migrate from QnA Maker to custom question answering
**Purpose of this document:** This article aims to provide information that can be used to successfully migrate applications that use QnA Maker to custom question answering. Using this article, we hope customers will gain clarity on the following:
> - Automatic RBAC to Language project (not resource)
> - Automatic enabling of analytics.
-You will also need to [re-enable analytics](analytics.md) for the language resource.
+You'll also need to [re-enable analytics](analytics.md) for the language resource.
## Comparison of features
In addition to a new set of features, custom question answering provides many te
|Smart URL Refresh|➖|✔️|Custom question answering provides a means to refresh ingested content from public sources with a single click.|
|Q&A over knowledge base (hierarchical extraction)|✔️|✔️| |
|Active learning|✔️|✔️|Custom question answering has an improved active learning model.|
-|Alternate Questions|✔️|✔️|The improved models in custom question answering reduces the need to add alternate questions.|
+|Alternate Questions|✔️|✔️|The improved models in custom question answering reduce the need to add alternate questions.|
|Synonyms|✔️|✔️| |
|Metadata|✔️|✔️| |
|Question Generation (private preview)|➖|✔️|This new feature will allow generation of questions over text.|
In addition to a new set of features, custom question answering provides many te
## Pricing
-When you are looking at migrating to custom question answering, please consider the following:
+When you're looking at migrating to custom question answering, please consider the following:
|Component |QnA Maker|Custom question answering|Details |
|---|---|---|---|
When you are looking at migrating to custom question answering, please consider
- Users may select a higher tier with higher capacity, which will impact overall price they pay. It doesn't impact the price on language component of custom question answering.
-- "Text Records" in custom question answering features refers to the query submitted by the user to the runtime, and it is a concept common to all features within Language service. Sometimes a query may have more text records when the query length is higher.
+- "Text Records" in custom question answering features refers to the query submitted by the user to the runtime, and it's a concept common to all features within Language service. Sometimes a query may have more text records when the query length is higher.
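The relationship between query length and text records can be sketched as follows. This assumes the Language service's documented unit of one text record per 1,000 characters; verify the current unit size against the pricing page:

```python
import math

def text_records(query: str, chars_per_record: int = 1000) -> int:
    # Each block of up to `chars_per_record` characters counts as one text record;
    # even a very short query is billed as at least one record.
    return max(1, math.ceil(len(query) / chars_per_record))

print(text_records("What are your store hours?"))  # 1
print(text_records("x" * 2500))                    # 3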
**Example price estimations**
When you are looking at migrating to custom question answering, please consider
|Medium|10 |10(S1) |800K |4x3(S1) |Less expensive | |Low |4 |4(B1) |100K |3x3(S1) |Less expensive |
- Summary : Customers should save cost across the most common configurations as seen in the relative cost column.
+ Summary: Customers should save cost across the most common configurations as seen in the relative cost column.
Here you can find the pricing details for [custom question answering](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) and [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/).
Following are the broad migration phases to consider:
![A chart showing the phases of a successful migration](../media/migrate-qnamaker-to-question-answering/migration-phases.png)
-Additional links which can help you are given below:
+Additional links that can help you are given below:
- [Authoring portal](https://language.cognitive.azure.com/home)
- [API](authoring.md)
- [SDK](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker)
This topic compares two hypothetical scenarios when migrating from QnA Maker to
> An attempt has been made to ensure these scenarios are representative of real customer migrations; however, individual customer scenarios will of course differ. Also, this article doesn't include pricing details. Visit the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) page for more information.

> [!IMPORTANT]
-> Each custom question answering project is equivalent to a knowledge base in QnA Maker. Resource level settings such as Role-based access control (RBAC) are not migrated to the new resource. These resource level settings would have to be reconfigured for the language resource post migration. You will also need to [re-enable analytics](analytics.md) for the language resource.
+> Each custom question answering project is equivalent to a knowledge base in QnA Maker. Resource level settings such as Role-based access control (RBAC) are not migrated to the new resource. These resource level settings would have to be reconfigured for the language resource post migration. You'll also need to [re-enable analytics](analytics.md) for the language resource.
### Migration scenario 1: No custom authoring portal
ai-services Migrate Qnamaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/migrate-qnamaker.md
Last updated 12/19/2023
-# Migrate from QnA Maker to custom question answering
+# Migrate from QnA Maker knowledge bases to custom question answering
> [!NOTE]
> You can also migrate to [Azure OpenAI](../../../qnamaker/How-To/migrate-to-openai.md).
This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure
You can follow the steps below to migrate knowledge bases:
-1. Create a [language resource](https://aka.ms/create-language-resource) with custom question answering enabled in advance. When you create the language resource in the Azure portal, you will see the option to enable custom question answering. When you select that option and proceed, you will be asked for Azure Search details to save the knowledge bases.
+1. Create a [language resource](https://aka.ms/create-language-resource) with custom question answering enabled in advance. When you create the language resource in the Azure portal, you'll see the option to enable custom question answering. When you select that option and proceed, you'll be asked for Azure Search details to save the knowledge bases.
2. If you want to add knowledge bases in multiple languages to your language resource, visit [Language Studio](https://language.azure.com/) to create your first custom question answering project and select the first option as shown below. Language settings for the language resource can be specified only when creating a project. If you want to migrate existing knowledge bases in a single language to the language resource, you can skip this step.
You can follow the steps below to migrate knowledge bases:
> [!div class="mx-imgBorder"]
> ![Migrate QnAMaker with red selection box around the QnAMaker resource selection option](../media/migrate-qnamaker/select-resource.png)
-6. Select the language resource to which you want to migrate the knowledge bases. You will only be able to see those language resources that have custom question answering enabled. The language setting for the language resource is displayed in the options. You won't be able to migrate knowledge bases in multiple languages from QnA Maker resources to a language resource if its language setting is not specified.
+6. Select the language resource to which you want to migrate the knowledge bases. You'll only be able to see those language resources that have custom question answering enabled. The language setting for the language resource is displayed in the options. You won't be able to migrate knowledge bases in multiple languages from QnA Maker resources to a language resource if its language setting isn't specified.
> [!div class="mx-imgBorder"]
> ![Migrate QnAMaker with red selection box around the language resource option currently selected resource contains the information that language is unspecified](../media/migrate-qnamaker/language-setting.png)
- If you want to migrate knowledge bases in multiple languages to the language resource, you must enable the multiple language setting when creating the first custom question answering project for the language resource. You can do so by following the instructions in step #2. **If the language setting for the language resource is not specified, it is assigned the language of the selected QnA Maker resource**.
+ If you want to migrate knowledge bases in multiple languages to the language resource, you must enable the multiple language setting when creating the first custom question answering project for the language resource. You can do so by following the instructions in step #2. **If the language setting for the language resource isn't specified, it is assigned the language of the selected QnA Maker resource**.
7. Select all the knowledge bases that you wish to migrate > select **Next**.
You can follow the steps below to migrate knowledge bases:
> If you migrate a knowledge base with the same name as a project that already exists in the target language resource, **the content of the project will be overridden** by the content of the selected knowledge base.

> [!div class="mx-imgBorder"]
- > ![Screenshot of an error message starting project names can't contain special characters](../media/migrate-qnamaker/migration-kb-name-validation.png)
+ > ![Screenshot of an error message starting project names can't contain special characters.](../media/migrate-qnamaker/migration-kb-name-validation.png)
9. After resolving the validation errors, select **Start migration**

> [!div class="mx-imgBorder"]
- > ![Screenshot with special characters removed](../media/migrate-qnamaker/migration-kb-name-validation-success.png)
+ > ![Screenshot with special characters removed.](../media/migrate-qnamaker/migration-kb-name-validation-success.png)
-10. It will take a few minutes for the migration to occur. Do not cancel the migration while it is in progress. You can navigate to the migrated projects within the [Language Studio](https://language.azure.com/) post migration.
+10. It will take a few minutes for the migration to occur. Don't cancel the migration while it is in progress. You can navigate to the migrated projects within the [Language Studio](https://language.azure.com/) post migration.
> [!div class="mx-imgBorder"]
- > ![Screenshot of successfully migrated knowledge bases with information that you can publish by using Language Studio](../media/migrate-qnamaker/migration-success.png)
+ > ![Screenshot of successfully migrated knowledge bases with information that you can publish by using Language Studio.](../media/migrate-qnamaker/migration-success.png)
If any knowledge bases fail to migrate to custom question answering projects, an error will be displayed. The most common migration errors occur when:

- Your source and target resources are invalid.
- - You are trying to migrate an empty knowledge base (KB).
- - You have reached the limit for an Azure Search instance linked to your target resources.
+ - You're trying to migrate an empty knowledge base (KB).
+ - You've reached the limit for an Azure Search instance linked to your target resources.
> [!div class="mx-imgBorder"]
- > ![Screenshot of a failed migration with an example error](../media/migrate-qnamaker/migration-errors.png)
+ > ![Screenshot of a failed migration with an example error.](../media/migrate-qnamaker/migration-errors.png)
Once you resolve these errors, you can rerun the migration.
ai-services Model Retirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-retirements.md
These models are currently available for use in Azure OpenAI Service.
| `gpt-35-turbo` | 0125 | No earlier than Feb 22, 2025 |
| `gpt-4`<br>`gpt-4-32k` | 0314 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 |
| `gpt-4`<br>`gpt-4-32k` | 0613 | No earlier than Sep 30, 2024 |
-| `gpt-4` | 1106-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on July 15, 2024, or later **<sup>1</sup>** |
-| `gpt-4` | 0125-preview |To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on July 15, 2024, or later **<sup>1</sup>** |
-| `gpt-4` | vision-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on July 15, 2024, or later **<sup>1</sup>** |
+| `gpt-4` | 1106-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on August 15, 2024, or later **<sup>1</sup>** |
+| `gpt-4` | 0125-preview |To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on August 15, 2024, or later **<sup>1</sup>** |
+| `gpt-4` | vision-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on August 15, 2024, or later **<sup>1</sup>** |
| `gpt-3.5-turbo-instruct` | 0914 | No earlier than Sep 14, 2025 |
| `text-embedding-ada-002` | 2 | No earlier than April 3, 2025 |
| `text-embedding-ada-002` | 1 | No earlier than April 3, 2025 |
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
The following command shows the most basic way to use the GPT-4 Turbo with Visio
#### [REST](#tab/rest)
-Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/chat/completions?api-version=2023-12-01-preview` where
+Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/chat/completions?api-version=2024-02-15-preview` where
- RESOURCE_NAME is the name of your Azure OpenAI resource
- DEPLOYMENT_NAME is the name of your GPT-4 Turbo with Vision model deployment
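The pieces of that URL can be assembled like so (the resource and deployment names below are placeholders, not real values):

```python
# Hypothetical placeholder values; substitute your own.
resource_name = "my-resource"
deployment_name = "my-gpt4-vision-deployment"
api_version = "2024-02-15-preview"

# Build the chat completions endpoint from its parts.
url = (
    f"https://{resource_name}.openai.azure.com/openai/deployments/"
    f"{deployment_name}/chat/completions?api-version={api_version}"
)
print(url)
```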
The following is a sample request body. The format is the same as the chat compl
    api_base = '<your_azure_openai_endpoint>' # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
    api_key = "<your_azure_openai_key>"
    deployment_name = '<your_deployment_name>'
- api_version = '2023-12-01-preview' # this might change in the future
+ api_version = '2024-02-15-preview' # this might change in the future
    client = AzureOpenAI(
        api_key=api_key,
        api_version=api_version,
- base_url=f"{api_base}openai/deployments/{deployment_name}/extensions",
+ base_url=f"{api_base}openai/deployments/{deployment_name}",
    )
    ```
The **object grounding** integration brings a new layer to data analysis and use
#### [REST](#tab/rest)
-Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/extensions/chat/completions?api-version=2023-12-01-preview` where
+Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/chat/completions?api-version=2024-02-15-preview` where
- RESOURCE_NAME is the name of your Azure OpenAI resource
- DEPLOYMENT_NAME is the name of your GPT-4 Turbo with Vision model deployment
To use a User assigned identity on your Azure AI Services resource, follow these
#### [REST](#tab/rest)
-1. Prepare a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/extensions/chat/completions?api-version=2023-12-01-preview` where
+1. Prepare a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/chat/completions?api-version=2024-02-15-preview` where
- RESOURCE_NAME is the name of your Azure OpenAI resource
- DEPLOYMENT_NAME is the name of your GPT-4 Vision model deployment
ai-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-openssl-linux.md
zone_pivot_groups: programming-languages-set-three
# Configure OpenSSL for Linux

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
With the Speech SDK, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version.
ai-services How To Configure Rhel Centos 7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-rhel-centos-7.md
# Configure RHEL/CentOS 7

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
To use the Speech SDK on Red Hat Enterprise Linux (RHEL) 7 x64 and CentOS 7 x64, update the C++ compiler (for C++ development) and the shared C++ runtime library on your system.
ai-studio Deploy Models Mistral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral.md
Certain models in the model catalog can be deployed as a serverless API with pay
- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).

> [!IMPORTANT]
- > The serverless API model deployment offering for eligible models in the Mistral family is only available in hubs created in the **East US 2** and **Sweden Central** regions. For _Mistral Large_, the serverless API model deployment offering is also available in the **France Central** region.
-
+ > The serverless API model deployment offering for eligible models in the Mistral family is only available in hubs created in the **East US 2** and **Sweden Central** regions.
- An [Azure AI Studio project](../how-to/create-projects.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
To create a deployment:
:::image type="content" source="../media/deploy-monitor/mistral/mistral-large-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model as a serverless API." lightbox="../media/deploy-monitor/mistral/mistral-large-deploy-pay-as-you-go.png":::
-1. Select the project in which you want to deploy your model. To deploy the Mistral model, your project must be in the *EastUS2* or *Sweden Central* region. For the Mistral Large model, you can also deploy in a project that's in the *France Central* region.
+1. Select the project in which you want to deploy your model. To deploy the Mistral model, your project must be in the *EastUS2* or *Sweden Central* region.
1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use.
1. Select the **Pricing and terms** tab to learn about pricing for the selected model.
1. Select the **Subscribe and Deploy** button. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering. This step requires that your account has the **Azure AI Developer role** permissions on the resource group, as listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending. Currently, you can have only one deployment for each model within a project.
ai-studio Model Catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md
Llama-3-70B-Instruct <br> Llama-3-8B-Instruct | [Microsoft Managed Countries](/p
Llama-2-7b <br> Llama-2-13b <br> Llama-2-70b | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, West US, West US 3 | West US 3
Llama-2-7b-chat <br> Llama-2-13b-chat <br> Llama-2-70b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, West US, West US 3 | Not available
Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
-Mistral-Large | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, France Central, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
+Mistral-Large | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Japan | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
TimeGEN-1 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
jais-30b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
ai-studio Reference Model Inference Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-api.md
Models deployed to [serverless API endpoints](../how-to/deploy-models-serverless
> * [Mistral-Large](../how-to/deploy-models-mistral.md)
> * [Phi-3](../how-to/deploy-models-phi-3.md) family of models
+Models deployed to [managed inference](../concepts/deployments-overview.md):
+
+> [!div class="checklist"]
+> * [Meta Llama 3 instruct](../how-to/deploy-models-llama.md) family of models
+> * [Phi-3](../how-to/deploy-models-phi-3.md) family of models
+> * Mixtral family of models
+ The API is compatible with Azure OpenAI model deployments.

## Capabilities
The API indicates how developers can consume predictions for the following modal
* [Image embeddings](reference-model-inference-images-embeddings.md): Creates an embedding vector representing the input text and image.
+### Inference SDK support
+
+You can use streamlined inference clients in the language of your choice to consume predictions from models running the Azure AI model inference API.
+
+# [Python](#tab/python)
+
+Install the package `azure-ai-inference` using your package manager, like pip:
+
+```bash
+pip install azure-ai-inference
+```
+
+Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
+
+```python
+import os
+from azure.ai.inference import ChatCompletionsClient
+from azure.core.credentials import AzureKeyCredential
+
+model = ChatCompletionsClient(
+ endpoint=os.environ["AZUREAI_ENDPOINT_URL"],
+ credential=AzureKeyCredential(os.environ["AZUREAI_ENDPOINT_KEY"]),
+)
+```
+
+# [JavaScript](#tab/javascript)
+
+Install the package `@azure-rest/ai-inference` using npm:
+
+```bash
+npm install @azure-rest/ai-inference
+```
+
+Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
+
+```javascript
+import ModelClient from "@azure-rest/ai-inference";
+import { isUnexpected } from "@azure-rest/ai-inference";
+import { AzureKeyCredential } from "@azure/core-auth";
+
+const client = new ModelClient(
+ process.env.AZUREAI_ENDPOINT_URL,
+ new AzureKeyCredential(process.env.AZUREAI_ENDPOINT_KEY)
+);
+```
+
+# [REST](#tab/rest)
+
+Use the reference section to explore the API design and which parameters are available. For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions:
+
+__Request__
+
+```HTTP/1.1
+POST /chat/completions?api-version=2024-04-01-preview
+Authorization: Bearer <bearer-token>
+Content-Type: application/json
+```
++ ### Extensibility

The Azure AI Model Inference API specifies a set of modalities and parameters that models can subscribe to. However, some models may have further capabilities than the ones the API indicates. In those cases, the API allows the developer to pass them as extra parameters in the payload.
By setting a header `extra-parameters: allow`, the API will attempt to pass any
The following example shows a request passing the parameter `safe_prompt` supported by Mistral-Large, which isn't specified in the Azure AI Model Inference API:
+# [Python](#tab/python)
+
+```python
+response = model.complete(
+ messages=[
+ SystemMessage(content="You are a helpful assistant."),
+ UserMessage(content="How many languages are in the world?"),
+ ],
+ model_extras={
+ "safe_mode": True
+ }
+)
+```
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+var messages = [
+ { role: "system", content: "You are a helpful assistant" },
+ { role: "user", content: "How many languages are in the world?" },
+];
+
+var response = await client.path("/chat/completions").post({
+ body: {
+ messages: messages,
+ safe_mode: true
+ }
+});
+```
+
+# [REST](#tab/rest)
+ __Request__ ```HTTP/1.1
extra-parameters: allow
} ``` ++
> [!TIP]
> Alternatively, you can set `extra-parameters: drop` to drop any unknown parameter in the request. Use this capability in case you happen to be sending requests with extra parameters that you know the model won't support but you want the request to complete anyway. A typical example of this is indicating the `seed` parameter.
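For the REST route, the `extra-parameters` header travels alongside the usual request headers. A minimal sketch (the bearer token is a placeholder, as in the request examples above):

```python
# Sketch of the request headers for the REST route. The `extra-parameters`
# header controls how the service treats payload parameters it doesn't know.
headers = {
    "Authorization": "Bearer <bearer-token>",   # placeholder token
    "Content-Type": "application/json",
    "extra-parameters": "drop",  # or "allow" to pass unknown parameters through
}
print(headers["extra-parameters"])
```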
The Azure AI Model Inference API indicates a general set of capabilities but eac
The following example shows the response for a chat completion request indicating the parameter `response_format` and asking for a reply in `JSON` format. In the example, since the model doesn't support such a capability, an error 422 is returned to the user.
+# [Python](#tab/python)
+
+```python
+from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.core.exceptions import HttpResponseError
+import json
+
+try:
+ response = model.complete(
+ messages=[
+ SystemMessage(content="You are a helpful assistant."),
+ UserMessage(content="How many languages are in the world?"),
+ ],
+ response_format={ "type": ChatCompletionsResponseFormat.JSON_OBJECT }
+ )
+except HttpResponseError as ex:
+ if ex.status_code == 422:
+ response = json.loads(ex.response._content.decode('utf-8'))
+ if isinstance(response, dict) and "detail" in response:
+ for offending in response["detail"]:
+ param = ".".join(str(part) for part in offending["loc"])
+ value = offending["input"]
+ print(
+ f"Looks like the model doesn't support the parameter '{param}' with value '{value}'"
+ )
+ else:
+ raise ex
+```
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+try {
+ var messages = [
+ { role: "system", content: "You are a helpful assistant" },
+ { role: "user", content: "How many languages are in the world?" },
+ ];
+
+ var response = await client.path("/chat/completions").post({
+ body: {
+ messages: messages,
+ response_format: { type: "json_object" }
+ }
+ });
+}
+catch (error) {
+ if (error.status_code == 422) {
+ var response = JSON.parse(error.response._content)
+ if (response.detail) {
+ for (const offending of response.detail) {
+ var param = offending.loc.join(".")
+ var value = offending.input
+ console.log(`Looks like the model doesn't support the parameter '${param}' with value '${value}'`)
+ }
+ }
+ }
+ else
+ {
+ throw error
+ }
+}
+```
+
+# [REST](#tab/rest)
+ __Request__ ```HTTP/1.1
__Response__
"message": "One of the parameters contain invalid values." } ```+ > [!TIP] > You can inspect the property `detail.loc` to understand the location of the offending parameter and `detail.input` to see the value that was passed in the request.
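As a standalone illustration (the error payload below is invented, shaped like the 422 responses above), the `detail` array can be inspected like this:

```python
import json

# Hypothetical 422 error body, shaped like the responses shown above.
error_body = json.dumps({
    "detail": [
        {"loc": ["body", "response_format"], "input": "json_object"}
    ],
    "message": "One of the parameters contain invalid values."
})

response = json.loads(error_body)
for offending in response.get("detail", []):
    # Elements of "loc" can be strings or indices, so coerce each to str.
    param = ".".join(str(part) for part in offending["loc"])
    print(f"Unsupported parameter '{param}' with value '{offending['input']}'")
```

This prints `Unsupported parameter 'body.response_format' with value 'json_object'` for the sample payload.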
The Azure AI model inference API supports [Azure AI Content Safety](../concepts/
The following example shows the response for a chat completion request that has triggered content safety.
+# [Python](#tab/python)
+
+```python
+from azure.ai.inference.models import AssistantMessage, UserMessage, SystemMessage
+from azure.core.exceptions import HttpResponseError
+import json
+
+try:
+ response = model.complete(
+ messages=[
+ SystemMessage(content="You are an AI assistant that helps people find information."),
+ UserMessage(content="Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."),
+ ]
+ )
+
+ print(response.choices[0].message.content)
+
+except HttpResponseError as ex:
+ if ex.status_code == 400:
+ response = json.loads(ex.response._content.decode('utf-8'))
+ if isinstance(response, dict) and "error" in response:
+ print(f"Your request triggered an {response['error']['code']} error:\n\t {response['error']['message']}")
+ else:
+ raise ex
+ else:
+ raise ex
+```
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+try {
+ var messages = [
+ { role: "system", content: "You are an AI assistant that helps people find information." },
+ { role: "user", content: "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills." },
+ ]
+
+ var response = await client.path("/chat/completions").post({
+ body: {
+ messages: messages,
+ }
+ });
+
+ console.log(response.body.choices[0].message.content)
+}
+catch (error) {
+ if (error.status_code == 400) {
+ var response = JSON.parse(error.response._content)
+ if (response.error) {
+ console.log(`Your request triggered an ${response.error.code} error:\n\t ${response.error.message}`)
+ }
+ else
+ {
+ throw error
+ }
+ }
+}
+```
+
+# [REST](#tab/rest)
+ __Request__ ```HTTP/1.1
__Response__
"type": null } ```+ ## Getting started
-The Azure AI Model Inference API is currently supported in models deployed as [Serverless API endpoints](../how-to/deploy-models-serverless.md). Deploy any of the [supported models](#availability) to a new [Serverless API endpoints](../how-to/deploy-models-serverless.md) to get started. Then you can consume the API in the following ways:
-
-# [Studio](#tab/azure-studio)
-
-You can use the Azure AI Model Inference API to run evaluations or while building with *Prompt flow*. Create a [Serverless Model connection](../how-to/deploy-models-serverless-connect.md) to a *Serverless API endpoint* and consume its predictions. The Azure AI Model Inference API is used under the hood.
-
-# [Python](#tab/python)
-
-Since the API is OpenAI-compatible, you can use any supported SDK that already supports Azure OpenAI. In the following example, we show how you can use LiteLLM with the common API:
-
-```python
-import litellm
-
-client = litellm.LiteLLM(
- base_url="https://<endpoint-name>.<region>.inference.ai.azure.com",
- api_key="<key>",
-)
-
-response = client.chat.completions.create(
- messages=[
- {
- "content": "Who is the most renowned French painter?",
- "role": "user"
- }
- ],
- model="azureai",
- custom_llm_provider="custom_openai",
-)
-
-print(response.choices[0].message.content)
-```
-
-# [REST](#tab/rest)
-
-Models deployed in Azure Machine Learning and Azure AI studio in Serverless API endpoints support the Azure AI Model Inference API. Each endpoint exposes the OpenAPI specification for the modalities the model support. Use the **Endpoint URI** and the **Key** to download the OpenAPI definition for the model. In the following example, we download it from a bash console. Replace `<TOKEN>` by the **Key** and `<ENDPOINT_URI>` for the **Endpoint URI**.
-
-```bash
-wget -d --header="Authorization: Bearer <TOKEN>" <ENDPOINT_URI>/swagger.json
-```
-
-Use the **Endpoint URI** and the **Key** to submit requests. The following example sends a request to a Cohere embedding model:
-
-```HTTP/1.1
-POST /embeddings?api-version=2024-04-01-preview
-Authorization: Bearer <bearer-token>
-Content-Type: application/json
-```
-
-```JSON
-{
- "input": [
- "Explain the theory of strings"
- ],
- "input_type": "query",
- "encoding_format": "float",
- "dimensions": 1024
-}
-```
-
-__Response__
-
-```json
-{
- "id": "ab1c2d34-5678-9efg-hi01-0123456789ea",
- "object": "list",
- "data": [
- {
- "index": 0,
- "object": "embedding",
- "embedding": [
- 0.001912117,
- 0.048706055,
- -0.06359863,
- //...
- -0.00044369698
- ]
- }
- ],
- "model": "",
- "usage": {
- "prompt_tokens": 7,
- "completion_tokens": 0,
- "total_tokens": 7
- }
-}
-```
---
+The Azure AI Model Inference API is currently supported for certain models deployed as [Serverless API endpoints](../how-to/deploy-models-serverless.md) and as Managed Online Endpoints. Deploy any of the [supported models](#availability) and use the exact same code to consume their predictions.
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 11/28/2023 Last updated : 07/02/2024 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
az aks nodepool add --resource-group $resourceGroup --cluster-name $clusterName
> - Doesn't have network policies enabled. Network Policy engine can be uninstalled before the upgrade, see [Uninstall Azure Network Policy Manager or Calico](use-network-policies.md#uninstall-azure-network-policy-manager-or-calico-preview) > - Doesn't use any Windows node pools with docker as the container runtime.
-> [!NOTE]
-> Because Routing domain is not yet supported for ARM, CNI Overlay is not yet supported on ARM-based (ARM64) processor nodes.
- > [!NOTE] > Upgrading an existing cluster to CNI Overlay is a non-reversible process.
aks Deployment Safeguards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deployment-safeguards.md
To learn more, see [workload validation in Gatekeeper](https://open-policy-agent
[az-feature-show]: /cli/azure/feature#az-feature-show [aks-gh-repo]: https://github.com/Azure/AKS [policy-for-kubernetes]: /azure/governance/policy/concepts/policy-for-kubernetes#install-azure-policy-add-on-for-aks
-[deployment-safeguards-list]: https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/InitiativeDetail.ReactView/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2Fc047ea8e-9c78-49b2-958b-37e56d291a44/scopes/
+[deployment-safeguards-list]: https://portal.azure.com/#view/Microsoft_Azure_Policy/InitiativeDetail.ReactView/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2Fc047ea8e-9c78-49b2-958b-37e56d291a44/scopes/
[Azure-Policy-built-in-definition-docs]: /azure/aks/policy-reference#policy-definitions [Azure-Policy-compliance-portal]: https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyMenuBlade/~/Compliance [Azure-Policy-RBAC-permissions]: /azure/governance/policy/overview#azure-rbac-permissions-in-azure-policy
aks Eks Edw Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-deploy.md
For more information on developing and running applications in AKS, see the foll
- [Deploy and manage a Kubernetes application from Azure Marketplace in AKS][k8s-aks] - [Deploy an application that uses OpenAI on AKS][openai-aks]
+## Contributors
+
+*This article is maintained by Microsoft. It was originally written by the following contributors*:
+
+- Ken Kilty | Principal TPM
+- Russell de Pina | Principal TPM
+- Jenny Hayes | Senior Content Developer
+- Carol Smith | Senior Content Developer
+- Erin Schaffer | Content Developer 2
+ <!-- LINKS --> [eks-edw-overview]: ./eks-edw-overview.md [az-login]: /cli/azure/authenticate-azure-cli-interactively#interactive-login
aks Eks Edw Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-overview.md
code .
> [!div class="nextstepaction"] > [Understand platform differences][eks-edw-understand]
+## Contributors
+
+*This article is maintained by Microsoft. It was originally written by the following contributors*:
+
+- Ken Kilty | Principal TPM
+- Russell de Pina | Principal TPM
+- Jenny Hayes | Senior Content Developer
+- Carol Smith | Senior Content Developer
+- Erin Schaffer | Content Developer 2
+ <!-- LINKS --> [competing-consumers]: /azure/architecture/patterns/competing-consumers [edw-aws-eks]: https://aws.amazon.com/blogs/containers/scalable-and-cost-effective-event-driven-workloads-with-keda-and-karpenter-on-amazon-eks/
aks Eks Edw Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-prepare.md
You can review the `environmentVariables.sh` Bash script in the `deployment` dir
> [!div class="nextstepaction"] > [Deploy the EDW workload to Azure][eks-edw-deploy]
+## Contributors
+
+*This article is maintained by Microsoft. It was originally written by the following contributors*:
+
+- Ken Kilty | Principal TPM
+- Russell de Pina | Principal TPM
+- Jenny Hayes | Senior Content Developer
+- Carol Smith | Senior Content Developer
+- Erin Schaffer | Content Developer 2
+ <!-- LINKS --> [azure-storage-queue-scaler]: https://keda.sh/docs/1.4/scalers/azure-storage-queue/ [github-repo]: https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws
aks Eks Edw Rearchitect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-rearchitect.md
In AWS, you can choose between on-demand compute (more expensive but no eviction
> [!div class="nextstepaction"] > [Refactor application code for AKS][eks-edw-refactor]
+## Contributors
+
+*This article is maintained by Microsoft. It was originally written by the following contributors*:
+
+- Ken Kilty | Principal TPM
+- Russell de Pina | Principal TPM
+- Jenny Hayes | Senior Content Developer
+- Carol Smith | Senior Content Developer
+- Erin Schaffer | Content Developer 2
+ <!-- LINKS --> [competing-consumers]: /azure/architecture/patterns/competing-consumers [kubernetes]: https://kubernetes.io/
aks Eks Edw Refactor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-refactor.md
To build the container images and push them to ACR, make sure the environment va
> [!div class="nextstepaction"] > [Prepare to deploy the EDW workload to Azure][eks-edw-prepare]
+## Contributors
+
+*This article is maintained by Microsoft. It was originally written by the following contributors*:
+
+- Ken Kilty | Principal TPM
+- Russell de Pina | Principal TPM
+- Jenny Hayes | Senior Content Developer
+- Carol Smith | Senior Content Developer
+- Erin Schaffer | Content Developer 2
+ <!-- LINKS --> [map-aws-to-azure]: ./eks-edw-rearchitect.md#map-aws-services-to-azure-services [storage-queue-data-contributor]: ../role-based-access-control/built-in-roles.md#storage
aks Eks Edw Understand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-understand.md
The following resources can help you learn more about the differences between AW
> [!div class="nextstepaction"] > [Rearchitect the workload for AKS][eks-edw-rearchitect]
+## Contributors
+
+*This article is maintained by Microsoft. It was originally written by the following contributors*:
+
+- Ken Kilty | Principal TPM
+- Russell de Pina | Principal TPM
+- Jenny Hayes | Senior Content Developer
+- Carol Smith | Senior Content Developer
+- Erin Schaffer | Content Developer 2
+ <!-- LINKS --> [azure-rbac]: ../role-based-access-control/overview.md [entra-workload-id]: /azure/architecture/aws-professional/eks-to-aks/workload-identity#microsoft-entra-workload-id-for-kubernetes
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
Title: Stop and start an Azure Kubernetes Service (AKS) cluster description: Learn how to stop and start an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/14/2023 Last updated : 07/01/2024
When using the cluster stop/start feature, the following conditions apply:
## Start an AKS cluster > [!CAUTION]
-> Don't repeatedly stop and start your clusters. This can result in errors. Once your cluster is stopped, you should wait at least 15-30 minutes before starting it again.
+> After stopping your AKS cluster, wait 15-30 minutes before starting it again. This waiting period is necessary because it takes several minutes for the relevant services to fully stop. Attempting to restart your cluster during this process can disrupt the shutdown and potentially cause issues with the cluster or its workloads.
### [Azure CLI](#tab/azure-cli)
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/backends.md
The backend circuit breaker is an implementation of the [circuit breaker pattern
> [!NOTE] > * Currently, the backend circuit breaker isn't supported in the **Consumption** tier of API Management. > * Because of the distributed nature of the API Management architecture, circuit breaker tripping rules are approximate. Different instances of the gateway do not synchronize and will apply circuit breaker rules based on the information on the same instance.
+> * Currently, only one rule can be configured for a backend circuit breaker.
### Example
Include a JSON snippet similar to the following in your ARM template for a backe
-## Limitation
+## Limitations
-For **Developer** and **Premium** tiers, an API Management instance deployed in an [internal virtual network](api-management-using-with-internal-vnet.md) can throw HTTP 500 `BackendConnectionFailure` errors when the gateway endpoint URL and backend URL are the same. If you encounter this limitation, follow the instructions in the [Self-Chained API Management request limitation in internal virtual network mode](https://techcommunity.microsoft.com/t5/azure-paas-blog/self-chained-apim-request-limitation-in-internal-virtual-network/ba-p/1940417) article in the Tech Community blog.
+- For **Developer** and **Premium** tiers, an API Management instance deployed in an [internal virtual network](api-management-using-with-internal-vnet.md) can throw HTTP 500 `BackendConnectionFailure` errors when the gateway endpoint URL and backend URL are the same. If you encounter this limitation, follow the instructions in the [Self-Chained API Management request limitation in internal virtual network mode](https://techcommunity.microsoft.com/t5/azure-paas-blog/self-chained-apim-request-limitation-in-internal-virtual-network/ba-p/1940417) article in the Tech Community blog.
+- Currently, only one rule can be configured for a backend circuit breaker.
## Related content
app-service How To Custom Domain Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md
If you choose to use Azure role-based access control to manage access to your ke
### Certificate
-The certificate for custom domain suffix must be stored in an Azure Key Vault. The certificate must be uploaded in .PFX format and be smaller than 20 kb. Certificates in .PEM format aren't supported at this time. App Service Environment uses the managed identity you selected to get the certificate. The key vault can be accessed publicly or through a [private endpoint](../../private-link/private-endpoint-overview.md) accessible from the subnet that the App Service Environment is deployed to. To learn how to configure a private endpoint, see [Integrate Key Vault with Azure Private Link](../../key-vault/general/private-link-service.md). In the case of public access, you can secure your key vault to only accept traffic from the outbound IP addresses of the App Service Environment.
-
+The certificate for the custom domain suffix must be stored in an Azure Key Vault. The certificate must be uploaded in .PFX format and be smaller than 20 KB. Certificates in .PEM format aren't supported at this time. App Service Environment uses the managed identity you selected to get the certificate.
Your certificate must be a wildcard certificate for the selected custom domain name. For example, *internal.contoso.com* would need a certificate covering **.internal.contoso.com*. If the certificate used by the custom domain suffix contains a Subject Alternate Name (SAN) entry for scm, for example **.scm.internal.contoso.com*, the scm site is also available using the custom domain suffix. If you rotate your certificate in Azure Key Vault, the App Service Environment picks up the change within 24 hours.
+### Network access to Key Vault
+
+The key vault can be accessed publicly or through a [private endpoint](../../private-link/private-endpoint-overview.md) accessible from the subnet that the App Service Environment is deployed to. To learn how to configure a private endpoint, see [Integrate Key Vault with Azure Private Link](../../key-vault/general/private-link-service.md). If you use public access, you can secure your key vault to only accept traffic from the outbound IP address of the App Service Environment. The App Service Environment uses the platform outbound IP address as the source address when accessing the key vault. You can find this IP address on the IP addresses page in the Azure portal.
++ ::: zone pivot="experience-azp" ## Use the Azure portal to configure custom domain suffix
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
This article uses Health check in the Azure portal to monitor App Service instan
![Health check failure][1]
-Note that _/api/health_ is just an example added for illustration purposes. We do not create a Health Check path by default. You should make sure that the path you are selecting is a valid path that exists within your application
+Note that _/api/health_ is just an example added for illustration purposes. We don't create a Health Check path by default. Make sure that the path you select is a valid path that exists within your application.
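A minimal sketch of a matching endpoint, using Python's standard library (the `/api/health` path and the reply body are assumptions for illustration; your application framework will differ):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class HealthHandler(BaseHTTPRequestHandler):
    HEALTH_PATH = "/api/health"  # must match the path configured for Health check

    def do_GET(self):
        if self.path == self.HEALTH_PATH:
            # A 2xx response marks the instance healthy.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Healthy")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

# Serve on an ephemeral port in a background thread for demonstration.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"Health endpoint listening on port {server.server_port}")
```

In a real application, the handler would check dependencies (database, cache) before returning 200, so an unhealthy instance is reported as such.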
## What App Service does with Health checks
function envVarMatchesHeader(headerValue) {
Once Health Check is enabled, you can restart and monitor the status of your application instances through the instances tab. The instances tab shows your instance's name, the status of that application's instance and gives you the option to manually restart the instance.
-If the status of your application instance is unhealthy, you can restart the instance manually using the restart button in the table. Keep in mind that any other applications hosted on the same App Service Plan as the instance will also be affected by the restart. If there are other applications using the same App Service Plan as the instance, they are listed on the opening blade from the restart button.
+If the status of your application instance is unhealthy, you can restart the instance manually using the restart button in the table. Keep in mind that any other applications hosted on the same App Service Plan as the instance will also be affected by the restart. If there are other applications using the same App Service Plan as the instance, they're listed on the opening blade from the restart button.
-If you restart the instance and the restart process fails, you will then be given the option to replace the worker (only 1 instance can be replaced per hour). This will also affect any applications using the same App Service Plan.
+If you restart the instance and the restart process fails, you'll then be given the option to replace the worker (only one instance can be replaced per hour). This will also affect any applications using the same App Service Plan.
Windows applications will also have the option to view processes via the Process Explorer. This gives you further insight into the instance's processes, including thread count, private memory, and total CPU time.
Once diagnostic collection is enabled, you can create or choose an existing stor
## Monitoring
-After providing your application's Health check path, you can monitor the health of your site using Azure Monitor. From the **Health check** blade in the Portal, select the **Metrics** in the top toolbar. This will open a new blade where you can see the site's historical health status and option to create a new alert rule. Health check metrics aggregate the successful pings & display failures only when the instance was deemed unhealthy based on the health check configuration. For more information on monitoring your sites, [see the guide on Azure Monitor](web-sites-monitor.md).
+After providing your application's Health check path, you can monitor the health of your site using Azure Monitor. From the **Health check** blade in the Portal, select **Metrics** in the top toolbar. This opens a new blade where you can see the site's historical health status and an option to create a new alert rule. Health check metrics aggregate the successful pings & display failures only when the instance was deemed unhealthy based on the health check configuration. For more information on monitoring your sites, [see the guide on Azure Monitor](web-sites-monitor.md).
## Limitations -- Health check can be enabled for **Free** and **Shared** App Service Plans so you can have metrics on the site's health and setup alerts, but because **Free** and **Shared** sites can't scale out, any unhealthy instances won't be replaced. You should scale up to the **Basic** tier or higher so you can scale out to 2 or more instances and utilize the full benefit of Health check. This is recommended for production-facing applications as it will increase your app's availability and performance.
+- Health check can be enabled for **Free** and **Shared** App Service Plans so you can have metrics on the site's health and set up alerts, but because **Free** and **Shared** sites can't scale out, any unhealthy instances won't be replaced. You should scale up to the **Basic** tier or higher so you can scale out to 2 or more instances and utilize the full benefit of Health check. This is recommended for production-facing applications as it increases your app's availability and performance.
- The App Service plan can have a maximum of one unhealthy instance replaced per hour and, at most, three instances per day.-- There's a non-configurable limit on the total number of instances replaced by Health Check per scale unit. If this limit is reached, no unhealthy instances are replaced. This value gets reset every 12 hours.
+- There's a nonconfigurable limit on the total number of instances replaced by Health Check per scale unit. If this limit is reached, no unhealthy instances are replaced. This value gets reset every 12 hours.
## Frequently Asked Questions
The Health check requests are sent to your site internally, so the request won't
### Are the Health check requests sent over HTTP or HTTPS?
-On Windows App Service, the Health check requests are sent via HTTPS when [HTTPS Only](configure-ssl-bindings.md#enforce-https) is enabled on the site. Otherwise, they're sent over HTTP. On Linux App Service, the health check requests are only sent over HTTP and can't be sent over HTTP**S** at this time.
+On Windows and Linux App Service, the Health check requests are sent via HTTPS when [HTTPS Only](configure-ssl-bindings.md#enforce-https) is enabled on the site. Otherwise, they're sent over HTTP.
### Is Health check following the application code configured redirects between the default domain and the custom domain?
-No, the Health check feature is pinging the path of the default domain of the web application. If there is a redirect from the default domain to a custom domain, then the status code that Health check is returning is not going to be a 200 but a redirect (301), which is going to mark the worker unhealthy.
+No, the Health check feature pings the path on the default domain of the web application. If there's a redirect from the default domain to a custom domain, the status code that Health check receives isn't a 200 but a redirect (301), which marks the worker unhealthy.
### What if I have multiple apps on the same App Service Plan?
Unhealthy instances will always be removed from the load balancer rotation regar
#### Example
-Imagine you have two applications (or one app with a slot) with Health check enabled, called App A and App B. They are on the same App Service Plan and that the Plan is scaled out to four instances. If App A becomes unhealthy on two instances, the load balancer stops sending requests to App A on those two instances. Requests are still routed to App B on those instances assuming App B is healthy. If App A remains unhealthy for over an hour on those two instances, those instances are only replaced if App B is **also** unhealthy on those instances. If App B is healthy, the instance isn't replaced.
+Imagine you have two applications (or one app with a slot) with Health check enabled, called App A and App B. They're on the same App Service Plan and that the Plan is scaled out to four instances. If App A becomes unhealthy on two instances, the load balancer stops sending requests to App A on those two instances. Requests are still routed to App B on those instances assuming App B is healthy. If App A remains unhealthy for over an hour on those two instances, those instances are only replaced if App B is **also** unhealthy on those instances. If App B is healthy, the instance isn't replaced.
![Visual diagram explaining the example scenario above.][2]
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
The virtual network integration feature supports two virtual interfaces per work
Virtual network integration depends on a dedicated subnet. When you create a subnet, the Azure subnet consumes five IPs from the start. One address is used from the integration subnet for each App Service plan instance. If you scale your app to four instances, then four addresses are used.
-When you scale up/down in instance size, the amount of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned. The scale operation affects the real, available supported instances for a given subnet size. Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic. Finally, after scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this operation can be up to 12 hours.
+When you scale up/down in instance size, the number of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned. The scale operation affects the real, available supported instances for a given subnet size. Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic. Finally, after scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this operation can take up to 12 hours, and if you rapidly scale in/out or up/down, you need more IPs than the maximum scale.
-Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. You should also reserve IP addresses for platform upgrades. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of `/27` is required. If the subnet already exists before integrating through the portal, you can use a `/28` subnet.
+Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. You should also reserve IP addresses for platform upgrades. To avoid any issues with subnet capacity, we recommend allocating double the IPs of your planned maximum scale. A `/26` with 64 addresses covers the maximum scale of a single multitenant App Service plan. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of `/27` is required. If the subnet already exists before integrating through the portal, you can use a `/28` subnet.
With multi plan subnet join (MPSJ), you can join multiple App Service plans in to the same subnet. All App Service plans must be in the same subscription but the virtual network/subnet can be in a different subscription. Each instance from each App Service plan requires an IP address from the subnet and to use MPSJ a minimum size of `/26` subnet is required. If you plan to join many and/or large scale plans, you should plan for larger subnet ranges.
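The sizing arithmetic above can be sketched as follows (a rough illustration, not an official sizing tool; the helper function is an assumption of this example that combines Azure's five reserved addresses per subnet with the doubled-instance guidance):

```python
import math

AZURE_RESERVED = 5  # every Azure subnet consumes five IPs from the start

def min_subnet_prefix(max_instances: int) -> int:
    # Plan for double the instance count to cover scale operations,
    # plus the addresses Azure reserves in the subnet, then round up
    # to the smallest CIDR prefix that provides that many addresses.
    needed = AZURE_RESERVED + 2 * max_instances
    host_bits = math.ceil(math.log2(needed))
    return 32 - host_bits

print(min_subnet_prefix(20))  # 26 -> a /26 covers a 20-instance plan doubled
print(min_subnet_prefix(4))   # 28 -> matches the /28 minimum for existing subnets
```

For example, a plan expected to reach 20 instances needs 5 + 40 = 45 addresses, which rounds up to 64 addresses, a `/26`.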
application-gateway Ingress Controller Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-migration.md
Previously updated : 07/28/2023 Last updated : 07/01/2024
You can now enable the AGIC add-on in your AKS cluster to target your existing A
```azurecli-interactive az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id $appgwId ```
+Alternatively, you can navigate to your [AKS cluster in the Azure portal](https://portal.azure.com/?feature.aksagic=true) and enable the AGIC add-on in the **Virtual network integration** tab of your cluster. Select your existing Application Gateway when you choose which Application Gateway the add-on should target.
+
+![Application Gateway Ingress Controller Portal](./media/tutorial-ingress-controller-add-on-existing/portal-ingress-controller-add-on.png)
## Next Steps - [**Application Gateway Ingress Controller Troubleshooting**](ingress-controller-troubleshoot.md): Troubleshooting guide for AGIC
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
Previously updated : 11/28/2023 Last updated : 07/01/2024
If you'd like to continue using Azure CLI, you can continue to enable the AGIC a
appgwId=$(az network application-gateway show --name myApplicationGateway --resource-group myResourceGroup -o tsv --query "id") az aks enable-addons --name myCluster --resource-group myResourceGroup --addon ingress-appgw --appgw-id $appgwId ```
+## Enable the AGIC add-on in existing AKS cluster through Azure portal
+
+If you'd like to use the Azure portal to enable the AGIC add-on, go to [https://aka.ms/azure/portal/aks/agic](https://aka.ms/azure/portal/aks/agic) and navigate to your AKS cluster through the portal link. Select the **Networking** menu item under **Settings**. From there, go to the **Virtual network integration** tab within your AKS cluster. You'll see an **Application gateway ingress controller** section, which allows you to enable and disable the ingress controller add-on. Select the **Manage** button, then the checkbox next to **Enable ingress controller**. Select the application gateway you created, **myApplicationGateway**, and then select **Save**.
+ > [!IMPORTANT]
-> When you use an application gateway in a different resource group than the AKS cluster resource group, the managed identity **_ingressapplicationgateway-{AKSNAME}_** that is created must have **Network Contributor** and **Reader** roles set in the application gateway resource group.
+> If you use an application gateway in a different resource group than the AKS cluster resource group, the managed identity **_ingressapplicationgateway-{AKSNAME}_** that is created must have **Network Contributor** and **Reader** roles set in the application gateway resource group.
## Peer the two virtual networks together
automanage Automanage Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-arc.md
Last updated 05/12/2022
# Azure Automanage for Machines Best Practices - Azure Arc-enabled servers > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
These Azure services are automatically onboarded for you when you use Automanage Machine Best Practices on an Azure Arc-enabled server VM. They are essential to our best practices white paper, which you can find in our [Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/azure-server-management).
automanage Automanage Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-linux.md
# Azure Automanage for Machines Best Practices - Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
These Azure services are automatically onboarded for you when you use Automanage Machine Best Practices Profiles on a Linux VM. They are essential to our best practices white paper, which you can find in our [Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/azure-server-management).
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
# Deploy an agent-based Linux Hybrid Runbook Worker in Automation > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
> [!IMPORTANT] > Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
# Change Tracking and Inventory overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
> [!Important] > - Change Tracking and Inventory using Log Analytics agent will retire on **31 August 2024** and we recommend that you use Azure Monitoring Agent as the new supporting agent. Follow the guidelines for [migration from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](guidance-migration-log-analytics-monitoring-agent.md).
automation Dsc Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md
# Configure a VM with Desired State Configuration > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
> [!NOTE] > Before you enable Azure Automation DSC, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md
# Troubleshoot Update Management issues > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article discusses issues that you might run into when using the Update Management feature to assess and manage updates on your machines. There's an agent troubleshooter for the Hybrid Runbook Worker agent to help determine the underlying problem. To learn more about the troubleshooter, see [Troubleshoot Windows update agent issues](update-agent-issues.md) and [Troubleshoot Linux update agent issues](update-agent-issues-linux.md). For other feature deployment issues, see [Troubleshoot feature deployment issues](onboarding.md).
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md
# How to deploy updates and review results > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to schedule an update deployment and review the process after the deployment is complete. You can configure an update deployment from a selected Azure virtual machine, from the selected Azure Arc-enabled server, or from the Automation account across all configured machines and servers.
automation Manage Updates For Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/manage-updates-for-vm.md
Last updated 06/30/2024
# Manage updates and patches for your VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Software updates in Azure Automation Update Management provides a set of tools and resources that can help manage the complex task of tracking and applying software updates to machines in Azure and hybrid cloud. An effective software update management process is necessary to maintain operational efficiency, overcome security issues, and reduce the risks of increased cyber security threats. However, because of the changing nature of technology and the continual appearance of new security threats, effective software update management requires consistent and continual attention.
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
# Operating systems supported by Update Management > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Update Management.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
# Update Management overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
> [!Important] > - Azure Automation Update Management will retire on **31 August 2024**. Follow the guidelines for [migration to Azure Update Manager](../../update-manager/guidance-migration-automation-update-management-azure-update-manager.md).
automation View Update Assessments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/view-update-assessments.md
# View update assessments in Update Management > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In Update Management, you can view information about your machines, missing updates, update deployments, and scheduled update deployments. You can view the assessment information scoped to the selected Azure virtual machine, from the selected Azure Arc-enabled server, or from the Automation account across all configured machines and servers.
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
# Archive for What's new in Azure Automation? > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The primary [What's new in Azure Automation?](whats-new.md) article contains updates for the last six months, while this article contains all the older information.
azure-app-configuration Concept Experimentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-experimentation.md
Title: Experimentation in Azure App Configuration
-description: Experimentation in Azure App Configuration
+description: This document introduces experimentation in Azure App Configuration, scenarios for using Split Experimentation, and more.
- build-2024 Last updated 05/08/2024+ # Experimentation (preview)
azure-app-configuration Quickstart Feature Flag Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-python.md
Title: Quickstart for adding feature flags to Python with Azure App Configuration (Preview)
+ Title: Quickstart for adding feature flags to Python with Azure App Configuration
description: Add feature flags to Python apps and manage them using Azure App Configuration. ms.devlang: python Previously updated : 05/29/2024 Last updated : 07/01/2024 #Customer intent: As a Python developer, I want to use feature flags to control feature availability quickly and confidently.
-# Quickstart: Add feature flags to a Python app (preview)
+# Quickstart: Add feature flags to a Python app
In this quickstart, you'll create a feature flag in Azure App Configuration and use it to dynamically control Python apps to create an end-to-end implementation of feature management.
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
The most recent version of the Flux v2 extension and the two previous versions (
> > The [HelmChart](https://fluxcd.io/flux/components/source/helmcharts/) kind will be promoted from `v1beta2` to `v1` (GA). The `v1` API is backwards compatible with `v1beta2`, with the exception of the `.spec.valuesFile` field, which will be replaced by `.spec.valuesFiles`. >
-> To avoid issues due to breaking changes, we recommend updating your deployments by July 22, 2024, so that they stop using the fields that will be removed and use the replacement fields instead. These new fields are already available in the current version of the APIs.
+> To avoid issues due to breaking changes, we recommend updating your deployments by July 29, 2024, so that they stop using the fields that will be removed and use the replacement fields instead. These new fields are already available in the current version of the APIs.
> [!NOTE] > When a new version of the `microsoft.flux` extension is released, it may take several days for the new version to become available in all regions.
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
# Archive for What's new with Azure Connected Machine agent > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The primary [What's new in Azure Connected Machine agent?](agent-release-notes.md) article contains updates for the last six months, while this article contains all the older information.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
# Managing and maintaining the Connected Machine agent > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
After initial deployment of the Azure Connected Machine agent, you may need to reconfigure the agent, upgrade it, or remove it from the computer. These routine maintenance tasks can be done manually or through automation (which reduces both operational error and expenses). This article describes the operational aspects of the agent. See the [azcmagent CLI documentation](azcmagent.md) for command line reference information.
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
# Virtual machine extension management with Azure Arc-enabled servers > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script in it, a VM extension can be used.
azure-arc Migrate Legacy Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/migrate-legacy-agents.md
+
+ Title: How to migrate from legacy Log Analytics agents in non-Azure environments with Azure Arc
+description: Learn how to migrate from legacy Log Analytics agents in non-Azure environments with Azure Arc.
Last updated : 07/01/2024+++
+# Migrate from legacy Log Analytics agents in non-Azure environments with Azure Arc
+
+Azure Monitor Agent (AMA) replaces the Log Analytics agent (also known as the Microsoft Monitoring Agent (MMA) and OMS) for Windows and Linux machines. Azure Arc is required to migrate off the legacy Log Analytics agents for non-Azure environments, including on-premises or multicloud infrastructure.
+
+Azure Arc is a bridge, extending not only Azure Monitor but the breadth of Azure management capabilities across Microsoft Defender, Azure Policy, and Azure Update Manager to non-Azure environments. Through the lightweight Connected Machine agent, Azure Arc projects non-Azure servers into the Azure control plane, providing a consistent management experience across Azure VMs and non-Azure servers.
+
+This article focuses on considerations when migrating from legacy Log Analytics agents in non-Azure environments. For core migration guidance, see [Migrate to Azure Monitor Agent from Log Analytics agent](../../azure-monitor/agents/azure-monitor-agent-migration.md).
+
+## Advantages of Azure Arc
+
+Deploying Azure Monitor Agent as an extension with Azure Arc-enabled servers provides several benefits over the legacy Log Analytics agents (MMA and OMS), which directly connect non-Azure servers to Log Analytics workspaces:
+
+- Azure Arc centralizes the identity, connectivity, and governance of non-Azure resources. This reduces operational overhead and improves security posture and performance.
+
+- Azure Arc offers extension management capabilities including auto-extension upgrade, reducing typical maintenance overhead.
+
+- Azure Arc enables access to the breadth of server management capabilities beyond monitoring, such as Cloud Security Posture Management with [Microsoft Defender](../../defender-for-cloud/defender-for-cloud-introduction.md) or scripting with [Run Command](run-command.md). As you centralize operations in Azure, Azure Arc provides a robust foundation for these other capabilities.
+
+Azure Arc is the foundation for a cloud-based inventory bringing together Azure and on-premises, multicloud, and edge infrastructure that can be queried and organized through Azure Resource Manager (ARM).
+
+## Limitations on Azure Arc
+
+Azure Arc relies on the [Connected Machine agent](/azure/azure-arc/servers/agent-overview) and is an agent-based solution requiring connectivity and designed for server infrastructure:
+
+- Azure Arc requires the Connected Machine agent in addition to the Azure Monitor Agent as a VM extension. The Connected Machine agent must be configured specifying details of the Azure resource.
+
+- Azure Arc supports client-like operating systems only when the computers are in a server-like environment, and doesn't support short-lived servers or virtual desktop infrastructure.
+
+- Azure Arc has two regional availability gaps with Azure Monitor Agent:
+ - Qatar Central (Availability expected in August 2024)
+ - Australia Central (Other Australia regions are available)
+
+- Azure Arc requires servers to have regular connectivity, with key endpoints allowed through your network controls. While proxy and private link connectivity are supported, Azure Arc doesn't support completely disconnected scenarios. Azure Arc doesn't support the Log Analytics (OMS) Gateway.
+
+- Azure Arc creates a system-assigned managed identity for connected servers, but doesn't support user-assigned managed identities.
+
+Learn more about the full Connected Machine agent [prerequisites](/azure/azure-arc/servers/prerequisites#supported-operating-systems) for environmental constraints.
+
+## Relevant services
+
+Azure Arc-enabled servers is required for deploying all solutions that previously required the legacy Log Analytics agents (MMA/OMS) to non-Azure infrastructure. The new Azure Monitor Agent is only required for a subset of these services.
+
+|Azure Monitor Agent and Azure Arc required |Only Azure Arc required |
+|||
+|Microsoft Sentinel |Microsoft Defender for Cloud |
+|Virtual Machine Insights (previously Dependency Agent) |Azure Update Management |
+|Change Tracking and Inventory |Automation Hybrid Runbook Worker |
+
+As you design the holistic migration from the legacy Log Analytics agents (MMA/OMS), it's critical to consider and prepare for the migration of these solutions.
+
+## Deploying Azure Arc
+
+Azure Arc can be deployed interactively on a single-server basis or programmatically at scale:
+
+- PowerShell and Bash deployment scripts can be generated from Azure portal or written manually following documentation.
+
+- Windows Server machines can be connected through Windows Admin Center and the Windows Server Graphical Installer.
+
+- At scale deployment options include Configuration Manager, Ansible, and Group Policy using the Azure service principal, a limited identity for Arc server onboarding.
+
+- Azure Automation Update Management customers can onboard from the Azure portal by Arc-enabling all detected non-Azure servers connected to the Log Analytics workspace with the Azure Automation Update Management solution.
+
+See [Azure Connected Machine agent deployment options](/azure/azure-arc/servers/deployment-options) to learn more.
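Most of the deployment options above ultimately invoke `azcmagent connect` on each machine. As a rough sketch (all IDs and names below are placeholders, not values from this article), the onboarding command can be assembled and printed for review before you run it on a server:

```shell
#!/bin/sh
# Placeholder values; substitute your own subscription, tenant, resource group, and region.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
TENANT_ID="11111111-1111-1111-1111-111111111111"
RESOURCE_GROUP="rg-arc-servers"
LOCATION="eastus"

# Assemble the onboarding command; echo it for review instead of executing it here.
CONNECT_CMD="azcmagent connect --subscription-id $SUBSCRIPTION_ID --tenant-id $TENANT_ID --resource-group $RESOURCE_GROUP --location $LOCATION"
echo "$CONNECT_CMD"
```

Interactive runs prompt for a device login; at-scale runs typically add service principal flags instead.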
+
+## Agent control and footprint
+
+You can lock down the Connected Machine agent by specifying the extensions and capabilities that are enabled. If migrating from the legacy Log Analytics agent, the Monitor mode is especially salient. Monitor mode applies a Microsoft-managed extension allowlist, disables remote connectivity, and disables the machine configuration agent. If you're using Azure Arc solely for monitoring purposes, setting the agent to Monitor mode makes it easy to restrict the agent to just the functionality required to use Azure Monitor and solutions that use Azure Monitor. You can configure the agent mode with the following command (run locally on each machine):
+
+`azcmagent config set config.mode monitor`
+
+See [Extensions security](/azure/azure-arc/servers/security-extensions) to learn more.
+
+## Networking options
+
+Azure Arc-enabled servers supports three networking options:
+
+- Connectivity over public endpoint
+- Proxy
+- Private Link (Azure ExpressRoute)
+
+All connections are TCP and outbound over port 443 unless specified. All HTTP connections use HTTPS and SSL/TLS with officially signed and verifiable certificates.
+
+Azure Arc doesn't officially support using the Log Analytics gateway as a proxy for the Connected Machine agent.
+
+The connectivity method specified can be changed after onboarding.
+
+See [Connected Machine agent network requirements](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud) to learn more.
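For the proxy option, the Connected Machine agent reads a per-machine setting managed through `azcmagent config`. A minimal sketch (the proxy URL is a placeholder) that prints the relevant commands for review rather than executing them:

```shell
#!/bin/sh
# Placeholder forward proxy; replace with your own proxy URL.
PROXY_URL="http://proxy.contoso.example:8080"

# Print the command that sets the agent-wide proxy (run locally on each machine).
echo "azcmagent config set proxy.url $PROXY_URL"
# Print the command that reverts the agent to direct connectivity.
echo "azcmagent config clear proxy.url"
```

The connectivity method can be changed after onboarding, as noted above.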
+
+## Deploying Azure Monitor Agent with Azure Arc
+
+There are multiple methods to deploy the Azure Monitor Agent extension on Azure Arc-enabled servers programmatically, graphically, and automatically. Some popular methods to deploy Azure Monitor Agent on Azure Arc-enabled servers include:
+
+- Azure portal
+- PowerShell, Azure CLI, or Azure Resource Manager (ARM) templates
+- Azure Policy
+
+Azure Arc doesn't eliminate the need to configure and define Data Collection Rules. Configure Data Collection Rules for Azure Arc-enabled servers just as you would for your Azure VMs.
+
+See [Deployment options for Azure Monitor Agent on Azure Arc-enabled servers](/azure/azure-arc/servers/concept-log-analytics-extension-deployment) to learn more.
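As one sketch of the programmatic path (the machine and group names are placeholders, and the sketch assumes the `connectedmachine` Azure CLI extension is installed), the command that deploys Azure Monitor Agent on an Arc-enabled Linux server can be assembled and printed for review:

```shell
#!/bin/sh
# Placeholder names; substitute your own Arc-enabled machine and resource group.
MACHINE_NAME="arc-linux-01"
RESOURCE_GROUP="rg-arc-servers"
LOCATION="eastus"

# Assemble the extension deployment command; echoed for review rather than executed.
DEPLOY_CMD="az connectedmachine extension create --machine-name $MACHINE_NAME --resource-group $RESOURCE_GROUP --location $LOCATION --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent"
echo "$DEPLOY_CMD"
```

On Windows servers the extension type is AzureMonitorWindowsAgent; a Data Collection Rule association is still required either way.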
+
+## Standalone Azure Monitor Agent installation
+
+For Windows client machines running in non-Azure environments, use a standalone Azure Monitor Agent installation that doesn't require deployment of the Azure Connected Machine agent through Azure Arc. See [Install Azure Monitor Agent on Windows client devices using the client installer](/azure/azure-monitor/agents/azure-monitor-agent-windows-client) to learn more.
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
# Evaluate Azure Arc-enabled servers on an Azure virtual machine > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Azure Arc-enabled servers is designed to help you connect servers running on-premises or in other clouds to Azure. Normally, you wouldn't connect an Azure virtual machine to Azure Arc because all the same capabilities are natively available for these VMs. Azure VMs already have a representation in Azure Resource Manager, VM extensions, managed identities, and Azure Policy. If you attempt to install Azure Arc-enabled servers on an Azure VM, you'll receive an error message stating that it is unsupported.
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
# Connected Machine agent prerequisites > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes the basic requirements for installing the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. Some [onboarding methods](deployment-options.md) may have more requirements.
The listed version is supported until the **End of Arc Support Date**. If critic
| Operating system | Last supported agent version | End of Arc Support Date | Notes | | -- | -- | -- | -- |
-| Windows Server 2008 R2 SP1 | 1.39 [Download](https://download.microsoft.com/download/1/9/f/19f44dde-2c34-4676-80d7-9fa5fc44d2a8/AzureConnectedMachineAgent.msi) | 03/31/2025 | Windows Server 2008 and 2008 R2 reached End of Support in January 2020. See [End of support for Windows Server 2008 and Windows Server 2008 R2](/troubleshoot/windows-server/windows-server-eos-faq/end-of-support-windows-server-2008-2008r2). |
-| CentOS 7 and 8 | 1.42 [Download](https://download.microsoft.com/download/9/6/0/9600825a-e532-4e50-a2d5-7f07e400afc1/AzureConnectedMachineAgent.msi) | 05/31/2025 | See the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md). |
+| Windows Server 2008 R2 SP1 | 1.39 [Download](https://aka.ms/AzureConnectedMachineAgent-1.39) | 03/31/2025 | Windows Server 2008 and 2008 R2 reached End of Support in January 2020. See [End of support for Windows Server 2008 and Windows Server 2008 R2](/troubleshoot/windows-server/windows-server-eos-faq/end-of-support-windows-server-2008-2008r2). |
+| CentOS 7 and 8 | 1.42 | 05/31/2025 | See the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md). |
+
+### Connect new limited support servers
+
+To connect a new server running a Limited Support operating system to Azure Arc, you will need to make some adjustments to the onboarding script.
+
+For Windows, modify the installation script to specify the required version by using the `-AltDownload` parameter.
+
+Instead of:
+
+```pwsh
+ # Install the hybrid agent
+ & "$env:TEMP\install_windows_azcmagent.ps1";
+```
+
+Use:
+
+```pwsh
+ # Install the hybrid agent
+ & "$env:TEMP\install_windows_azcmagent.ps1" -AltDownload https://aka.ms/AzureConnectedMachineAgent-1.39;
+```
+
+For Linux, the relevant package repository only contains applicable releases, so no special steps are required.
+ ### Client operating system guidance
azure-arc Remove Scvmm From Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/remove-scvmm-from-azure-arc.md
# Remove your SCVMM environment from Azure Arc

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In this article, you learn how to cleanly remove your SCVMM managed environment from Azure Arc-enabled SCVMM. For SCVMM environments that you no longer want to manage with Azure Arc-enabled SCVMM, follow the steps in the article to:
azure-arc Remove Vcenter From Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md
# Remove your VMware vCenter environment from Azure Arc

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In this article, you learn how to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere. For VMware vSphere environments that you no longer want to manage with Azure Arc-enabled VMware vSphere, follow the steps in the article to:
azure-arc Troubleshoot Guest Management Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md
# Troubleshoot Guest Management for Linux VMs

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article provides information on how to troubleshoot and resolve the issues that can occur while you enable guest management on Arc-enabled VMware vSphere virtual machines.
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
Title: How to use a secured storage account with Azure Functions
-description: Article that shows you how to use a secured storage account in a virtual network as the default storage account for a function app in Azure Functions.
+description: Learn how to use a secured storage account in a virtual network as the default storage account for a function app in Azure Functions.
+ Previously updated : 06/03/2024 Last updated : 06/27/2024 +
+# Customer intent: As a developer, I want to understand how to use a secured storage account in a virtual network as the default storage account for my function app, so that my function app can be secure.
+ # How to use a secured storage account with Azure Functions
-This article shows you how to connect your function app to a secured storage account. For an in-depth tutorial on how to create your function app with inbound and outbound access restrictions, refer to the [Integrate with a virtual network](functions-create-vnet.md) tutorial. To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
+This article shows you how to connect your function app to a secured storage account. For an in-depth tutorial on how to create your function app with inbound and outbound access restrictions, see the [Integrate with a virtual network](functions-create-vnet.md) tutorial. To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
-## Restrict your storage account to a virtual network
+## Restrict your storage account to a virtual network
-When you create a function app, you either create a new storage account or link to an existing one. Currently, only the Azure portal, [ARM template deployments](functions-infrastructure-as-code.md?tabs=json&pivots=premium-plan#secured-deployments), and [Bicep deployments](functions-infrastructure-as-code.md?tabs=bicep&pivots=premium-plan#secured-deployments) support function app creation with an existing secured storage account.
+When you create a function app, you either create a new storage account or link to an existing one. Currently, only the Azure portal, [ARM template deployments](functions-infrastructure-as-code.md?tabs=json&pivots=premium-plan#secured-deployments), and [Bicep deployments](functions-infrastructure-as-code.md?tabs=bicep&pivots=premium-plan#secured-deployments) support function app creation with an existing secured storage account.
> [!NOTE]
-> Securing your storage account is supported for all tiers of the [Dedicated (App Service) plan](./dedicated-plan.md) and the [Elastic Premium plan](./functions-premium-plan.md), as well as in the [Flex Consumption plan](./flex-consumption-plan.md).
+> Secured storage accounts are supported for all tiers of the [Dedicated (App Service) plan](./dedicated-plan.md) and the [Elastic Premium plan](./functions-premium-plan.md). They're also supported by the [Flex Consumption plan](./flex-consumption-plan.md).
> The [Consumption plan](consumption-plan.md) doesn't support virtual networks. For a list of all restrictions on storage accounts, see [Storage account requirements](storage-considerations.md#storage-account-requirements). [!INCLUDE [functions-flex-preview-note](../../includes/functions-flex-preview-note.md)]
-## Secure storage during function app creation
+## Secure storage during function app creation
-You can create a function app along with a new storage account that is secured behind a virtual network. The following links show you how to create these resources by using either the Azure portal or by using deployment templates:
+You can create a function app along with a new storage account that's secured behind a virtual network. The following sections show you how to create these resources by using either the Azure portal or deployment templates.
### [Azure portal](#tab/portal)

Complete the steps in [Create a function app in a Premium plan](functions-create-vnet.md#create-a-function-app-in-a-premium-plan). This section of the virtual networking tutorial shows you how to create a function app that connects to storage over private endpoints.
-> [!NOTE]
+> [!NOTE]
> When you create your function app in the Azure portal, you can also choose an existing secured storage account in the **Storage** tab. However, you must configure the appropriate networking on the function app so that it can connect through the virtual network used to secure the storage account. If you don't have permissions to configure networking or you haven't fully prepared your network, select **Configure networking after creation** in the **Networking** tab. You can configure networking for your new function app in the portal under **Settings** > **Networking**.

### [Deployment templates](#tab/templates)
-Use Bicep files or Azure Resource Manager (ARM) templates to create a secured function app and storage account resources. When you create a secured storage account in an automated deployment, you must set the `vnetContentShareEnabled` site property, create the file share as part of your deployment, and set the `WEBSITE_CONTENTSHARE` app setting to the name of the file share. For more information, including links to example deployments, see [Secured deployments](functions-infrastructure-as-code.md#secured-deployments).
+Use Bicep files or Azure Resource Manager (ARM) templates to create a secured function app and storage account resources. When you create a secured storage account in an automated deployment, you must set the `vnetContentShareEnabled` site property, create the file share as part of your deployment, and set the `WEBSITE_CONTENTSHARE` app setting to the name of the file share. For more information, including links to example deployments, see [Secured deployments](functions-infrastructure-as-code.md?pivots=premium-plan#secured-deployments).
## Secure storage for an existing function app
-When you have an existing function app, you can directly configure networking on the storage account being used by the app. This process results in your app being down while you configure networking and while your app restarts.
+When you have an existing function app, you can directly configure networking on the storage account being used by the app. However, this process results in your function app being down while you configure networking and while your function app restarts.
To minimize downtime, you can instead swap out an existing storage account for a new, secured storage account.

### 1. Enable virtual network integration
-As a prerequisite, you need to enable virtual network integration for your function app.
+As a prerequisite, you need to enable virtual network integration for your function app:
1. Choose a function app with a storage account that doesn't have service endpoints or private endpoints enabled. 1. [Enable virtual network integration](./functions-networking-options.md#enable-virtual-network-integration) for your function app.
-### 2. Create a secured storage account
+### 2. Create a secured storage account
-Set up a secured storage account for your function app:
+Set up a secured storage account for your function app:
-1. [Create a second storage account](../storage/common/storage-account-create.md). This is going to be the secured storage account that your function app will use instead. You can also use an existing storage account not already being used by Functions.
+1. [Create a second storage account](../storage/common/storage-account-create.md). This storage account is the secured storage account for your function app to use instead of its original unsecured storage account. You can also use an existing storage account not already being used by Functions.
-1. Copy the connection string for this storage account. You need this string for later.
+1. Save the connection string for this storage account to use later.
-1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the new storage account. Try to use the same name as the file share in the existing storage account. Otherwise, you'll need to copy the name of the new file share to configure an app setting later.
+1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the new storage account. For your convenience, you can use the same file share name from your original storage account. Otherwise, if you use a new file share name, you must update your app setting.
1. Secure the new storage account in one of the following ways:
- * [Create a private endpoint](../storage/common/storage-private-endpoints.md#creating-a-private-endpoint). When you set up private endpoint connections, create private endpoints for the `file` and `blob` subresources. For Durable Functions, you must also make `queue` and `table` subresources accessible through private endpoints. If you're using a custom or on-premises DNS server, make sure you [configure your DNS server](../storage/common/storage-private-endpoints.md#dns-changes-for-private-endpoints) to resolve to the new private endpoints.
+ * [Create a private endpoint](../storage/common/storage-private-endpoints.md#creating-a-private-endpoint). As you set up your private endpoint connection, create private endpoints for the `file` and `blob` subresources. For Durable Functions, you must also make `queue` and `table` subresources accessible through private endpoints. If you're using a custom or on-premises Domain Name System (DNS) server, [configure your DNS server](../storage/common/storage-private-endpoints.md#dns-changes-for-private-endpoints) to resolve to the new private endpoints.
- * [Restrict traffic to specific subnets](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network). Ensure that one of the allowed subnets is the one your function app is network integrated with. Double check that the subnet has a service endpoint to Microsoft.Storage.
+ * [Restrict traffic to specific subnets](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network). Ensure your function app is network integrated with an allowed subnet and that the subnet has a service endpoint to `Microsoft.Storage`.
-1. Copy the file and blob content from the current storage account used by the function app to the newly secured storage account and file share. [AzCopy](../storage/common/storage-use-azcopy-blobs-copy.md) and [Azure Storage Explorer](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-tips-and-tricks-how-to-move-azure-storage-blobs-between/ba-p/3545304) are common methods. If you use Azure Storage Explorer, you may need to allow your client IP address into your storage account's firewall.
+1. Copy the file and blob content from the current storage account used by the function app to the newly secured storage account and file share. [AzCopy](../storage/common/storage-use-azcopy-blobs-copy.md) and [Azure Storage Explorer](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-tips-and-tricks-how-to-move-azure-storage-blobs-between/ba-p/3545304) are common methods. If you use Azure Storage Explorer, you might need to allow your client IP address access to your storage account's firewall.
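If you prefer scripting over the portal, the subnet-restriction approach above can be sketched with the Azure CLI. The resource group, network, and account names below are placeholder values for illustration:

```shell
# Add a Microsoft.Storage service endpoint to the subnet the function app
# integrates with (names are placeholders).
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name functions-subnet \
  --service-endpoints Microsoft.Storage

# Allow traffic from that subnet on the secured storage account.
az storage account network-rule add \
  --resource-group my-rg \
  --account-name mysecuredstorage \
  --vnet-name my-vnet \
  --subnet functions-subnet

# Deny all other public network access by default.
az storage account update \
  --resource-group my-rg \
  --name mysecuredstorage \
  --default-action Deny
```

Run the network-rule command before setting `--default-action Deny` so you don't temporarily cut off access from the allowed subnet.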
Now you're ready to configure your function app to communicate with the newly secured storage account.

### 3. Enable application and configuration routing

> [!NOTE]
-> These configuration steps are only required for the [Elastic Premium](./functions-premium-plan.md) and [Dedicated (App Service)](./dedicated-plan.md) hosting plans.
+> These configuration steps are required only for the [Elastic Premium](./functions-premium-plan.md) and [Dedicated (App Service)](./dedicated-plan.md) hosting plans.
> The [Flex Consumption plan](./flex-consumption-plan.md) doesn't require site settings to configure networking.
-You should now route your function app's traffic to go through the virtual network.
+You're now ready to route your function app's traffic to go through the virtual network:
-1. Enable [application routing](../app-service/overview-vnet-integration.md#application-routing) to route your app's traffic into the virtual network.
+1. Enable [application routing](../app-service/overview-vnet-integration.md#application-routing) to route your app's traffic to the virtual network:
- * Navigate to the **Networking** tab of your function app. Under **Outbound traffic configuration**, select the subnet associated with your virtual network integration.
+ 1. In your function app, expand **Settings**, and then select **Networking**. In the **Networking** page, under **Outbound traffic configuration**, select the subnet associated with your virtual network integration.
- * In the new page, check the box for **Outbound internet traffic** under **Application routing**.
+ 1. In the new page, under **Application routing**, select **Outbound internet traffic**.
-1. Enable [content share routing](../app-service/overview-vnet-integration.md#content-share) to have your function app communicate with your new storage account through its virtual network.
-
- * In the same page, check the box for **Content storage** under **Configuration routing**.
+1. Enable [content share routing](../app-service/overview-vnet-integration.md#content-share) to enable your function app to communicate with your new storage account through its virtual network. In the same page as the previous step, under **Configuration routing**, select **Content storage**.
### 4. Update application settings
-Finally, you need to update your application settings to point at the new secure storage account.
+Finally, you need to update your application settings to point to the new secure storage account:
-1. Update the **Application Settings** under the **Configuration** tab of your function app to the following:
+1. In your function app, expand **Settings**, and then select **Environment variables**.
+1. In the **App settings** tab, update the following settings by selecting each setting, editing it, and then selecting **Apply**:
| Setting name | Value | Comment |
|-|-|-|
- | [`AzureWebJobsStorage`](./functions-app-settings.md#azurewebjobsstorage)<br>[`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](./functions-app-settings.md#website_contentazurefileconnectionstring) | Storage connection string | Both settings contain the connection string for the new secured storage account, which you saved earlier. |
- | [`WEBSITE_CONTENTSHARE`](./functions-app-settings.md#website_contentshare) | File share | The name of the file share created in the secured storage account where the project deployment files reside. |
+ | [`AzureWebJobsStorage`](./functions-app-settings.md#azurewebjobsstorage)| Storage connection string | Use the connection string for your new secured storage account, which you saved earlier. |
+ | [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](./functions-app-settings.md#website_contentazurefileconnectionstring) | Storage connection string | Use the connection string for your new secured storage account, which you saved earlier. |
+ | [`WEBSITE_CONTENTSHARE`](./functions-app-settings.md#website_contentshare) | File share | Use the name of the file share created in the secured storage account where the project deployment files reside. |
+
+1. Select **Apply**, and then **Confirm** to save the new application settings in the function app.
-1. Select **Save** to save the application settings. Changing app settings causes the app to restart.
+ The function app restarts.
-After the function app restarts, it's now connected to a secured storage account.
+After the function app finishes restarting, it connects to the secured storage account.
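As an alternative to editing the settings in the portal, the same three app settings can be applied in one step with the Azure CLI; the app name, share name, and connection string below are placeholder values:

```shell
# Point the function app at the new secured storage account.
# Updating app settings restarts the function app.
az functionapp config appsettings set \
  --resource-group my-rg \
  --name my-function-app \
  --settings \
    "AzureWebJobsStorage=<secured-storage-connection-string>" \
    "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=<secured-storage-connection-string>" \
    "WEBSITE_CONTENTSHARE=my-file-share"
```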
## Next steps
azure-functions Functions Bindings Notification Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-notification-hubs.md
Title: Notification Hubs bindings for Azure Functions
-description: Understand how to use Azure Notification Hub binding in Azure Functions.
+ Title: Azure Notification Hubs output bindings for Azure Functions
+description: Learn how to use Azure Notification Hub output bindings in Azure Functions.
+ Last updated : 06/24/2024 ms.devlang: csharp # ms.devlang: csharp, fsharp, javascript Previously updated : 11/21/2017
-# Notification Hubs output binding for Azure Functions
+# Azure Notification Hubs output bindings for Azure Functions
This article explains how to send push notifications by using [Azure Notification Hubs](../notification-hubs/notification-hubs-push-notification-overview.md) bindings in Azure Functions. Azure Functions supports output bindings for Notification Hubs.
-Azure Notification Hubs must be configured for the Platform Notifications Service (PNS) you want to use. To learn how to get push notifications in your client app from Notification Hubs, see [Getting started with Notification Hubs](../notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md) and select your target client platform from the drop-down list near the top of the page.
+You must configure Notification Hubs for the Platform Notifications Service (PNS) you want to use. For more information about how to get push notifications in your client app from Notification Hubs, see [Quickstart: Set up push notifications in a notification hub](../notification-hubs/configure-notification-hub-portal-pns-settings.md).
> [!IMPORTANT]
-> Google has [deprecated Google Cloud Messaging (GCM) in favor of Firebase Cloud Messaging (FCM)](https://developers.google.com/cloud-messaging/faq). This output binding doesn't support FCM. To send notifications using FCM, use the [Firebase API](https://firebase.google.com/docs/cloud-messaging/server#choosing-a-server-option) directly in your function or use [template notifications](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md).
+> Google has [deprecated Google Cloud Messaging (GCM) in favor of Firebase Cloud Messaging (FCM)](https://developers.google.com/cloud-messaging/faq). However, the Notification Hubs output binding doesn't support FCM. To send notifications using FCM, use the [Firebase API](https://firebase.google.com/docs/cloud-messaging/server#choosing-a-server-option) directly in your function or use [template notifications](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md).
-## Packages - Functions 1.x
+## Packages: Functions 1.x
[!INCLUDE [functions-runtime-1x-retirement-note](../../includes/functions-runtime-1x-retirement-note.md)]
The Notification Hubs bindings are provided in the [Microsoft.Azure.WebJobs.Exte
[!INCLUDE [functions-package](../../includes/functions-package.md)]
-## Packages - Functions 2.x and higher
+## Packages: Functions 2.x and higher
-This binding is not available in Functions 2.x and higher.
+The output binding isn't available in Functions 2.x and higher.
-## Example - template
+## Example: template
-The notifications you send can be native notifications or [template notifications](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). Native notifications target a specific client platform as configured in the `platform` property of the output binding. A template notification can be used to target multiple platforms.
+The notifications you send can be native notifications or [template notifications](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). A native notification targets a specific client platform, as configured in the `platform` property of the output binding. A template notification can be used to target multiple platforms.
-See the language-specific example:
+Template examples for each language:
-* [C# script - out parameter](#c-script-template-exampleout-parameter)
-* [C# script - asynchronous](#c-script-template-exampleasynchronous)
-* [C# script - JSON](#c-script-template-examplejson)
-* [C# script - library types](#c-script-template-examplelibrary-types)
+* [C# script: out parameter](#c-script-template-example-out-parameter)
+* [C# script: asynchronous](#c-script-template-example-asynchronous)
+* [C# script: JSON](#c-script-template-example-json)
+* [C# script: library types](#c-script-template-example-library-types)
* [F#](#f-template-example)
* [JavaScript](#javascript-template-example)
-### C# script template example - out parameter
+### C# script template example: out parameter
-This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains a `message` placeholder in the template.
+This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains a `message` placeholder in the template:
```cs using System;
private static IDictionary<string, string> GetTemplateProperties(string message)
} ```
-### C# script template example - asynchronous
+### C# script template example: asynchronous
-If you are using asynchronous code, out parameters are not allowed. In this case use `IAsyncCollector` to return your template notification. The following code is an asynchronous example of the code above.
+If you're using asynchronous code, out parameters aren't allowed. In this case, use `IAsyncCollector` to return your template notification. The following code is an asynchronous example of the previous example:
```cs using System;
private static IDictionary<string, string> GetTemplateProperties(string message)
} ```
-### C# script template example - JSON
+### C# script template example: JSON
-This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains a `message` placeholder in the template using a valid JSON string.
+This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains a `message` placeholder in the template using a valid JSON string:
```cs using System;
public static void Run(string myQueueItem, out string notification, TraceWriter
} ```
-### C# script template example - library types
+### C# script template example: library types
-This example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/).
+This example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/):
```cs #r "Microsoft.Azure.NotificationHubs"
private static TemplateNotification GetTemplateNotification(string message)
### F# template example
-This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains `location` and `message`.
+This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains `location` and `message`:
```fsharp let Run(myTimer: TimerInfo, notification: byref<IDictionary<string, string>>) =
let Run(myTimer: TimerInfo, notification: byref<IDictionary<string, string>>) =
### JavaScript template example
-This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains `location` and `message`.
+This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains `location` and `message`:
```javascript module.exports = async function (context, myTimer) {
module.exports = async function (context, myTimer) {
}; ```
-## Example - APNS native
+## Example: APNS native
-This C# script example shows how to send a native APNS notification.
+This C# script example shows how to send a native Apple Push Notification Service (APNS) notification:
```cs #r "Microsoft.Azure.NotificationHubs"
public static async Task Run(string myQueueItem, IAsyncCollector<Notification> n
{ log.Info($"C# Queue trigger function processed: {myQueueItem}");
- // In this example the queue item is a new user to be processed in the form of a JSON string with
+ // In this example, the queue item is a new user to be processed in the form of a JSON string with
// a "name" value. //
- // The JSON format for a native APNS notification is ...
+ // The JSON format for a native Apple Push Notification Service (APNS) notification is:
// { "aps": { "alert": "notification message" }} log.LogInformation($"Sending APNS notification of a new user");
public static async Task Run(string myQueueItem, IAsyncCollector<Notification> n
} ```
-## Example - WNS native
+## Example: WNS native
-This C# script example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/) to send a native WNS toast notification.
+This C# script example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/) to send a native Windows Push Notification Service (WNS) toast notification:
```cs #r "Microsoft.Azure.NotificationHubs"
public static async Task Run(string myQueueItem, IAsyncCollector<Notification> n
{ log.Info($"C# Queue trigger function processed: {myQueueItem}");
- // In this example the queue item is a new user to be processed in the form of a JSON string with
+ // In this example, the queue item is a new user to be processed in the form of a JSON string with
// a "name" value. // // The XML format for a native WNS toast notification is ...
public static async Task Run(string myQueueItem, IAsyncCollector<Notification> n
In [C# class libraries](functions-dotnet-class-library.md), use the [NotificationHub](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.NotificationHubs/NotificationHubAttribute.cs) attribute.
-The attribute's constructor parameters and properties are described in the [configuration](#configuration) section.
+The attribute's constructor parameters and properties are described in the [Configuration](#configuration) section.
## Configuration
-The following table explains the binding configuration properties that you set in the *function.json* file and the `NotificationHub` attribute:
+The following table lists the binding configuration properties that you set in the *function.json* file and the `NotificationHub` attribute:
|function.json property | Attribute property |Description| |||-|
-|**type** |n/a| Must be set to `notificationHub`. |
-|**direction** |n/a| Must be set to `out`. |
+|**type** |n/a| Set to `notificationHub`. |
+|**direction** |n/a| Set to `out`. |
|**name** |n/a| Variable name used in function code for the notification hub message. |
-|**tagExpression** |**TagExpression** | Tag expressions allow you to specify that notifications be delivered to a set of devices that have registered to receive notifications that match the tag expression. For more information, see [Routing and tag expressions](../notification-hubs/notification-hubs-tags-segment-push-message.md). |
-|**hubName** | **HubName** | Name of the notification hub resource in the Azure portal. |
-|**connection** | **ConnectionStringSetting** | The name of an app setting that contains a Notification Hubs connection string. The connection string must be set to the *DefaultFullSharedAccessSignature* value for your notification hub. See [Connection string setup](#connection-string-setup) later in this article.|
-|**platform** | **Platform** | The platform property indicates the client platform your notification targets. By default, if the platform property is omitted from the output binding, template notifications can be used to target any platform configured on the Azure Notification Hub. For more information on using templates in general to send cross platform notifications with an Azure Notification Hub, see [Templates](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). When set, **platform** must be one of the following values: <ul><li><code>apns</code>&mdash;Apple Push Notification Service. For more information on configuring the notification hub for APNS and receiving the notification in a client app, see [Sending push notifications to iOS with Azure Notification Hubs](../notification-hubs/xamarin-notification-hubs-ios-push-notification-apns-get-started.md).</li><li><code>adm</code>&mdash;[Amazon Device Messaging](https://developer.amazon.com/device-messaging). For more information on configuring the notification hub for ADM and receiving the notification in a Kindle app, see [Getting Started with Notification Hubs for Kindle apps](../notification-hubs/notification-hubs-android-push-notification-google-fcm-get-started.md).</li><li><code>wns</code>&mdash;[Windows Push Notification Services](/windows/uwp/design/shell/tiles-and-notifications/windows-push-notification-services--wns--overview) targeting Windows platforms. Windows Phone 8.1 and later is also supported by WNS. For more information, see [Getting started with Notification Hubs for Windows Universal Platform Apps](../notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md).</li><li><code>mpns</code>&mdash;[Microsoft Push Notification Service](/previous-versions/windows/apps/ff402558(v=vs.105)). This platform supports Windows Phone 8 and earlier Windows Phone platforms. 
For more information, see [Sending push notifications with Azure Notification Hubs on Windows Phone](../notification-hubs/notification-hubs-windows-mobile-push-notifications-mpns.md).</li></ul> |
+|**tagExpression** |**TagExpression** | Tag expressions allow you to specify that notifications be delivered to a set of devices that are registered to receive notifications matching the tag expression. For more information, see [Routing and tag expressions](../notification-hubs/notification-hubs-tags-segment-push-message.md). |
+|**hubName** | **HubName** | The name of the notification hub resource in the Azure portal. |
+|**connection** | **ConnectionStringSetting** | The name of an app setting that contains a Notification Hubs connection string. Set the connection string to the *DefaultFullSharedAccessSignature* value for your notification hub. For more information, see [Connection string setup](#connection-string-setup). |
+|**platform** | **Platform** | The platform property indicates the client platform your notification targets. By default, if the platform property is omitted from the output binding, template notifications can be used to target any platform configured on the Azure Notification Hub. For more information about using templates to send cross-platform notifications with an Azure Notification Hub, see [Notification Hubs templates](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). When **platform** is set, it must be one of the following values: <ul><li><code>apns</code>: Apple Push Notification Service. For more information on configuring the notification hub for APNS and receiving the notification in a client app, see [Send push notifications to iOS with Azure Notification Hubs](../notification-hubs/xamarin-notification-hubs-ios-push-notification-apns-get-started.md).</li><li><code>adm</code>: [Amazon Device Messaging](https://developer.amazon.com/device-messaging). For more information on configuring the notification hub for Amazon Device Messaging (ADM) and receiving the notification in a Kindle app, see [Send push notifications to Android devices using Firebase SDK](../notification-hubs/notification-hubs-android-push-notification-google-fcm-get-started.md).</li><li><code>wns</code>: [Windows Push Notification Services](/windows/uwp/design/shell/tiles-and-notifications/windows-push-notification-services--wns--overview) targeting Windows platforms. WNS also supports Windows Phone 8.1 and later. For more information, see [Send notifications to Universal Windows Platform apps using Azure Notification Hubs](../notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md).</li><li><code>mpns</code>: [Microsoft Push Notification Service](/previous-versions/windows/apps/ff402558(v=vs.105)). This platform supports Windows Phone 8 and earlier Windows Phone platforms. 
For more information, see [Send notifications to Universal Windows Platform apps using Azure Notification Hubs](../notification-hubs/notification-hubs-windows-mobile-push-notifications-mpns.md).</li></ul> |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]

### function.json file example
-Here's an example of a Notification Hubs binding in a *function.json* file.
+Here's an example of a Notification Hubs binding in a *function.json* file:
### Connection string setup
-To use a notification hub output binding, you must configure the connection string for the hub. You can select an existing notification hub or create a new one right from the *Integrate* tab in the Azure portal. You can also configure the connection string manually.
+To use a notification hub output binding, you must configure the connection string for the hub. You can select an existing notification hub or create a new one from the **Integrate** tab in the Azure portal. You can also configure the connection string manually.
To configure the connection string to an existing notification hub:
-1. Navigate to your notification hub in the [Azure portal](https://portal.azure.com), choose **Access policies**, and select the copy button next to the **DefaultFullSharedAccessSignature** policy. This copies the connection string for the *DefaultFullSharedAccessSignature* policy to your notification hub. This connection string lets your function send notification messages to the hub.
- ![Copy the notification hub connection string](./media/functions-bindings-notification-hubs/get-notification-hub-connection.png)
-1. Navigate to your function app in the Azure portal, choose **Application settings**, add a key such as **MyHubConnectionString**, paste the copied *DefaultFullSharedAccessSignature* for your notification hub as the value, and then click **Save**.
+1. Navigate to your notification hub in the [Azure portal](https://portal.azure.com), choose **Access policies**, and select the copy button next to the **DefaultFullSharedAccessSignature** policy.
-The name of this application setting is what goes in the output binding connection setting in *function.json* or the .NET attribute. See the [Configuration section](#configuration) earlier in this article.
+ The connection string for the *DefaultFullSharedAccessSignature* policy is copied to your notification hub. This connection string lets your function send notification messages to the hub.
+ :::image type="content" source="./media/functions-bindings-notification-hubs/get-notification-hub-connection.png" alt-text="Screenshot that shows how to copy the notification hub connection string." lightbox="./media/functions-bindings-notification-hubs/get-notification-hub-connection.png":::
+
+1. Navigate to your function app in the Azure portal, expand **Settings**, and then select **Environment variables**.
+
+1. From the **App setting** tab, select **+ Add** to add a key such as **MyHubConnectionString**. The **Name** of this app setting is the output binding connection setting in *function.json* or the .NET attribute. For more information, see [Configuration](#configuration).
+
+1. For the value, paste the copied *DefaultFullSharedAccessSignature* connection string from your notification hub, and then select **Apply**.
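As an illustration of how the setting name ties the binding to the connection string, a *function.json* output binding that references the `MyHubConnectionString` app setting from the steps above might look like the following sketch (the `name` and `hubName` values are placeholders):

```json
{
  "type": "notificationHub",
  "direction": "out",
  "name": "notification",
  "hubName": "my-notification-hub",
  "connection": "MyHubConnectionString"
}
```

At runtime, the Functions host resolves the `connection` property by looking up the app setting with that name, rather than reading a connection string embedded in the binding itself.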
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
|||
| Notification Hub | [Operations Guide](/rest/api/notificationhubs/) |
-## Next steps
+## Related content
-> [!div class="nextstepaction"]
-> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)
+* [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md)
azure-functions Functions Create Serverless Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-serverless-api.md
Title: Customize an HTTP endpoint in Azure Functions
-description: Learn how to customize an HTTP trigger endpoint in Azure Functions
+description: Learn how to customize an HTTP trigger endpoint in Azure Functions.
- Previously updated : 04/27/2020
+ Last updated : 06/27/2024
+#Customer intent: As a developer, I want to customize HTTP trigger endpoints in Azure Functions so that I can build a highly scalable API.
# Customize an HTTP endpoint in Azure Functions
-In this article, you learn how Azure Functions allows you to build highly scalable APIs. Azure Functions comes with a collection of built-in HTTP triggers and bindings, which make it easy to author an endpoint in various languages, including Node.js, C#, and more. In this article, you'll customize an HTTP trigger to handle specific actions in your API design. You'll also prepare for growing your API by integrating it with Azure Functions Proxies and setting up mock APIs. These tasks are accomplished on top of the Functions serverless compute environment, so you don't have to worry about scaling resources - you can just focus on your API logic.
+In this article, you learn how to build highly scalable APIs with Azure Functions by customizing an HTTP trigger to handle specific actions in your API design. Azure Functions includes a collection of built-in HTTP triggers and bindings, which make it easy to author an endpoint in various languages, including Node.js, C#, and more. You also prepare to grow your API by integrating it with Azure Functions proxies and setting up mock APIs. Because these tasks are accomplished on top of the Functions serverless compute environment, you don't need to be concerned about scaling resources. Instead, you can just focus on your API logic.
[!INCLUDE [functions-legacy-proxies-deprecation](../../includes/functions-legacy-proxies-deprecation.md)]
-## Prerequisites
+## Prerequisites
[!INCLUDE [Previous quickstart note](../../includes/functions-quickstart-previous-topics.md)]
-The resulting function will be used for the rest of this article.
+After you create this function app, you can follow the procedures in this article.
## Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
## Customize your HTTP function
-By default, your HTTP trigger function is configured to accept any HTTP method. You can also use the default URL, `https://<yourapp>.azurewebsites.net/api/<funcname>?code=<functionkey>`. In this section, you modify the function to respond only to GET requests with `/api/hello`.
+By default, your HTTP trigger function accepts any HTTP method. In this section, you modify the function to respond only to GET requests sent to `/api/hello`. You can use the default URL, `https://<yourapp>.azurewebsites.net/api/<funcname>?code=<functionkey>`:
1. Navigate to your function in the Azure portal. Select **Integration** in the left menu, and then select **HTTP (req)** under **Trigger**.
- :::image type="content" source="./media/functions-create-serverless-api/customizing-http.png" alt-text="Customizing an HTTP function":::
+ :::image type="content" source="./media/functions-create-serverless-api/customizing-http.png" alt-text="Screenshot that shows how to edit the HTTP trigger settings of a function." lightbox="./media/functions-create-serverless-api/customizing-http.png":::
1. Use the HTTP trigger settings as specified in the following table.
| Authorization level | Anonymous | Optional: Makes your function accessible without an API key |
| Selected HTTP methods | GET | Allows only selected HTTP methods to be used to invoke this function |
- You didn't include the `/api` base path prefix in the route template, because it's handled by a global setting.
+ Because a global setting handles the `/api` base path prefix in the route template, you don't need to set it here.
1. Select **Save**.
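With those settings applied, the trigger portion of the function's *function.json* would look roughly like this sketch (property names follow the HTTP trigger binding schema; the binding `name` value is a placeholder):

```json
{
  "type": "httpTrigger",
  "direction": "in",
  "name": "req",
  "authLevel": "anonymous",
  "methods": [ "get" ],
  "route": "hello"
}
```

Note that `route` is `hello`, not `api/hello`, because the `/api` prefix comes from the global route prefix setting.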
-For more information about customizing HTTP functions, see [Azure Functions HTTP bindings](./functions-bindings-http-webhook.md).
+For more information about customizing HTTP functions, see [Azure Functions HTTP triggers and bindings overview](./functions-bindings-http-webhook.md).
### Test your API

Next, test your function to see how it works with the new API surface:
-1. On the function page, select **Code + Test** from the left menu.
-1. Select **Get function URL** from the top menu and copy the URL. Confirm that it now uses the `/api/hello` path.
-
-1. Copy the URL into a new browser tab or your preferred REST client.
+1. On the **Function** page, select **Code + Test** from the left menu.
+
+1. Select **Get function URL** from the top menu and copy the URL. Confirm that your function now uses the `/api/hello` path.
+
+1. Copy the URL to a new browser tab or your preferred REST client. Browsers use GET by default.
- Browsers use GET by default.
-
-1. Add parameters to the query string in your URL.
+1. Add parameters to the query string in your URL. For example, `/api/hello/?name=John`.
- For example, `/api/hello/?name=John`.
-
-1. Press Enter to confirm that it's working. You should see the response, "*Hello John*."
+1. Press Enter to confirm that your function is working. You should see the response, "*Hello John*."
-1. You can also try calling the endpoint with another HTTP method to confirm that the function isn't executed. To do so, use a REST client, such as cURL, Postman, or Fiddler.
+1. You can also call the endpoint with another HTTP method to confirm that the function isn't executed. To do so, use a REST client, such as cURL, Postman, or Fiddler.
## Proxies overview
-In the next section, you'll surface your API through a proxy. Azure Functions Proxies allows you to forward requests to other resources. You define an HTTP endpoint just like with HTTP trigger. However, instead of writing code to execute when that endpoint is called, you provide a URL to a remote implementation. Doing so allows you to compose multiple API sources into a single API surface, which is easy for clients to consume, which is useful if you wish to build your API as microservices.
+In the next section, you surface your API through a proxy. Azure Functions proxies allow you to forward requests to other resources. You define an HTTP endpoint as you would with an HTTP trigger. However, instead of writing code to execute when that endpoint is called, you provide a URL to a remote implementation. Doing so allows you to compose multiple API sources into a single API surface, which is easier for clients to consume, and is useful if you wish to build your API as microservices.
A proxy can point to any HTTP resource, such as:
-- Azure Functions
+
+- Azure Functions
- API apps in [Azure App Service](../app-service/overview.md)
- Docker containers in [App Service on Linux](../app-service/overview.md#app-service-on-linux)
- Any other hosted API
-To learn more about proxies, see [Working with Azure Functions Proxies].
+To learn more about Azure Functions proxies, see [Work with legacy proxies].
> [!NOTE]
-> Proxies is available in Azure Functions versions 1.x to 3.x.
+> Azure Functions proxies is available in Azure Functions versions 1.x to 3.x.
## Create your first proxy
-In this section, you create a new proxy, which serves as a frontend to your overall API.
+In this section, you create a new proxy, which serves as a frontend to your overall API.
-### Setting up the frontend environment
+### Set up the frontend environment
-Repeat the steps to [Create a function app](./functions-get-started.md) to create a new function app in which you'll create your proxy. This new app's URL serves as the frontend for our API, and the function app you were previously editing serves as a backend.
+Repeat the steps in [Create a function app](./functions-create-function-app-portal.md#create-a-function-app) to create a new function app in which you create your proxy. This new app's URL serves as the frontend for our API, and the function app you previously edited serves as a backend:
1. Navigate to your new frontend function app in the portal.
-1. Select **Configuration** and choose **Application Settings**.
-1. Scroll down to **Application settings**, where key/value pairs are stored, and create a new setting with the key `HELLO_HOST`. Set its value to the host of your backend function app, such as `<YourBackendApp>.azurewebsites.net`. This value is part of the URL that you copied earlier when testing your HTTP function. You'll reference this setting in the configuration later.
+1. Expand **Settings**, and then select **Environment variables**.
+1. Select the **App settings** tab, where key/value pairs are stored.
+1. Select **+ Add** to create a new setting. Enter **HELLO_HOST** for its **Name** and set its **Value** to the host of your backend function app, such as `<YourBackendApp>.azurewebsites.net`.
- > [!NOTE]
- > App settings are recommended for the host configuration to prevent a hard-coded environment dependency for the proxy. Using app settings means that you can move the proxy configuration between environments, and the environment-specific app settings will be applied.
+ This value is part of the URL that you copied earlier when you tested your HTTP function. You later reference this setting in the configuration.
-1. Select **Save**.
+ > [!NOTE]
+ > It's recommended that you use app settings for the host configuration to prevent a hard-coded environment dependency for the proxy. Using app settings means that you can move the proxy configuration between environments, and the environment-specific app settings will be applied.
+
+1. Select **Apply** to save the new setting. On the **App settings** tab, select **Apply**, and then select **Confirm** to restart the function app.
-### Creating a proxy on the frontend
+### Create a proxy on the frontend
1. Navigate back to your front-end function app in the portal.
-1. In the left-hand menu, select **Proxies**, and then select **Add**.
+1. In the left-hand menu, expand **Functions**, select **Proxies**, and then select **Add**.
-1. On the **New Proxy** page, use the settings in the following table, and then select **Create**.
+1. On the **New proxy** page, use the settings in the following table, and then select **Create**.
| Field | Sample value | Description |
||||
| Route template | /api/remotehello | Determines what route is used to invoke this proxy |
| Backend URL | https://%HELLO_HOST%/api/hello | Specifies the endpoint to which the request should be proxied |
-
- :::image type="content" source="./media/functions-create-serverless-api/creating-proxy.png" alt-text="Creating a proxy":::
+ :::image type="content" source="./media/functions-create-serverless-api/creating-proxy.png" alt-text="Screenshot that shows the settings in the New proxy page." lightbox="./media/functions-create-serverless-api/creating-proxy.png":::
- Azure Functions Proxies doesn't provide the `/api` base path prefix, which must be included in the route template. The `%HELLO_HOST%` syntax references the app setting you created earlier. The resolved URL will point to your original function.
+ Because Azure Functions proxies don't provide the `/api` base path prefix, you must include it in the route template. The `%HELLO_HOST%` syntax references the app setting you created earlier. The resolved URL points to your original function.
1. Try out your new proxy by copying the proxy URL and testing it in the browser or with your favorite HTTP client:
- - For an anonymous function use:
+ - For an anonymous function, use:
`https://YOURPROXYAPP.azurewebsites.net/api/remotehello?name="Proxies"`.
- - For a function with authorization use:
+ - For a function with authorization, use:
`https://YOURPROXYAPP.azurewebsites.net/api/remotehello?code=YOURCODE&name="Proxies"`.

## Create a mock API
-Next, you'll use a proxy to create a mock API for your solution. This proxy allows client development to progress, without needing the backend fully implemented. Later in development, you can create a new function app, which supports this logic and redirect your proxy to it.
-
-To create this mock API, we'll create a new proxy, this time using the [App Service Editor](https://github.com/projectkudu/kudu/wiki/App-Service-Editor). To get started, navigate to your function app in the portal. Select **Platform features**, and under **Development Tools** find **App Service Editor**. The App Service Editor opens in a new tab.
-
-Select `proxies.json` in the left navigation. This file stores the configuration for all of your proxies. If you use one of the [Functions deployment methods](./functions-continuous-deployment.md), you maintain this file in source control. To learn more about this file, see [Proxies advanced configuration](./legacy-proxies.md#advanced-configuration).
-
-If you've followed along so far, your proxies.json should look like as follows:
-
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "HelloProxy": {
- "matchCondition": {
- "route": "/api/remotehello"
- },
- "backendUri": "https://%HELLO_HOST%/api/hello"
- }
- }
-}
-```
-
-Next, you'll add your mock API. Replace your proxies.json file with the following code:
-
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "HelloProxy": {
- "matchCondition": {
- "route": "/api/remotehello"
- },
- "backendUri": "https://%HELLO_HOST%/api/hello"
- },
- "GetUserByName" : {
- "matchCondition": {
- "methods": [ "GET" ],
- "route": "/api/users/{username}"
- },
- "responseOverrides": {
- "response.statusCode": "200",
- "response.headers.Content-Type" : "application/json",
- "response.body": {
- "name": "{username}",
- "description": "Awesome developer and master of serverless APIs",
- "skills": [
- "Serverless",
- "APIs",
- "Azure",
- "Cloud"
- ]
- }
- }
- }
- }
-}
-```
-
-This code adds a new proxy, `GetUserByName`, without the `backendUri` property. Instead of calling another resource, it modifies the default response from Proxies using a response override. Request and response overrides can also be used with a backend URL. This technique is useful when proxying to a legacy system, where you might need to modify headers, query parameters, and so on. To learn more about request and response overrides, see [Modifying requests and responses in Proxies](./legacy-proxies.md).
-
-Test your mock API by calling the `<YourProxyApp>.azurewebsites.net/api/users/{username}` endpoint using a browser or your favorite REST client. Be sure to replace _{username}_ with a string value representing a username.
-
-## Next steps
-
-In this article, you learned how to build and customize an API on Azure Functions. You also learned how to bring multiple APIs, including mocks, together as a unified API surface. You can use these techniques to build out APIs of any complexity, all while running on the serverless compute model provided by Azure Functions.
-
-The following references may be helpful as you develop your API further:
-- [Azure Functions HTTP bindings](./functions-bindings-http-webhook.md)
-- [Working with Azure Functions Proxies]
-- [Documenting an Azure Functions API (preview)](./functions-openapi-definition.md)
-
-[Create your first function]: ./functions-get-started.md
-[Working with Azure Functions Proxies]: ./legacy-proxies.md
+Next, you use a proxy to create a mock API for your solution. This proxy allows client development to progress, without needing to fully implement the backend. Later in development, you can create a new function app that supports this logic, and redirect your proxy to it:
+
+1. To create this mock API, create a new proxy, this time using [App Service Editor](https://github.com/projectkudu/kudu/wiki/App-Service-Editor). To get started, navigate to your function app in the Azure portal. Select **Platform features**, and then select **App Service Editor** under **Development Tools**.
+
+ The App Service Editor opens in a new tab.
+
+1. Select `proxies.json` in the left pane. This file stores the configuration for all of your proxies. If you use one of the [Functions deployment methods](./functions-continuous-deployment.md), you maintain this file in source control. For more information about this file, see [Proxies advanced configuration](./legacy-proxies.md#advanced-configuration).
+
+ Your *proxies.json* file should appear as follows:
+
+ ```json
+ {
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "HelloProxy": {
+ "matchCondition": {
+ "route": "/api/remotehello"
+ },
+ "backendUri": "https://%HELLO_HOST%/api/hello"
+ }
+ }
+ }
+ ```
+
+1. Add your mock API. Replace your *proxies.json* file with the following code:
+
+ ```json
+ {
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "HelloProxy": {
+ "matchCondition": {
+ "route": "/api/remotehello"
+ },
+ "backendUri": "https://%HELLO_HOST%/api/hello"
+ },
+ "GetUserByName" : {
+ "matchCondition": {
+ "methods": [ "GET" ],
+ "route": "/api/users/{username}"
+ },
+ "responseOverrides": {
+ "response.statusCode": "200",
+ "response.headers.Content-Type" : "application/json",
+ "response.body": {
+ "name": "{username}",
+ "description": "Awesome developer and master of serverless APIs",
+ "skills": [
+ "Serverless",
+ "APIs",
+ "Azure",
+ "Cloud"
+ ]
+ }
+ }
+ }
+ }
+ }
+ ```
+
+ This code adds a new proxy, `GetUserByName`, which omits the `backendUri` property. Instead of calling another resource, it modifies the default response from Azure Functions proxies by using a response override. You can also use request and response overrides with a backend URL. This technique is useful when you proxy to a legacy system, where you might need to modify headers, query parameters, and so on. For more information about request and response overrides, see [Modify requests and responses](./legacy-proxies.md#modify-requests-responses).
+
+1. Test your mock API by calling the `<YourProxyApp>.azurewebsites.net/api/users/{username}` endpoint with a browser or your favorite REST client. Replace *{username}* with a string value that represents a username.
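For example, a GET request to `/api/users/Alex` (where `Alex` is an arbitrary username) should return the override body from the `GetUserByName` proxy with the `{username}` route parameter substituted:

```json
{
  "name": "Alex",
  "description": "Awesome developer and master of serverless APIs",
  "skills": [ "Serverless", "APIs", "Azure", "Cloud" ]
}
```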
+
+## Related content
+
+In this article, you learned how to build and customize an API with Azure Functions. You also learned how to bring multiple APIs, including mock APIs, together as a unified API surface. You can use these techniques to build out APIs of any complexity, all while running on the serverless compute model provided by Azure Functions.
+
+For more information about developing your API:
+
+- [Azure Functions HTTP triggers and bindings overview](./functions-bindings-http-webhook.md)
+- [Work with legacy proxies](./legacy-proxies.md)
+- [Expose serverless APIs from HTTP endpoints using Azure API Management](./functions-openapi-definition.md)
azure-functions Functions Identity Based Connections Tutorial 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-based-connections-tutorial-2.md
Title: Use identity-based connections with Azure Functions triggers and bindings
+description: Learn how to use identity-based connections instead of secrets when connecting to a Service Bus queue using Azure Functions.
-description: Learn how to use identity-based connections instead of connection strings when connecting to a Service Bus queue using Azure Functions.
Previously updated : 10/20/2021
Last updated : 06/27/2024
ms.devlang: csharp
-#Customer intent: As a function developer, I want to learn how to use managed identities so that I can avoid having to handle connection strings in my application settings.
+
+#Customer intent: As a function developer, I want to learn how to use managed identities so that I can avoid needing to handle secrets or connection strings in my application settings.
# Tutorial: Use identity-based connections instead of secrets with triggers and bindings
-This tutorial shows you how to configure Azure Functions to connect to Azure Service Bus queues using managed identities instead of secrets stored in the function app settings. The tutorial is a continuation of the [Create a function app without default storage secrets in its definition][previous tutorial] tutorial. To learn more about identity-based connections, see [Configure an identity-based connection.](functions-reference.md#configure-an-identity-based-connection).
+This tutorial shows you how to configure Azure Functions to connect to Azure Service Bus queues by using managed identities, instead of secrets stored in the function app settings. The tutorial is a continuation of the [Create a function app without default storage secrets in its definition][previous tutorial] tutorial. To learn more about identity-based connections, see [Configure an identity-based connection](functions-reference.md#configure-an-identity-based-connection).
-While the procedures shown work generally for all languages, this tutorial currently supports C# class library functions on Windows specifically.
+While the procedures shown work generally for all languages, this tutorial currently supports C# class library functions on Windows specifically.
-In this tutorial, you'll learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"] >
-> * Create a Service Bus namespace and queue.
-> * Configure your function app with managed identity
-> * Create a role assignment granting that identity permission to read from the Service Bus queue
-> * Create and deploy a function app with a Service Bus trigger.
-> * Verify your identity-based connection to Service Bus
+> - Create a Service Bus namespace and queue.
+> - Configure your function app with a managed identity.
+> - Create a role assignment granting that identity permission to read from the Service Bus queue.
+> - Create and deploy a function app with a Service Bus trigger.
+> - Verify your identity-based connection to the Service Bus.
## Prerequisite Complete the previous tutorial: [Create a function app with identity-based connections][previous tutorial].
-## Create a service bus and queue
+## Create a Service Bus namespace and queue
1. In the [Azure portal](https://portal.azure.com), choose **Create a resource (+)**.
-1. On the **Create a resource** page, select **Integration** > **Service Bus**.
+1. On the **Create a resource** page, search for and select **Service Bus**, and then select **Create**.
1. On the **Basics** page, use the following table to configure the Service Bus namespace settings. Use the default values for the remaining options.
1. In your new Service Bus namespace, select **+ Queue** to add a queue.
-1. Type `myinputqueue` as the new queue's name and select **Create**.
+1. Enter **myinputqueue** as the new queue's name and select **Create**.
-Now, that you have a queue, you will add a role assignment to the managed identity of your function app.
+Now that you have a queue, you can add a role assignment to the managed identity of your function app.
## Configure your Service Bus trigger with a managed identity
-To use Service Bus triggers with identity-based connections, you will need to add the **Azure Service Bus Data Receiver** role assignment to the managed identity in your function app. This role is required when using managed identities to trigger off of your service bus namespace. You can also add your own account to this role, which makes it possible to connect to the service bus namespace during local testing.
+To use Service Bus triggers with identity-based connections, you need to add the **Azure Service Bus Data Receiver** role assignment to the managed identity in your function app. This role is required when using managed identities to trigger off of your Service Bus namespace. You can also add your own account to this role, which makes it possible to connect to the Service Bus namespace during local testing.
> [!NOTE]
-> Role requirements for using identity-based connections vary depending on the service and how you are connecting to it. Needs vary across triggers, input bindings, and output bindings. For more details on specific role requirements, please refer to the trigger and binding documentation for the service.
+> Role requirements for using identity-based connections vary depending on the service and how you are connecting to it. Needs vary across triggers, input bindings, and output bindings. For more information about specific role requirements, see the trigger and binding documentation for the service.
-1. In your service bus namespace that you just created, select **Access Control (IAM)**. This is where you can view and configure who has access to the resource.
+1. In your Service Bus namespace that you created, select **Access control (IAM)**. This page is where you can view and configure who has access to the resource.
-1. Click **Add** and select **add role assignment**.
+1. Select **+ Add** and select **Add role assignment**.
-1. Search for **Azure Service Bus Data Receiver**, select it, and click **Next**.
+1. Search for **Azure Service Bus Data Receiver**, select it, and then select **Next**.
1. On the **Members** tab, under **Assign access to**, choose **Managed identity**.
-1. Click **Select members** to open the **Select managed identities** panel.
+1. Select **Select members** to open the **Select managed identities** panel.
1. Confirm that the **Subscription** is the one in which you created the resources earlier.
-1. In the **Managed identity** selector, choose **Function App** from the **System-assigned managed identity** category. The label "Function App" may have a number in parentheses next to it, indicating the number of apps in the subscription with system-assigned identities.
+1. In the **Managed identity** selector, choose **Function App** from the **System-assigned managed identity** category. The **Function App** label might have a number in parentheses next to it, indicating the number of apps in the subscription with system-assigned identities.
1. Your app should appear in a list below the input fields. If you don't see it, you can use the **Select** box to filter the results with your app's name.
-1. Click on your application. It should move down into the **Selected members** section. Click **Select**.
+1. Select your application. It should move down into the **Selected members** section. Select **Select**.
-1. Back on the **Add role assignment** screen, click **Review + assign**. Review the configuration, and then click **Review + assign**.
+1. Back on the **Add role assignment** screen, select **Review + assign**. Review the configuration, and then select **Review + assign**.
-You've granted your function app access to the service bus namespace using managed identities.
+You've granted your function app access to the Service Bus namespace using managed identities.
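The same role assignment can also be scripted with the Azure CLI. The following is a minimal sketch, assuming a function app with a system-assigned identity; the angle-bracket names are placeholders you replace with your own resources:

```shell
# Placeholders: substitute the function app, resource group, and Service Bus
# namespace names for the resources you created earlier.
principalId=$(az functionapp identity show \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --query principalId --output tsv)

namespaceId=$(az servicebus namespace show \
  --name <SERVICE_BUS_NAMESPACE> \
  --resource-group <RESOURCE_GROUP> \
  --query id --output tsv)

# Grant the system-assigned identity the Data Receiver role on the namespace.
az role assignment create \
  --role "Azure Service Bus Data Receiver" \
  --assignee-object-id "$principalId" \
  --assignee-principal-type ServicePrincipal \
  --scope "$namespaceId"
```

Scoping the assignment to the namespace resource ID limits the identity to receiving from that namespace only.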
-## Connect to Service Bus in your function app
+## Connect to the Service Bus in your function app
1. In the portal, search for the function app you created in the [previous tutorial], or browse to it in the **Function App** page.
-1. In your function app, select **Configuration** under **Settings**.
+1. In your function app, expand **Settings**, and then select **Environment variables**.
-1. In **Application settings**, select **+ New application setting** to create the new setting in the following table.
+1. In the **App settings** tab, select **+ Add** to create a setting. Use the information in the following table to enter the **Name** and **Value** for the new setting:
| Name | Value | Description |
| --- | --- | --- |
| **ServiceBusConnection__fullyQualifiedNamespace** | `<SERVICE_BUS_NAMESPACE>.servicebus.windows.net` | This setting connects your function app to the Service Bus using an identity-based connection instead of secrets. |
-1. After you create the two settings, select **Save** > **Confirm**.
+1. Select **Apply**, and then select **Apply** and **Confirm** to save your changes and restart the function app.
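Equivalently, the setting can be created from the Azure CLI; this is a sketch with placeholder names:

```shell
# Placeholders: substitute your function app, resource group, and Service Bus
# namespace. The __fullyQualifiedNamespace suffix marks this as an
# identity-based connection rather than a connection string.
az functionapp config appsettings set \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --settings "ServiceBusConnection__fullyQualifiedNamespace=<SERVICE_BUS_NAMESPACE>.servicebus.windows.net"
```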
> [!NOTE]
-> When using [Azure App Configuration](../../articles/azure-app-configuration/quickstart-azure-functions-csharp.md) or [Key Vault](../key-vault/general/overview.md) to provide settings for Managed Identity connections, setting names should use a valid key separator such as `:` or `/` in place of the `__` to ensure names are resolved correctly.
->
+> When you use [Azure App Configuration](../../articles/azure-app-configuration/quickstart-azure-functions-csharp.md) or [Key Vault](../key-vault/general/overview.md) to provide settings for Managed Identity connections, setting names should use a valid key separator, such as `:` or `/`, in place of the `__` to ensure names are resolved correctly.
+>
> For example, `ServiceBusConnection:fullyQualifiedNamespace`.
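The renaming rule in the note above is mechanical: each `__` becomes the separator. As a quick shell illustration:

```shell
# Illustrative only: map a Functions-style setting name to the form
# expected by App Configuration or Key Vault.
setting="ServiceBusConnection__fullyQualifiedNamespace"
echo "${setting//__/:}"   # prints ServiceBusConnection:fullyQualifiedNamespace
```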
-Now that you've prepared the function app to connect to the service bus namespace using a managed identity, you can add a new function that uses a Service Bus trigger to your local project.
--
+Now that you've prepared the function app to connect to the Service Bus namespace using a managed identity, you can add a new function that uses a Service Bus trigger to your local project.
## Add a Service Bus triggered function
Now that you've prepared the function app to connect to the service bus namespac
```console
func init LocalFunctionProj --dotnet
```
-1. Navigate into the project folder:
+1. Navigate to the project folder:
```console
cd LocalFunctionProj
```
-1. In the root project folder, run the following commands:
+1. In the root project folder, run the following command:
```command
dotnet add package Microsoft.Azure.WebJobs.Extensions.ServiceBus --version 5.2.0
```
- This replaces the default version of the Service Bus extension package with a version that supports managed identities.
+ This command replaces the default version of the Service Bus extension package with a version that supports managed identities.
1. Run the following command to add a Service Bus triggered function to the project:
Now that you've prepared the function app to connect to the service bus namespac
```console
func new --name ServiceBusTrigger --template ServiceBusQueueTrigger
```
- This adds the code for a new Service Bus trigger and a reference to the extension package. You need to add a service bus namespace connection setting for this trigger.
+ This command adds the code for a new Service Bus trigger and a reference to the extension package. You need to add a Service Bus namespace connection setting for this trigger.
-1. Open the new ServiceBusTrigger.cs project file and replace the `ServiceBusTrigger` class with the following code:
+1. Open the new *ServiceBusTrigger.cs* project file and replace the `ServiceBusTrigger` class with the following code:
```csharp public static class ServiceBusTrigger
Now that you've prepared the function app to connect to the service bus namespac
} ```
- This code sample updates the queue name to `myinputqueue`, which is the same name as you queue you created earlier. It also sets the name of the Service Bus connection to `ServiceBusConnection`. This is the Service Bus namespace used by the identity-based connection `ServiceBusConnection__fullyQualifiedNamespace` you configured in the portal.
+ This code sample updates the queue name to `myinputqueue`, which is the same name as the queue you created earlier. It also sets the name of the Service Bus connection to `ServiceBusConnection`. This name is the Service Bus namespace used by the identity-based connection `ServiceBusConnection__fullyQualifiedNamespace` you configured in the portal.
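Assembled from that description, the replaced class looks roughly like the following sketch. The attribute comes from the Service Bus extension package added earlier and `ILogger` from `Microsoft.Extensions.Logging`; the exact signature may differ from what the template generated for you:

```csharp
public static class ServiceBusTrigger
{
    [FunctionName("ServiceBusTrigger")]
    public static void Run(
        // "myinputqueue" matches the queue created earlier; "ServiceBusConnection"
        // is the prefix of the ServiceBusConnection__fullyQualifiedNamespace setting.
        [ServiceBusTrigger("myinputqueue", Connection = "ServiceBusConnection")] string myQueueItem,
        ILogger log)
    {
        log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}
```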
> [!NOTE]
-> If you try to run your functions now using `func start` you'll receive an error. This is because you don't have an identity-based connection defined locally. If you want to run your function locally, set the app setting `ServiceBusConnection__fullyQualifiedNamespace` in `local.settings.json` as you did in [the previous section](#connect-to-service-bus-in-your-function-app). In addition, you'll need to assign the role to your developer identity. For more details, please refer to the [local development with identity-based connections documentation](./functions-reference.md#local-development-with-identity-based-connections).
+> If you try to run your functions now using `func start`, you'll receive an error. This is because you don't have an identity-based connection defined locally. If you want to run your function locally, set the app setting `ServiceBusConnection__fullyQualifiedNamespace` in `local.settings.json` as you did in [the previous section](#connect-to-the-service-bus-in-your-function-app). In addition, you need to assign the role to your developer identity. For more information, see [local development with identity-based connections](./functions-reference.md#local-development-with-identity-based-connections).
> [!NOTE]
> When using [Azure App Configuration](../../articles/azure-app-configuration/quickstart-azure-functions-csharp.md) or [Key Vault](../key-vault/general/overview.md) to provide settings for Managed Identity connections, setting names should use a valid key separator such as `:` or `/` in place of the `__` to ensure names are resolved correctly.
->
+>
> For example, `ServiceBusConnection:fullyQualifiedNamespace`.

## Publish the updated project
Now that you've prepared the function app to connect to the service bus namespac
```console
az functionapp deploy -n FUNCTION_APP_NAME -g RESOURCE_GROUP_NAME --src-path PATH_TO_ZIP
```
-Now that you have updated the function app with the new trigger, you can verify that it works using the identity.
+Now that you've updated the function app with the new trigger, you can verify that it works using the identity.
## Validate your changes
Now that you have updated the function app with the new trigger, you can verify
1. In your instance, select **Live Metrics** under **Investigate**.
-1. Keep the previous tab open, and open the Azure portal in a new tab. In your new tab, navigate to your Service Bus namespace, select **Queues** from the left blade.
+1. Keep the previous tab open, and open the Azure portal in a new tab. In your new tab, navigate to your Service Bus namespace, select **Queues** from the left menu.
1. Select your queue named `myinputqueue`.
-1. Select **Service Bus Explorer** from the left blade.
+1. Select **Service Bus Explorer** from the left menu.
1. Send a test message.

1. Select your open **Live Metrics** tab and see the Service Bus queue execution.
-Congratulations! You have successfully set up your Service Bus queue trigger with a managed identity!
+Congratulations! You have successfully set up your Service Bus queue trigger with a managed identity.
[!INCLUDE [clean-up-section-portal](../../includes/clean-up-section-portal.md)]
Congratulations! You have successfully set up your Service Bus queue trigger wit
In this tutorial, you created a function app with identity-based connections.
-Use the following links to learn more Azure Functions with identity-based connections:
--- [Managed identity in Azure Functions](../app-service/overview-managed-identity.md)-- [Identity-based connections in Azure Functions](./functions-reference.md#configure-an-identity-based-connection)-- [Functions documentation for local development](./functions-reference.md#local-development-with-identity-based-connections)
+Advance to the next article to learn how to manage identity.
+> [!div class="nextstepaction"]
+> [Managed identity in Azure Functions](../app-service/overview-managed-identity.md)
[previous tutorial]: ./functions-identity-based-connections-tutorial.md
azure-functions Functions Identity Based Connections Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-based-connections-tutorial.md
Title: Create a function app without default storage secrets in its definition-
+description: Learn how to remove storage connection strings from your function app definition and use identity-based connections instead.
-description: Learn how to remove Storage connection strings from your function app definition.
Previously updated : 10/20/2021
-#Customer intent: As a function developer, I want to learn how to use managed identities so that I can avoid having to handle connection strings in my application settings.
Last updated : 06/27/2024++
+#Customer intent: As a function developer, I want to learn how to use managed identities so that I can avoid needing to handle secrets or connection strings in my application settings.
# Tutorial: Create a function app that connects to Azure services using identities instead of secrets

This tutorial shows you how to configure a function app using Microsoft Entra identities instead of secrets or connection strings, where possible. Using identities helps you avoid accidentally leaking sensitive secrets and can provide better visibility into how data is accessed. To learn more about identity-based connections, see [configure an identity-based connection](functions-reference.md#configure-an-identity-based-connection).
-While the procedures shown work generally for all languages, this tutorial currently supports C# class library functions on Windows specifically.
+While the procedures shown work generally for all languages, this tutorial currently supports C# class library functions on Windows specifically.
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Create a function app in Azure using an ARM template
-> * Enable both system-assigned and user-assigned managed identities on the function app
-> * Create role assignments that give permissions to other resources
-> * Move secrets that can't be replaced with identities into Azure Key Vault
-> * Configure an app to connect to the default host storage using its managed identity
+>
+> - Create a function app in Azure using an ARM template
+> - Enable both system-assigned and user-assigned managed identities on the function app
+> - Create role assignments that give permissions to other resources
+> - Move secrets that can't be replaced with identities into Azure Key Vault
+> - Configure an app to connect to the default host storage using its managed identity
-After you complete this tutorial, you should complete the follow-on tutorial that shows how to [use identity-based connections instead of secrets with triggers and bindings].
+After you complete this tutorial, you should complete the follow-on tutorial that shows how to [use identity-based connections instead of secrets with triggers and bindings].
## Prerequisites
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ The [.NET 6.0 SDK](https://dotnet.microsoft.com/download)
+- The [.NET 6.0 SDK](https://dotnet.microsoft.com/download)
-+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x.
+- The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x.
## Why use identity?
-Managing secrets and credentials is a common challenge for teams of all sizes. Secrets need to be secured against theft or accidental disclosure, and they may need to be periodically rotated. Many Azure services allow you to instead use an identity in [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) to authenticate clients and check against permissions which can be modified and revoked quickly. This allows for greater control over application security with less operational overhead. An identity could be a human user, such as the developer of an application, or a running application in Azure with a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
+Managing secrets and credentials is a common challenge for teams of all sizes. Secrets need to be secured against theft or accidental disclosure, and they might need to be periodically rotated. Many Azure services allow you to instead use an identity in [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) to authenticate clients and check against permissions, which can be modified and revoked quickly. Doing so allows for greater control over application security with less operational overhead. An identity could be a human user, such as the developer of an application, or a running application in Azure with a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
-Some services do not support Microsoft Entra authentication, so secrets may still be required by your applications. However, these can be stored in [Azure Key Vault](../key-vault/general/overview.md), which helps simplify the management lifecycle for your secrets. Access to a key vault is also controlled with identities.
+Because some services don't support Microsoft Entra authentication, your applications might still require secrets in certain cases. However, these secrets can be stored in [Azure Key Vault](../key-vault/general/overview.md), which helps simplify the management lifecycle for your secrets. Access to a key vault is also controlled with identities.
-By understanding how to use identities instead of secrets when you can and to use Key Vault when you can't, you'll be able to reduce risk, decrease operational overhead, and generally improve the security posture for your applications.
+By understanding how to use identities instead of secrets when you can, and to use Key Vault when you can't, you reduce risk, decrease operational overhead, and generally improve the security posture for your applications.
## Create a function app that uses Key Vault for necessary secrets
-Azure Files is an example of a service that does not yet support Microsoft Entra authentication for SMB file shares. Azure Files is the default file system for Windows deployments on Premium and Consumption plans. While we could [remove Azure Files entirely](./storage-considerations.md#create-an-app-without-azure-files), this introduces limitations you may not want. Instead, you will move the Azure Files connection string into Azure Key Vault. That way it is centrally managed, with access controlled by the identity.
+Azure Files is an example of a service that doesn't yet support Microsoft Entra authentication for Server Message Block (SMB) file shares. Azure Files is the default file system for Windows deployments on Premium and Consumption plans. While we could [remove Azure Files entirely](./storage-considerations.md#create-an-app-without-azure-files), doing so introduces limitations you might not want. Instead, you move the Azure Files connection string into Azure Key Vault. That way it's centrally managed, with access controlled by the identity.
### Create an Azure Key Vault
-First you will need a key vault to store secrets in. You will configure it to use [Azure role-based access control (RBAC)](../role-based-access-control/overview.md) for determining who can read secrets from the vault.
+First you need a key vault to store secrets in. You configure it to use [Azure role-based access control (RBAC)](../role-based-access-control/overview.md) for determining who can read secrets from the vault.
1. In the [Azure portal](https://portal.azure.com), choose **Create a resource (+)**.
First you will need a key vault to store secrets in. You will configure it to us
| Option | Suggested value | Description |
| --- | --- | --- |
| **Subscription** | Your subscription | Subscription under which this new function app is created. |
- | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you'll create your function app. |
- | **Key vault name** | Globally unique name | Name that identifies your new key vault. The vault name must only contain alphanumeric characters and dashes and cannot start with a number. |
+ | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you create your function app. |
+ | **Key vault name** | Globally unique name | Name that identifies your new key vault. The vault name must only contain alphanumeric characters and dashes and can't start with a number. |
| **Pricing Tier** | Standard | Options for billing. Standard is sufficient for this tutorial. |
| **Region** | Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
- Use the default selections for the "Recovery options" sections.
+ Use the default selections for the "Recovery options" sections.
-1. Make a note of the name you used, as you will need it later.
+1. Make a note of the name you used, for use later.
-1. Click **Next: Access Policy** to navigate to the **Access Policy** tab.
+1. Select **Next: Access Policy** to navigate to the **Access Policy** tab.
1. Under **Permission model**, choose **Azure role-based access control**.
-1. Select **Review + create**. Review the configuration, and then click **Create**.
+1. Select **Review + create**. Review the configuration, and then select **Create**.
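If you prefer scripting, a key vault with the RBAC permission model can also be created with a single CLI command; this sketch assumes a placeholder vault name:

```shell
# Placeholder: choose a globally unique vault name (alphanumerics and dashes,
# not starting with a number). RBAC replaces the access-policy model.
az keyvault create \
  --name <VAULT_NAME> \
  --resource-group myResourceGroup \
  --enable-rbac-authorization true
```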
### Set up an identity and permissions for the app
-In order to use Azure Key Vault, your app will need to have an identity that can be granted permission to read secrets. This app will use a user-assigned identity so that the permissions can be set up before the app is even created. You can learn more about managed identities for Azure Functions in the [How to use managed identities in Azure Functions](../app-service/overview-managed-identity.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json) topic.
+In order to use Azure Key Vault, your app needs to have an identity that can be granted permission to read secrets. This app uses a user-assigned identity so that the permissions can be set up before the app is even created. For more information about managed identities for Azure Functions, see [How to use managed identities in Azure Functions](../app-service/overview-managed-identity.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json).
1. In the [Azure portal](https://portal.azure.com), choose **Create a resource (+)**.
In order to use Azure Key Vault, your app will need to have an identity that can
| Option | Suggested value | Description |
| --- | --- | --- |
| **Subscription** | Your subscription | Subscription under which this new function app is created. |
- | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you'll create your function app. |
+ | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you create your function app. |
| **Region** | Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
| **Name** | Globally unique name | Name that identifies your new user-assigned identity. |
-1. Select **Review + create**. Review the configuration, and then click **Create**.
+1. Select **Review + create**. Review the configuration, and then select **Create**.
-1. When the identity is created, navigate to it in the portal. Select **Properties**, and make note of the **Resource ID**, as you will need it later.
+1. When the identity is created, navigate to it in the portal. Select **Properties**, and make note of the **Resource ID** for use later.
-1. Select **Azure Role Assignments**, and click **Add role assignment (Preview)**.
+1. Select **Azure Role Assignments**, and select **Add role assignment (Preview)**.
-1. In the **Add role assignment (Preview)** page, use options as shown in the table below.
+1. In the **Add role assignment (Preview)** page, use options as shown in the following table.
| Option | Suggested value | Description |
| --- | --- | --- |
In order to use Azure Key Vault, your app will need to have an identity that can
1. Select **Save**. It might take a minute or two for the role to show up when you refresh the role assignments list for the identity.
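The same assignment can be made with the CLI. This sketch assumes the role being granted is **Key Vault Secrets User** (enough to read secret values under the RBAC permission model); the principal ID and vault resource ID are placeholders from the earlier steps:

```shell
# Placeholders: the user-assigned identity's principal ID and the
# key vault's full resource ID.
az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee-object-id <IDENTITY_PRINCIPAL_ID> \
  --assignee-principal-type ServicePrincipal \
  --scope <KEY_VAULT_RESOURCE_ID>
```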
-The identity will now be able to read secrets stored in the key vault. Later in the tutorial, you will add additional role assignments for different purposes.
+The identity is now able to read secrets stored in the key vault. Later in the tutorial, you add additional role assignments for different purposes.
### Generate a template for creating a function app
-The portal experience for creating a function app does not interact with Azure Key Vault, so you will need to generate and edit and Azure Resource Manager template. You can then use this template to create your function app referencing the Azure Files connection string from your key vault.
+Because the portal experience for creating a function app doesn't interact with Azure Key Vault, you need to generate and edit an Azure Resource Manager template. You can then use this template to create your function app referencing the Azure Files connection string from your key vault.
> [!IMPORTANT]
> Don't create the function app until after you edit the ARM template. The Azure Files configuration needs to be set up at app creation time.
The portal experience for creating a function app does not interact with Azure K
| Option | Suggested value | Description |
| --- | --- | --- |
| **Subscription** | Your subscription | Subscription under which this new function app is created. |
- | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you'll create your function app. |
+ | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you create your function app. |
| **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. |
| **Publish** | Code | Choose to publish code files or a Docker container. |
| **Runtime stack** | .NET | This tutorial uses .NET. |
| **Region** | Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
-1. Select **Review + create**. Your app uses the default values on the **Hosting** and **Monitoring** page. You're welcome to review the default options, and they'll be included in the ARM template that you generate.
+1. Select **Review + create**. Your app uses the default values on the **Hosting** and **Monitoring** page. Review the default options, which are included in the ARM template that you generate.
1. Instead of creating your function app here, choose **Download a template for automation**, which is to the right of the **Next** button.

1. In the template page, select **Deploy**, then in the **Custom deployment** page, select **Edit template**.
- :::image type="content" source="./media/functions-identity-connections-tutorial/function-app-portal-template-deploy-button.png" alt-text="Screenshot of where to find the deploy button at the top of the template screen.":::
+ :::image type="content" source="./media/functions-identity-connections-tutorial/function-app-portal-template-deploy-button.png" alt-text="Screenshot that shows the Deploy button at the top of the Template page.":::
### Edit the template
-You will now edit the template to store the Azure Files connection string in Key Vault and allow your function app to reference it. Make sure that you have the following values from the earlier sections before proceeding:
+You now edit the template to store the Azure Files connection string in Key Vault and allow your function app to reference it. Make sure that you have the following values from the earlier sections before proceeding:
- The resource ID of the user-assigned identity
- The name of your key vault
You will now edit the template to store the Azure Files connection string in Key
> [!NOTE]
> If you were to create a full template for automation, you would want to include definitions for the identity and role assignment resources, with the appropriate `dependsOn` clauses. This would replace the earlier steps which used the portal. Consult the [Azure Resource Manager guidance](../azure-resource-manager/templates/syntax.md) and the documentation for each service.
-1. In the editor, find where the `resources` array begins. Before the function app definition, add the following section which puts the Azure Files connection string into Key Vault. Substitute "VAULT_NAME" with the name of your key vault.
+1. In the editor, find where the `resources` array begins. Before the function app definition, add the following section, which puts the Azure Files connection string into Key Vault. Substitute "VAULT_NAME" with the name of your key vault.
```json {
You will now edit the template to store the Azure Files connection string in Key
"[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]" ] },
- ```
-
-1. In the definition for the function app resource (which has `type` set to `Microsoft.Web/sites`), add `Microsoft.KeyVault/vaults/VAULT_NAME/secrets/azurefilesconnectionstring` to the `dependsOn` array. Again substitute "VAULT_NAME" with the name of your key vault. This makes it so your app will not be created before that secret is defined. The `dependsOn` array should look like the following example.
+ ```
+
+1. In the definition for the function app resource (which has `type` set to `Microsoft.Web/sites`), add `Microsoft.KeyVault/vaults/VAULT_NAME/secrets/azurefilesconnectionstring` to the `dependsOn` array. Again, substitute "VAULT_NAME" with the name of your key vault. Doing so prevents your app from being created before the secret is defined. The `dependsOn` array should look like the following example:
```json {
You will now edit the template to store the Azure Files connection string in Key
} ```
- This `identity` block also sets up a system-assigned identity which you will use later in this tutorial.
+ This `identity` block also sets up a system-assigned identity, which you use later in this tutorial.
-1. Add the `keyVaultReferenceIdentity` property to the `properties` object for the function app as in the below example. Substitute "IDENTITY_RESOURCE_ID" for the resource ID of your user-assigned identity.
+1. Add the `keyVaultReferenceIdentity` property to the `properties` object for the function app, as in the following example. Substitute "IDENTITY_RESOURCE_ID" for the resource ID of your user-assigned identity.
```json {
You will now edit the template to store the Azure Files connection string in Key
} ```
- You need this configuration because an app could have multiple user-assigned identities configured. Whenever you want to use a user-assigned identity, you have to specify which one through some ID. That isn't true of system-assigned identities, since an app will only ever have one. Many features that use managed identity assume they should use the system-assigned one by default.
+ You need this configuration because an app could have multiple user-assigned identities configured. Whenever you want to use a user-assigned identity, you must specify it with an ID. System-assigned identities don't need to be specified this way, because an app can only ever have one. Many features that use managed identity assume they should use the system-assigned one by default.
-1. Now find the JSON objects that defines the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting, which should look like the following example:
+1. Find the JSON object that defines the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting, which should look like the following example:
```json {
You will now edit the template to store the Azure Files connection string in Key
1. Make sure that your create options, including **Resource Group**, are still correct and select **Review + create**.
-1. After your template validates, make a note of your **Storage Account Name**, since you'll use this account later. Finally, select **Create** to create your Azure resources and deploy your code to the function app.
+1. After your template validates, make a note of your **Storage Account Name**, since you'll use this account later. Finally, select **Create** to create your Azure resources and deploy your code to the function app.
-1. After deployment completes, select **Go to resource group** and then select the new function app.
+1. After deployment completes, select **Go to resource group** and then select the new function app.
Congratulations! You've successfully created your function app to reference the Azure Files connection string from Azure Key Vault.
-Whenever your app would need to add a reference to a secret, you would just need to define a new application setting pointing to the value stored in Key Vault. You can learn more about this in [Key Vault references for Azure Functions](../app-service/app-service-key-vault-references.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json).
+Whenever your app would need to add a reference to a secret, you would just need to define a new application setting pointing to the value stored in Key Vault. For more information, see [Key Vault references for Azure Functions](../app-service/app-service-key-vault-references.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json).
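For illustration, a Key Vault reference app setting takes the following shape; the setting name, vault name, and secret name here are placeholders, not values from this tutorial:

```json
{
  "name": "MyCustomSetting",
  "value": "@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/MySecret/)",
  "slotSetting": false
}
```

When the app's managed identity has permission to read secrets from the vault, App Service resolves the reference at startup and your function code sees only the plain secret value.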
> [!TIP]
> The [Application Insights connection string](../azure-monitor/app/sdk-connection-string.md) and its included instrumentation key are not considered secrets and can be retrieved from App Insights using [Reader](../role-based-access-control/built-in-roles.md#reader) permissions. You do not need to move them into Key Vault, although you certainly can.

## Use managed identity for AzureWebJobsStorage
-Next you will use the system-assigned identity you configured in the previous steps for the `AzureWebJobsStorage` connection. `AzureWebJobsStorage` is used by the Functions runtime and by several triggers and bindings to coordinate between multiple running instances. It is required for your function app to operate, and like Azure Files, it is configured with a connection string by default when you create a new function app.
+Next, you use the system-assigned identity you configured in the previous steps for the `AzureWebJobsStorage` connection. `AzureWebJobsStorage` is used by the Functions runtime and by several triggers and bindings to coordinate between multiple running instances. It's required for your function app to operate, and like Azure Files, is configured with a connection string by default when you create a new function app.
### Grant the system-assigned identity access to the storage account
-Similar to the steps you took before with the user-assigned identity and your key vault, you will now create a role assignment granting the system-assigned identity access to your storage account.
+Similar to the steps you previously followed with the user-assigned identity and your key vault, you now create a role assignment granting the system-assigned identity access to your storage account.
1. In the [Azure portal](https://portal.azure.com), navigate to the storage account that was created with your function app earlier.
-1. Select **Access Control (IAM)**. This is where you can view and configure who has access to the resource.
+1. Select **Access Control (IAM)**. This page is where you can view and configure who has access to the resource.
-1. Click **Add** and select **add role assignment**.
+1. Select **Add** and select **add role assignment**.
-1. Search for **Storage Blob Data Owner**, select it, and click **Next**
+1. Search for **Storage Blob Data Owner**, select it, and then select **Next**.
1. On the **Members** tab, under **Assign access to**, choose **Managed Identity**.
-1. Click **Select members** to open the **Select managed identities** panel.
+1. Select **Select members** to open the **Select managed identities** panel.
1. Confirm that the **Subscription** is the one in which you created the resources earlier.
-1. In the **Managed identity** selector, choose **Function App** from the **System-assigned managed identity** category. The label "Function App" may have a number in parentheses next to it, indicating the number of apps in the subscription with system-assigned identities.
+1. In the **Managed identity** selector, choose **Function App** from the **System-assigned managed identity** category. The **Function App** label might have a number in parentheses next to it, indicating the number of apps in the subscription with system-assigned identities.
1. Your app should appear in a list below the input fields. If you don't see it, you can use the **Select** box to filter the results with your app's name.
-1. Click on your application. It should move down into the **Selected members** section. Click **Select**.
+1. Select your application. It should move down into the **Selected members** section. Choose **Select**.
+
+1. On the **Add role assignment** screen, select **Review + assign**. Review the configuration, and then select **Review + assign**.
-1. Back on the **Add role assignment** screen, click **Review + assign**. Review the configuration, and then click **Review + assign**.
-
> [!TIP]
> If you intend to use the function app for a blob-triggered function, you will need to repeat these steps for the **Storage Account Contributor** and **Storage Queue Data Contributor** roles over the account used by AzureWebJobsStorage. To learn more, see [Blob trigger identity-based connections](./functions-bindings-storage-blob-trigger.md#identity-based-connections).

### Edit the AzureWebJobsStorage configuration
-Next you will update your function app to use its system-assigned identity when it uses the blob service for host storage.
+Next, you update your function app to use its system-assigned identity when it uses the blob service for host storage.
> [!IMPORTANT] > The `AzureWebJobsStorage` configuration is used by some triggers and bindings, and those extensions must be able to use identity-based connections, too. Apps that use blob triggers or event hub triggers may need to update those extensions. Because no functions have been defined for this app, there isn't a concern yet. To learn more about this requirement, see [Connecting to host storage with an identity](./functions-reference.md#connecting-to-host-storage-with-an-identity).
Next you will update your function app to use its system-assigned identity when
1. In the [Azure portal](https://portal.azure.com), navigate to your function app.
-1. Under **Settings**, select **Configuration**.
+1. In your function app, expand **Settings**, and then select **Environment variables**.
-1. Select the **Edit** button next to the **AzureWebJobsStorage** application setting, and change it based on the following values.
+1. In the **App settings** tab, select the **AzureWebJobsStorage** app setting, and edit it according to the following table:
| Option | Suggested value | Description |
| --- | --- | --- |
- | **Name** | AzureWebJobsStorage__accountName | Update the name from **AzureWebJobsStorage** to the exact name `AzureWebJobsStorage__accountName`. This setting tells the host to use the identity instead of looking for a stored secret. The new setting uses a double underscore (`__`), which is a special character in application settings. |
+ | **Name** | AzureWebJobsStorage__accountName | Change the name from **AzureWebJobsStorage** to the exact name `AzureWebJobsStorage__accountName`. This setting instructs the host to use the identity instead of searching for a stored secret. The new setting uses a double underscore (`__`), which is a special character in application settings. |
| **Value** | Your account name | Update the name from the connection string to just your **StorageAccountName**. |
- This configuration will let the system know that it should use an identity to connect to the resource.
+ This configuration tells the system to use an identity to connect to the resource.
-1. Select **OK** and then **Save** > **Continue** to save your changes.
+1. Select **Apply**, and then select **Apply** and **Confirm** to save your changes and restart the function app.
-You've removed the storage connection string requirement for AzureWebJobsStorage by configuring your app to instead connect to blobs using managed identities.
+You've now removed the storage connection string requirement for AzureWebJobsStorage by configuring your app to instead connect to blobs using managed identities.
> [!NOTE]
> The `__accountName` syntax is unique to the AzureWebJobsStorage connection and cannot be used for other storage connections. To learn to define other connections, check the reference for each trigger and binding your app uses.
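As a sketch, the resulting app setting in the portal's **Advanced edit** JSON view would look like the following; the storage account name is a placeholder:

```json
[
  {
    "name": "AzureWebJobsStorage__accountName",
    "value": "mystorageaccount",
    "slotSetting": false
  }
]
```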
-## Next steps
+## Next steps
This tutorial showed how to create a function app without storing secrets in its configuration.
-In the next tutorial, you'll learn how to use identity in trigger and binding connections.
-
+Advance to the next tutorial to learn how to use identities in trigger and binding connections.
> [!div class="nextstepaction"]
-> [Use identity-based connections instead of secrets with triggers and bindings]
-
-[Use identity-based connections instead of secrets with triggers and bindings]: ./functions-identity-based-connections-tutorial-2.md
+> [Use identity-based connections with triggers and bindings](./functions-identity-based-connections-tutorial-2.md)
azure-functions Functions Integrate Storage Queue Output Binding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-integrate-storage-queue-output-binding.md
Title: Add messages to an Azure Storage queue using Functions
-description: Use Azure Functions to create a serverless function that is invoked by an HTTP request and creates a message in an Azure Storage queue.
-
+description: Use Azure Functions to create a serverless function that's triggered by an HTTP request and creates a message in an Azure Storage queue.
+ Previously updated : 04/24/2020 Last updated : 06/19/2024 ms.devlang: csharp # ms.devlang: csharp, javascript
+#Customer intent: As a function developer, I want to learn how to use Azure Functions to create a serverless function that's triggered by an HTTP request so that I can create a message in an Azure Storage queue.
+ # Add messages to an Azure Storage queue using Functions
-In Azure Functions, input and output bindings provide a declarative way to make data from external services available to your code. In this quickstart, you use an output binding to create a message in a queue when a function is triggered by an HTTP request. You use Azure storage container to view the queue messages that your function creates.
+In Azure Functions, input and output bindings provide a declarative way to make data from external services available to your code. In this article, you use an output binding to create a message in a queue when an HTTP request triggers a function. You use an Azure storage container to view the queue messages that your function creates.
## Prerequisites
-To complete this quickstart:
- An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Follow the directions in [Create your first function from the Azure portal](./functions-get-started.md) and don't do the **Clean up resources** step. That quickstart creates the function app and function that you use here.
+- Follow the directions in [Create your first function in the Azure portal](./functions-create-function-app-portal.md), omitting the **Clean up resources** step, to create the function app and function to use in this article.
-## <a name="add-binding"></a>Add an output binding
+## Add an output binding
-In this section, you use the portal UI to add a queue storage output binding to the function you created earlier. This binding makes it possible to write minimal code to create a message in a queue. You don't have to write code for tasks such as opening a storage connection, creating a queue, or getting a reference to a queue. The Azure Functions runtime and queue output binding take care of those tasks for you.
+In this section, you use the portal UI to add an Azure Queue Storage output binding to the function you created in the prerequisites. This binding makes it possible to write minimal code to create a message in a queue. You don't need to write code for such tasks as opening a storage connection, creating a queue, or getting a reference to a queue. The Azure Functions runtime and queue output binding take care of those tasks for you.
-1. In the Azure portal, open the function app page for the function app that you created in [Create your first function from the Azure portal](./functions-get-started.md). To do open the page, search for and select **Function App**. Then, select your function app.
+1. In the Azure portal, search for and select the function app that you created in [Create your first function from the Azure portal](./functions-get-started.md).
-1. Select your function app, and then select the function that you created in that earlier quickstart.
+1. In your function app, select the function that you created.
1. Select **Integration**, and then select **+ Add output**.
- :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/function-create-output-binding.png" alt-text="Create an output binding for your function." border="true":::
+ :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/function-create-output-binding.png" alt-text="Screenshot that shows how to create an output binding for your function." lightbox="./media/functions-integrate-storage-queue-output-binding/function-create-output-binding.png":::
+
+1. Select the **Azure Queue Storage** binding type and add the settings as specified in the table that follows this screenshot:
-1. Select the **Azure Queue Storage** binding type, and add the settings as specified in the table that follows this screenshot:
+ :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/function-create-output-binding-details.png" alt-text="Screenshot that shows how to add a Queue Storage output binding to a function in the Azure portal.":::
- :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/function-create-output-binding-details.png" alt-text="Add a Queue storage output binding to a function in the Azure portal." border="true":::
-
- | Setting | Suggested value | Description |
+ | Setting | Suggested value | Description |
| | - | -- |
- | **Message parameter name** | outputQueueItem | The name of the output binding parameter. |
- | **Queue name** | outqueue | The name of the queue to connect to in your Storage account. |
- | **Storage account connection** | AzureWebJobsStorage | You can use the storage account connection already being used by your function app, or create a new one. |
+ | **Message parameter name** | outputQueueItem | The name of the output binding parameter. |
+ | **Queue name** | outqueue | The name of the queue to connect to in your storage account. |
+ | **Storage account connection** | AzureWebJobsStorage | You can use the existing storage account connection used by your function app or create a new one. |
1. Select **OK** to add the binding.
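Behind the portal UI, these settings correspond to a binding definition like the following in the function's configuration. This is a sketch that uses the suggested values from the table:

```json
{
  "type": "queue",
  "direction": "out",
  "name": "outputQueueItem",
  "queueName": "outqueue",
  "connection": "AzureWebJobsStorage"
}
```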
Now that you have an output binding defined, you need to update the code to use
## Add code that uses the output binding
-In this section, you add code that writes a message to the output queue. The message includes the value that is passed to the HTTP trigger in the query string. For example, if the query string includes `name=Azure`, the queue message will be *Name passed to the function: Azure*.
+In this section, you add code that writes a message to the output queue. The message includes the value passed to the HTTP trigger in the query string. For example, if the query string includes `name=Azure`, the queue message is *Name passed to the function: Azure*.
1. In your function, select **Code + Test** to display the function code in the editor.
-1. Update the function code depending on your function language:
+1. Update the function code, according to your function language:
# [C\#](#tab/csharp)
- Add an **outputQueueItem** parameter to the method signature as shown in the following example.
+ Add an **outputQueueItem** parameter to the method signature as shown in the following example:
```cs public static async Task<IActionResult> Run(HttpRequest req,
In this section, you add code that writes a message to the output queue. The mes
} ```
- In the body of the function just before the `return` statement, add code that uses the parameter to create a queue message.
+ In the body of the function, just before the `return` statement, add code that uses the parameter to create a queue message:
```cs outputQueueItem.Add("Name passed to the function: " + name);
In this section, you add code that writes a message to the output queue. The mes
# [JavaScript](#tab/nodejs)
- Add code that uses the output binding on the `context.bindings` object to create a queue message.
+ To create a queue message, add code that uses the output binding on the `context.bindings` object:
```javascript context.bindings.outputQueueItem = "Name passed to the function: " +
In this section, you add code that writes a message to the output queue. The mes
-1. Select **Save** to save changes.
+1. Select **Save** to save your changes.
## Test the function

1. After the code changes are saved, select **Test**.
-1. Confirm that your test matches the image below and select **Run**.
- :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/functions-test-run-function.png" alt-text="Test the queue storage binding in the Azure portal." border="true":::
+1. Confirm that your test matches this screenshot, and then select **Run**.
- Notice that the **Request body** contains the `name` value *Azure*. This value appears in the queue message that is created when the function is invoked.
-
- As an alternative to selecting **Run** here, you can call the function by entering a URL in a browser and specifying the `name` value in the query string. The browser method is shown in the [previous quickstart](./functions-get-started.md).
+ :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/functions-test-run-function.png" alt-text="Screenshot that shows how to test the Queue Storage binding in the Azure portal." lightbox="./media/functions-integrate-storage-queue-output-binding/functions-test-run-function.png":::
-1. Check the logs to make sure that the function succeeded.
+ Notice that the **Request body** contains the `name` value *Azure*. This value appears in the queue message created when the function is invoked.
-A new queue named **outqueue** is created in your Storage account by the Functions runtime when the output binding is first used. You'll use storage account to verify that the queue and a message in it were created.
+ As an alternative to selecting **Run**, you can call the function by entering a URL in a browser and specifying the `name` value in the query string. This browser method is shown in [Create your first function from the Azure portal](./functions-get-started.md).
-### Find the storage account connected to AzureWebJobsStorage
+1. Check the logs to make sure that the function succeeded.
+
+   A new queue named **outqueue** is created in your storage account by the Functions runtime when the output binding is first used. You use the storage account to verify that the queue and a message in it were created.
+### Find the storage account connected to AzureWebJobsStorage
-1. Go to your function app and select **Configuration**.
+1. In your function app, expand **Settings**, and then select **Environment variables**.
-1. Under **Application settings**, select **AzureWebJobsStorage**.
+1. In the **App settings** tab, select **AzureWebJobsStorage**.
- :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/function-find-storage-account.png" alt-text="Screenshot shows the Configuration page with AzureWebJobsStorage selected." border="true":::
+ :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/function-find-storage-account.png" alt-text="Screenshot that shows the Configuration page with AzureWebJobsStorage selected." lightbox="./media/functions-integrate-storage-queue-output-binding/function-find-storage-account.png":::
1. Locate and make note of the account name.
- :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/function-storage-account-name.png" alt-text="Locate the storage account connected to AzureWebJobsStorage." border="true":::
+ :::image type="content" source="./media/functions-integrate-storage-queue-output-binding/function-storage-account-name.png" alt-text="Screenshot that shows how to locate the storage account connected to AzureWebJobsStorage." lightbox="./media/functions-integrate-storage-queue-output-binding/function-storage-account-name.png":::
### Examine the output queue
-1. In the resource group for your function app, select the storage account that you're using for this quickstart.
+1. In the resource group for your function app, select the storage account that you're using.
-1. Under **Queue service**, select **Queues** and select the queue named **outqueue**.
+1. Under **Queue service**, select **Queues**, and select the queue named **outqueue**.
The queue contains the message that the queue output binding created when you ran the HTTP-triggered function. If you invoked the function with the default `name` value of *Azure*, the queue message is *Name passed to the function: Azure*.
-1. Run the function again, and you'll see a new message appear in the queue.
+1. Run the function again.
-## Clean up resources
+ A new message appears in the queue.
-## Next steps
+## Related content
-In this quickstart, you added an output binding to an existing function. For more information about binding to Queue storage, see [Azure Functions Storage queue bindings](functions-bindings-storage-queue.md).
+In this article, you added an output binding to an existing function. For more information about binding to Queue Storage, see [Queue Storage trigger and bindings](functions-bindings-storage-queue.md).
[!INCLUDE [Next steps note](../../includes/functions-quickstart-next-steps-2.md)]
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
namespace Company.Function
} ```
-An HTTP trigger for the migrated version might like the following example:
+An HTTP trigger for the migrated version might look like the following example:
# [.NET 8](#tab/net8)
azure-functions Run Functions From Deployment Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md
Title: Run your functions from a package file in Azure
-description: Have the Azure Functions runtime run your functions by mounting a deployment package file that contains your function app project files.
+description: Learn how to configure Azure Functions to run your function app from a deployment package file that contains your function app project.
+ Previously updated : 02/05/2022 Last updated : 06/28/2024
+# Customer intent: As a function developer, I want to understand how to run my function app from a deployment package file so that I can make my function app run faster and easier to update.
# Run your functions from a package file in Azure
This article describes the benefits of running your functions from a package. It
## Benefits of running from a package file
-There are several benefits to running from a package file:
+There are several benefits to running functions from a package file:
+ Reduces the risk of file copy locking issues.
+ Can be deployed to a production app (with restart).
-+ You can be certain of the files that are running in your app.
++ Verifies the files that are running in your app.
+ Improves the performance of [Azure Resource Manager deployments](functions-infrastructure-as-code.md).
-+ May reduce cold-start times, particularly for JavaScript functions with large npm package trees.
++ Reduces cold-start times, particularly for JavaScript functions with large npm package trees.

For more information, see [this announcement](https://github.com/Azure/app-service-announcements/issues/84).

## Enable functions to run from a package
-To enable your function app to run from a package, add a `WEBSITE_RUN_FROM_PACKAGE` setting to your function app settings. The `WEBSITE_RUN_FROM_PACKAGE` setting can have one of the following values:
+To enable your function app to run from a package, add a `WEBSITE_RUN_FROM_PACKAGE` app setting to your function app. The `WEBSITE_RUN_FROM_PACKAGE` app setting can have one of the following values:
| Value | Description |
| --- | --- |
| **`1`** | Indicates that the function app runs from a local package file deployed in the `c:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder of your function app. |
| **`<URL>`** | Sets a URL that is the remote location of the specific package file you want to run. Required for functions apps running on Linux in a Consumption plan. |
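For example, running from an on-site package corresponds to this app setting in the portal's **Advanced edit** JSON view (a sketch):

```json
[
  {
    "name": "WEBSITE_RUN_FROM_PACKAGE",
    "value": "1",
    "slotSetting": false
  }
]
```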
-The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` options for deployment to a specific operating system and hosting plan:
+The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` values for deployment to a specific operating system and hosting plan:
| Hosting plan | Windows | Linux |
| --- | --- | --- |
The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` options

## General considerations
## General considerations
-+ The package file must be .zip formatted. Tar and gzip formats aren't currently supported.
++ The package file must be .zip formatted. Tar and gzip formats aren't supported.
+ [Zip deployment](#integration-with-zip-deployment) is recommended.
+ When deploying your function app to Windows, you should set `WEBSITE_RUN_FROM_PACKAGE` to `1` and publish with zip deployment.
-+ When you run from a package, the `wwwroot` folder becomes read-only and you'll receive an error when writing files to this directory. Files are also read-only in the Azure portal.
-+ The maximum size for a deployment package file is currently 1 GB.
-+ You can't use local cache when running from a deployment package.
-+ If your project needs to use remote build, don't use the `WEBSITE_RUN_FROM_PACKAGE` app setting. Instead add the `SCM_DO_BUILD_DURING_DEPLOYMENT=true` deployment customization app setting. For Linux, also add the `ENABLE_ORYX_BUILD=true` setting. To learn more, see [Remote build](functions-deployment-technologies.md#remote-build).
++ When you run from a package, the `wwwroot` folder is read-only and you receive an error if you write files to this directory. Files are also read-only in the Azure portal.
++ The maximum size for a deployment package file is 1 GB.
++ You can't use the local cache when running from a deployment package.
++ If your project needs to use remote build, don't use the `WEBSITE_RUN_FROM_PACKAGE` app setting. Instead, add the `SCM_DO_BUILD_DURING_DEPLOYMENT=true` deployment customization app setting. For Linux, also add the `ENABLE_ORYX_BUILD=true` setting. For more information, see [Remote build](functions-deployment-technologies.md#remote-build).

> [!NOTE]
-> WEBSITE_RUN_FROM_PACKAGE does not work with MSDeploy as described [here](https://github.com/projectkudu/kudu/wiki/MSDeploy-VS.-ZipDeploy). You will receive an error during deployment like `ARM-MSDeploy Deploy Failed`. Change /MSDeploy to /ZipDeploy and this error will be resolved.
+> The `WEBSITE_RUN_FROM_PACKAGE` app setting does not work with MSDeploy as described in [MSDeploy VS. ZipDeploy](https://github.com/projectkudu/kudu/wiki/MSDeploy-VS.-ZipDeploy). You will receive an error during deployment, such as `ARM-MSDeploy Deploy Failed`. To resolve this error, change `/MSDeploy` to `/ZipDeploy`.
-### Adding the WEBSITE_RUN_FROM_PACKAGE setting
+### Add the WEBSITE_RUN_FROM_PACKAGE setting
[!INCLUDE [Function app settings](../../includes/functions-app-settings.md)]
-## Using WEBSITE_RUN_FROM_PACKAGE = 1
+## Use WEBSITE_RUN_FROM_PACKAGE = 1
This section provides information about how to run your function app from a local package file.

### Considerations for deploying from an on-site package
-+ Using an on-site package is the recommended option for running from the deployment package, except on Linux hosted in a Consumption plan.
+<a name="troubleshooting"></a>
+
++ Using an on-site package is the recommended option for running from the deployment package, except when running on Linux hosted in a Consumption plan.
+ [Zip deployment](#integration-with-zip-deployment) is the recommended way to upload a deployment package to your site.
+ When not using zip deployment, make sure the `c:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder has a file named `packagename.txt`. This file contains only the name, without any whitespace, of the package file in this folder that's currently running.

### Integration with zip deployment
-[Zip deployment][Zip deployment for Azure Functions] is a feature of Azure App Service that lets you deploy your function app project to the `wwwroot` directory. The project is packaged as a .zip deployment file. The same APIs can be used to deploy your package to the `c:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder.
+Zip deployment is a feature of Azure App Service that lets you deploy your function app project to the `wwwroot` directory. The project is packaged as a .zip deployment file. The same APIs can be used to deploy your package to the `c:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder.
-With the `WEBSITE_RUN_FROM_PACKAGE` app setting value of `1`, the zip deployment APIs copy your package to the `c:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder.
+When you set the `WEBSITE_RUN_FROM_PACKAGE` app setting value to `1`, the zip deployment APIs copy your package to the `c:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder.
> [!NOTE]
-> When a deployment occurs, a restart of the function app is triggered. Function executions currently running during the deploy are terminated. Please review [Improve the performance and reliability of Azure Functions](performance-reliability.md#write-functions-to-be-stateless) to learn how to write stateless and defensive functions.
+> When a deployment occurs, a restart of the function app is triggered. Function executions currently running during the deployment are terminated. For information about how to write stateless and defensive functions, see [Write functions to be stateless](performance-reliability.md#write-functions-to-be-stateless).
-## Using WEBSITE_RUN_FROM_PACKAGE = URL
+## Use WEBSITE_RUN_FROM_PACKAGE = URL
-This section provides information about how to run your function app from a package deployed to a URL endpoint. This option is the only one supported for running from a package on Linux hosted in a Consumption plan.
+This section provides information about how to run your function app from a package deployed to a URL endpoint. This option is the only one supported for running from a Linux-hosted package with a Consumption plan.
### Considerations for deploying from a URL
-<a name="troubleshooting"></a>
-
-+ Function apps running on Windows experience a slight increase in [cold start time](event-driven-scaling.md#cold-start) when the application package is deployed to a URL endpoint via `WEBSITE_RUN_FROM_PACKAGE = <URL>`.
++ Function apps running on Windows experience a slight increase in [cold-start time](event-driven-scaling.md#cold-start) when the application package is deployed to a URL endpoint via `WEBSITE_RUN_FROM_PACKAGE = <URL>`.
+ When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package.
+ The Functions runtime must have permissions to access the package URL.
-+ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../storage/common/storage-sas-overview.md) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
++ Don't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [shared access signature (SAS)](../storage/common/storage-sas-overview.md) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
+ You must maintain any SAS URLs used for deployment. When a SAS expires, the package can no longer be deployed. In this case, you must generate a new SAS and update the setting in your function app. You can eliminate this management burden by [using a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity).
+ When running on a Premium plan, make sure to [eliminate cold starts](functions-premium-plan.md#eliminate-cold-starts).
-+ When running on a Dedicated plan, make sure you've enabled [Always On](dedicated-plan.md#always-on).
-+ You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to blob containers in your storage account.
++ When you're running on a Dedicated plan, ensure you enable [Always On](dedicated-plan.md#always-on).
++ You can use [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to blob containers in your storage account.

### Manually uploading a package to Blob Storage
-To deploy a zipped package when using the URL option, you must create a .zip compressed deployment package and upload it to the destination. This example deploys to a container in Blob Storage.
+To deploy a zipped package when using the URL option, you must create a .zip compressed deployment package and upload it to the destination. The following procedure deploys to a container in Blob Storage:
1. Create a .zip package for your project using the utility of your choice.
-1. In the [Azure portal](https://portal.azure.com), search for your storage account name or browse for it in storage accounts.
+1. In the [Azure portal](https://portal.azure.com), search for your storage account name or browse for it in the storage accounts list.
1. In the storage account, select **Containers** under **Data storage**.
1. Select **+ Container** to create a new Blob Storage container in your account.
-1. In the **New container** page, provide a **Name** (for example, "deployments"), make sure the **Public access level** is **Private**, and select **Create**.
+1. In the **New container** page, provide a **Name** (for example, *deployments*), ensure the **Anonymous access level** is **Private**, and then select **Create**.
-1. Select the container you created, select **Upload**, browse to the location of the .zip file you created with your project, and select **Upload**.
+1. Select the container you created, select **Upload**, browse to the location of the .zip file you created with your project, and then select **Upload**.
-1. After the upload completes, choose your uploaded blob file, and copy the URL. You may need to generate a SAS URL if you aren't [using an identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity)
+1. After the upload completes, choose your uploaded blob file, and copy the URL. If you aren't [using a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity), you might need to generate a SAS URL.
1. Search for your function app or browse for it in the **Function App** page.
-1. In your function app, select **Configurations** under **Settings**.
+1. In your function app, expand **Settings**, and then select **Environment variables**.
-1. In the **Application Settings** tab, select **New application setting**
+1. In the **App settings** tab, select **+ Add**.
-1. Enter the value `WEBSITE_RUN_FROM_PACKAGE` for the **Name**, and paste the URL of your package in Blob Storage as the **Value**.
+1. Enter the value `WEBSITE_RUN_FROM_PACKAGE` for the **Name**, and paste the URL of your package in Blob Storage for the **Value**.
-1. Select **OK**. Then select **Save** > **Continue** to save the setting and restart the app.
+1. Select **Apply**, and then select **Apply** and **Confirm** to save the setting and restart the function app.
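The same manual upload and configuration can also be scripted. The following is a sketch only: it assumes the Azure CLI (`az`) is installed and signed in, and the storage account, container, function app, and resource group names are placeholders.

```shell
# Sketch: upload a deployment package and point the app at it.
# mystorageacct, deployments, myfunctionapp, and myrg are placeholder names.

# Create the .zip deployment package from the project root.
zip -r functionapp.zip . -x "local.settings.json"

# Upload the package to a private blob container.
az storage container create --account-name mystorageacct --name deployments
az storage blob upload --account-name mystorageacct --container-name deployments \
    --name functionapp.zip --file functionapp.zip

# Set WEBSITE_RUN_FROM_PACKAGE to the blob URL (append a SAS token unless the
# app uses a managed identity to read the blob).
az functionapp config appsettings set --name myfunctionapp --resource-group myrg \
    --settings "WEBSITE_RUN_FROM_PACKAGE=https://mystorageacct.blob.core.windows.net/deployments/functionapp.zip"
```

Setting the app setting through the CLI restarts the app the same way saving it in the portal does.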
-Now you can run your function in Azure to verify that deployment has succeeded using the deployment package .zip file.
-
-The following shows a function app configured to run from a .zip file hosted in Azure Blob storage:
-
-![WEBSITE_RUN_FROM_ZIP app setting](./media/run-functions-from-deployment-package/run-from-zip-app-setting-portal.png)
+Now you can run your function in Azure to verify that deployment of the deployment package .zip file was successful.
### Fetch a package from Azure Blob Storage using a managed identity

[!INCLUDE [Run from package via Identity](../../includes/app-service-run-from-package-via-identity.md)]
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Continuous deployment for Azure Functions](functions-continuous-deployment.md)
+## Related content
-[Zip deployment for Azure Functions]: deployment-zip-push.md
++ [Continuous deployment for Azure Functions](functions-continuous-deployment.md)
azure-functions Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/supported-languages.md
Custom handlers are lightweight web servers that receive events from the Azure F
Starting with version 2.x, the runtime is designed to offer [language extensibility](https://github.com/Azure/azure-webjobs-sdk-script/wiki/Language-Extensibility). The JavaScript and Java languages in the 2.x runtime are built with this extensibility.
+## ODBC driver support
+This table indicates the ODBC driver support for your Python functions:
+
+| Driver version | Python version |
+| - | - |
+| ODBC driver 18 | ≥ Python 3.11 |
+| ODBC driver 17 | ≤ Python 3.10 |
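The table above can be checked locally with a small script that reports which driver line applies to the interpreter on the machine (a sketch; assumes a Python 3.x `python3` is on `PATH`):

```shell
# Report which ODBC driver line from the support table applies to the
# local Python 3.x interpreter.
pyminor=$(python3 -c 'import sys; print(sys.version_info.minor)')

if [ "$pyminor" -ge 11 ]; then
    driver="ODBC driver 18"
else
    driver="ODBC driver 17"
fi
echo "Python 3.${pyminor} -> ${driver}"
```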
+
## Next steps

::: zone pivot="programming-language-csharp"

### [Isolated worker model](#tab/isolated-process)
azure-linux Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/faq.md
# Frequently asked questions about the Azure Linux Container Host for AKS > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article answers common questions about the Azure Linux Container Host.
azure-maps Power Bi Visual Filled Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-filled-map.md
Some common uses for filled maps include:
This article uses [Sales and Marketing Sample PBIX] as the data source for demonstration purposes. You can create a new report using this data before continuing if you wish to follow along.
+> [!NOTE]
+> To ensure the highest level of accuracy in geocoding results within Filled Map, it's crucial to correctly set the data category. See [Categorize geographic fields in Power BI](./power-bi-visual-geocode.md#categorize-geographic-fields-in-power-bi).
+
## Filled map settings

There are two places where you can adjust filled map settings: Build and format visuals. Both are located in the **Visualizations** pane.
azure-maps Power Bi Visual Geocode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-geocode.md
To ensure fields are correctly geocoded, you can set the Data Category on the da
:::image type="content" source="media/power-bi-visual/data-category.png" alt-text="A screenshot showing the data category drop-down list in Power BI desktop.":::
+> [!NOTE]
+> When categorizing geographic fields in Power BI, be sure to enter **State** and **County** data separately for accurate geocoding. Incorrect categorization, such as entering both **State** and **County** data into either category, might work currently but can lead to issues in the future.
+>
+> For instance:
+> - Correct Usage: State = GA, County = Decatur County
+> - Incorrect Usage: State = Decatur County, GA or County = Decatur County, GA
+
## Next steps

Learn more about the Azure Maps Power BI visual:
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
This article provides help in troubleshooting errors you might experience with the Log Analytics agent for Linux in Azure Monitor. > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
## Log Analytics Troubleshooting Tool
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
# Install the Log Analytics agent on Linux computers > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article provides details on installing the Log Analytics agent on Linux computers hosted in other clouds or on-premises. [!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
### Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
| Operating system | Azure Monitor agent <sup>1</sup> | Legacy Agent <sup>1</sup> | |:|::|::|
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
# Azure Monitor Agent extension versions > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes the version details for the Azure Monitor Agent virtual machine extension. This extension deploys the agent on virtual machines, scale sets, and Arc-enabled servers (on-premises servers with Azure Arc agent installed).
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
# Syslog troubleshooting guide for Azure Monitor Agent for Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Overview of Azure Monitor Agent for Linux Syslog collection and supported RFC standards:
azure-monitor Data Collection Snmp Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-snmp-data.md
# Collect SNMP trap data with Azure Monitor Agent > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Simple Network Management Protocol (SNMP) is a widely-deployed management protocol for monitoring and configuring Linux devices and appliances.
azure-monitor Data Collection Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md
# Collect Syslog events with Azure Monitor Agent > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Syslog is an event logging protocol that's common to Linux. You can use the Syslog daemon that's built in to Linux devices and appliances to collect local events of the types you specify. Then you can have it send those events to a Log Analytics workspace. Applications send messages that might be stored on the local machine or delivered to a Syslog collector.
azure-monitor Data Sources Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md
# Collect Syslog data sources with the Log Analytics agent > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Syslog is an event logging protocol that's common to Linux. Applications send messages that might be stored on the local machine or delivered to a Syslog collector. When the Log Analytics agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent. The agent then sends the messages to Azure Monitor where a corresponding record is created.
azure-monitor Troubleshooter Ama Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-linux.md
# How to use the Linux operating system (OS) Azure Monitor Agent Troubleshooter > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The Azure Monitor Agent (AMA) Troubleshooter is designed to help identify issues with the agent and perform general health assessments. It can perform various checks to ensure that the agent is properly installed and connected, and can also gather AMA-related logs from the machine being diagnosed.
azure-monitor Metrics Store Custom Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md
To submit custom metrics to Azure Monitor, the entity that submits the metric ne
### Get an authorization token
-Once you have created your managed identity or service principal and assigned **Monitoring Metrics Publisher** permissions, you can get an authorization token by using the following request:
-
-```console
-curl -X POST 'https://login.microsoftonline.com/<tennant ID>/oauth2/token' \
--H 'Content-Type: application/x-www-form-urlencoded' \
---data-urlencode 'grant_type=client_credentials' \
---data-urlencode 'client_id=<your apps client ID>' \
---data-urlencode 'client_secret=<your apps client secret' \
---data-urlencode 'resource=https://monitoring.azure.com'
-```
-
-The response body appears in the following format:
+Once you have created your managed identity or service principal and assigned **Monitoring Metrics Publisher** permissions, you can get an authorization token.
+When requesting a token, specify `resource: https://monitoring.azure.com`.
-```JSON
-{
- "token_type": "Bearer",
- "expires_in": "86399",
- "ext_expires_in": "86399",
- "expires_on": "1672826207",
- "not_before": "1672739507",
- "resource": "https://monitoring.azure.com",
- "access_token": "eyJ0eXAiOiJKV1Qi....gpHWoRzeDdVQd2OE3dNsLIvUIxQ"
-}
-```
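For reference, a client-credentials token request against this resource takes the following shape (the tenant ID, client ID, and client secret are placeholders):

```http
POST /<tenant-id>/oauth2/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=<client-id>&client_secret=<client-secret>&resource=https://monitoring.azure.com
```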
Save the access token from the response for use in the following HTTP requests.
azure-monitor Rest Api Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/rest-api-walkthrough.md
Retrieve metric definitions, dimension values, and metric values using the Azure
Requests submitted using the Azure Monitor API use the Azure Resource Manager authentication model. All requests are authenticated with Microsoft Entra ID. One approach to authenticating the client application is to create a Microsoft Entra service principal and retrieve an authentication token. You can create a Microsoft Entra service principal using the Azure portal, CLI, or PowerShell. For more information, see [Register an App to request authorization tokens and work with APIs](../logs/api/register-app-for-token.md). ### Retrieve a token
-Once you've created a service principal, retrieve an access token using a REST call. Submit the following request using the `appId` and `password` for your service principal or app:
-
-```HTTP
-
- POST /<tenant-id>/oauth2/token
- Host: https://login.microsoftonline.com
- Content-Type: application/x-www-form-urlencoded
-
- grant_type=client_credentials
- &client_id=<app-client-id>
- &resource=https://management.azure.com
- &client_secret=<password>
-
-```
-
-For example
-
-```bash
-curl --location --request POST 'https://login.microsoftonline.com/abcd1234-5849-4a5d-a2eb-5267eae1bbc7/oauth2/token' \
---header 'Content-Type: application/x-www-form-urlencoded' \
---data-urlencode 'grant_type=client_credentials' \
---data-urlencode 'client_id=0123b56a-c987-1234-abcd-1a2b3c4d5e6f' \
---data-urlencode 'client_secret=123456.ABCDE.~XYZ876123ABceDb0000' \
---data-urlencode 'resource=https://management.azure.com'
-
-```
-A successful request receives an access token in the response:
-
-```HTTP
-{
- token_type": "Bearer",
- "expires_in": "86399",
- "ext_expires_in": "86399",
- "access_token": "eyJ0eXAiOiJKV1QiLCJ.....Ax"
-}
-```
+Once you've created a service principal, retrieve an access token. Specify `resource=https://management.azure.com` in the token request.
After authenticating and retrieving a token, use the access token in your Azure Monitor API requests by including the header `'Authorization: Bearer <access token>'`.
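For example, a metric-definitions request with the bearer token attached might look like the following sketch, where the access token, subscription ID, resource group, and VM name are all placeholders:

```shell
# Sketch: list metric definitions for a virtual machine using the bearer token.
# <access-token>, <subscription-id>, <resource-group>, and <vm-name> are placeholders.
curl -X GET \
    -H "Authorization: Bearer <access-token>" \
    "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01"
```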
azure-monitor Access Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/access-api.md
The Log Analytics API supports Microsoft Entra authentication with three differe
In the client credentials flow, the token is used with the Log Analytics endpoint. A single request is made to receive a token by using the credentials provided for your app in the previous step when you [register an app in Microsoft Entra ID](./register-app-for-token.md).
-Use the `https://api.loganalytics.azure.com` endpoint.
+Use `resource=https://api.loganalytics.azure.com`.
-#### Client credentials token URL (POST request)
-```http
- POST /<your-tenant-id>/oauth2/token
- Host: https://login.microsoftonline.com
- Content-Type: application/x-www-form-urlencoded
-
- grant_type=client_credentials
- &client_id=<app-client-id>
- &resource=https://api.loganalytics.io
- &client_secret=<app-client-secret>
-```
-
-A successful request receives an access token in the response:
-
-```http
- {
- token_type": "Bearer",
- "expires_in": "86399",
- "ext_expires_in": "86399",
- "access_token": ""eyJ0eXAiOiJKV1QiLCJ.....Ax"
- }
-```
Use the token in requests to the Log Analytics endpoint:
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
Last updated 09/28/2023
# Dependency Agent > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The Dependency Agent collects data about processes running on the virtual machine and external process dependencies. Dependency Agent updates include bug fixes or support of new features or functionality. This article describes Dependency Agent requirements and how to upgrade Dependency Agent manually or through automation.
azure-monitor Vminsights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md
Last updated 09/28/2023
# How to query logs from VM insights > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
VM insights collects performance and connection metrics, computer and process inventory data, and health state information and forwards it to the Log Analytics workspace in Azure Monitor. This data is available for [query](../logs/log-query-overview.md) in Azure Monitor. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting.
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
Last updated 09/28/2023
# Chart performance with VM insights > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
VM insights includes a set of performance charts that target several key [performance indicators](vminsights-log-query.md#performance-records) to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks and anomalies. You can also switch to a perspective that lists each machine to view resource utilization based on the metric selected.
azure-netapp-files Azacsnap Cmd Ref Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-backup.md
Previously updated : 07/29/2022 Last updated : 05/15/2024
This article provides a guide for running the backup command of the Azure Applic
## Introduction
-A storage snapshot based backup is run using the `azacsnap -c backup` command. This command performs the orchestration of a database consistent storage snapshot on the DATA volumes, and a storage snapshot (without any database consistency setup) on the OTHER volumes.
+A storage snapshot based backup is run using the `azacsnap -c backup` command. This command performs the orchestration of a database consistent storage snapshot on the DATA volumes, and a storage snapshot (without any database consistency setup) on the OTHER volumes.
-For DATA volumes `azacsnap` will prepare the database for a storage snapshot, then it will take the storage snapshot for all configured volumes, finally it will advise the database the snapshot is complete. It will also manage any database entries which record snapshot backup activity (e.g. SAP HANA backup catalog).
+For DATA volumes, `azacsnap` prepares the database for a storage snapshot, then takes a storage snapshot of all configured volumes, and finally tells the database the snapshot is complete. It also manages any database entries which record snapshot backup activity (for example, the SAP HANA backup catalog).
## Command options
The `-c backup` command takes the following arguments:
- `data` snapshots the volumes within the `dataVolume` stanza of the configuration file.
  1. **data** Volume Snapshot process
      1. put the database into *backup-mode*.
- 1. take snapshots of the Volume(s) listed in the configuration file's `"dataVolume"` stanza.
+ 1. take snapshots of the Volumes listed in the configuration file's `"dataVolume"` stanza.
      1. take the database out of *backup-mode*.
      1. perform snapshot management.
- `other` snapshots the volumes within the `otherVolume` stanza of the configuration file.
  1. **other** Volume Snapshot process
- 1. take snapshots of the Volume(s) listed in the configuration file's `"otherVolume"` stanza.
+ 1. take snapshots of the Volumes listed in the configuration file's `"otherVolume"` stanza.
1. perform snapshot management.
- - `all` snapshots all the volumes in the `dataVolume` stanza and then all the volumes in the `otherVolume` stanza of the configuration file. The
+ - `all` snapshots all the volumes in the `dataVolume` stanza and then all the volumes in the `otherVolume` stanza of the configuration file. The
  processing is handled in the order outlined as follows:
  1. **all** Volumes Snapshot process
      1. **data** Volume Snapshot (same as the normal `--volume data` option)
          1. put the database into *backup-mode*.
- 1. take snapshots of the Volume(s) listed in the configuration file's `"dataVolume"` stanza.
+ 1. take snapshots of the Volumes listed in the configuration file's `"dataVolume"` stanza.
          1. take the database out of *backup-mode*.
          1. perform snapshot management.
      1. **other** Volume Snapshot (same as the normal `--volume other` option)
- 1. take snapshots of the Volume(s) listed in the configuration file's `"otherVolume"` stanza.
+ 1. take snapshots of the Volumes listed in the configuration file's `"otherVolume"` stanza.
          1. perform snapshot management.

> [!NOTE]
> By creating a separate config file with the boot volume as the otherVolume, it's possible for `boot` snapshots to be taken on an entirely different schedule (for example, daily).

-- `--prefix=` the customer snapshot prefix for the snapshot name. This parameter has two purposes. Firstly purpose is to provide a unique name for grouping of snapshots. Secondly to determine the `--retention` number of storage snapshots that are kept for the specified `--prefix`.
+- `--prefix=` the customer snapshot prefix for the snapshot name. This parameter has two purposes: first, to provide a unique name for grouping of snapshots; second, to determine the `--retention` number of storage snapshots that are kept for the specified `--prefix`.
> [!IMPORTANT]
> Only alpha numeric ("A-Z,a-z,0-9"), underscore ("_") and dash ("-") characters are allowed.

-- `--retention` the number of snapshots of the defined `--prefix` to be kept. Any additional snapshots are removed after a new snapshot is taken for this `--prefix`.
+- `--retention` the number of snapshots of the defined `--prefix` to be kept. Any extra snapshots are removed after a new snapshot is taken for this `--prefix`.
-- `--trim` available for SAP HANA v2 and later, this option maintains the backup catalog and on disk catalog and log backups. The number of entries to keep in the backup catalog is determined by the `--retention` option above, and deletes older entries for the defined prefix (`--prefix`) from the backup catalog, and the related physical logs backup. It also deletes any log backup entries that are older than the oldest non-log backup entry. This operations helps to prevent the log backups from using up all available disk space.
+- `--trim` available for SAP HANA v2 and later, this option maintains the backup catalog and on disk catalog and log backups. The number of entries to keep in the backup catalog is determined by the `--retention` option above, and deletes older entries for the defined prefix (`--prefix`) from the backup catalog, and the related physical logs backup. It also deletes any log backup entries that are older than the oldest non-log backup entry. This `--trim` operation helps to prevent the log backups from using up all available disk space.
> [!NOTE]
> The following example command will keep 9 storage snapshots and ensure the backup catalog is continuously trimmed to match the 9 storage snapshots being retained.
The `-c backup` command takes the following arguments:
- `[--ssl=]` an optional parameter that defines the encryption method used to communicate with SAP HANA, either `openssl` or `commoncrypto`. If defined, then the `azacsnap -c backup` command expects to find two files in the same directory, these files must be named after
- the corresponding SID. Refer to [Using SSL for communication with SAP HANA](azacsnap-installation.md#using-ssl-for-communication-with-sap-hana). The following example takes a `hana` type snapshot with a prefix of `hana_TEST` and will keep `9` of them communicating with SAP HANA using SSL (`openssl`).
+ the corresponding SID. Refer to [Using SSL for communication with SAP HANA](azacsnap-configure-database.md#using-ssl-for-communication-with-sap-hana). The following example takes a `hana` type snapshot with a prefix of `hana_TEST` and keeps `9` of them communicating with SAP HANA using SSL (`openssl`).
```bash
azacsnap -c backup --volume data --prefix hana_TEST --retention 9 --trim --ssl=openssl
```
The `-c backup` command takes the following arguments:
## Snapshot backups are fast

The duration of a snapshot backup is independent of the volume size, with a 10-TB volume being snapped
-within the same approximate time as a 10-GB volume.
+within the same approximate time as a 10-GB volume.
The primary factors affecting overall execution time are the number of volumes to snapshot and any changes in the `--retention` parameter (where a reduction can increase the execution time as excess snapshots are removed).
-In the example configuration above (for **Azure Large Instance**), snapshots for the
+In the example configuration provided for **Azure Large Instance**, snapshots for the
two volumes took less than 5 seconds to complete. For **Azure NetApp Files**, snapshots for the two volumes would take about 60 seconds.
```bash
azacsnap -c backup --volume data --prefix hana_TEST --retention 9 --trim
```
-The command does not output to the console, but does write to a log file, a result file,
+The command doesn't output to the console, but does write to a log file, a result file,
and `/var/log/messages`.
-In this example the *log file* name is `azacsnap-backup-azacsnap.log` (see [Log files](#log-files))
+In this example, the *log file* name is `azacsnap-backup-azacsnap.log` (see [Log files](#log-files)).
-When running the `-c backup` with the `--volume data` option a result file is also generated as a file to allow
-for quickly checking the result of a backup. The *result* file has the same base name as the log file, with `.result` as its suffix.
+When running the command `-c backup` with the `--volume data` option, a result file is also generated as a file to allow
+for quickly checking the result of a backup. The *result* file has the same base name as the log file, with `.result` as its suffix.
-In this example the *result file* name is `azacsnap-backup-azacsnap.result` and contains the following output:
+In this example, the *result file* name is `azacsnap-backup-azacsnap.result` and contains the following output:
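As a quick sketch of this naming rule (using the example file names from this section), the result file name can be derived from the log file name in shell:

```bash
# The result file shares the log file's base name, with ".result" as the suffix.
logfile="azacsnap-backup-azacsnap.log"
resultfile="${logfile%.log}.result"
echo "$resultfile"    # azacsnap-backup-azacsnap.result
```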
```bash
cat logs/azacsnap-backup-azacsnap.result
```

```output
Jul 2 06:02:06 server01 azacsnap[114280]: Database # 1 (H80) : completed ok
```
```bash
azacsnap -c backup --volume other --prefix logs_TEST --retention 9
```
-The command does not output to the console, but does write to a log file only. It does _not_ write
+The command doesn't output to the console, but does write to a log file only. It does _not_ write
to a result file or `/var/log/messages`.
-In this example the *log file* name is `azacsnap-backup-azacsnap.log` (see [Log files](#log-files)).
+In this example, the *log file* name is `azacsnap-backup-azacsnap.log` (see [Log files](#log-files)).
## Example with `other` parameter (to back up the host OS)
```bash
azacsnap -c backup --volume other --prefix boot_TEST --retention 9 --configfile
```
> For Azure Large Instance, the configuration file volume parameter for the boot volume might not be visible at the host operating system level.
> This value can be provided by Microsoft Operations.
-The command does not output to the console, but does write to a log file only. It does _not_ write
+The command doesn't output to the console, but does write to a log file only. It does _not_ write
to a result file or `/var/log/messages`.
-In this example the *log file* name is `azacsnap-backup-bootVol.log` (see [Log files](#log-files)).
+In this example, the *log file* name is `azacsnap-backup-bootVol.log` (see [Log files](#log-files)).
## Log files
-The log file name is constructed from the following "(command name)-(the `-c` option)-(the config filename)". For example, if running the command `azacsnap -c backup --configfile h80.json --retention 5 --prefix one-off` then the log file will be called `azacsnap-backup-h80.log`. Or if using the `-c test` option with the same configuration file (e.g. `azacsnap -c test --configfile h80.json`) then the log file will be called `azacsnap-test-h80.log`.
+The log file name is constructed from the following "(command name)-(the `-c` option)-(the config filename)". For example, if running the command `azacsnap -c backup --configfile h80.json --retention 5 --prefix one-off` then the log file is called `azacsnap-backup-h80.log`. Or if using the `-c test` option with the same configuration file (e.g. `azacsnap -c test --configfile h80.json`) then the log file is called `azacsnap-test-h80.log`.
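The naming rule can be sketched in shell; the values here match the `h80.json` example above:

```bash
# Log file name: (command name)-(the -c option)-(config file base name).log
cmd="azacsnap"
copt="backup"               # the value passed to -c
configfile="h80.json"
logfile="${cmd}-${copt}-${configfile%.*}.log"
echo "$logfile"             # azacsnap-backup-h80.log
```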
> [!NOTE]
> Log files can be automatically maintained using [this guide](azacsnap-tips.md#manage-azacsnap-log-files).
azure-netapp-files Azacsnap Cmd Ref Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-test.md
Previously updated : 08/04/2021 Last updated : 05/15/2024
For SSL, this command can take the following optional argument:
- `--ssl=` forces an encrypted connection with the database and defines the encryption method used to communicate with SAP HANA, either `openssl` or `commoncrypto`. If defined, then this command expects to find two files in the same directory, these files must be
- named after the corresponding SID. Refer to [Using SSL for communication with SAP HANA](azacsnap-installation.md#using-ssl-for-communication-with-sap-hana).
+ named after the corresponding SID. Refer to [Using SSL for communication with SAP HANA](azacsnap-configure-database.md#using-ssl-for-communication-with-sap-hana).
### Output of the `azacsnap -c test --test hana` command
azure-netapp-files Azacsnap Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-configure-database.md
+
+ Title: Configure the database for Azure Application Consistent Snapshot tool for Azure NetApp Files
+description: Learn how to configure the database for use with the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
+ Last updated : 05/15/2024
+# Configure the database for Azure Application Consistent Snapshot tool
+
+This article provides a guide for configuring the database and the database prerequisites for use with the Azure Application Consistent Snapshot tool (AzAcSnap) that you can use with Azure NetApp Files or Azure Large Instances.
++
+## Enable communication with the database
+
+This section explains how to enable communication with the database. Use the following tabs to correctly select the database that you're using.
+
+# [SAP HANA](#tab/sap-hana)
+
+If you're deploying to a centralized virtual machine, you need to install and set up the SAP HANA client so that the AzAcSnap user can run `hdbsql` and `hdbuserstore` commands. You can download the SAP HANA client from the [SAP Development Tools website](https://tools.hana.ondemand.com/#hanatools).
+
+The snapshot tools communicate with SAP HANA and need a user with appropriate permissions to initiate and release the database save point. The following example shows the setup of the SAP HANA 2.0 user and `hdbuserstore` for communication to the SAP HANA database.
+
+The following example commands set up a user (`AZACSNAP`) in SYSTEMDB on an SAP HANA 2.0 database. Change the IP address, usernames, and passwords as appropriate.
+
+1. Connect to SYSTEMDB:
+
+ ```bash
+ hdbsql -n <IP_address_of_host>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD>
+ ```
+
+ ```output
+ Welcome to the SAP HANA Database interactive terminal.
+
+ Type: \h for help with commands
+ \q to quit
+
+ hdbsql SYSTEMDB=>
+ ```
+
+1. Create the user. This example creates the `AZACSNAP` user in SYSTEMDB:
+
+ ```sql
+ hdbsql SYSTEMDB=> CREATE USER AZACSNAP PASSWORD <AZACSNAP_PASSWORD_CHANGE_ME> NO FORCE_FIRST_PASSWORD_CHANGE;
+ ```
+
+1. Grant the user permissions. This example sets the permission for the `AZACSNAP` user to allow for performing a database-consistent storage snapshot:
+
+ - For SAP HANA releases up to version 2.0 SPS 03:
+
+ ```sql
+ hdbsql SYSTEMDB=> GRANT BACKUP ADMIN, CATALOG READ TO AZACSNAP;
+ ```
+
+ - For SAP HANA releases from version 2.0 SPS 04, SAP added new fine-grained privileges:
+
+ ```sql
+ hdbsql SYSTEMDB=> GRANT BACKUP ADMIN, DATABASE BACKUP ADMIN, CATALOG READ TO AZACSNAP;
+ ```
+
+1. *Optional*: Prevent the user's password from expiring.
+
+ > [!NOTE]
+ > Check with corporate policy before you make this change.
+
+ The following example disables the password expiration for the `AZACSNAP` user. Without this change, the user's password could expire and prevent snapshots from being taken correctly.
+
+ ```sql
+ hdbsql SYSTEMDB=> ALTER USER AZACSNAP DISABLE PASSWORD LIFETIME;
+ ```
+
+1. Set up the SAP HANA Secure User Store (change the password). This example uses the `hdbuserstore` command from the Linux shell to set up the SAP HANA Secure User Store:
+
+ ```bash
+ hdbuserstore Set AZACSNAP <IP_address_of_host>:30013 AZACSNAP <AZACSNAP_PASSWORD_CHANGE_ME>
+ ```
+
+1. Check that you correctly set up the SAP HANA Secure User Store. Use the `hdbuserstore` command to list the output, similar to the following example. More details on using `hdbuserstore` are available on the SAP website.
+
+ ```bash
+ hdbuserstore List
+ ```
+
+ ```output
+ DATA FILE : /home/azacsnap/.hdb/sapprdhdb80/SSFS_HDB.DAT
+ KEY FILE : /home/azacsnap/.hdb/sapprdhdb80/SSFS_HDB.KEY
+
+ KEY AZACSNAP
+ ENV : <IP_address_of_host>:
+ USER: AZACSNAP
+ ```
+
+### Using SSL for communication with SAP HANA
+
+AzAcSnap uses SAP HANA's `hdbsql` command to communicate with SAP HANA. Using `hdbsql` allows the use of SSL options to encrypt communication with SAP HANA.
+
+AzAcSnap always uses the following options when you're using the `azacsnap --ssl` option:
+
+- `-e`: Enables TLS/SSL encryption. The server chooses the highest available.
+- `-ssltrustcert`: Specifies whether to validate the server's certificate.
+- `-sslhostnameincert "*"`: Specifies the host name that verifies the server's identity. When you specify `"*"` as the host name, the server's host name isn't validated.
+
+SSL communication also requires key-store and trust-store files. It's possible for these files to be stored in default locations on a Linux installation. But to ensure that the correct key material is being used for the various SAP HANA systems (for the cases where different key-store and trust-store files are used for each SAP HANA system), AzAcSnap expects the key-store and trust-store files to be stored in the `securityPath` location. The AzAcSnap configuration file specifies this location.
+
+#### Key-store files
+
+If you're using multiple system identifiers (SIDs) with the same key material, it's easier to create links into the `securityPath` location as defined in the AzAcSnap configuration file. Ensure that these values exist for every SID that uses SSL.
+
+- For `openssl`: `ln $HOME/.ssl/key.pem <securityPath>/<SID>_keystore`
+- For `commoncrypto`: `ln $SECUDIR/sapcli.pse <securityPath>/<SID>_keystore`
+
+If you're using multiple SIDs with different key material per SID, copy (or move and rename) the files into the `securityPath` location as defined in the SID's AzAcSnap configuration file.
+
+- For `openssl`: `mv key.pem <securityPath>/<SID>_keystore`
+- For `commoncrypto`: `mv sapcli.pse <securityPath>/<SID>_keystore`
+
+When AzAcSnap calls `hdbsql`, it adds `-sslkeystore=<securityPath>/<SID>_keystore` to the `hdbsql` command line.
+
+#### Trust-store files
+
+If you're using multiple SIDs with the same key material, create hard links into the `securityPath` location as defined in the AzAcSnap configuration file. Ensure that these values exist for every SID that uses SSL.
+
+- For `openssl`: `ln $HOME/.ssl/trust.pem <securityPath>/<SID>_truststore`
+- For `commoncrypto`: `ln $SECUDIR/sapcli.pse <securityPath>/<SID>_truststore`
+
+If you're using multiple SIDs with different key material per SID, copy (or move and rename) the files into the `securityPath` location as defined in the SID's AzAcSnap configuration file.
+
+- For `openssl`: `mv trust.pem <securityPath>/<SID>_truststore`
+- For `commoncrypto`: `mv sapcli.pse <securityPath>/<SID>_truststore`
+
+The `<SID>` component of the file names must be the SAP HANA system identifier in all uppercase (for example, `H80` or `PR1`). When AzAcSnap calls `hdbsql`, it adds `-ssltruststore=<securityPath>/<SID>_truststore` to the command line.
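As an illustration only (AzAcSnap builds these arguments internally), the path construction described above can be sketched as:

```bash
# Build the hdbsql SSL arguments from securityPath and the SID.
# The <SID> component must be the SAP HANA system identifier in all uppercase.
sid="h80"
securityPath="./security"
SID="$(printf '%s' "$sid" | tr '[:lower:]' '[:upper:]')"
keystore="-sslkeystore=${securityPath}/${SID}_keystore"
truststore="-ssltruststore=${securityPath}/${SID}_truststore"
echo "$keystore $truststore"
```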
+
+If you run `azacsnap -c test --test hana --ssl openssl`, where `SID` is `H80` in the configuration file, it executes the `hdbsql` connection as follows:
+
+```bash
+hdbsql \
+ -e \
+ -ssltrustcert \
+ -sslhostnameincert "*" \
+ -sslprovider openssl \
+ -sslkeystore ./security/H80_keystore \
+ -ssltruststore ./security/H80_truststore \
+ "sql statement"
+```
+
+In the preceding code, the backslash (`\`) character is a command-line line wrap to improve the clarity of the multiple parameters passed on the command line.
+
+# [Oracle](#tab/oracle)
+
+The snapshot tools communicate with the Oracle database and need a user with appropriate permissions to enable and disable backup mode.
+
+After AzAcSnap puts the database in backup mode, AzAcSnap queries the Oracle database to get a list of files that have backup mode as active. This file list is written to an external file, which is in the same location and has the same base name as the log file, but with a `.protected-tables` file name extension. (The AzAcSnap log file details the output file name.)
+
+The following example commands show the setup of the Oracle database user (`AZACSNAP`), the use of `mkstore` to create an Oracle wallet, and the `sqlplus` configuration files that are required for communication to the Oracle database. Change the IP address, usernames, and passwords as appropriate.
+
+1. Connect to the Oracle database:
+
+ ```bash
+ su - oracle
+ sqlplus / AS SYSDBA
+ ```
+
+ ```output
+ SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 1 01:34:05 2021
+ Copyright (c) 1982, 2014, Oracle. All rights reserved.
+ Connected to:
+ Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
+ SQL>
+ ```
+
+1. Create the user. This example creates the `azacsnap` user:
+
+ ```sql
+ SQL> CREATE USER azacsnap IDENTIFIED BY password;
+ ```
+
+ ```output
+ User created.
+ ```
+
+1. Grant the user permissions. This example sets the permission for the `azacsnap` user to allow for putting the database in backup mode:
+
+ ```sql
+ SQL> GRANT CREATE SESSION TO azacsnap;
+ ```
+
+ ```output
+ Grant succeeded.
+ ```
+
+ ```sql
+ SQL> GRANT SYSBACKUP TO azacsnap;
+ ```
+
+ ```output
+ Grant succeeded.
+ ```
+
+ ```sql
+ SQL> connect azacsnap/password
+ ```
+
+ ```output
+ Connected.
+ ```
+
+ ```sql
+ SQL> quit
+ ```
+
+1. *Optional*: Prevent the user's password from expiring. Without this change, the user's password could expire and prevent snapshots from being taken correctly.
+
+ > [!NOTE]
+ > Check with corporate policy before you make this change.
+
+ This example gets the password expiration for the `AZACSNAP` user:
+
+ ```sql
+ SQL> SELECT username,account_status,expiry_date,profile FROM dba_users WHERE username='AZACSNAP';
+ ```
+
+ ```output
+ USERNAME ACCOUNT_STATUS EXPIRY_DA PROFILE
+
+ AZACSNAP OPEN DD-MMM-YY DEFAULT
+ ```
+
+ There are a few methods for disabling password expiration in the Oracle database. Contact your database administrator for guidance. One method is to modify the `DEFAULT` user's profile so that the password lifetime is unlimited:
+
+ ```sql
+ SQL> ALTER PROFILE default LIMIT PASSWORD_LIFE_TIME unlimited;
+ ```
+
+ After you make this change to the database setting, there should be no password expiration date for users who have the `DEFAULT` profile:
+
+ ```sql
+ SQL> SELECT username, account_status,expiry_date,profile FROM dba_users WHERE username='AZACSNAP';
+ ```
+
+ ```output
+ USERNAME ACCOUNT_STATUS EXPIRY_DA PROFILE
+
+ AZACSNAP OPEN DEFAULT
+ ```
+
+1. Set up the Oracle wallet (change the password).
+
+ The Oracle wallet provides a method to manage database credentials across multiple domains. This capability uses a database connection string in the data-source definition, which is resolved with an entry in the wallet. When you use the Oracle wallet correctly, passwords in the data-source configuration are unnecessary.
+
+ This setup makes it possible to use the Oracle Transparent Network Substrate (TNS) administrative file with a connection string alias, which hides details of the database connection string. If the connection information changes, it's a matter of changing the `tnsnames.ora` file instead of (potentially) many data-source definitions.
+
+ Run the following commands on the Oracle database server. This example uses the `mkstore` command from the Linux shell to set up the Oracle wallet. These commands are run on the Oracle database server via unique user credentials to avoid any impact on the running database. This example creates a new user (`azacsnap`) and appropriately configures the environment variables.
+
+ 1. Get the Oracle environment variables to be used in setup. Run the following commands as the root user on the Oracle database server:
+
+ ```bash
+ su - oracle -c 'echo $ORACLE_SID'
+ ```
+
+ ```output
+ oratest1
+ ```
+
+ ```bash
+ su - oracle -c 'echo $ORACLE_HOME'
+ ```
+
+ ```output
+ /u01/app/oracle/product/19.0.0/dbhome_1
+ ```
+
+ 1. Create the Linux user to generate the Oracle wallet and associated `*.ora` files by using the output from the previous step.
+
+ These examples use the `bash` shell. If you're using a different shell (for example, `csh`), be sure to set environment variables correctly.
+
+ ```bash
+ useradd -m azacsnap
+ echo "export ORACLE_SID=oratest1" >> /home/azacsnap/.bash_profile
+ echo "export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1" >> /home/azacsnap/.bash_profile
+ echo "export TNS_ADMIN=/home/azacsnap" >> /home/azacsnap/.bash_profile
+ echo "export PATH=\$PATH:\$ORACLE_HOME/bin" >> /home/azacsnap/.bash_profile
+ ```
+
+ 1. As the new Linux user (`azacsnap`), create the wallet and `*.ora` files.
+
+ 1. Switch to the user created in the previous step:
+
+ ```bash
+ sudo su - azacsnap
+ ```
+
+ 1. Create the Oracle wallet:
+
+ ```bash
+ mkstore -wrl $TNS_ADMIN/.oracle_wallet/ -create
+ ```
+
+ ```output
+ Oracle Secret Store Tool Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
+
+ Enter password: <wallet_password>
+ Enter password again: <wallet_password>
+ ```
+
+ 1. Add the connection string credentials to the Oracle wallet. In the following example command, `AZACSNAP` is the connection string that AzAcSnap will use, `azacsnap` is the Oracle database user, and `AzPasswd1` is the Oracle user's database password.
+
+ ```bash
+ mkstore -wrl $TNS_ADMIN/.oracle_wallet/ -createCredential AZACSNAP azacsnap AzPasswd1
+ ```
+
+ ```output
+ Oracle Secret Store Tool Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
+
+ Enter wallet password: <wallet_password>
+ ```
+
+ 1. Create the `tnsnames.ora` file. In the following example command, set `HOST` to the IP address of the Oracle database server. Set `SID` to the Oracle database SID.
+
+ ```bash
+ echo "# Connection string
+ AZACSNAP=\"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.1)(PORT=1521))(CONNECT_DATA=(SID=oratest1)))\"
+ " > $TNS_ADMIN/tnsnames.ora
+ ```
+
+ 1. Create the `sqlnet.ora` file:
+
+ ```bash
+ echo "SQLNET.WALLET_OVERRIDE = TRUE
+ WALLET_LOCATION=(
+ SOURCE=(METHOD=FILE)
+ (METHOD_DATA=(DIRECTORY=\$TNS_ADMIN/.oracle_wallet))
+ ) " > $TNS_ADMIN/sqlnet.ora
+ ```
+
+ 1. Test the Oracle wallet:
+
+ ```bash
+ sqlplus /@AZACSNAP as SYSBACKUP
+ ```
+
+ ```output
+ SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 12 00:25:32 2022
+ Version 19.3.0.0.0
+
+ Copyright (c) 1982, 2019, Oracle. All rights reserved.
+
+
+ Connected to:
+ Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ ```
+
+ ```sql
+ SELECT MACHINE FROM V$SESSION WHERE SID=1;
+ ```
+
+ ```output
+ MACHINE
+ -
+ oradb-19c
+ ```
+
+ ```sql
+ quit
+ ```
+
+ ```output
+ Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ ```
+
+ 1. Create a ZIP file archive of the Oracle wallet and `*.ora` files:
+
+ ```bash
+ cd $TNS_ADMIN
+ zip -r wallet.zip sqlnet.ora tnsnames.ora .oracle_wallet
+ ```
+
+ ```output
+ adding: sqlnet.ora (deflated 9%)
+ adding: tnsnames.ora (deflated 7%)
+ adding: .oracle_wallet/ (stored 0%)
+ adding: .oracle_wallet/ewallet.p12.lck (stored 0%)
+ adding: .oracle_wallet/ewallet.p12 (deflated 1%)
+ adding: .oracle_wallet/cwallet.sso.lck (stored 0%)
+ adding: .oracle_wallet/cwallet.sso (deflated 1%)
+ ```
+
+ 1. Copy the ZIP file to the target system (for example, the centralized virtual machine running AzAcSnap).
+
+ > [!IMPORTANT]
+ > If you're deploying to a centralized virtual machine, you need to install and set up Oracle Instant Client on it so that the AzAcSnap user can run `sqlplus` commands. You can download Oracle Instant Client from the [Oracle downloads page](https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html).
+ >
+ > For SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package.
+
+ 1. Complete the following steps on the system running AzAcSnap:
+
+ 1. Deploy the ZIP file that you copied in the previous step.
+
+ This step assumes that you already created the user running AzAcSnap (by default, `azacsnap`) by using the AzAcSnap installer.
+
+ > [!NOTE]
+ > It's possible to use the `TNS_ADMIN` shell variable to allow for multiple Oracle targets by setting the unique shell variable value for each Oracle system as needed.
+
+ ```bash
+ export TNS_ADMIN=$HOME/ORACLE19c
+ mkdir $TNS_ADMIN
+ cd $TNS_ADMIN
+ unzip ~/wallet.zip
+ ```
+
+ ```output
+ Archive: wallet.zip
+ inflating: sqlnet.ora
+ inflating: tnsnames.ora
+ creating: .oracle_wallet/
+ extracting: .oracle_wallet/ewallet.p12.lck
+ inflating: .oracle_wallet/ewallet.p12
+ extracting: .oracle_wallet/cwallet.sso.lck
+ inflating: .oracle_wallet/cwallet.sso
+ ```
+
+ Check that the files were extracted correctly:
+
+ ```bash
+ ls
+ ```
+
+ ```output
+ sqlnet.ora tnsnames.ora wallet.zip
+ ```
+
+ Assuming that you completed all the previous steps correctly, it should be possible to connect to the database by using the `/@AZACSNAP` connection string:
+
+ ```bash
+ sqlplus /@AZACSNAP as SYSBACKUP
+ ```
+
+ ```output
+ SQL*Plus: Release 21.0.0.0.0 - Production on Wed Jan 12 13:39:36 2022
+ Version 21.1.0.0.0
+
+ Copyright (c) 1982, 2020, Oracle. All rights reserved.
+
+
+ Connected to:
+ Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ ```
+
+ ```sql
+ SQL> quit
+ ```
+
+ ```output
+ Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
+ Version 19.3.0.0.0
+ ```
+
+ 1. Test the setup with AzAcSnap.
+
+ After you configure AzAcSnap (for example, `azacsnap -c configure --configuration new`) with the Oracle connection string (for example, `/@AZACSNAP`), it should be possible to connect to the Oracle database.
+
+ Check that the `$TNS_ADMIN` variable is set for the correct Oracle target system. The `$TNS_ADMIN` shell variable determines where to locate the Oracle wallet and `*.ora` files, so you must set it before you run the `azacsnap` command.
+
+ ```bash
+ ls -al $TNS_ADMIN
+ ```
+
+ ```output
+ total 16
+ drwxrwxr-x. 3 orasnap orasnap 84 Jan 12 13:39 .
+drwx------. 18 orasnap sapsys 4096 Jan 12 13:39 ..
+drwx------. 2 orasnap orasnap 90 Jan 12 13:23 .oracle_wallet
+ -rw-rw-r--. 1 orasnap orasnap 125 Jan 12 13:39 sqlnet.ora
+ -rw-rw-r--. 1 orasnap orasnap 128 Jan 12 13:24 tnsnames.ora
+ -rw-r--r--. 1 root root 2569 Jan 12 13:28 wallet.zip
+ ```
+
+ Run the `azacsnap` test command:
+
+ ```bash
+ cd ~/bin
+ azacsnap -c test --test oracle --configfile ORACLE.json
+ ```
+
+ ```output
+ BEGIN : Test process started for 'oracle'
+ BEGIN : Oracle DB tests
+ PASSED: Successful connectivity to Oracle DB version 1903000000
+ END : Test process complete for 'oracle'
+ ```
+
+ You must set up the `$TNS_ADMIN` variable correctly for `azacsnap` to run correctly. You can either add it to the user's `.bash_profile` file or export it before each run (for example, `export TNS_ADMIN="/home/orasnap/ORACLE19c" ; cd /home/orasnap/bin ; ./azacsnap --configfile ORACLE19c.json -c backup --volume data --prefix hourly-ora19c --retention 12`).
+
+# [IBM Db2](#tab/db2)
+
+The snapshot tools issue commands to the IBM Db2 database by using the command-line processor `db2` to enable and disable backup mode.
+
+After AzAcSnap puts the database in backup mode, it queries the IBM Db2 database to get a list of protected paths, which are the parts of the database where backup mode is active. This list is written to an external file, which is in the same location and has the same base name as the log file but with a `.\<DBName>-protected-paths` extension. (The AzAcSnap log file details the output file name.)
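A minimal sketch of this file naming (the log file and database names here are hypothetical):

```bash
# Protected-paths file: same location and base name as the log file,
# with a .<DBName>-protected-paths extension.
logfile="azacsnap-backup-db2.log"    # hypothetical log file name
dbname="AZTST"                       # hypothetical Db2 database name
protectedpaths="${logfile%.log}.${dbname}-protected-paths"
echo "$protectedpaths"               # azacsnap-backup-db2.AZTST-protected-paths
```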
+
+AzAcSnap uses the IBM Db2 command-line processor `db2` to issue SQL commands, such as `SET WRITE SUSPEND` or `SET WRITE RESUME`. So you should install AzAcSnap in one of the following ways:
+
+- Install on the database server, and then complete the setup with [Db2 local connectivity](#db2-local-connectivity).
+- Install on a centralized backup system, and then complete the setup with [Db2 remote connectivity](#db2-remote-connectivity).
+
+#### Db2 local connectivity
+
+If you installed AzAcSnap on the database server, be sure to add the `azacsnap` user to the correct Linux group and import the Db2 instance user's profile. Use the following example setup.
+
+##### azacsnap user permissions
+
+The `azacsnap` user should belong to the same Db2 group as the database instance user. The following example gets the group membership of the IBM Db2 installation's database instance user `db2tst`:
+
+```bash
+id db2tst
+```
+
+```output
+uid=1101(db2tst) gid=1001(db2iadm1) groups=1001(db2iadm1)
+```
+
+From the output, you can confirm the `db2tst` user has been added to the `db2iadm1` group. Add the `azacsnap` user to the group:
+
+```bash
+usermod -a -G db2iadm1 azacsnap
+```
+
+##### azacsnap user profile
+
+The `azacsnap` user needs to be able to run the `db2` command. By default, the `db2` command isn't in the `azacsnap` user's `$PATH` information.
+
+Add the following code to the user's `.bashrc` file. Use your own IBM Db2 installation value for `INSTHOME`.
+
+```output
+# The following four lines have been added to allow this user to run the DB2 command line processor.
+INSTHOME="/db2inst/db2tst"
+if [ -f ${INSTHOME}/sqllib/db2profile ]; then
+ . ${INSTHOME}/sqllib/db2profile
+fi
+```
+
+Test that the user can run the `db2` command-line processor:
+
+```bash
+su - azacsnap
+db2
+```
+
+```output
+(c) Copyright IBM Corporation 1993,2007
+Command Line Processor for DB2 Client 11.5.7.0
+
+You can issue database manager commands and SQL statements from the command
+prompt. For example:
+ db2 => connect to sample
+ db2 => bind sample.bnd
+
+For general help, type: ?.
+For command help, type: ? command, where command can be
+the first few keywords of a database manager command. For example:
+ ? CATALOG DATABASE for help on the CATALOG DATABASE command
+ ? CATALOG for help on all of the CATALOG commands.
+
+To exit db2 interactive mode, type QUIT at the command prompt. Outside
+interactive mode, all commands must be prefixed with 'db2'.
+To list the current command option settings, type LIST COMMAND OPTIONS.
+
+For more detailed help, refer to the Online Reference Manual.
+```
+
+```sql
+db2 => quit
+DB20000I The QUIT command completed successfully.
+```
+
+Now configure `azacsnap` to use `localhost`. Once this preliminary test as the `azacsnap` user works correctly, configure (`azacsnap -c configure`) with `serverAddress=localhost` and test (`azacsnap -c test --test db2`) AzAcSnap database connectivity.
+
+#### Db2 remote connectivity
+
+If you installed AzAcSnap on a centralized backup system, use the following example setup to allow SSH access to the Db2 database instance.
+
+Log in to the AzAcSnap system as the `azacsnap` user and generate a public/private SSH key pair:
+
+```bash
+ssh-keygen
+```
+
+```output
+Generating public/private rsa key pair.
+Enter file in which to save the key (/home/azacsnap/.ssh/id_rsa):
+Enter passphrase (empty for no passphrase):
+Enter same passphrase again:
+Your identification has been saved in /home/azacsnap/.ssh/id_rsa.
+Your public key has been saved in /home/azacsnap/.ssh/id_rsa.pub.
+The key fingerprint is:
+SHA256:4cr+0yN8/dawBeHtdmlfPnlm1wRMTO/mNYxarwyEFLU azacsnap@db2-02
+The key's randomart image is:
++---[RSA 2048]----+
+| ... o. |
+| . . +. |
+| .. E + o.|
+| .... B..|
+| S. . o *=|
+| . . . o o=X|
+| o. . + .XB|
+| . + + + +oX|
+| ...+ . =.o+|
++----[SHA256]-----+
+```
+
+Get the contents of the public key:
+
+```bash
+cat .ssh/id_rsa.pub
+```
+
+```output
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02
+```
+
+Log in to the IBM Db2 system as the Db2 instance user.
+
+Add the contents of the AzAcSnap user's public key to the Db2 instance user's `authorized_keys` file:
+
+```bash
+echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02" >> ~/.ssh/authorized_keys
+```
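The append shown above adds the key unconditionally, so running it twice duplicates the entry. A slightly more defensive sketch (using a temporary file in place of `~/.ssh/authorized_keys`, and a placeholder key) only appends when the key isn't already present:

```bash
# Idempotent append: add the public key only if it's not already in the file.
key="ssh-rsa AAAAB3Nza...placeholder azacsnap@db2-02"    # placeholder key
authfile="$(mktemp)"                  # stands in for ~/.ssh/authorized_keys
add_key() { grep -qxF "$key" "$authfile" || echo "$key" >> "$authfile"; }
add_key
add_key                               # second call is a no-op
wc -l < "$authfile"
```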
+
+Log in to the AzAcSnap system as the `azacsnap` user and test SSH access:
+
+```bash
+ssh <InstanceUser>@<ServerAddress>
+```
+
+```output
+[InstanceUser@ServerName ~]$
+```
+
+Test that the user can run the `db2` command-line processor:
+
+```bash
+db2
+```
+
+```output
+(c) Copyright IBM Corporation 1993,2007
+Command Line Processor for DB2 Client 11.5.7.0
+
+You can issue database manager commands and SQL statements from the command
+prompt. For example:
+ db2 => connect to sample
+ db2 => bind sample.bnd
+
+For general help, type: ?.
+For command help, type: ? command, where command can be
+the first few keywords of a database manager command. For example:
+ ? CATALOG DATABASE for help on the CATALOG DATABASE command
+ ? CATALOG for help on all of the CATALOG commands.
+
+To exit db2 interactive mode, type QUIT at the command prompt. Outside
+interactive mode, all commands must be prefixed with 'db2'.
+To list the current command option settings, type LIST COMMAND OPTIONS.
+
+For more detailed help, refer to the Online Reference Manual.
+```
+
+```sql
+db2 => quit
+DB20000I The QUIT command completed successfully.
+```
+
+```bash
+[prj@db2-02 ~]$ exit
+```
+
+```output
+logout
+Connection to <serverAddress> closed.
+```
+
+---
+
+## Configure the database
+
+This section explains how to configure the database.
+
+# [SAP HANA](#tab/sap-hana)
+
+### Configure SAP HANA
+
+There are changes that you can apply to SAP HANA to help protect the log backups and catalog. By default, `basepath_logbackup` and `basepath_catalogbackup` are set so that SAP HANA will put related files into the `$(DIR_INSTANCE)/backup/log` directory. It's unlikely that this location is on a volume that AzAcSnap is configured to snapshot, so storage snapshots won't protect these files.
+
+The following `hdbsql` command examples demonstrate setting the log and catalog paths to locations on storage volumes that AzAcSnap can snapshot. Be sure to check that the values on the command line match the local SAP HANA configuration.
+
+### Configure the log backup location
+
+This example shows a change to the `basepath_logbackup` parameter:
+
+```bash
+hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'basepath_logbackup') = '/hana/logbackups/H80' WITH RECONFIGURE"
+```
+
+### Configure the catalog backup location
+
+This example shows a change to the `basepath_catalogbackup` parameter. First, ensure that the `basepath_catalogbackup` path exists on the file system. If not, create the path with the same ownership as the directory.
+
+```bash
+ls -ld /hana/logbackups/H80/catalog
+```
+
+```output
+drwxr-x 4 h80adm sapsys 4096 Jan 17 06:55 /hana/logbackups/H80/catalog
+```
+
+If you need to create the path, the following example creates the path and sets the correct ownership and permissions. You need to run these commands as root.
+
+```bash
+mkdir /hana/logbackups/H80/catalog
+chown --reference=/hana/shared/H80/HDB00 /hana/logbackups/H80/catalog
+chmod --reference=/hana/shared/H80/HDB00 /hana/logbackups/H80/catalog
+ls -ld /hana/logbackups/H80/catalog
+```
+
+```output
+drwxr-x 4 h80adm sapsys 4096 Jan 17 06:55 /hana/logbackups/H80/catalog
+```
+
+The following example changes the SAP HANA setting:
+
+```bash
+hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'basepath_catalogbackup') = '/hana/logbackups/H80/catalog' WITH RECONFIGURE"
+```
+
+### Check log and catalog backup locations
+
+After you make the changes to the log and catalog backup locations, confirm that the settings are correct by using the following command.
+
+In this example, the settings appear as `SYSTEM` settings. This query also returns the `DEFAULT` settings for comparison.
+
+```bash
+hdbsql -jaxC -n <HANA_ip_address> -i 00 -U AZACSNAP "select * from sys.m_inifile_contents where (key = 'basepath_databackup' or key = 'basepath_datavolumes' or key = 'basepath_logbackup' or key = 'basepath_logvolumes' or key = 'basepath_catalogbackup')"
+```
+
+```output
+global.ini,DEFAULT,,,persistence,basepath_catalogbackup,$(DIR_INSTANCE)/backup/log
+global.ini,DEFAULT,,,persistence,basepath_databackup,$(DIR_INSTANCE)/backup/data
+global.ini,DEFAULT,,,persistence,basepath_datavolumes,$(DIR_GLOBAL)/hdb/data
+global.ini,DEFAULT,,,persistence,basepath_logbackup,$(DIR_INSTANCE)/backup/log
+global.ini,DEFAULT,,,persistence,basepath_logvolumes,$(DIR_GLOBAL)/hdb/log
+global.ini,SYSTEM,,,persistence,basepath_catalogbackup,/hana/logbackups/H80/catalog
+global.ini,SYSTEM,,,persistence,basepath_datavolumes,/hana/data/H80
+global.ini,SYSTEM,,,persistence,basepath_logbackup,/hana/logbackups/H80
+global.ini,SYSTEM,,,persistence,basepath_logvolumes,/hana/log/H80
+```
+
+### Configure the log backup timeout
+
+The default setting for SAP HANA to perform a log backup is `900` seconds (15 minutes). We recommend that you reduce this value to `300` seconds (5 minutes). Then it's possible to run regular backups of these files (for example, every 10 minutes). You can take these backups by adding the `log_backup` volumes to the `OTHER` volume section of the
+configuration file.
+
+```bash
+hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_backup_timeout_s') = '300' WITH RECONFIGURE"
+```
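
The regular backups of the `other` volumes mentioned above can then be scheduled, for example with `cron`. The following crontab sketch is illustrative only; the installation path, snapshot prefixes, and retention counts are assumptions to adapt for your site:

```bash
# Hypothetical schedule for the azacsnap user (adjust paths and values):
# data volumes hourly, log-backup ("other") volumes every 10 minutes.
0 * * * * /home/azacsnap/bin/azacsnap -c backup --volume data --prefix hana_hourly --retention 24
*/10 * * * * /home/azacsnap/bin/azacsnap -c backup --volume other --prefix logs_10min --retention 12
```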
+
+### Check the log backup timeout
+
+After you make the change to the log backup timeout, ensure that the timeout is set by using the following command.
+
+In this example, the settings are displayed as `SYSTEM` settings. This query also returns the `DEFAULT` settings for comparison.
+
+```bash
+hdbsql -jaxC -n <HANA_ip_address> -i 00 -U AZACSNAP "select * from sys.m_inifile_contents where key like '%log_backup_timeout%'"
+```
+
+```output
+global.ini,DEFAULT,,,persistence,log_backup_timeout_s,900
+global.ini,SYSTEM,,,persistence,log_backup_timeout_s,300
+```
+
+# [Oracle](#tab/oracle)
+
+Apply the following changes to the Oracle database to allow for monitoring by the database administrator:
+
+1. Set up Oracle alert logging.
+
+ Use the following Oracle SQL commands while you're connected to the database as `SYSDBA` to create a stored procedure under the default Oracle SYSBACKUP database account. These SQL commands allow AzAcSnap to send messages to:
+
+ - Standard output by using the `PUT_LINE` procedure in the `DBMS_OUTPUT` package.
+ - The Oracle database `alert.log` file by using the `KSDWRT` procedure in the `DBMS_SYSTEM` package.
+
+ ```bash
+ sqlplus / As SYSDBA
+ ```
+
+ ```sql
+ GRANT EXECUTE ON DBMS_SYSTEM TO SYSBACKUP;
+ CREATE PROCEDURE sysbackup.azmessage(in_msg IN VARCHAR2)
+ AS
+ v_timestamp VARCHAR2(32);
+ BEGIN
+ SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS')
+ INTO v_timestamp FROM DUAL;
+  SYS.DBMS_OUTPUT.PUT_LINE(v_timestamp || ' - ' || in_msg);
+  SYS.DBMS_SYSTEM.KSDWRT(SYS.DBMS_SYSTEM.ALERT_FILE, in_msg);
+ END azmessage;
+ /
+ SHOW ERRORS
+ QUIT
+ ```
+
+# [IBM Db2](#tab/db2)
+
+No special database configuration is required for Db2 because you're using the instance user's local operating system environment.
++++
+## Next steps
+
+- [Configure storage for Azure Application Consistent Snapshot tool](azacsnap-configure-storage.md)
azure-netapp-files Azacsnap Configure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-configure-storage.md
+
+ Title: Configure storage for Azure Application Consistent Snapshot tool for Azure NetApp Files
+description: Learn how to configure storage for use with the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
++++ Last updated : 05/15/2024+++
+# Configure storage for Azure Application Consistent Snapshot tool
+
+This article provides a guide for configuring the Azure storage to be used with the Azure Application Consistent Snapshot tool (AzAcSnap).
+
+Select the storage you're using with AzAcSnap.
+
+# [Azure NetApp Files](#tab/azure-netapp-files)
+
+Either set up a system-managed identity (recommended) or generate the service principal's authentication file.
+
+When you're validating communication with Azure NetApp Files, communication might fail or time out. Check that firewall rules aren't blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports:
+
+ - (https://)management.azure.com:443
+ - (https://)login.microsoftonline.com:443
+
+# [Azure Large Instances (bare metal)](#tab/azure-large-instance)
+
+You need to generate your own self-signed certificate and then share the contents of the PEM (Privacy Enhanced Mail) file with Microsoft Operations so that it can be installed on the storage back end, allowing AzAcSnap to securely authenticate with ONTAP.
+
+Combine the PEM and KEY files into a single PKCS12 file, which AzAcSnap needs for certificate-based authentication to ONTAP.
+
+Test the PKCS12 file by using `curl` to connect to one of the nodes.
+
+> Microsoft Operations provides the storage username and storage IP address at the time of provisioning.
+++
+## Enable communication with storage
+
+This section explains how to enable communication with storage. Use the following tabs to correctly select the storage back end that you're using.
+
+# [Azure NetApp Files (with virtual machine)](#tab/azure-netapp-files)
+
+There are two ways to authenticate to Azure Resource Manager: by using a system-managed identity or by using a service principal file. Both options are described here.
+
+### Azure system-managed identity
+
+From AzAcSnap 9, it's possible to use a system-managed identity instead of a service principal for operation. Using this feature avoids the need to store service principal credentials on a virtual machine (VM). To set up an Azure managed identity by using Azure Cloud Shell, follow these steps:
+
+1. Within a Cloud Shell session with Bash, use the following example to set the shell variables appropriately and apply them to the subscription where you want to create the Azure managed identity. Set `SUBSCRIPTION`, `VM_NAME`, and `RESOURCE_GROUP` to your site-specific values.
+
+ ```azurecli-interactive
+ export SUBSCRIPTION="99z999zz-99z9-99zz-99zz-9z9zz999zz99"
+ export VM_NAME="MyVM"
+ export RESOURCE_GROUP="MyResourceGroup"
+ export ROLE="Contributor"
+ export SCOPE="/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}"
+ ```
+
+1. Set Cloud Shell to the correct subscription:
+
+ ```azurecli-interactive
+ az account set -s "${SUBSCRIPTION}"
+ ```
+
+1. Create the managed identity for the virtual machine. The following command sets (or shows if it's already set) the AzAcSnap VM's managed identity:
+
+ ```azurecli-interactive
+ az vm identity assign --name "${VM_NAME}" --resource-group "${RESOURCE_GROUP}"
+ ```
+
+1. Get the principal ID for assigning a role:
+
+ ```azurecli-interactive
+ PRINCIPAL_ID=$(az resource list -n ${VM_NAME} --query [*].identity.principalId --out tsv)
+ ```
+
+1. Assign the Contributor role to the principal ID:
+
+ ```azurecli-interactive
+ az role assignment create --assignee "${PRINCIPAL_ID}" --role "${ROLE}" --scope "${SCOPE}"
+ ```
+
+#### Optional RBAC
+
+It's possible to limit the permissions for the managed identity by using a custom role definition in role-based access control (RBAC). Create a suitable role definition for the virtual machine to be able to manage snapshots. You can find example permissions settings in [Tips and tricks for using the Azure Application Consistent Snapshot tool](azacsnap-tips.md).
+
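For example, a custom role definition for this purpose might look like the following sketch. This is illustrative only: the exact permissions list is documented in the tips article, and the role name, permissions, and subscription ID here are placeholders:

```json
{
  "Name": "AzAcSnap on ANF",
  "IsCustom": true,
  "Description": "Illustrative minimal role for AzAcSnap snapshot management.",
  "Actions": [
    "Microsoft.NetApp/*/read",
    "Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/*"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/99z999zz-99z9-99zz-99zz-9z9zz999zz99" ]
}
```

A hypothetical file like this could be registered with `az role definition create --role-definition @azacsnap-role.json` before assigning the role.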
+Then assign the role to the Azure VM principal ID (also displayed as `SystemAssignedIdentity`):
+
+```azurecli-interactive
+az role assignment create --assignee ${PRINCIPAL_ID} --role "AzAcSnap on ANF" --scope "${SCOPE}"
+```
+
+### Generate a service principal file
+
+1. In a Cloud Shell session, make sure you're logged on at the subscription where you want to be associated with the service principal by default:
+
+ ```azurecli-interactive
+ az account show
+ ```
+
+1. If the subscription isn't correct, use the `az account set` command:
+
+ ```azurecli-interactive
+ az account set -s <subscription name or id>
+ ```
+
+1. Create a service principal by using the Azure CLI, as shown in this example:
+
+ ```azurecli-interactive
+ az ad sp create-for-rbac --name "AzAcSnap" --role Contributor --scopes /subscriptions/{subscription-id} --sdk-auth
+ ```
+
+ The command should generate output like this example:
+
+ ```output
+ {
+ "clientId": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
+ "clientSecret": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
+ "subscriptionId": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
+ "tenantId": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
+ "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
+ "resourceManagerEndpointUrl": "https://management.azure.com/",
+ "activeDirectoryGraphResourceId": "https://graph.windows.net/",
+ "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
+ "galleryEndpointUrl": "https://gallery.azure.com/",
+ "managementEndpointUrl": "https://management.core.windows.net/"
+ }
+ ```
+
+ This command automatically assigns the RBAC Contributor role to the service principal at the subscription level. You can narrow down the scope to the specific resource group where your tests will create the resources.
+
+1. Cut and paste the output content into a file called `azureauth.json` that's stored on the same system as the `azacsnap` command. Secure the file with appropriate system permissions.
+
+ Make sure the format of the JSON file is exactly as described in the previous step, with the URLs enclosed in double quotation marks (").
+
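The file format can be sanity-checked before use. The following is a minimal sketch, assuming `python3` is available on the AzAcSnap host; it parses an inline sample with placeholder values and confirms that the keys shown in the earlier output are present (it prints `OK`). To check a real `azureauth.json`, replace the inline sample with `json.load(open("azureauth.json"))`.

```bash
# Minimal sketch (assumes python3): parse an inline sample
# service-principal JSON and confirm the expected keys are present.
# The GUID values are placeholders, not real credentials.
python3 - <<'PY'
import json

sample = """{
  "clientId": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
  "clientSecret": "placeholder-secret",
  "subscriptionId": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
  "tenantId": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/"
}"""

data = json.loads(sample)  # raises ValueError if the text isn't valid JSON
required = {"clientId", "clientSecret", "subscriptionId", "tenantId"}
missing = required - data.keys()
print("OK" if not missing else f"missing keys: {sorted(missing)}")
PY
```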
+# [Azure Large Instances (bare metal)](#tab/azure-large-instance)
+
+> [!IMPORTANT]
+> From AzAcSnap 10, communication with Azure Large Instance storage uses the REST API over HTTPS. Versions prior to AzAcSnap 10 use the CLI over SSH.
+
+### Azure Large Instance REST API over HTTPS
+
+Communication with the storage back end occurs over an encrypted HTTPS channel using certificate-based authentication. The following example steps provide guidance on setup of the PKCS12 certificate for this communication:
+
+1. Generate the PEM and KEY files.
+
+   > The CN equals the SVM username; ask Microsoft Operations for this value.
+
+   In this example, we use `svmadmin01` as the SVM username. Modify it as necessary for your installation.
+
+ ```bash
+ openssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout svmadmin01.key -out svmadmin01.pem -subj "/C=US/ST=WA/L=Redmond/O=MSFT/CN=svmadmin01"
+ ```
+
+ Refer to the following output:
+
+ ```output
+ Generating a RSA private key
+ ........................................................................................................+++++
+ ....................................+++++
+ writing new private key to 'svmadmin01.key'
+ --
+ ```
+
+1. Output the contents of the PEM file.
+
+ The contents of the PEM file are used for adding the client-ca to the SVM.
+
+   > Send the contents of the PEM file to the Microsoft BareMetal Infrastructure (BMI) administrator.
++
+ ```bash
+ cat svmadmin01.pem
+ ```
+
+ ```output
+    -----BEGIN CERTIFICATE-----
+ MIIDgTCCAmmgAwIBAgIUGlEfGBAwSzSFx8s19lsdn9EcXWcwDQYJKoZIhvcNAQEL
+ /zANBgkqhkiG9w0BAQsFAAOCAQEAFkbKiQ3AF1kaiOpl8lt0SGuTwKRBzo55pwqf
+ PmLUFF2sWuG5Yaw4sGPGPgDrkIvU6jcyHpFVsl6e1tUcECZh6lcK0MwFfQZjHwfs
+ MRAwDgYDVQQHDAdSZWRtb25kMQ0wCwYDVQQKDARNU0ZUMRMwEQYDVQQDDApzdm1h
+ ZG1pbjAxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuE6H2/DK9xjz
+ TY1JSYIeArJ3GQnBz7Fw2KBT+Z9dl2kO8p3hjSE/5W1vY+5NLDjEH6HG1xH12QUO
+ y2+NoT2s4KhGgWbHuJHpQqLsNFqaOuLyc3ofK7BPz/9JHz5JKmNu1Fn9Ql8s4FRQ
+ 4GzXDf4qC+JhQBO3iSvXuwDRfGs9Ja2x1r8yOJEHxmnLgGVw6Q==
+    -----END CERTIFICATE-----
+ ```
+
+1. Combine the PEM and KEY into a single PKCS12 file (needed for AzAcSnap).
+
+ ```bash
+ openssl pkcs12 -export -out svmadmin01.p12 -inkey svmadmin01.key -in svmadmin01.pem
+ ```
+
+   > The file `svmadmin01.p12` is used as the value of `certificateFile` in the `aliStorageResource` section of the AzAcSnap configuration file.
+
+1. Test the PKCS12 file using curl.
+
+   After Microsoft Operations confirms that the certificate has been applied to the SVM to allow certificate-based login, test connectivity to the SVM.
+
+   In this example, we use the PKCS12 file called `svmadmin01.p12` to connect to the SVM host "X.X.X.X" (Microsoft Operations provides this IP address).
+
+ ```bash
+ curl --cert-type P12 --cert svmadmin01.p12 -k 'https://X.X.X.X/api/cluster?fields=version'
+ ```
+
+ ```output
+ {
+ "version": {
+ "full": "NetApp Release 9.15.1: Wed Feb 21 05:56:27 UTC 2024",
+ "generation": 9,
+ "major": 15,
+ "minor": 1
+ },
+ "_links": {
+ "self": {
+ "href": "/api/cluster"
+ }
+ }
+ }
+ ```
+
+### Azure Large Instance CLI over SSH
+
+> [!WARNING]
+> These instructions apply to versions prior to AzAcSnap 10, and this section of the content is no longer updated regularly.
+
+Communication with the storage back end occurs over an encrypted SSH channel. The following example steps provide guidance on setup of SSH for this communication:
+
+1. Modify the `/etc/ssh/ssh_config` file.
+
+ Refer to the following output, which includes the `MACs hmac-sha` line:
+
+ ```output
+ # RhostsRSAAuthentication no
+ # RSAAuthentication yes
+ # PasswordAuthentication yes
+ # HostbasedAuthentication no
+ # GSSAPIAuthentication no
+ # GSSAPIDelegateCredentials no
+ # GSSAPIKeyExchange no
+ # GSSAPITrustDNS no
+ # BatchMode no
+ # CheckHostIP yes
+ # AddressFamily any
+ # ConnectTimeout 0
+ # StrictHostKeyChecking ask
+ # IdentityFile ~/.ssh/identity
+ # IdentityFile ~/.ssh/id_rsa
+ # IdentityFile ~/.ssh/id_dsa
+ # Port 22
+ Protocol 2
+ # Cipher 3des
+    # Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
+ # MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd
+ MACs hmac-sha
+ # EscapeChar ~
+ # Tunnel no
+ # TunnelDevice any:any
+ # PermitLocalCommand no
+ # VisualHostKey no
+ # ProxyCommand ssh -q -W %h:%p gateway.example.com
+ ```
+
+1. Use the following example command to generate a private/public key pair. Don't enter a password when you're generating a key.
+
+ ```bash
+    ssh-keygen -t rsa -b 5120 -C ""
+ ```
+
+1. The output of the `cat /root/.ssh/id_rsa.pub` command is the public key. Send it to Microsoft Operations, so that the snapshot tools can communicate with the storage subsystem.
+
+ ```bash
+ cat /root/.ssh/id_rsa.pub
+ ```
+
+ ```output
+ ssh-rsa
+ AAAAB3NzaC1yc2EAAAADAQABAAABAQDoaRCgwn1Ll31NyDZy0UsOCKcc9nu2qdAPHdCzleiTWISvPW
+ FzIFxz8iOaxpeTshH7GRonGs9HNtRkkz6mpK7pCGNJdxS4wJC9MZdXNt+JhuT23NajrTEnt1jXiVFH
+ bh3jD7LjJGMb4GNvqeiBExyBDA2pXdlednOaE4dtiZ1N03Bc/J4TNuNhhQbdsIWZsqKt9OPUuTfD
+ j0XvwUTLQbR4peGNfN1/cefcLxDlAgI+TmKdfgnLXIsSfbacXoTbqyBRwCi7p+bJnJD07zSc9YCZJa
+ wKGAIilSg7s6Bq/2lAPDN1TqwIF8wQhAg2C7yeZHyE/ckaw/eQYuJtN+RNBD
+ ```
++++++
+## Next steps
+
+- [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md)
azure-netapp-files Azacsnap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-get-started.md
Previously updated : 03/03/2022 Last updated : 05/15/2024
This article provides a guide for installing the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
-## Getting the snapshot tools
-
-It's recommended customers get the most recent version of the [AzAcSnap Installer](https://aka.ms/azacsnapinstaller) from Microsoft.
-
-The self-installation file has an associated [AzAcSnap Installer signature file](https://aka.ms/azacsnapdownloadsignature). This file is signed with Microsoft's public key to allow for GPG verification of the downloaded installer.
-
-Once these downloads are completed, then follow the steps in this guide to install.
-
-### Verifying the download
-
-The installer has an associated PGP signature file with an `.asc` filename extension. This file can be used to ensure the installer downloaded is a verified
-Microsoft provided file. The [Microsoft PGP Public Key used for signing Linux packages](https://packages.microsoft.com/keys/microsoft.asc) has been used to sign the signature file.
-
-The Microsoft PGP Public Key can be imported to a user's local as follows:
-
-```bash
-wget https://packages.microsoft.com/keys/microsoft.asc
-gpg --import microsoft.asc
-```
-
-The following commands trust the Microsoft PGP Public Key:
-
-1. List the keys in the store.
-2. Edit the Microsoft key.
-3. Check the fingerprint with `fpr`.
-4. Sign the key to trust it.
-
-```bash
-gpg --list-keys
-```
-
-Listed keys:
-```output
--<snip>-
-pub rsa2048 2015- 10 - 28 [SC]
-BC528686B50D79E339D3721CEB3E94ADBE1229CF
-uid [ unknown] Microsoft (Release signing) gpgsecurity@microsoft.com
-```
-
-```bash
-gpg --edit-key gpgsecurity@microsoft.com
-```
-
-Output from interactive `gpg` session signing Microsoft public key:
-```output
-gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
-This is free software: you are free to change and redistribute it.
-There is NO WARRANTY, to the extent permitted by law.
-pub rsa2048/EB3E94ADBE1229CF
-created: 2015- 10 - 28 expires: never usage: SC
-trust: unknown validity: unknown
-[ unknown] (1). Microsoft (Release signing) <gpgsecurity@microsoft.com>
-
-gpg> fpr
-pub rsa2048/EB3E94ADBE1229CF 2015- 10 - 28 Microsoft (Release signing)
-<gpgsecurity@microsoft.com>
-Primary key fingerprint: BC52 8686 B50D 79E3 39D3 721C EB3E 94AD BE12 29CF
-
-gpg> sign
-pub rsa2048/EB3E94ADBE1229CF
-created: 2015- 10 - 28 expires: never usage: SC
-trust: unknown validity: unknown
-Primary key fingerprint: BC52 8686 B50D 79E3 39D3 721C EB3E 94AD BE12 29CF
-Microsoft (Release signing) <gpgsecurity@microsoft.com>
-Are you sure that you want to sign this key with your
-key "XXX XXXX <xxxxxxx@xxxxxxxx.xxx>" (A1A1A1A1A1A1)
-Really sign? (y/N) y
-
-gpg> quit
-Save changes? (y/N) y
-```
-
-The PGP signature file for the installer can be checked as follows:
-
-```bash
-gpg --verify azacsnap_installer_v5.0.run.asc azazsnap_installer_v5.0.run
-```
-
-```output
-gpg: Signature made Sat 13 Apr 2019 07:51:46 AM STD
-gpg: using RSA key EB3E94ADBE1229CF
-gpg: Good signature from "Microsoft (Release signing)
-<gpgsecurity@microsoft.com>" [full]
-```
-
-For more information about using GPG, see [The GNU Privacy Handbook](https://www.gnupg.org/gph/en/manual/book1.html).
-
-## Supported scenarios
-
-The snapshot tools can be used in the following [Supported scenarios for HANA Large Instances](../virtual-machines/workloads/sap/hana-supported-scenario.md) and [SAP HANA with Azure NetApp Files](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md).
-
-## Snapshot Support Matrix from SAP
-
-The following matrix is provided as a guideline on which versions of SAP HANA are supported by SAP for Storage Snapshot Backups.
-
-
-
-| Database type | Minimum database versions | Notes |
-|||--|
-| Single Container Database | 1.0 SPS 12, 2.0 SPS 00 | |
-| MDC Single Tenant | [2.0 SPS 01](https://help.sap.com/docs/SAP_HANA_PLATFORM/42668af650f84f9384a3337bcd373692/2194a981ea9e48f4ba0ad838abd2fb1c.html?version=2.0.01&locale=en-US) | or later versions where MDC Single Tenant supported by SAP for storage/data snapshots.* |
-| MDC Multiple Tenants | [2.0 SPS 04](https://help.sap.com/docs/SAP_HANA_PLATFORM/42668af650f84f9384a3337bcd373692/7910eb4a498246b1b0435a4e9bf938d1.html?version=2.0.04&locale=en-US) | or later where MDC Multiple Tenants supported by SAP for data snapshots. |
-> \* [SAP changed terminology from Storage Snapshots to Data Snapshots from 2.0 SPS 02](https://help.sap.com/docs/SAP_HANA_PLATFORM/42668af650f84f9384a3337bcd373692/7f203cf75ae4445d96ad0012c67c0480.html?version=2.0.02&locale=en-US)
----
+## Installation and setup workflow for AzAcSnap
+
+This workflow provides the main steps to install, set up, and configure AzAcSnap along with your chosen database and storage option.
+
+**Steps**:
+1. [Install AzAcSnap](azacsnap-installation.md)
+1. [Configure Database](azacsnap-configure-database.md)
+ 1. [SAP HANA](azacsnap-configure-database.md?tabs=sap-hana)
+ 1. [Oracle DB](azacsnap-configure-database.md?tabs=oracle)
+ 1. [IBM Db2](azacsnap-configure-database.md?tabs=db2)
+ 1. Microsoft SQL Server (PREVIEW)
+1. [Configure Storage](azacsnap-configure-storage.md)
+ 1. [Azure NetApp Files](azacsnap-configure-storage.md?tabs=azure-netapp-files)
+ 1. [Azure Large Instance](azacsnap-configure-storage.md?tabs=azure-large-instance)
+ 1. Azure Managed Disk (PREVIEW)
+1. [Configure AzAcSnap](azacsnap-cmd-ref-configure.md)
+1. [Test AzAcSnap](azacsnap-cmd-ref-test.md)
+1. [Take a backup with AzAcSnap](azacsnap-cmd-ref-backup.md)
+
+## Technical articles
+
+The following technical articles describe how to set up AzAcSnap as part of a data protection strategy:
+
+- [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161)
+- [Manual Recovery Guide for SAP HANA on Azure Large Instance from storage snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-large-instance-from/ba-p/3242347)
+- [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)
+- [Manual Recovery Guide for SAP Db2 on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-db2-on-azure-vms-from-azure-netapp/ba-p/3865379)
+- [SAP Oracle 19c System Refresh Guide on Azure VMs using Azure NetApp Files Snapshots with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-oracle-19c-system-refresh-guide-on-azure-vms-using-azure/ba-p/3708172)
+- [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620)
+- [Automating SAP system copy operations with Libelle SystemCopy](https://docs.netapp.com/us-en/netapp-solutions-sap/lifecycle/libelle-sc-overview.html)
+
+## Get command help
+
+To see a list of commands and examples, type `azacsnap -h` and then press the ENTER key.
+
+The general format of the commands is:
+`azacsnap -c [command] --[command] [sub-command] --[flag-name] [flag-value]`.
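
For example, substituting a command and flags into this format gives a typical backup invocation. The following sketch only composes and prints the command line; the `daily` prefix and retention of `9` are illustrative values:

```bash
# Compose a typical AzAcSnap invocation following the general format.
# The --prefix and --retention values here are examples only.
azacsnap_cmd="azacsnap -c backup --volume data --prefix daily --retention 9"
echo "$azacsnap_cmd"
```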
+
+### Command options
+
+The command options are as follows. The main bullets are commands, and the indented bullets are subcommands.
+
+- `-h` provides extended command-line help with examples on AzAcSnap usage.
+- [`-c configure`](azacsnap-cmd-ref-configure.md) provides an interactive Q&A style interface to create or modify the `azacsnap` configuration file (default = `azacsnap.json`).
+ - `--configuration new` creates a new configuration file.
+ - `--configuration edit` enables editing an existing configuration file.
+- [`-c test`](azacsnap-cmd-ref-test.md) validates the configuration file and tests connectivity.
+ - `--test <DbType>`, where DbType is one of `hana`, `oracle`, or `db2`, tests the connection to the specified database.
+ - `--test storage` tests communication with the underlying storage interface by creating a temporary storage snapshot on all the configured `data` volumes, and then removing them.
+ - `--test all` performs both the `hana` and `storage` tests in sequence.
+- [`-c backup`](azacsnap-cmd-ref-backup.md) is the primary command to execute database-consistent storage snapshots for SAP HANA data volumes and for other (for example, shared, log backup, or boot) volumes.
+ - `--volume data` takes a snapshot of all the volumes in the `dataVolume` stanza of the configuration file.
+ - `--volume other` takes a snapshot of all the volumes in the `otherVolume` stanza of the configuration file.
+ - `--volume all` takes a snapshot of all the volumes in the `dataVolume` stanza and then all the volumes in the `otherVolume` stanza of the configuration file.
+- [`-c details`](azacsnap-cmd-ref-details.md) provides information on snapshots or replication.
+ - `--details snapshots` (optional) provides a list of basic details about the snapshots for each volume that you configured.
+ - `--details replication` (optional) provides basic details about the replication status from the production site to the disaster-recovery site.
+- [`-c delete`](azacsnap-cmd-ref-delete.md) deletes a storage snapshot or a set of snapshots.
+- [`-c restore`](azacsnap-cmd-ref-restore.md) provides two methods to restore a snapshot to a volume.
+ - `--restore snaptovol` creates a new volume based on the latest snapshot on the target volume.
+ - `-c restore --restore revertvolume` reverts the target volume to a prior state, based on the most recent snapshot.
+- `[--configfile <configfilename>]` is an optional command-line parameter to provide a different file name for the JSON configuration. It's useful for creating a separate configuration file per security ID (for example, `--configfile H80.json`).
+- [`[--runbefore]` and `[--runafter]`](azacsnap-cmd-ref-runbefore-runafter.md) are optional commands to run external commands or shell scripts before and after the execution of the main AzAcSnap logic.
+- `[--preview]` is an optional command-line option that's required when you're using preview features.
+
+ For more information, see [Preview features of the Azure Application Consistent Snapshot tool](azacsnap-preview.md).
## Important things to remember

-- After the setup of the snapshot tools, continuously monitor the storage space available and if
- necessary, delete the old snapshots on a regular basis to avoid storage fill out.
+- After the setup of the snapshot tools, continuously monitor the storage space available and, if necessary, delete the old snapshots on a regular basis to avoid running out of storage capacity.
- Always use the latest snapshot tools.
-- Use the same version of the snapshot tools across the landscape.
- Test the snapshot tools to understand the parameters required and their behavior, along with the log files, before deployment into production.
-- When setting up the HANA user for backup, you need to set up the user for each HANA instance. Create an SAP HANA user account to access HANA
- instance under the SYSTEMDB (and not in the SID database) for MDC. In the single container environment, it can be set up under the tenant database.
-- Customers must provide the SSH public key for storage access. This action must be done once per node and for each user under which the command is executed.
-- The number of snapshots per volume is limited to 250.
-- If manually editing the configuration file, always use a Linux text editor such as "vi" and not Windows editors like Notepad. Using Windows editor may corrupt the file format.
-- Set up `hdbuserstore` for the SAP HANA user to communicate with SAP HANA.
-- For DR: The snapshot tools must be tested on DR node before DR is set up.
-- Monitor disk space regularly
- - Automated log deletion is managed with the `--trim` option of the `azacsnap -c backup` for SAP HANA 2 and later releases.
-- **Risk of snapshots not being taken** - The snapshot tools only interact with the node of the SAP HANA system specified in the configuration file. If this
- node becomes unavailable, there's no mechanism to automatically start communicating with another node.
- - For an **SAP HANA Scale-Out with Standby** scenario it's typical to install and configure the snapshot tools on the primary node. But, if the primary node becomes
- unavailable, the standby node will take over the primary node role. In this case, the implementation team should configure the snapshot tools on both
- nodes (Primary and Stand-By) to avoid any missed snapshots. In the normal state, the primary node will take HANA snapshots initiated by crontab. If the primary
- node fails over those snapshots will have to be executed from another node, such as the new primary node (former standby). To achieve this outcome, the standby
- node would need the snapshot tool installed, storage communication enabled, hdbuserstore configured, `azacsnap.json` configured, and crontab commands staged
- in advance of the failover.
- - For an **SAP HANA HSR HA** scenario, it's recommended to install, configure, and schedule the snapshot tools on both (Primary and Secondary) nodes. Then, if
- the Primary node becomes unavailable, the Secondary node will take over with snapshots being taken on the Secondary. In the normal state, the Primary node
- will take HANA snapshots initiated by crontab. The Secondary node would attempt to take snapshots but fail as the Primary is functioning correctly. But,
- after Primary node failover, those snapshots will be executed from the Secondary node. To achieve this outcome, the Secondary node needs the snapshot tool
- installed, storage communication enabled, `hdbuserstore` configured, `azacsnap.json` configured, and crontab enabled in advance of the failover.
## Guidance provided in this document
The following guidance is provided to illustrate the usage of the snapshot tools
### Taking snapshot backups - [What are the prerequisites for the storage snapshot](azacsnap-installation.md#prerequisites-for-installation)
- - [Enable communication with storage](azacsnap-installation.md#enable-communication-with-storage)
- - [Enable communication with database](azacsnap-installation.md#enable-communication-with-the-database)
+ - [Enable communication with storage](azacsnap-configure-storage.md#enable-communication-with-storage)
+ - [Enable communication with database](azacsnap-configure-database.md#enable-communication-with-the-database)
- [How to take snapshots manually](azacsnap-tips.md#take-snapshots-manually) - [How to set up automatic snapshot backup](azacsnap-tips.md#setup-automatic-snapshot-backup) - [How to monitor the snapshots](azacsnap-tips.md#monitor-the-snapshots)
The following guidance is provided to illustrate the usage of the snapshot tools
- [How to restore a `boot` snapshot](azacsnap-tips.md#restore-a-boot-snapshot) - [What are key facts to know about the snapshots](azacsnap-tips.md#key-facts-to-know-about-snapshots)
-> Snapshots are tested for both single SID and multi SID.
### Performing disaster recovery

- [What are the prerequisites for DR setup](azacsnap-disaster-recovery.md#prerequisites-for-disaster-recovery-setup)
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-installation.md
Previously updated : 08/21/2023 Last updated : 05/15/2024
This article provides a guide for installation of the Azure Application Consistent Snapshot tool (AzAcSnap).
> [!IMPORTANT]
> Distributed installations are the only option for Azure Large Instances systems, because they're deployed in a private network. You must install AzAcSnap on each system to ensure connectivity.
-The downloadable self-installer makes the snapshot tools easy to set up and run with non-root user privileges (for example, `azacsnap`). The installer sets up the user and puts the snapshot tools into the user's `$HOME/bin` subdirectory. The default is `/home/azacsnap/bin`.
+AzAcSnap 10 supports more databases and operating systems, so a self-installer is no longer provided.
-The self-installer tries to determine the correct settings and paths for all the files based on the configuration of the user who's performing the installation (for example, root). If the prerequisite steps to enable communication with storage and SAP HANA are run as root, the installation copies the private key and `hdbuserstore` to the backup user's location. A knowledgeable administrator can manually take the steps to enable communication with the storage back end and SAP HANA after the installation.
+## Download AzAcSnap
-## Prerequisites for installation
-
-Follow the guidelines to set up and run the snapshots and disaster-recovery commands. We recommend that you complete the following steps as root before you install and use the snapshot tools:
-
-1. Patch the operating system and set up SUSE Subscription Management Tool (SMT). For more information, see [Install and configure SAP HANA (Large Instances) on Azure](../virtual-machines/workloads/sap/hana-installation.md#operating-system).
-1. Set up time synchronization. Provide a time server that's compatible with the Network Time Protocol (NTP), and configure the operating system accordingly.
-1. Install the database. Follow the instructions for the supported database that you're using.
-1. Select the storage back end that you're using for your deployment. For more information, see [Enable communication with storage](#enable-communication-with-storage) later in this article.
+First, download the AzAcSnap executable to any directory on your computer. AzAcSnap is a standalone executable, so there's nothing to install.
- # [Azure NetApp Files](#tab/azure-netapp-files)
+- [Linux x86-64](https://aka.ms/azacsnap-linux) (binary)
+ - The Linux binary has an associated [Linux signature file](https://aka.ms/azacsnap-linux-signature). The binary is signed by Microsoft; use the signature file and Microsoft's public key for GPG verification of the download.
- Either set up a system-managed identity or generate the service principal's authentication file.
+ > [!IMPORTANT]
+ > The installer is no longer available for Linux. Follow the [installation guidelines](azacsnap-installation.md) to set up the user's profile to run AzAcSnap and its dependencies.
- When you're validating communication with Azure NetApp Files, communication might fail or time out. Check that firewall rules aren't blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports:
+- [Windows 64-bit](https://aka.ms/azacsnap-windows) (executable)
+ - The Windows binary is signed by Microsoft.
- - (https://)management.azure.com:443
- - (https://)login.microsoftonline.com:443
+After the download is complete, [install the Azure Application Consistent Snapshot tool](azacsnap-installation.md).
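As a sketch, the Linux binary and its signature file can be fetched and checked from the command line. The saved file names are illustrative assumptions; the aka.ms links are the ones listed above, and this assumes Microsoft's public key has already been imported into your GPG keyring:

```bash
# Download the binary and its detached signature file (links from this article).
wget -O azacsnap https://aka.ms/azacsnap-linux
wget -O azacsnap.asc https://aka.ms/azacsnap-linux-signature

# Verify the detached signature against the binary, then make it executable.
gpg --verify azacsnap.asc azacsnap
chmod +x azacsnap
./azacsnap --version
```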
- # [Azure Large Instances (bare metal)](#tab/azure-large-instance)
-
- Generate a Secure Shell (SSH) private/public key pair. For each node where you'll run the snapshot tools, provide the generated public key to Microsoft Operations so it can install on the storage back end.
- Test connectivity by using SSH to connect to one of the nodes (for example, `ssh -l <Storage Username> <Storage IP Address>`). Enter `exit` to log out of the storage prompt.
+## Prerequisites for installation
- Microsoft Operations provides the storage username and storage IP address at the time of provisioning.
+Follow the guidelines to set up and run the snapshots and disaster-recovery commands. We recommend that you complete the following steps as root before you install and use the snapshot tools:
-
+1. Patch the operating system.
+ 1. For SUSE on Azure Large Instances, set up SUSE Subscription Management Tool (SMT). For more information, see [Install and configure SAP HANA (Large Instances) on Azure](../virtual-machines/workloads/sap/hana-installation.md#operating-system).
+1. Set up time synchronization. Provide a time server that's compatible with the Network Time Protocol (NTP), and configure the operating system accordingly.
+1. Install the database. Follow the instructions for the supported database that you're using.
+1. Select the storage back end that you're using for your deployment. For more information, see [Enable communication with storage](azacsnap-configure-storage.md#enable-communication-with-storage) in the storage configuration document.
-1. Enable communication with the database. For more information, see [Enable communication with the database](#enable-communication-with-the-database) later in this article.
+1. Enable communication with the database. For more information, see [Enable communication with the database](azacsnap-configure-database.md#enable-communication-with-the-database) in the database configuration document.
# [SAP HANA](#tab/sap-hana)
- Set up an appropriate SAP HANA user by following the instructions in the [Enable communication with the database](#enable-communication-with-the-database) section of this article.
+ Set up an appropriate SAP HANA user by following the instructions in the section to [enable communication with the database](azacsnap-configure-database.md#enable-communication-with-the-database) in the database configuration document.
After setup, you can test the connection from the command line by using the following examples. The following examples are for non-SSL communication to SAP HANA.
# [Oracle](#tab/oracle)
- Set up an appropriate Oracle database and Oracle wallet by following the instructions in the [Enable communication with the database](#enable-communication-with-the-database) section of this article.
+ Set up an appropriate Oracle database and Oracle wallet by following the instructions in the section to [enable communication with the database](azacsnap-configure-database.md#enable-communication-with-the-database) in the database configuration document.
After setup, you can test the connection from the command line by using the following example:
# [IBM Db2](#tab/db2)
- Set up an appropriate IBM Db2 connection method by following the instructions in the [Enable communication with the database](#enable-communication-with-the-database) section of this article.
+ Set up an appropriate IBM Db2 connection method by following the instructions in the section to [enable communication with the database](azacsnap-configure-database.md#enable-communication-with-the-database) in the database configuration document.
After setup, test the connection from the command line by using the following examples:
- - Install onto the database server, and then complete the setup with [Db2 local connectivity](#db2-local-connectivity):
+ - Install onto the database server, and then complete the setup with [Db2 local connectivity](azacsnap-configure-database.md#db2-local-connectivity):
`db2 "QUIT"`
- - Install onto a centralized backup system, and then complete the setup with [Db2 remote connectivity](#db2-remote-connectivity):
+ - Install onto a centralized backup system, and then complete the setup with [Db2 remote connectivity](azacsnap-configure-database.md#db2-remote-connectivity):
`ssh <InstanceUser>@<ServerAddress> 'db2 "QUIT"'`
```
-
-## Enable communication with storage
-
-This section explains how to enable communication with storage. Use the following tabs to correctly select the storage back end that you're using.
-
-# [Azure NetApp Files (with virtual machine)](#tab/azure-netapp-files)
-
-There are two ways to authenticate to the Azure Resource Manager using either a system-managed identity or a service principal file. The options are described here.
-
-### Azure system-managed identity
-
-From AzAcSnap 9, it's possible to use a system-managed identity instead of a service principal for operation. Using this feature avoids the need to store service principal credentials on a virtual machine (VM). To set up an Azure managed identity by using Azure Cloud Shell, follow these steps:
-
-1. Within a Cloud Shell session with Bash, use the following example to set the shell variables appropriately and apply them to the subscription where you want to create the Azure managed identity. Set `SUBSCRIPTION`, `VM_NAME`, and `RESOURCE_GROUP` to your site-specific values.
-
- ```azurecli-interactive
- export SUBSCRIPTION="99z999zz-99z9-99zz-99zz-9z9zz999zz99"
- export VM_NAME="MyVM"
- export RESOURCE_GROUP="MyResourceGroup"
- export ROLE="Contributor"
- export SCOPE="/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}"
- ```
-
-1. Set Cloud Shell to the correct subscription:
-
- ```azurecli-interactive
- az account set -s "${SUBSCRIPTION}"
- ```
-
-1. Create the managed identity for the virtual machine. The following command sets (or shows if it's already set) the AzAcSnap VM's managed identity:
-
- ```azurecli-interactive
- az vm identity assign --name "${VM_NAME}" --resource-group "${RESOURCE_GROUP}"
- ```
-
-1. Get the principal ID for assigning a role:
-
- ```azurecli-interactive
- PRINCIPAL_ID=$(az resource list -n ${VM_NAME} --query [*].identity.principalId --out tsv)
- ```
-
-1. Assign the Contributor role to the principal ID:
-
- ```azurecli-interactive
- az role assignment create --assignee "${PRINCIPAL_ID}" --role "${ROLE}" --scope "${SCOPE}"
- ```
-
-#### Optional RBAC
-
-It's possible to limit the permissions for the managed identity by using a custom role definition in role-based access control (RBAC). Create a suitable role definition for the virtual machine to be able to manage snapshots. You can find example permissions settings in [Tips and tricks for using the Azure Application Consistent Snapshot tool](azacsnap-tips.md).
-
-Then assign the role to the Azure VM principal ID (also displayed as `SystemAssignedIdentity`):
-
-```azurecli-interactive
-az role assignment create --assignee ${PRINCIPAL_ID} --role "AzAcSnap on ANF" --scope "${SCOPE}"
-```
-
-### Generate a service principal file
-
-1. In a Cloud Shell session, make sure you're logged on at the subscription where you want to be associated with the service principal by default:
-
- ```azurecli-interactive
- az account show
- ```
-
-1. If the subscription isn't correct, use the `az account set` command:
-
- ```azurecli-interactive
- az account set -s <subscription name or id>
- ```
-
-1. Create a service principal by using the Azure CLI, as shown in this example:
-
- ```azurecli-interactive
- az ad sp create-for-rbac --name "AzAcSnap" --role Contributor --scopes /subscriptions/{subscription-id} --sdk-auth
- ```
-
- The command should generate output like this example:
-
- ```output
- {
- "clientId": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
- "clientSecret": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
- "subscriptionId": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
- "tenantId": "00aa000a-aaaa-0000-00a0-00aa000aaa0a",
- "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
- "resourceManagerEndpointUrl": "https://management.azure.com/",
- "activeDirectoryGraphResourceId": "https://graph.windows.net/",
- "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
- "galleryEndpointUrl": "https://gallery.azure.com/",
- "managementEndpointUrl": "https://management.core.windows.net/"
- }
- ```
-
- This command automatically assigns the RBAC Contributor role to the service principal at the subscription level. You can narrow down the scope to the specific resource group where your tests will create the resources.
-
-1. Cut and paste the output content into a file called `azureauth.json` that's stored on the same system as the `azacsnap` command. Secure the file with appropriate system permissions.
-
- Make sure the format of the JSON file is exactly as described in the previous step, with the URLs enclosed in double quotation marks (").
-
-# [Azure Large Instances (bare metal)](#tab/azure-large-instance)
-
-Communication with the storage back end occurs over an encrypted SSH channel. The following example steps provide guidance on setup of SSH for this communication:
-
-1. Modify the `/etc/ssh/ssh_config` file.
-
- Refer to the following output, which includes the `MACs hmac-sha` line:
-
- ```output
- # RhostsRSAAuthentication no
- # RSAAuthentication yes
- # PasswordAuthentication yes
- # HostbasedAuthentication no
- # GSSAPIAuthentication no
- # GSSAPIDelegateCredentials no
- # GSSAPIKeyExchange no
- # GSSAPITrustDNS no
- # BatchMode no
- # CheckHostIP yes
- # AddressFamily any
- # ConnectTimeout 0
- # StrictHostKeyChecking ask
- # IdentityFile ~/.ssh/identity
- # IdentityFile ~/.ssh/id_rsa
- # IdentityFile ~/.ssh/id_dsa
- # Port 22
- Protocol 2
- # Cipher 3des
- # Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
- # MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd
- MACs hmac-sha
- # EscapeChar ~
- # Tunnel no
- # TunnelDevice any:any
- # PermitLocalCommand no
- # VisualHostKey no
- # ProxyCommand ssh -q -W %h:%p gateway.example.com
- ```
-
-1. Use the following example command to generate a private/public key pair. Don't enter a password when you're generating a key.
-
- ```bash
- ssh-keygen -t rsa -b 5120 -C ""
- ```
-
-1. The output of the `cat /root/.ssh/id_rsa.pub` command is the public key. Send it to Microsoft Operations, so that the snapshot tools can communicate with the storage subsystem.
-
- ```bash
- cat /root/.ssh/id_rsa.pub
- ```
-
- ```output
- ssh-rsa
- AAAAB3NzaC1yc2EAAAADAQABAAABAQDoaRCgwn1Ll31NyDZy0UsOCKcc9nu2qdAPHdCzleiTWISvPW
- FzIFxz8iOaxpeTshH7GRonGs9HNtRkkz6mpK7pCGNJdxS4wJC9MZdXNt+JhuT23NajrTEnt1jXiVFH
- bh3jD7LjJGMb4GNvqeiBExyBDA2pXdlednOaE4dtiZ1N03Bc/J4TNuNhhQbdsIWZsqKt9OPUuTfD
- j0XvwUTLQbR4peGNfN1/cefcLxDlAgI+TmKdfgnLXIsSfbacXoTbqyBRwCi7p+bJnJD07zSc9YCZJa
- wKGAIilSg7s6Bq/2lAPDN1TqwIF8wQhAg2C7yeZHyE/ckaw/eQYuJtN+RNBD
- ```
-----
-## Enable communication with the database
-
-This section explains how to enable communication with the database. Use the following tabs to correctly select the database that you're using.
-
-# [SAP HANA](#tab/sap-hana)
-
-If you're deploying to a centralized virtual machine, you need to install and set up the SAP HANA client so that the AzAcSnap user can run `hdbsql` and `hdbuserstore` commands. You can download the SAP HANA client from the [SAP Development Tools website](https://tools.hana.ondemand.com/#hanatools).
-
-The snapshot tools communicate with SAP HANA and need a user with appropriate permissions to initiate and release the database save point. The following example shows the setup of the SAP HANA 2.0 user and `hdbuserstore` for communication to the SAP HANA database.
-
-The following example commands set up a user (`AZACSNAP`) in SYSTEMDB on an SAP HANA 2.0 database. Change the IP address, usernames, and passwords as appropriate.
-
-1. Connect to SYSTEMDB:
-
- ```bash
- hdbsql -n <IP_address_of_host>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD>
- ```
-
- ```output
- Welcome to the SAP HANA Database interactive terminal.
-
- Type: \h for help with commands
- \q to quit
-
- hdbsql SYSTEMDB=>
- ```
-
-1. Create the user. This example creates the `AZACSNAP` user in SYSTEMDB:
-
- ```sql
- hdbsql SYSTEMDB=> CREATE USER AZACSNAP PASSWORD <AZACSNAP_PASSWORD_CHANGE_ME> NO FORCE_FIRST_PASSWORD_CHANGE;
- ```
-
-1. Grant the user permissions. This example sets the permission for the `AZACSNAP` user to allow for performing a database-consistent storage snapshot:
-
- - For SAP HANA releases up to version 2.0 SPS 03:
-
- ```sql
- hdbsql SYSTEMDB=> GRANT BACKUP ADMIN, CATALOG READ TO AZACSNAP;
- ```
-
- - For SAP HANA releases from version 2.0 SPS 04, SAP added new fine-grained privileges:
-
- ```sql
- hdbsql SYSTEMDB=> GRANT BACKUP ADMIN, DATABASE BACKUP ADMIN, CATALOG READ TO AZACSNAP;
- ```
-
-1. *Optional*: Prevent the user's password from expiring.
-
- > [!NOTE]
- > Check with corporate policy before you make this change.
-
- The following example disables the password expiration for the `AZACSNAP` user. Without this change, the user's password could expire and prevent snapshots from being taken correctly.
-
- ```sql
- hdbsql SYSTEMDB=> ALTER USER AZACSNAP DISABLE PASSWORD LIFETIME;
- ```
-
-1. Set up the SAP HANA Secure User Store (change the password). This example uses the `hdbuserstore` command from the Linux shell to set up the SAP HANA Secure User Store:
-
- ```bash
- hdbuserstore Set AZACSNAP <IP_address_of_host>:30013 AZACSNAP <AZACSNAP_PASSWORD_CHANGE_ME>
- ```
-
-1. Check that you correctly set up the SAP HANA Secure User Store. Use the `hdbuserstore` command to list the output, similar to the following example. More details on using `hdbuserstore` are available on the SAP website.
-
- ```bash
- hdbuserstore List
- ```
-
- ```output
- DATA FILE : /home/azacsnap/.hdb/sapprdhdb80/SSFS_HDB.DAT
- KEY FILE : /home/azacsnap/.hdb/sapprdhdb80/SSFS_HDB.KEY
-
- KEY AZACSNAP
- ENV : <IP_address_of_host>:
- USER: AZACSNAP
- ```
-
-### Using SSL for communication with SAP HANA
-
-AzAcSnap uses SAP HANA's `hdbsql` command to communicate with SAP HANA. Using `hdbsql` allows the use of SSL options to encrypt communication with SAP HANA.
-
-AzAcSnap always uses the following options when you're using the `azacsnap --ssl` option:
-
-- `-e`: Enables TLS/SSL encryption. The server chooses the highest available.
-- `-ssltrustcert`: Specifies whether to validate the server's certificate.
-- `-sslhostnameincert "*"`: Specifies the host name that verifies the server's identity. When you specify `"*"` as the host name, the server's host name isn't validated.
-
-SSL communication also requires key-store and trust-store files. It's possible for these files to be stored in default locations on a Linux installation. But to ensure that the correct key material is being used for the various SAP HANA systems (for the cases where different key-store and trust-store files are used for each SAP HANA system), AzAcSnap expects the key-store and trust-store files to be stored in the `securityPath` location. The AzAcSnap configuration file specifies this location.
-
-#### Key-store files
-
-If you're using multiple system identifiers (SIDs) with the same key material, it's easier to create links into the `securityPath` location as defined in the AzAcSnap configuration file. Ensure that these values exist for every SID that uses SSL.
-- For `openssl`: `ln $HOME/.ssl/key.pem <securityPath>/<SID>_keystore`
-- For `commoncrypto`: `ln $SECUDIR/sapcli.pse <securityPath>/<SID>_keystore`
-
-If you're using multiple SIDs with different key material per SID, copy (or move and rename) the files into the `securityPath` location as defined in the SID's AzAcSnap configuration file.
-- For `openssl`: `mv key.pem <securityPath>/<SID>_keystore`
-- For `commoncrypto`: `mv sapcli.pse <securityPath>/<SID>_keystore`
-
-When AzAcSnap calls `hdbsql`, it adds `-sslkeystore=<securityPath>/<SID>_keystore` to the `hdbsql` command line.
-
-#### Trust-store files
-
-If you're using multiple SIDs with the same key material, create hard links into the `securityPath` location as defined in the AzAcSnap configuration file. Ensure that these values exist for every SID that uses SSL.
-- For `openssl`: `ln $HOME/.ssl/trust.pem <securityPath>/<SID>_truststore`
-- For `commoncrypto`: `ln $SECUDIR/sapcli.pse <securityPath>/<SID>_truststore`
-
-If you're using multiple SIDs with different key material per SID, copy (or move and rename) the files into the `securityPath` location as defined in the SID's AzAcSnap configuration file.
-- For `openssl`: `mv trust.pem <securityPath>/<SID>_truststore`
-- For `commoncrypto`: `mv sapcli.pse <securityPath>/<SID>_truststore`
-
-The `<SID>` component of the file names must be the SAP HANA system identifier in all uppercase (for example, `H80` or `PR1`). When AzAcSnap calls `hdbsql`, it adds `-ssltruststore=<securityPath>/<SID>_truststore` to the command line.
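A minimal shell sketch of that naming rule, assuming `securityPath` is `./security` and the configured SID is `h80` (both hypothetical values):

```bash
# The <SID> component must be uppercase when building the key-store/trust-store file names.
SECURITY_PATH="./security"
SID="h80"                                            # as it might appear in a configuration (hypothetical)
SID_UPPER=$(echo "$SID" | tr '[:lower:]' '[:upper:]')

# These are the arguments AzAcSnap appends to the hdbsql command line.
echo "-sslkeystore ${SECURITY_PATH}/${SID_UPPER}_keystore"
echo "-ssltruststore ${SECURITY_PATH}/${SID_UPPER}_truststore"
```

Running this prints `-sslkeystore ./security/H80_keystore` and `-ssltruststore ./security/H80_truststore`, matching the file names AzAcSnap expects in the `securityPath` location.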
-
-If you run `azacsnap -c test --test hana --ssl openssl`, where `SID` is `H80` in the configuration file, it executes the `hdbsql` connections as follows:
-
-```bash
-hdbsql \
- -e \
- -ssltrustcert \
- -sslhostnameincert "*" \
- -sslprovider openssl \
- -sslkeystore ./security/H80_keystore \
- -ssltruststore ./security/H80_truststore \
- "sql statement"
-```
-
-In the preceding code, the backslash (`\`) character is a command-line line wrap to improve the clarity of the multiple parameters passed on the command line.
-
-# [Oracle](#tab/oracle)
-
-The snapshot tools communicate with the Oracle database and need a user with appropriate permissions to enable and disable backup mode.
-
-After AzAcSnap puts the database in backup mode, AzAcSnap queries the Oracle database for the list of files that have backup mode active. This list is written to an external file, which is in the same location and has the same base name as the log file, but with a `.protected-tables` extension. (The AzAcSnap log file details the output file name.)
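As an illustration of that naming, the log file's base name determines the list file. The log file name below is a hypothetical example, not a value from this article:

```bash
# Derive the protected-tables file name from an AzAcSnap log file name (hypothetical).
LOGFILE="azacsnap-backup-azacsnap.log"
LISTFILE="${LOGFILE%.log}.protected-tables"   # same location and base name, new extension
echo "$LISTFILE"
```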
-
-The following example commands show the setup of the Oracle database user (`AZACSNAP`), the use of `mkstore` to create an Oracle wallet, and the `sqlplus` configuration files that are required for communication to the Oracle database. Change the IP address, usernames, and passwords as appropriate.
-
-1. Connect to the Oracle database:
-
- ```bash
- su - oracle
- sqlplus / AS SYSDBA
- ```
-
- ```output
- SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 1 01:34:05 2021
- Copyright (c) 1982, 2014, Oracle. All rights reserved.
- Connected to:
- Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
- SQL>
- ```
-
-1. Create the user. This example creates the `azacsnap` user:
-
- ```sql
- SQL> CREATE USER azacsnap IDENTIFIED BY password;
- ```
-
- ```output
- User created.
- ```
-
-1. Grant the user permissions. This example sets the permission for the `azacsnap` user to allow for putting the database in backup mode:
-
- ```sql
- SQL> GRANT CREATE SESSION TO azacsnap;
- ```
-
- ```output
- Grant succeeded.
- ```
-
- ```sql
- SQL> GRANT SYSBACKUP TO azacsnap;
- ```
-
- ```output
- Grant succeeded.
- ```
-
- ```sql
- SQL> connect azacsnap/password
- ```
-
- ```output
- Connected.
- ```
-
- ```sql
- SQL> quit
- ```
-
-1. *Optional*: Prevent the user's password from expiring. Without this change, the user's password could expire and prevent snapshots from being taken correctly.
-
- > [!NOTE]
- > Check with corporate policy before you make this change.
-
- This example gets the password expiration for the `AZACSNAP` user:
-
- ```sql
- SQL> SELECT username,account_status,expiry_date,profile FROM dba_users WHERE username='AZACSNAP';
- ```
-
- ```output
- USERNAME ACCOUNT_STATUS EXPIRY_DA PROFILE
-
- AZACSNAP OPEN DD-MMM-YY DEFAULT
- ```
-
- There are a few methods for disabling password expiration in the Oracle database. Contact your database administrator for guidance. One method is to modify the `DEFAULT` user's profile so that the password lifetime is unlimited:
-
- ```sql
- SQL> ALTER PROFILE default LIMIT PASSWORD_LIFE_TIME unlimited;
- ```
-
- After you make this change to the database setting, there should be no password expiration date for users who have the `DEFAULT` profile:
-
- ```sql
- SQL> SELECT username, account_status,expiry_date,profile FROM dba_users WHERE username='AZACSNAP';
- ```
-
- ```output
- USERNAME ACCOUNT_STATUS EXPIRY_DA PROFILE
-
- AZACSNAP OPEN DEFAULT
- ```
-
-1. Set up the Oracle wallet (change the password).
-
- The Oracle wallet provides a method to manage database credentials across multiple domains. This capability uses a database connection string in the data-source definition, which is resolved with an entry in the wallet. When you use the Oracle wallet correctly, passwords in the data-source configuration are unnecessary.
-
- This setup makes it possible to use the Oracle Transparent Network Substrate (TNS) administrative file with a connection string alias, which hides details of the database connection string. If the connection information changes, it's a matter of changing the `tnsnames.ora` file instead of (potentially) many data-source definitions.
-
- Run the following commands on the Oracle database server. This example uses the `mkstore` command from the Linux shell to set up the Oracle wallet. These commands are run on the Oracle database server via unique user credentials to avoid any impact on the running database. This example creates a new user (`azacsnap`) and appropriately configures the environment variables.
-
- 1. Get the Oracle environment variables to be used in setup. Run the following commands as the root user on the Oracle database server:
-
- ```bash
- su - oracle -c 'echo $ORACLE_SID'
- ```
-
- ```output
- oratest1
- ```
-
- ```bash
- su - oracle -c 'echo $ORACLE_HOME'
- ```
-
- ```output
- /u01/app/oracle/product/19.0.0/dbhome_1
- ```
-
- 1. Create the Linux user to generate the Oracle wallet and associated `*.ora` files by using the output from the previous step.
-
- These examples use the `bash` shell. If you're using a different shell (for example, `csh`), be sure to set environment variables correctly.
-
- ```bash
- useradd -m azacsnap
- echo "export ORACLE_SID=oratest1" >> /home/azacsnap/.bash_profile
- echo "export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1" >> /home/azacsnap/.bash_profile
- echo "export TNS_ADMIN=/home/azacsnap" >> /home/azacsnap/.bash_profile
- echo "export PATH=\$PATH:\$ORACLE_HOME/bin" >> /home/azacsnap/.bash_profile
- ```
-
- 1. As the new Linux user (`azacsnap`), create the wallet and `*.ora` files.
-
- 1. Switch to the user created in the previous step:
-
- ```bash
- sudo su - azacsnap
- ```
-
- 1. Create the Oracle wallet:
-
- ```bash
- mkstore -wrl $TNS_ADMIN/.oracle_wallet/ -create
- ```
-
- ```output
- Oracle Secret Store Tool Release 19.0.0.0.0 - Production
- Version 19.3.0.0.0
- Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
-
- Enter password: <wallet_password>
- Enter password again: <wallet_password>
- ```
-
- 1. Add the connection string credentials to the Oracle wallet. In the following example command, `AZACSNAP` is the connection string that AzAcSnap will use, `azacsnap` is the Oracle database user, and `AzPasswd1` is the Oracle user's database password.
-
- ```bash
- mkstore -wrl $TNS_ADMIN/.oracle_wallet/ -createCredential AZACSNAP azacsnap AzPasswd1
- ```
-
- ```output
- Oracle Secret Store Tool Release 19.0.0.0.0 - Production
- Version 19.3.0.0.0
- Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
-
- Enter wallet password: <wallet_password>
- ```
-
- 1. Create the `tnsnames.ora` file. In the following example command, set `HOST` to the IP address of the Oracle database server. Set `SID` to the Oracle database SID.
-
- ```bash
- echo "# Connection string
- AZACSNAP=\"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.1)(PORT=1521))(CONNECT_DATA=(SID=oratest1)))\"
- " > $TNS_ADMIN/tnsnames.ora
- ```
-
- 1. Create the `sqlnet.ora` file:
-
- ```bash
- echo "SQLNET.WALLET_OVERRIDE = TRUE
- WALLET_LOCATION=(
- SOURCE=(METHOD=FILE)
- (METHOD_DATA=(DIRECTORY=\$TNS_ADMIN/.oracle_wallet))
- ) " > $TNS_ADMIN/sqlnet.ora
- ```
-
- 1. Test the Oracle wallet:
-
- ```bash
- sqlplus /@AZACSNAP as SYSBACKUP
- ```
-
- ```output
- SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 12 00:25:32 2022
- Version 19.3.0.0.0
-
- Copyright (c) 1982, 2019, Oracle. All rights reserved.
-
-
- Connected to:
- Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
- Version 19.3.0.0.0
- ```
-
- ```sql
- SELECT MACHINE FROM V$SESSION WHERE SID=1;
- ```
-
- ```output
- MACHINE
- -
- oradb-19c
- ```
-
- ```sql
- quit
- ```
-
- ```output
- Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
- Version 19.3.0.0.0
- ```
-
- 1. Create a ZIP file archive of the Oracle wallet and `*.ora` files:
-
- ```bash
- cd $TNS_ADMIN
- zip -r wallet.zip sqlnet.ora tnsnames.ora .oracle_wallet
- ```
-
- ```output
- adding: sqlnet.ora (deflated 9%)
- adding: tnsnames.ora (deflated 7%)
- adding: .oracle_wallet/ (stored 0%)
- adding: .oracle_wallet/ewallet.p12.lck (stored 0%)
- adding: .oracle_wallet/ewallet.p12 (deflated 1%)
- adding: .oracle_wallet/cwallet.sso.lck (stored 0%)
- adding: .oracle_wallet/cwallet.sso (deflated 1%)
- ```
-
- 1. Copy the ZIP file to the target system (for example, the centralized virtual machine running AzAcSnap).
-
- > [!IMPORTANT]
- > If you're deploying to a centralized virtual machine, you need to install and set up Oracle Instant Client on it so that the AzAcSnap user can run `sqlplus` commands. You can download Oracle Instant Client from the [Oracle downloads page](https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html).
- >
- > For SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package.
-
- 1. Complete the following steps on the system running AzAcSnap:
-
- 1. Deploy the ZIP file that you copied in the previous step.
-
- This step assumes that you already created the user running AzAcSnap (by default, `azacsnap`) by using the AzAcSnap installer.
-
- > [!NOTE]
- > It's possible to use the `TNS_ADMIN` shell variable to allow for multiple Oracle targets by setting the unique shell variable value for each Oracle system as needed.
-
- ```bash
- export TNS_ADMIN=$HOME/ORACLE19c
- mkdir $TNS_ADMIN
- cd $TNS_ADMIN
- unzip ~/wallet.zip
- ```
-
- ```output
- Archive: wallet.zip
- inflating: sqlnet.ora
- inflating: tnsnames.ora
- creating: .oracle_wallet/
- extracting: .oracle_wallet/ewallet.p12.lck
- inflating: .oracle_wallet/ewallet.p12
- extracting: .oracle_wallet/cwallet.sso.lck
- inflating: .oracle_wallet/cwallet.sso
- ```
-
- Check that the files were extracted correctly:
-
- ```bash
- ls
- ```
-
- ```output
- sqlnet.ora tnsnames.ora wallet.zip
- ```
-
- Assuming that you completed all the previous steps correctly, it should be possible to connect to the database by using the `/@AZACSNAP` connection string:
-
- ```bash
- sqlplus /@AZACSNAP as SYSBACKUP
- ```
-
- ```output
- SQL*Plus: Release 21.0.0.0.0 - Production on Wed Jan 12 13:39:36 2022
- Version 21.1.0.0.0
-
- Copyright (c) 1982, 2020, Oracle. All rights reserved.
-
-
- Connected to:
- Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
- Version 19.3.0.0.0
- ```
-
- ```sql
- SQL> quit
- ```
-
- ```output
- Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
- Version 19.3.0.0.0
- ```
-
- 1. Test the setup with AzAcSnap
-
- After you configure AzAcSnap (for example, `azacsnap -c configure --configuration new`) with the Oracle connection string (for example, `/@AZACSNAP`), it should be possible to connect to the Oracle database.
-
- Check that the `$TNS_ADMIN` variable is set for the correct Oracle target system. The `$TNS_ADMIN` shell variable determines where to locate the Oracle wallet and `*.ora` files, so you must set it before you run the `azacsnap` command.
-
- ```bash
- ls -al $TNS_ADMIN
- ```
-
- ```output
- total 16
- drwxrwxr-x. 3 orasnap orasnap 84 Jan 12 13:39 .
- drwx. 18 orasnap sapsys 4096 Jan 12 13:39 ..
- drwx. 2 orasnap orasnap 90 Jan 12 13:23 .oracle_wallet
- -rw-rw-r--. 1 orasnap orasnap 125 Jan 12 13:39 sqlnet.ora
- -rw-rw-r--. 1 orasnap orasnap 128 Jan 12 13:24 tnsnames.ora
- -rw-r--r--. 1 root root 2569 Jan 12 13:28 wallet.zip
- ```
-
- Run the `azacsnap` test command:
-
- ```bash
- cd ~/bin
- azacsnap -c test --test oracle --configfile ORACLE.json
- ```
-
- ```output
- BEGIN : Test process started for 'oracle'
- BEGIN : Oracle DB tests
- PASSED: Successful connectivity to Oracle DB version 1903000000
- END : Test process complete for 'oracle'
- ```
-
- You must set up the `$TNS_ADMIN` variable correctly for `azacsnap` to run correctly. You can either add it to the user's `.bash_profile` file or export it before each run (for example, `export TNS_ADMIN="/home/orasnap/ORACLE19c" ; cd /home/orasnap/bin ; ./azacsnap --configfile ORACLE19c.json -c backup --volume data --prefix hourly-ora19c --retention 12`).
-
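The per-target `TNS_ADMIN` pattern described in the note above can be sketched as a small wrapper. This is only an illustration, not part of the product: the directory layout (one wallet directory per Oracle system under the user's home) and the config file naming are assumptions, and the actual `azacsnap` invocation is shown commented out.

```bash
#!/bin/sh
# Hypothetical wrapper: point TNS_ADMIN at the wallet directory for the
# chosen Oracle target before invoking azacsnap. Directory layout and
# config file names are illustrative only.
set_oracle_target() {
  TNS_ADMIN="$HOME/$1"      # e.g. $HOME/ORACLE19c holds *.ora and the wallet
  export TNS_ADMIN
  echo "TNS_ADMIN=$TNS_ADMIN"
  # "$HOME/bin/azacsnap" --configfile "$1.json" -c backup --volume data ...
}
set_oracle_target ORACLE19c
```

The same wrapper can be called with a different target name (for example, `set_oracle_target ORACLE21c`) before each run, which keeps the wallet and `*.ora` files for each Oracle system isolated.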
-# [IBM Db2](#tab/db2)
-
-The snapshot tools issue commands to the IBM Db2 database by using the command-line processor `db2` to enable and disable backup mode.
-
-After AzAcSnap puts the database in backup mode, it queries the IBM Db2 database to get a list of protected paths, which are part of the database where backup mode is active. This list is sent into an external file, which is in the same location and basename as the log file but has a `.\<DBName>-protected-paths` extension. (The AzAcSnap log file details the output file name.)
-
-AzAcSnap uses the IBM Db2 command-line processor `db2` to issue SQL commands, such as `SET WRITE SUSPEND` or `SET WRITE RESUME`. So you should install AzAcSnap in one of the following ways:
-
-- Install on the database server, and then complete the setup with [Db2 local connectivity](#db2-local-connectivity).
-- Install on a centralized backup system, and then complete the setup with [Db2 remote connectivity](#db2-remote-connectivity).
-
-#### Db2 local connectivity
-
-If you installed AzAcSnap on the database server, be sure to add the `azacsnap` user to the correct Linux group and import the Db2 instance user's profile. Use the following example setup.
-
-##### azacsnap user permissions
-
-The `azacsnap` user should belong to the same Db2 group as the database instance user. The following example gets the group membership of the IBM Db2 installation's database instance user `db2tst`:
-
-```bash
-id db2tst
-```
-
-```output
-uid=1101(db2tst) gid=1001(db2iadm1) groups=1001(db2iadm1)
-```
-
-From the output, you can confirm the `db2tst` user has been added to the `db2iadm1` group. Add the `azacsnap` user to the group:
-
-```bash
-usermod -a -G db2iadm1 azacsnap
-```
-
-##### azacsnap user profile
-
-The `azacsnap` user needs to be able to run the `db2` command. By default, the `db2` command isn't in the `azacsnap` user's `$PATH` information.
-
-Add the following code to the user's `.bashrc` file. Use your own IBM Db2 installation value for `INSTHOME`.
-
-```output
-# The following four lines have been added to allow this user to run the DB2 command line processor.
-INSTHOME="/db2inst/db2tst"
-if [ -f ${INSTHOME}/sqllib/db2profile ]; then
- . ${INSTHOME}/sqllib/db2profile
-fi
-```
-
-Test that the user can run the `db2` command-line processor:
-
-```bash
-su - azacsnap
-db2
-```
-
-```output
-(c) Copyright IBM Corporation 1993,2007
-Command Line Processor for DB2 Client 11.5.7.0
-
-You can issue database manager commands and SQL statements from the command
-prompt. For example:
- db2 => connect to sample
- db2 => bind sample.bnd
-
-For general help, type: ?.
-For command help, type: ? command, where command can be
-the first few keywords of a database manager command. For example:
- ? CATALOG DATABASE for help on the CATALOG DATABASE command
- ? CATALOG for help on all of the CATALOG commands.
-
-To exit db2 interactive mode, type QUIT at the command prompt. Outside
-interactive mode, all commands must be prefixed with 'db2'.
-To list the current command option settings, type LIST COMMAND OPTIONS.
-
-For more detailed help, refer to the Online Reference Manual.
-```
-
-```sql
-db2 => quit
-DB20000I The QUIT command completed successfully.
-```
-
-Now configure `azacsnap` to use `localhost`. After this preliminary test as the `azacsnap` user works correctly, go on to configure AzAcSnap (`azacsnap -c configure`) with `serverAddress=localhost`, and then test AzAcSnap database connectivity (`azacsnap -c test --test db2`).
-
-#### Db2 remote connectivity
-
-If you installed AzAcSnap on a centralized backup system, use the following example setup to allow SSH access to the Db2 database instance.
-
-Log in to the AzAcSnap system as the `azacsnap` user and generate a public/private SSH key pair:
-
-```bash
-ssh-keygen
-```
-
-```output
-Generating public/private rsa key pair.
-Enter file in which to save the key (/home/azacsnap/.ssh/id_rsa):
-Enter passphrase (empty for no passphrase):
-Enter same passphrase again:
-Your identification has been saved in /home/azacsnap/.ssh/id_rsa.
-Your public key has been saved in /home/azacsnap/.ssh/id_rsa.pub.
-The key fingerprint is:
-SHA256:4cr+0yN8/dawBeHtdmlfPnlm1wRMTO/mNYxarwyEFLU azacsnap@db2-02
-The key's randomart image is:
-+[RSA 2048]-+
-| ... o. |
-| . . +. |
-| .. E + o.|
-| .... B..|
-| S. . o *=|
-| . . . o o=X|
-| o. . + .XB|
-| . + + + +oX|
-| ...+ . =.o+|
-+-[SHA256]--+
-```
-
-Get the contents of the public key:
-
-```bash
-cat .ssh/id_rsa.pub
-```
-
-```output
-ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02
-```
-
-Log in to the IBM Db2 system as the Db2 instance user.
-
-Add the contents of the AzAcSnap user's public key to the Db2 instance user's `authorized_keys` file:
-
-```bash
-echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02" >> ~/.ssh/authorized_keys
-```
-
-Log in to the AzAcSnap system as the `azacsnap` user and test SSH access:
-
-```bash
-ssh <InstanceUser>@<ServerAddress>
-```
-
-```output
-[InstanceUser@ServerName ~]$
-```
-
-Test that the user can run the `db2` command-line processor:
-
-```bash
-db2
-```
-
-```output
-(c) Copyright IBM Corporation 1993,2007
-Command Line Processor for DB2 Client 11.5.7.0
-
-You can issue database manager commands and SQL statements from the command
-prompt. For example:
- db2 => connect to sample
- db2 => bind sample.bnd
-
-For general help, type: ?.
-For command help, type: ? command, where command can be
-the first few keywords of a database manager command. For example:
- ? CATALOG DATABASE for help on the CATALOG DATABASE command
- ? CATALOG for help on all of the CATALOG commands.
-
-To exit db2 interactive mode, type QUIT at the command prompt. Outside
-interactive mode, all commands must be prefixed with 'db2'.
-To list the current command option settings, type LIST COMMAND OPTIONS.
-
-For more detailed help, refer to the Online Reference Manual.
-```
-
-```sql
-db2 => quit
-DB20000I The QUIT command completed successfully.
-```
-
-```bash
-[prj@db2-02 ~]$ exit
-```
-
-```output
-logout
-Connection to <serverAddress> closed.
-```
---

## Install the snapshot tools

-The downloadable self-installer makes the snapshot tools easy to set up and run with non-root user privileges (for example, `azacsnap`). The installer sets up the user and puts the snapshot tools into the user's `$HOME/bin` subdirectory. The default is `/home/azacsnap/bin`.
-
-The self-installer tries to determine the correct settings and paths for all the files based on the configuration of the user performing the installation (for example, root). If the previous setup steps to enable communication with storage and SAP HANA were run as root, the installation copies the private key and `hdbuserstore` to the backup user's location. A knowledgeable administrator can manually take the steps to enable communication with the storage back end and database after the installation.
-
-> [!NOTE]
-> For earlier installations of SAP HANA on Azure Large Instances, the directory of preinstalled snapshot tools was `/hana/shared/<SID>/exe/linuxx86_64/hdb`.
-
-With the [prerequisite steps](#prerequisites-for-installation) completed, it's now possible to install the snapshot tools by using the self-installer as follows:
-
-1. Copy the downloaded self-installer to the target system.
-1. Run the self-installer as the root user. If necessary, make the file executable by using the `chmod +x *.run` command.
-
-Running the self-installer command without any arguments displays help on using the installer as follows:
-
-```bash
-chmod +x azacsnap_installer_v5.0.run
-./azacsnap_installer_v5.0.run
-```
-
-```output
-Usage: ./azacsnap_installer_v5.0.run [-v] -I [-u <HLI Snapshot Command user>]
-./azacsnap_installer_v5.0.run [-v] -X [-d <directory>]
-./azacsnap_installer_v5.0.run [-h]
-
-Switches enclosed in [] are optional for each command line.
--h prints out this usage.
--v turns on verbose output.
--I starts the installation.
--u is the Linux user to install the scripts into, by default this is
-'azacsnap'.
--X will only extract the commands.
--d is the target directory to extract into, by default this is
-'./snapshot_cmds'.
-Examples of a target directory are ./tmp or /usr/local/bin
-```
-
-The self-installer has an option to extract (`-X`) the snapshot tools from the bundle without performing any user creation and setup. An experienced administrator can then complete the setup steps manually or copy the commands to upgrade an existing installation.
-
-### Use the easy installation of snapshot tools (default)
-
-The installer can quickly install the snapshot tools for SAP HANA on Azure. By default, if you run the installer with only the `-I` option, it does the following steps:
+With the [prerequisite steps](#prerequisites-for-installation) completed, the steps to install AzAcSnap are as follows:
1. Create snapshot user `azacsnap`, create the home directory, and set group membership.
1. Configure the `azacsnap` user's login `~/.profile` information.
-1. Search the file system for directories to add to `$PATH` for AzAcSnap. This task allows the user who runs AzAcSnap to use SAP HANA commands, such as `hdbsql` and `hdbuserstore`.
-1. Search the file system for directories to add to `$LD_LIBRARY_PATH` for AzAcSnap. Many commands require you to set a library path to run them correctly. This task configures it for the installed user.
-1. Copy the SSH keys for back-end storage for AzAcSnap from the root user (the user running the installation).
-
- This task assumes that the root user has already configured connectivity to the storage. For more information, see the earlier section [Enable communication with storage](#enable-communication-with-storage).
-1. Copy the SAP HANA connection's secure user store for the target user, `azacsnap`. This task assumes that the root user has already configured the secure user store. For more information, see the earlier section [Enable communication with the database](#enable-communication-with-the-database).
-1. The snapshot tools are extracted into `/home/azacsnap/bin/`.
-1. The commands in `/home/azacsnap/bin/` have their permissions set, including ownership and executable bit.
-
-The following example shows the correct output of the installer when you run it by using the default installation option:
-
-```bash
-./azacsnap_installer_v5.0.run -I
-```
-
-```output
-+--+
-| Azure Application Consistent Snapshot tool Installer |
-+--+
-|-> Installer version '5.0'
-|-> Create Snapshot user 'azacsnap', home directory, and set group membership.
-|-> Configure azacsnap .profile
-|-> Search filesystem for directories to add to azacsnap's $PATH
-|-> Search filesystem for directories to add to azacsnap's $LD_LIBRARY_PATH
-|-> Copying SSH keys for back-end storage for azacsnap.
-|-> Copying HANA connection keystore for azacsnap.
-|-> Extracting commands into /home/azacsnap/bin/.
-|-> Making commands in /home/azacsnap/bin/ executable.
-|-> Creating symlink for hdbsql command in /home/azacsnap/bin/.
-+--+
-| Install complete! Follow the steps below to configure. |
-+--+
-+--+
-| Install complete! Follow the steps below to configure. |
-+--+
-
-1. Change into the snapshot user account.....
- su - azacsnap
-2. Set up the HANA Secure User Store..... (command format below)
- hdbuserstore Set <ADMIN_USER> <HOSTNAME>:<PORT> <admin_user> <password>
-3. Change to location of commands.....
- cd /home/azacsnap/bin/
-4. Configure the customer details file.....
- azacsnap -c configure --configuration new
-5. Test the connection to storage.....
- azacsnap -c test --test storage
-6. Test the connection to HANA.....
- a. without SSL
- azacsnap -c test --test hana
- b. with SSL, you will need to choose the correct SSL option
- azacsnap -c test --test hana --ssl=<commoncrypto|openssl>
-7. Run your first snapshot backup..... (example below)
- azacsnap -c backup --volume=data --prefix=hana_test --frequency=15min --retention=1
-```
-
-### Uninstall the snapshot tools
-
-If you installed the snapshot tools by using the default settings, uninstallation requires only removing the user that you installed the commands for. The default is `azacsnap`.
-
-```bash
-userdel -f -r azacsnap
-```
-
-### Manually install the snapshot tools
+1. Search the file system for directories to add to `$PATH` (Linux) or `%PATH%` (Windows) for AzAcSnap. This task allows the user who runs AzAcSnap to use database-specific commands, such as `hdbsql` and `hdbuserstore`.
+1. Search the file system for directories to add to `$LD_LIBRARY_PATH` (Linux) for AzAcSnap. Many commands require you to set a library path to run them correctly.
+1. Copy the AzAcSnap binary into a location on the user's `$PATH` (Linux) or `%PATH%` (Windows).
+1. On Linux, it might be necessary to set the `azacsnap` binary permissions correctly, including ownership and the executable bit.
-In some cases, it's necessary to install the tools manually. But we recommend that you use the installer's default option to ease this process.
+Perform the following steps to get AzAcSnap running:
-Each line that starts with a pound (`#`) character demonstrates that the root user runs the example commands after the character. The backslash (`\`) at the end of a line is the standard line-continuation character for a shell command.
+- For Linux via a shell session:
+ 1. As the root superuser, create a Linux User
+ 1. `useradd -m azacsnap`
+ 1. Log in as the user
+      1. `su - azacsnap`
+ 1. `cd $HOME/bin`
+ 1. Download [azacsnap](https://aka.ms/azacsnap-linux)
+ 1. `wget -O azacsnap https://aka.ms/azacsnap-linux`
+ 1. Run azacsnap
+ 1. `azacsnap -c about`
-As the root superuser, you can follow these steps for a manual installation:
+- For Windows via a GUI:
+ 1. Create a Windows User
+ 1. Log in as the user
+ 1. Download [`azacsnap.exe`](https://aka.ms/azacsnap-windows)
+ 1. Open a terminal session and run azacsnap
+ 1. `azacsnap.exe -c about`
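The Linux steps above can be condensed into a short script. This is only a sketch of the documented flow: `useradd` requires root and the download requires network access to the aka.ms alias shown in this article, so with `DRY_RUN=1` the commands are printed rather than executed.

```bash
# Sketch of the Linux install steps above. DRY_RUN=1 prints each command
# instead of running it (useradd needs root; wget needs network access).
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}
run useradd -m azacsnap
run mkdir -p /home/azacsnap/bin
run wget -O /home/azacsnap/bin/azacsnap https://aka.ms/azacsnap-linux
run chmod 755 /home/azacsnap/bin/azacsnap
run /home/azacsnap/bin/azacsnap -c about
```

Set `DRY_RUN=0` and run as root to execute the commands for real; the paths shown assume the default `azacsnap` home directory.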
-1. Get the `sapsys` group ID. In this case, the group ID is `1010`.
- ```bash
- grep sapsys /etc/group
- ```
-
- ```output
- sapsys:x:1010:
- ```
-
-1. Create snapshot user `azacsnap`, create the home directory, and set group membership by using the group ID from step 1:
+## Update user profile
- ```bash
- useradd -m -g 1010 -c "Azure SAP HANA Snapshots User" azacsnap
- ```
+The user running AzAcSnap needs their environment variables updated so that AzAcSnap can run the database-specific commands without needing each command's full path. This approach also allows the database commands to be overridden if needed for special purposes.
-1. Make sure the login `.profile` information for the `azacsnap` user exists:
+- SAP HANA requires `hdbuserstore` and `hdbsql`.
+- OracleDB requires `sqlplus`.
+- IBM Db2 requires `db2` and `ssh` (for remote access to Db2 when doing a centralized installation).
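One way to confirm the required commands are already resolvable before editing the profile is a quick check loop. This is a sketch; pass the command names for your database type from the list above.

```bash
# Sketch: report which of the database CLI tools AzAcSnap depends on are
# resolvable via $PATH. Pass hdbsql/hdbuserstore for SAP HANA, sqlplus for
# Oracle, or db2/ssh for IBM Db2.
check_cmds() {
  for cmd in "$@"; do
    if command -v "$cmd" > /dev/null 2>&1; then
      echo "found: $cmd"
    else
      echo "missing: $cmd"
    fi
  done
}
check_cmds hdbsql hdbuserstore
```

Any command reported as `missing` needs its directory added to the user's `$PATH` as described in the sections that follow.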
- ```bash
- echo "" >> /home/azacsnap/.profile
- ```
+### Linux
-1. Search the file system for directories to add to `$PATH` for AzAcSnap. These directories are typically the paths to the SAP HANA tools, such as `hdbsql` and `hdbuserstore`.
+On Linux, the user's `$PATH` is typically set up by updating the user's `$HOME/.profile` with the appropriate `$PATH` information for locating binaries, and potentially the `$LD_LIBRARY_PATH` variable to ensure availability of shared objects for the Linux binaries.
- ```bash
- HDBSQL_PATH=`find -L /hana/shared/[A-z0-9][A-z0-9][A-z0-9]/HDB*/exe /usr/sap/hdbclient -name hdbsql -exec dirname {} + 2> | sort | uniq | tr '\n' ':'`
- ```
+1. Search the file system for directories to add to `$PATH` for AzAcSnap.
-1. Add the updated `$PATH` information to the user's profile:
-
- ```bash
- echo "export PATH=\"\$PATH:$HDBSQL_PATH\"" >> /home/azacsnap/.profile
- ```
-
-1. Search the file system for directories to add to `$LD_LIBRARY_PATH` for AzAcSnap:
+ For example:
```bash
+    # find the path for the hdbsql command
+    export DBCMD="hdbsql"
+    find / -name "${DBCMD}" -exec dirname {} + 2> /dev/null | sort | uniq | tr '\n' ':'
+    /hana/shared/PR1/exe/linuxx86_64/HDB_2.00.040.00.1553674765_c8210ee40a82860643f1874a2bf4ffb67a7b2add
+    #
+    # add the output to the user's profile
+    echo "export PATH=\"\$PATH:/hana/shared/PR1/exe/linuxx86_64/HDB_2.00.040.00.1553674765_c8210ee40a82860643f1874a2bf4ffb67a7b2add\"" >> /home/azacsnap/.profile
+    #
+    # add any shared objects to the $LD_LIBRARY_PATH
+    export SHARED_OBJECTS='*.so'
    NEW_LIB_PATH=`find -L /hana/shared/[A-z0-9][A-z0-9][A-z0-9]/HDB*/exe /usr/sap/hdbclient -name "${SHARED_OBJECTS}" -exec dirname {} + 2> /dev/null | sort | uniq | tr '\n' ':'`
+    #
+    # add the output to the user's profile
+    echo "export LD_LIBRARY_PATH=\"\$LD_LIBRARY_PATH:$NEW_LIB_PATH\"" >> /home/azacsnap/.profile
```
+
+### Windows
-1. Add the updated library path to the user's profile:
+Use Windows-specific tools to find the location of the commands and add their directories to the user's profile.
- ```bash
- echo "export LD_LIBRARY_PATH=\"\$LD_LIBRARY_PATH:$NEW_LIB_PATH\"" >> /home/azacsnap/.profile
- ```
1. Take the following actions, depending on the storage back end:

 # [Azure NetApp Files (with VM)](#tab/azure-netapp-files)
- Configure the user's `DOTNET_BUNDLE_EXTRACT_BASE_DIR` path according to the .NET Core single-file extract guidance.
-
- Use the following code for SUSE Linux:
-
- ```bash
- echo "export DOTNET_BUNDLE_EXTRACT_BASE_DIR=\$HOME/.net" >> /home/azacsnap/.profile
- echo "[ -d $DOTNET_BUNDLE_EXTRACT_BASE_DIR] && chmod 700 $DOTNET_BUNDLE_EXTRACT_BASE_DIR" >> /home/azacsnap/.profile
- ```
-
- Use the following code for RHEL:
-
- ```bash
- echo "export DOTNET_BUNDLE_EXTRACT_BASE_DIR=\$HOME/.net" >> /home/azacsnap/.bash_profile
- echo "[ -d $DOTNET_BUNDLE_EXTRACT_BASE_DIR] && chmod 700 $DOTNET_BUNDLE_EXTRACT_BASE_DIR" >> /home/azacsnap/.bash_profile
- ```
+   No special actions are needed for Azure NetApp Files.
# [Azure Large Instances (bare metal)](#tab/azure-large-instance)
- 1. Copy the SSH keys for back-end storage for AzAcSnap from the root user (the user running the installation). This step assumes that the root user has already configured connectivity to the storage. For more information, see the earlier section [Enable communication with storage](#enable-communication-with-storage).
-
- ```bash
- cp -pr ~/.ssh /home/azacsnap/.
- ```
-
- 1. Set the user permissions correctly for the SSH files:
-
- ```bash
- chown -R azacsnap.sapsys /home/azacsnap/.ssh
- ```
+ 1. Make sure the PKCS12 certificate file is available for the AzAcSnap user to read. This step assumes connectivity to the storage is already configured. For more information, see the earlier section [Enable communication with storage](azacsnap-configure-storage.md#enable-communication-with-storage).
-1. Copy the SAP HANA connection's secure user store for the target user, `azacsnap`. This step assumes that the root user has already configured the secure user store. For more information, see the earlier section [Enable communication with the database](#enable-communication-with-the-database).
-
- ```bash
- cp -pr ~/.hdb /home/azacsnap/.
- ```
-
-1. Set the user permissions correctly for the `hdbuserstore` files:
-
- ```bash
- chown -R azacsnap.sapsys /home/azacsnap/.hdb
- ```
-
-1. Extract the snapshot tools into `/home/azacsnap/bin/`:
- ```bash
- ./azacsnap_installer_v5.0.run -X -d /home/azacsnap/bin
- ```
-
-1. Make the commands executable:
-
- ```bash
- chmod 700 /home/azacsnap/bin/*
- ```
-
-1. Ensure that the correct ownership permissions are set on the user's home directory:
+### Uninstall the snapshot tools
- ```bash
- chown -R azacsnap.sapsys /home/azacsnap/*
- ```
+If you installed the snapshot tools by using the default settings, uninstallation requires only removing the user that you installed the commands for and deleting the AzAcSnap binary.
### Complete the setup of snapshot tools
-The installer provides steps to complete after you install the snapshot tools.
-
-The following output shows the steps to complete after you run the installer with the default installation options. Follow these steps to configure and test the snapshot tools.
-
-```output
-1. Change into the snapshot user account.....
- su - azacsnap
-2. Set up the HANA Secure User Store.....
- hdbuserstore Set <ADMIN_USER> <HOSTNAME>:<PORT> <admin_user> <password>
-3. Change to location of commands.....
- cd /home/azacsnap/bin/
-4. Configure the customer details file.....
- azacsnap -c configure --configuration new
-5. Test the connection to storage.....
- azacsnap -c test --test storage
-6. Test the connection to HANA.....
- a. without SSL
- azacsnap -c test --test hana
- b. with SSL, you will need to choose the correct SSL option
- azacsnap -c test --test hana --ssl=<commoncrypto|openssl>
-7. Run your first snapshot backup.....
- azacsnap -c backup --volume=data --prefix=hana_test --retention=1
-```
-
-Step 2 is necessary if you didn't [enable communication with the database](#enable-communication-with-the-database) before the installation.
+Follow these steps to configure and test the snapshot tools:
+
+1. Log in to the AzAcSnap user account.
+   a. For Linux, `su - azacsnap`.
+   b. For Windows, log in as the AzAcSnap user.
+1. If you added the AzAcSnap binary to the user's `$PATH` (Linux) or `%PATH%` (Windows), run AzAcSnap as `azacsnap`; otherwise, use the full path to the binary (for example, `/home/azacsnap/bin/azacsnap` on Linux or `C:\Users\AzAcSnap\azacsnap.exe` on Windows).
+1. Configure the customer details file.
+ `azacsnap -c configure --configuration new`
+1. Test the connection to storage.
+ `azacsnap -c test --test storage`
+1. Test the connection to the database.
+   a. SAP HANA
+      `azacsnap -c test --test hana`
+   b. Oracle DB
+      `azacsnap -c test --test oracle`
+   c. IBM Db2
+      `azacsnap -c test --test db2`
If the test commands run correctly, the test is successful. You can then perform the first database-consistent storage snapshot.
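The test sequence above can be scripted to stop at the first failure. This is a sketch only: `AZACSNAP` and `DBTYPE` are placeholders for your binary path and database type, and the real invocation is commented out because paths differ per installation.

```bash
# Sketch: run the AzAcSnap connectivity tests in order and stop at the
# first failure. AZACSNAP and DBTYPE are placeholders; uncomment the
# invocation once the paths match your installation.
AZACSNAP="${AZACSNAP:-/home/azacsnap/bin/azacsnap}"
DBTYPE="${DBTYPE:-hana}"          # hana, oracle, or db2
for t in storage "$DBTYPE"; do
  echo "would run: $AZACSNAP -c test --test $t"
  # "$AZACSNAP" -c test --test "$t" || { echo "test '$t' failed"; exit 1; }
done
```

Running the storage test before the database test mirrors the step order above, so a storage misconfiguration is caught before database connectivity is attempted.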
-## Configure the database
-
-This section explains how to configure the database.
-
-# [SAP HANA](#tab/sap-hana)
-
-### Configure SAP HANA
-
-There are changes that you can apply to SAP HANA to help protect the log backups and catalog. By default, `basepath_logbackup` and `basepath_catalogbackup` are set so that SAP HANA will put related files into the `$(DIR_INSTANCE)/backup/log` directory. It's unlikely that this location is on a volume that AzAcSnap is configured to snapshot, so storage snapshots won't protect these files.
-
-The following `hdbsql` command examples demonstrate setting the log and catalog paths to locations on storage volumes that AzAcSnap can snapshot. Be sure to check that the values on the command line match the local SAP HANA configuration.
-
-### Configure the log backup location
-
-This example shows a change to the `basepath_logbackup` parameter:
-
-```bash
-hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'basepath_logbackup') = '/hana/logbackups/H80' WITH RECONFIGURE"
-```
-
-### Configure the catalog backup location
-
-This example shows a change to the `basepath_catalogbackup` parameter. First, ensure that the `basepath_catalogbackup` path exists on the file system. If not, create the path with the same ownership as the directory.
-
-```bash
-ls -ld /hana/logbackups/H80/catalog
-```
-
-```output
-drwxr-x 4 h80adm sapsys 4096 Jan 17 06:55 /hana/logbackups/H80/catalog
-```
-
-If you need to create the path, the following example creates the path and sets the correct ownership and permissions. You need to run these commands as root.
-
-```bash
-mkdir /hana/logbackups/H80/catalog
-chown --reference=/hana/shared/H80/HDB00 /hana/logbackups/H80/catalog
-chmod --reference=/hana/shared/H80/HDB00 /hana/logbackups/H80/catalog
-ls -ld /hana/logbackups/H80/catalog
-```
-
-```output
-drwxr-x 4 h80adm sapsys 4096 Jan 17 06:55 /hana/logbackups/H80/catalog
-```
-
-The following example changes the SAP HANA setting:
-
-```bash
-hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'basepath_catalogbackup') = '/hana/logbackups/H80/catalog' WITH RECONFIGURE"
-```
-
-### Check log and catalog backup locations
+- `azacsnap -c backup --volume data --prefix adhoc_test --retention 1`
-After you make the changes to the log and catalog backup locations, confirm that the settings are correct by using the following command.
-
-In this example, the settings appear as `SYSTEM` settings. This query also returns the `DEFAULT` settings for comparison.
-
-```bash
-hdbsql -jaxC -n <HANA_ip_address> - i 00 -U AZACSNAP "select * from sys.m_inifile_contents where (key = 'basepath_databackup' or key ='basepath_datavolumes' or key = 'basepath_logbackup' or key = 'basepath_logvolumes' or key = 'basepath_catalogbackup')"
-```
-
-```output
-global.ini,DEFAULT,,,persistence,basepath_catalogbackup,$(DIR_INSTANCE)/backup/log
-global.ini,DEFAULT,,,persistence,basepath_databackup,$(DIR_INSTANCE)/backup/data
-global.ini,DEFAULT,,,persistence,basepath_datavolumes,$(DIR_GLOBAL)/hdb/data
-global.ini,DEFAULT,,,persistence,basepath_logbackup,$(DIR_INSTANCE)/backup/log
-global.ini,DEFAULT,,,persistence,basepath_logvolumes,$(DIR_GLOBAL)/hdb/log
-global.ini,SYSTEM,,,persistence,basepath_catalogbackup,/hana/logbackups/H80/catalog
-global.ini,SYSTEM,,,persistence,basepath_datavolumes,/hana/data/H80
-global.ini,SYSTEM,,,persistence,basepath_logbackup,/hana/logbackups/H80
-global.ini,SYSTEM,,,persistence,basepath_logvolumes,/hana/log/H80
-```
-
-### Configure the log backup timeout
-
-The default setting for SAP HANA to perform a log backup is `900` seconds (15 minutes). We recommend that you reduce this value to `300` seconds (5 minutes). Then it's possible to run regular backups of these files (for example, every 10 minutes). You can take these backups by adding the `log_backup` volumes to the `OTHER` volume section of the
-configuration file.
-
-```bash
-hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_backup_timeout_s') = '300' WITH RECONFIGURE"
-```
-
-### Check the log backup timeout
-
-After you make the change to the log backup timeout, ensure that the timeout is set by using the following command.
-
-In this example, the settings are displayed as `SYSTEM` settings. This query also returns the `DEFAULT` settings for comparison.
-
-```bash
-hdbsql -jaxC -n <HANA_ip_address> - i 00 -U AZACSNAP "select * from sys.m_inifile_contents where key like '%log_backup_timeout%' "
-```
-
-```output
-global.ini,DEFAULT,,,persistence,log_backup_timeout_s,900
-global.ini,SYSTEM,,,persistence,log_backup_timeout_s,300
-```
-
-# [Oracle](#tab/oracle)
-
-Apply the following changes to the Oracle database to allow for monitoring by the database administrator:
-
-1. Set up Oracle alert logging.
-
- Use the following Oracle SQL commands while you're connected to the database as `SYSDBA` to create a stored procedure under the default Oracle SYSBACKUP database account. These SQL commands allow AzAcSnap to send messages to:
-
- - Standard output by using the `PUT_LINE` procedure in the `DBMS_OUTPUT` package.
- - The Oracle database `alert.log` file by using the `KSDWRT` procedure in the `DBMS_SYSTEM` package.
-
- ```bash
- sqlplus / As SYSDBA
- ```
-
- ```sql
- GRANT EXECUTE ON DBMS_SYSTEM TO SYSBACKUP;
- CREATE PROCEDURE sysbackup.azmessage(in_msg IN VARCHAR2)
- AS
- v_timestamp VARCHAR2(32);
- BEGIN
- SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS')
- INTO v_timestamp FROM DUAL;
- SYS.DBMS_SYSTEM.KSDWRT(SYS.DBMS_SYSTEM.ALERT_FILE, in_msg);
- END azmessage;
- /
- SHOW ERRORS
- QUIT
- ```
-
-# [IBM Db2](#tab/db2)
-
-No special database configuration is required for Db2 because you're using the instance user's local operating system environment.
-
## Next steps

-- [Configure the Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md)
+- [Configure the database for Azure Application Consistent Snapshot tool](azacsnap-configure-database.md)
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md
Previously updated : 08/21/2023 Last updated : 05/15/2024
The Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool that enables data protection for third-party databases. It handles all the orchestration required to put those databases into an application-consistent state before taking a storage snapshot. After the snapshot, the tool returns the databases to an operational state.
-## Supported databases, operating systems, and Azure platforms
-
-- **Databases**
- - SAP HANA (see the [support matrix](azacsnap-get-started.md#snapshot-support-matrix-from-sap) for details)
- - Oracle Database release 12 or later (see [Oracle VM images and their deployment on Microsoft Azure](../virtual-machines/workloads/oracle/oracle-vm-solutions.md) for details)
- - IBM Db2 for LUW on Linux-only version 10.5 or later (see [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](../virtual-machines/workloads/sap/dbms_guide_ibm.md) for details)
+Check out the steps to [get started with the Azure Application Consistent Snapshot tool](azacsnap-get-started.md).
-- **Operating systems**
- - SUSE Linux Enterprise Server 12+
- - Red Hat Enterprise Linux 7+
- - Oracle Linux 7+
+## Architecture overview
-- **Azure platforms**
- - Azure Virtual Machines with Azure NetApp Files storage
- - Azure Large Instances (on bare-metal infrastructure)
+You can install AzAcSnap on the same host as the database, or on a centralized system. Either way, it must have network connectivity to the database servers and to the storage back end (Azure Resource Manager for Azure NetApp Files, or HTTPS for Azure Large Instances).
-> [!TIP]
-> If you're looking for new features (or support for other databases, operating systems, and platforms), see [Preview features of the Azure Application Consistent Snapshot tool](azacsnap-preview.md). You can also provide [feedback or suggestions](https://aka.ms/azacsnap-feedback).
+AzAcSnap is a lightweight application that's typically run from an external scheduler. On most Linux systems, that scheduler is `cron`, which is what the documentation focuses on. But the scheduler could be an alternative tool, as long as it can import the `azacsnap` user's shell profile. Importing the user's environment settings ensures that file paths and permissions are initialized correctly.
## Benefits of using AzAcSnap
AzAcSnap uses the volume snapshot and replication functionalities in Azure NetAp
- **Rapid backup snapshots independent of database size**
- AzAcSnap takes snapshot backups regardless of the size of the volumes or the database by using the snapshot technology of storage. It takes snapshots in parallel across all the volumes, to allow multiple volumes to be part of the database storage.
+ AzAcSnap takes an almost instantaneous snapshot of the database with zero performance hit, regardless of the size of the database volumes. It takes snapshots in parallel across all the volumes, to allow multiple volumes to be part of the database storage.
In tests, the tool took less than two minutes to take a snapshot backup of a database of 100+ tebibytes (TiB) stored across 16 volumes.
+
- **Application-consistent data protection**
- You can deploy AzAcSnap as a centralized or distributed solution for backing up critical database files. It ensures database consistency before it performs a storage volume snapshot. As a result, it ensures that you can use the storage volume snapshot for database recovery.
+ You can deploy AzAcSnap as a centralized or distributed solution for backing up critical database files. It ensures database consistency before it performs a storage volume snapshot. As a result, it ensures that you can use the storage volume snapshot for database recovery. Database roll-forward options are available when snapshots are used with log files.
- **Database catalog management**
AzAcSnap uses the volume snapshot and replication functionalities in Azure NetAp
- **Ad hoc volume protection**
- This capability is helpful for non-database volumes that don't need application quiescing before the tool takes a storage snapshot. Examples include SAP HANA log-backup volumes or SAPTRANS volumes.
+ This capability is helpful for non-database volumes that don't need application quiescing before the tool takes a storage snapshot. These can be any unstructured file systems, including SAP HANA log-backup volumes, shared file systems, and SAPTRANS volumes.
- **Cloning of storage volumes**
- This capability provides space-efficient storage volume clones for development and test purposes.
+ This capability provides space-efficient storage volume clones for rapid development and test purposes.
- **Support for disaster recovery**
AzAcSnap uses the volume snapshot and replication functionalities in Azure NetAp
AzAcSnap is a single binary. It doesn't need additional agents or plug-ins to interact with the database or the storage (Azure NetApp Files via Azure Resource Manager, and Azure Large Instances via Secure Shell [SSH]).
-AzAcSnap must be installed on a system that has connectivity to the database and the storage. However, the flexibility of installation and configuration allows for either a single centralized installation (Azure NetApp Files only) or a fully distributed installation (Azure NetApp Files and Azure Large Instances) with copies installed on each database installation.
-
-## Architecture overview
-
-You can install AzAcSnap on the same host as the database (SAP HANA), or you can install it on a centralized system. But, you must have network connectivity to the database servers and the storage back end (Azure Resource Manager for Azure NetApp Files or SSH for Azure Large Instances).
-
-AzAcSnap is a lightweight application that's typically run from an external scheduler. On most Linux systems, this operation is `cron`, which is what the documentation focuses on. But the scheduler could be an alternative tool, as long as it can import the `azacsnap` user's shell profile. Importing the user's environment settings ensures that file paths and permissions are initialized correctly.
-
-## Technical articles
-The following technical articles describe where AzAcSnap has been used as part of a data protection strategy:
+## Supported databases, operating systems, and Azure platforms
-- [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161)
-- [Manual Recovery Guide for SAP HANA on Azure Large Instance from storage snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-large-instance-from/ba-p/3242347)
-- [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)
-- [Manual Recovery Guide for SAP Db2 on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-db2-on-azure-vms-from-azure-netapp/ba-p/3865379)
-- [SAP Oracle 19c System Refresh Guide on Azure VMs using Azure NetApp Files Snapshots with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-oracle-19c-system-refresh-guide-on-azure-vms-using-azure/ba-p/3708172)
-- [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620)
-- [Automating SAP system copy operations with Libelle SystemCopy](https://docs.netapp.com/us-en/netapp-solutions-sap/lifecycle/libelle-sc-overview.html)
+- **Databases**
+ - SAP HANA (see the [support matrix](#snapshot-support-matrix-from-sap) for details)
+ - Oracle Database release 12 or later (see [Oracle VM images and their deployment on Microsoft Azure](../virtual-machines/workloads/oracle/oracle-vm-solutions.md) for details)
+ - IBM Db2 for LUW on Linux-only version 10.5 or later (see [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](../virtual-machines/workloads/sap/dbms_guide_ibm.md) for details)
-## Command synopsis
+- **Operating systems**
+ - SUSE Linux Enterprise Server 12+
+ - Red Hat Enterprise Linux 7+
+ - Oracle Linux 7+
-The general format of the commands is:
-`azacsnap -c [command] --[command] [sub-command] --[flag-name] [flag-value]`.
+- **Azure platforms**
+ - Azure Virtual Machines with Azure NetApp Files storage
+ - Azure Large Instances (on bare-metal infrastructure)
-## Command options
+> [!TIP]
+> If you're looking for new features (or support for other databases, operating systems, and platforms), see [Preview features of the Azure Application Consistent Snapshot tool](azacsnap-preview.md). You can also provide [feedback or suggestions](https://aka.ms/azacsnap-feedback).
-The command options are as follows. The main bullets are commands, and the indented bullets are subcommands.
+## Supported scenarios
-- `-h` provides extended command-line help with examples on AzAcSnap usage.
-- `-c configure` provides an interactive Q&A style interface to create or modify the `azacsnap` configuration file (default = `azacsnap.json`).
- - `--configuration new` creates a new configuration file.
- - `--configuration edit` enables editing an existing configuration file.
+The snapshot tools can be used in the following scenarios: [Supported scenarios for HANA Large Instances](../virtual-machines/workloads/sap/hana-supported-scenario.md) and [SAP HANA with Azure NetApp Files](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md).
- For more information, see the [configure command reference](azacsnap-cmd-ref-configure.md).
-- `-c test` validates the configuration file and tests connectivity.
- - `--test hana` tests the connection to the SAP HANA instance.
- - `--test storage` tests communication with the underlying storage interface by creating a temporary storage snapshot on all the configured `data` volumes, and then removing them.
- - `--test all` performs both the `hana` and `storage` tests in sequence.
+## Snapshot Support Matrix from SAP
- For more information, see the [test command reference](azacsnap-cmd-ref-test.md).
-- `-c backup` is the primary command to execute database-consistent storage snapshots for SAP HANA data volumes and for other (for example, shared, log backup, or boot) volumes.
- - `--volume data` takes a snapshot of all the volumes in the `dataVolume` stanza of the configuration file.
- - `--volume other` takes a snapshot of all the volumes in the `otherVolume` stanza of the configuration file.
- - `--volume all` takes a snapshot of all the volumes in the `dataVolume` stanza and then all the volumes in the `otherVolume` stanza of the configuration file.
+The following matrix is provided as a guideline on which versions of SAP HANA are supported by SAP for storage snapshot backups.
+
+| Database type | Minimum database versions | Notes |
+|---|---|---|
+| Single Container Database | 1.0 SPS 12, 2.0 SPS 00 | |
+| MDC Single Tenant | [2.0 SPS 01](https://help.sap.com/docs/SAP_HANA_PLATFORM/42668af650f84f9384a3337bcd373692/2194a981ea9e48f4ba0ad838abd2fb1c.html?version=2.0.01&locale=en-US) | or later versions where MDC Single Tenant is supported by SAP for storage/data snapshots.* |
+| MDC Multiple Tenants | [2.0 SPS 04](https://help.sap.com/docs/SAP_HANA_PLATFORM/42668af650f84f9384a3337bcd373692/7910eb4a498246b1b0435a4e9bf938d1.html?version=2.0.04&locale=en-US) | or later versions where MDC Multiple Tenants are supported by SAP for data snapshots. |
+> \* [SAP changed terminology from Storage Snapshots to Data Snapshots from 2.0 SPS 02](https://help.sap.com/docs/SAP_HANA_PLATFORM/42668af650f84f9384a3337bcd373692/7f203cf75ae4445d96ad0012c67c0480.html?version=2.0.02&locale=en-US)
- For more information, see the [backup command reference](azacsnap-cmd-ref-backup.md).
-- `-c details` provides information on snapshots or replication.
- - `--details snapshots` provides a list of basic details about the snapshots for each volume that you configured.
- - `--details replication` provides basic details about the replication status from the production site to the disaster-recovery site.
- For more information, see the [details command reference](azacsnap-cmd-ref-details.md).
-- `-c delete` deletes a storage snapshot or a set of snapshots.
+**Additional SAP deployment considerations:**
- You can use either the SAP HANA backup ID (as found in HANA Studio) or the storage snapshot name. The backup ID is tied to only the `hana` snapshots, which are created for the data and shared volumes. Otherwise, if you enter the snapshot name, the command searches for all snapshots that match the entered snapshot name.
+- When setting up the HANA user for backup, you need to set up the user for each HANA instance. Create an SAP HANA user account to access the HANA instance under the SYSTEMDB (not in the tenant database).
+- Automated log deletion is managed with the `--trim` option of the `azacsnap -c backup` command for SAP HANA 2 and later releases.
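As noted above, log cleanup can be requested as part of the backup run. A sketch of such an invocation follows; the prefix and retention values are assumptions for illustration:

```shell
# Take a database-consistent snapshot of the data volumes and, for SAP HANA 2 and later,
# trim log backups older than the oldest retained snapshot.
azacsnap -c backup --volume data --prefix daily --retention 7 --trim
```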
- For more information, see the [delete command reference](azacsnap-cmd-ref-delete.md).
-- `-c restore` provides two methods to restore a snapshot to a volume.
- - `--restore snaptovol` creates a new volume based on the latest snapshot on the target volume.
- - `-c restore --restore revertvolume` reverts the target volume to a prior state, based on the most recent snapshot.
+> [!IMPORTANT]
+> The snapshot tools only interact with the node of the SAP HANA system specified in the configuration file. If this node becomes unavailable, there's no mechanism to automatically start communicating with another node.
- For more information, see the [restore command reference](azacsnap-cmd-ref-restore.md).
-- `[--configfile <configfilename>]` is an optional command-line parameter to provide a different file name for the JSON configuration. It's useful for creating a separate configuration file per security ID (for example, `--configfile H80.json`).
-- `[--runbefore]` and `[--runafter]` are optional commands to run external commands or shell scripts before and after the execution of the main AzAcSnap logic.
+ - For an **SAP HANA Scale-Out with Standby** scenario, it's typical to install and configure the snapshot tools on the primary node. But if the primary node becomes unavailable, the standby node takes over the primary node role. In this case, the implementation team should configure the snapshot tools on both nodes (primary and standby) to avoid any missed snapshots. In the normal state, the primary node takes HANA snapshots initiated by crontab. If the primary node fails over, those snapshots have to be executed from another node, such as the new primary node (the former standby). To achieve this outcome, the standby node needs the snapshot tool installed, storage communication enabled, `hdbuserstore` configured, `azacsnap.json` configured, and crontab commands staged in advance of the failover.
+ - For an **SAP HANA HSR HA** scenario, it's recommended to install, configure, and schedule the snapshot tools on both (primary and secondary) nodes. Then, if the primary node becomes unavailable, the secondary node takes over, with snapshots being taken on the secondary. In the normal state, the primary node takes HANA snapshots initiated by crontab. The secondary node attempts to take snapshots but fails while the primary is functioning correctly. After primary node failover, those snapshots are executed from the secondary node. To achieve this outcome, the secondary node needs the snapshot tool installed, storage communication enabled, `hdbuserstore` configured, `azacsnap.json` configured, and crontab enabled in advance of the failover.
- For more information, see the [runbefore/runafter command reference](azacsnap-cmd-ref-runbefore-runafter.md).
-- `[--preview]` is an optional command-line option that's required when you're using preview features.
+ > See the technical article on [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620)
- For more information, see [Preview features of the Azure Application Consistent Snapshot tool](azacsnap-preview.md).
## Next steps
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md
Previously updated : 08/21/2023 Last updated : 05/15/2024
This article provides a guide on setup and usage of the new features in preview for the Azure Application Consistent Snapshot tool (AzAcSnap). For basic information about the tool, see [What is the Azure Application Consistent Snapshot tool?](./azacsnap-introduction.md).
-The preview features provided with AzAcSnap 9 are:
+The preview features provided with AzAcSnap 10 are:
+- Microsoft SQL Server
- Azure NetApp Files backup
- Azure managed disks

> [!NOTE]
-> Previews are provided "as is," "with all faults," and "as available." They're excluded from the service-level agreements and limited warranty. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Previews are provided "as is," "with all faults," and "as available." They're excluded from the service-level agreements and might not be covered by customer support.
+> Previews are subject to the supplemental terms of use for [Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Using AzAcSnap preview features
+
+AzAcSnap preview features are offered together with generally available features. Using the preview features requires the use of the `--preview` command-line option. To set up and install AzAcSnap, see [Get started with the Azure Application Consistent Snapshot tool](azacsnap-get-started.md).
## Providing feedback

You can provide feedback on AzAcSnap, including this preview, [online](https://aka.ms/azacsnap-feedback).
-## Using AzAcSnap preview features
+## Microsoft SQL Server
-AzAcSnap preview features are offered together with generally available features. Using the preview features requires the use of the `--preview` command-line option. To set up and install AzAcSnap, see [Get started with the Azure Application Consistent Snapshot tool](azacsnap-get-started.md).
+### Supported platforms and operating systems
+
+> [!NOTE]
+> Support for Microsoft SQL Server is a preview feature.
+> This section's content supplements the [What is the Azure Application Consistent Snapshot tool?](azacsnap-introduction.md) page.
+
+The following database platforms and operating systems are newly supported with this preview release:
+
+- **Databases**
+ - Microsoft SQL Server 2022 (or later) on Windows Server 2019 (or later) is in preview.
++
+### Enable communication with database
+
+> [!NOTE]
+> Support for Microsoft SQL Server is a preview feature.
+> This section's content supplements the [Install Azure Application Consistent Snapshot tool](azacsnap-installation.md) page.
+This section explains how to enable communication with the database. Ensure the database you're using is correctly selected from the tabs.
+
+# [Microsoft SQL Server](#tab/mssql)
+
+The snapshot tools issue commands to the Microsoft SQL Server database directly to enable and disable backup mode.
+
+AzAcSnap connects directly to Microsoft SQL Server using the provided connect string to issue SQL commands, such as `ALTER SERVER CONFIGURATION SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON` or `ALTER SERVER CONFIGURATION SET SUSPEND_FOR_SNAPSHOT_BACKUP = OFF`. The connect string determines whether the installation is on the database server or on a centralized "backup" server. AzAcSnap is typically installed on the database server so that features such as flushing file buffers work as expected. If AzAcSnap is installed on the database server, be sure the user running `azacsnap` has the required permissions.
+
+##### `azacsnap` user permissions
+
+Refer to [Get started with Azure Application Consistent Snapshot tool](azacsnap-get-started.md).
+The `azacsnap` user should have permissions to put Microsoft SQL Server into backup mode, and permissions to flush I/O buffers to the configured volumes.
+
+Configure AzAcSnap (`.\azacsnap.exe -c configure`) with the correct values for Microsoft SQL Server, and then test database connectivity with the `azacsnap` test command:
+```shell
+.\azacsnap.exe -c test --test mssql
+```
+
+```output
+BEGIN : Test process started for 'mssql'
+BEGIN : Database tests
+PASSED: Successful connectivity to MSSQL version 16.00.1115
+END : Test process complete for 'mssql'
+```
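Once the connectivity test passes, a snapshot backup can be run in the same way as for the other supported databases. The prefix and retention values below are illustrative assumptions, not prescribed values:

```shell
# Take an application-consistent snapshot backup of the configured data volumes
# (assumed prefix and retention shown for illustration).
.\azacsnap.exe -c backup --volume data --prefix mssql_daily --retention 7
```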
+
+### Configuring the database
+This section explains how to configure the database.
+# [Microsoft SQL Server](#tab/mssql)
+No special database configuration is required for Microsoft SQL Server, because AzAcSnap uses the user's local operating system environment.
+++
+### Configuring AzAcSnap
+
+This section explains how to configure AzAcSnap for the specified database.
+
+> [!NOTE]
+> Support for Microsoft SQL Server is a preview feature.
+> This section's content supplements the [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) page.
+### Details of required values
+The following sections provide detailed guidance on the various values required for the configuration file.
+# [Microsoft SQL Server](#tab/mssql)
+#### Microsoft SQL Server Database values for configuration
+When adding a Microsoft SQL Server database to the configuration, the following values are required:
+- **connectionString** = The connection string used to connect to the database. For a typical AzAcSnap installation onto the system running Microsoft SQL Server, where the database instance is MSSQL2022, the connection string is "Trusted_Connection=True;Persist Security Info=True;Data Source=MSSQL2022;TrustServerCertificate=true".
+- **instanceName** = The database instance name.
+- **metaDataFileLocation** = The location where Microsoft SQL Server will write out the backup meta-data file (for example, "C:\\MSSQL_BKP\\").
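Put together, a Microsoft SQL Server database entry using these values might look like the following sketch. The exact schema is generated by `azacsnap -c configure` and depends on the AzAcSnap version, so treat the field placement here as an assumption for illustration only:

```json
{
  "mssql": {
    "connectionString": "Trusted_Connection=True;Persist Security Info=True;Data Source=MSSQL2022;TrustServerCertificate=true",
    "instanceName": "MSSQL2022",
    "metaDataFileLocation": "C:\\MSSQL_BKP\\"
  }
}
```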
++
## Azure NetApp Files backup
For more information about this feature, see [Configure the Azure Application Co
Microsoft provides many storage options for deploying databases such as SAP HANA. For details about some of these options, see [Azure Storage types for SAP workload](../virtual-machines/workloads/sap/planning-guide-storage.md). There's also a [cost-conscious solution with Azure premium storage](../virtual-machines/workloads/sap/hana-vm-premium-ssd-v1.md#cost-conscious-solution-with-azure-premium-storage).
-AzAcSnap can take application-consistent database snapshots when you deploy it on this type of architecture (that is, a virtual machine [VM] with managed disks). But the setup for this platform is slightly more complicated because in this scenario, you need to block I/O to the mount point (by using `xfs_freeze`) before you take a snapshot of the managed disks in the mounted logical volumes.
+AzAcSnap can take application-consistent database snapshots when you deploy it on this type of architecture (that is, a virtual machine [VM] with managed disks). But the setup for this platform is slightly more complicated, because in this scenario AzAcSnap takes an additional step to flush all I/O buffers and ensure they're written out to persistent storage. On Linux, AzAcSnap calls the `sync` command to flush file buffers; on Windows, it uses the FlushFileBuffers kernel call. It then takes a snapshot of the managed disks in the mounted logical volumes.
> [!IMPORTANT]
-> The Linux system must have `xfs_freeze` available to block disk I/O.
-
-Take extra care to configure AzAcSnap with the correct mount points (file systems), because `xfs_freeze` blocks I/O to the device that the Azure managed disk's mount point specifies. This behavior could inadvertently block a running application until `azacsnap` finishes running.
+> AzAcSnap will need appropriate operating system permissions for the volume so it can perform the flush.
Here's the architecture at a high level:

1. Attach Azure managed disks to the VM by using the Azure portal.
1. Create a logical volume from these managed disks.
1. Mount the logical volume to a Linux directory.
-1. Create the service principal in the same way as for Azure NetApp Files in the [AzAcSnap installation](azacsnap-installation.md?tabs=azure-netapp-files%2Csap-hana#enable-communication-with-storage).
+1. Enable communication in the same way as for Azure NetApp Files in the [AzAcSnap installation](azacsnap-configure-storage.md?tabs=azure-netapp-files#enable-communication-with-storage).
1. Install and configure AzAcSnap.
- The configurator has a new option to define the mount point for the logical volume. After you put the database into backup mode and after the I/O cache is flushed (dependent on Linux kernel parameter `fs.xfs.xfssyncd_centisecs`), this parameter is passed to `xfs_freeze` to block the I/O.
-1. Install and configure `xfs_freeze` to be run as a non-privileged user:
-
- 1. Create an executable file called `$HOME/bin/xfs_freeze` with the following content:
-
- ```bash
- #!/bin/sh
- /usr/bin/sudo /usr/sbin/xfs_freeze $1 $2
- ```
-
- 1. Create a sudoers file called `/etc/sudoers.d/azacsnap` to allow the `azacsnap` user to run `xfs_freeze` with the following content:
-
- ```bash
- #
- # What: azacsnap
- # Why: Allow the azacsnap user to run "specific" commands with elevated privileges.
- #
- # User_Alias = SAP HANA Backup administrator user.
- User_Alias AZACSNAP = azacsnap
- #
- AZACSNAP ALL=(ALL) NOPASSWD: /usr/sbin/xfs_freeze
- ```
-
- 1. Test that the `azacsnap` user can freeze and unfreeze I/O to the target mount point by running the following code as the `azacsnap` user.
-
- This example runs each command twice to show that it worked the first time, because there's no command to confirm if `xfs_freeze` has frozen I/O.
-
- Freeze I/O:
-
- ```bash
- su - azacsnap
- xfs_freeze -f /hana/data
- xfs_freeze -f /hana/data
- ```
-
- ```output
- xfs_freeze: cannot freeze filesystem at /hana/data: Device or resource busy
- ```
-
- Unfreeze I/O:
-
- ```bash
- su - azacsnap
- xfs_freeze -u /hana/data
- xfs_freeze -u /hana/data
- ```
-
- ```output
- xfs_freeze: cannot unfreeze filesystem mounted at /hana/data: Invalid argument
- ```
-
For more information about using Azure managed disks as a storage back end, see [Configure the Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md).

### Example configuration file
Here's an example configuration file. Note the hierarchy for `dataVolume`, `moun
"hdbUserStoreName": "AZACSNAP",
"savePointAbortWaitSeconds": 600,
"autoDisableEnableBackint": false,
- "hliStorage": [],
- "anfStorage": [],
- "amdStorage": [
+ "storage": [
{
- "dataVolume": [
+ "dataVolumes": [
{ "mountPoint": "/hana/data",
+ "aliStorageResources": [
"azureManagedDisks": [ { "resourceId": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/<disk01>",
- "authFile": "azureauth.json"
+ "authFile": ""
}, { "resourceId": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/<disk02>",
- "authFile": "azureauth.json"
+ "authFile": ""
} ] }
- ],
- "otherVolume": []
+ ]
} ]
- },
- "oracle": null
+ }
} ] }
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
Previously updated : 04/17/2024 Last updated : 07/02/2024
Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer
For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page.
+## May-2024
+
+### AzAcSnap 10 (Build: 1B55F1*)
+
+AzAcSnap 10 is being released with the following fixes and improvements:
+
+- Features added to [Preview](azacsnap-preview.md):
+ - **Microsoft SQL Server** support, adding options to configure, test, and take application-consistent snapshot backups of Microsoft SQL Server.
+- Features moved to GA (generally available):
+ - **Windows** support: AzAcSnap can now run on supported Linux distributions and on Windows.
+ - New configuration file layout.
+ - To upgrade pre-AzAcSnap 10 configurations, use the `azacsnap -c configure --configuration new` command to create a new configuration file, reusing the values from your existing configuration file.
+ - Azure Large Instance storage management via REST API over HTTPS.
+ - This allows the use of Consistency Group snapshots on supported Azure Large Instance storage.
+- Fixes and Improvements:
+ - New `--flush` option, which flushes in-memory file buffers for local storage; useful for Azure Large Instance and Azure managed disks when connected as block storage.
+ - Logging improvements.
+- Features removed:
+ - AzAcSnap installer for Linux.
+ - AzAcSnap is now downloadable as a binary for supported versions of Linux and Windows. This simplifies access to the AzAcSnap program, allowing you to get started quickly.
+ - Azure Large Instance storage management via CLI over SSH.
+ - CLI over SSH replaced with the REST API over HTTPS.
+
+Download the binary of [AzAcSnap 10 for Linux](https://aka.ms/azacsnap-10-linux) or [AzAcSnap 10 for Windows](https://aka.ms/azacsnap-10-windows).
+
## Apr-2024

### AzAcSnap 9a (Build: 1B3B458)
AzAcSnap 9 is being released with the following fixes and improvements:
- Features moved to GA (generally available):
  - IBM Db2 Database support.
- - [System Managed Identity](azacsnap-installation.md#azure-system-managed-identity) support for easier setup while improving security posture.
+ - [System Managed Identity](azacsnap-configure-storage.md#azure-system-managed-identity) support for easier setup while improving security posture.
- Fixes and Improvements:
  - Configure (`-c configure`) changes:
    - Allows for a blank value for `authFile` in the configuration file when using System Managed Identity.
AzAcSnap v5.0 (Build: 20210421.6349) is now Generally Available and for this bui
AzAcSnap v5.0 Preview (Build: 20210318.30771) is released with the following fixes and improvements:

-- Removed the need to add the AZACSNAP user into the SAP HANA Tenant DBs, see the [Enable communication with database](azacsnap-installation.md#enable-communication-with-the-database) section.
+- Removed the need to add the AZACSNAP user into the SAP HANA Tenant DBs, see the [Enable communication with database](azacsnap-configure-database.md#enable-communication-with-the-database) section.
- Fix to allow a [restore](azacsnap-cmd-ref-restore.md) with volumes configured with Manual QOS.
- Added mutex control to throttle SSH connections for Azure Large Instance.
- Fix installer for handling path names with spaces and other related issues.
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-troubleshoot.md
Previously updated : 01/16/2023 Last updated : 05/15/2024
To troubleshoot this error:
```

> [!TIP]
-> For more information on generating a new Service Principal, refer to the section [Enable communication with Storage](azacsnap-installation.md?tabs=azure-netapp-files%2Csap-hana#enable-communication-with-storage) in the [Install Azure Application Consistent Snapshot tool](azacsnap-installation.md) guide.
+> For more information on generating a new Service Principal, refer to the section [Enable communication with Storage](azacsnap-configure-storage.md?tabs=azure-netapp-files#enable-communication-with-storage) in the [Install Azure Application Consistent Snapshot tool](azacsnap-installation.md) guide.
## Troubleshoot failed 'test hana' command
To troubleshoot this error:
### Insufficient privilege error
-If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check that the user has the appropriate AZACSNAP database user privileges set up per the [installation guide](azacsnap-installation.md#enable-communication-with-the-database). Verify the user's privileges with the following command:
+If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check that the user has the appropriate AZACSNAP database user privileges set up per the [installation guide](azacsnap-configure-database.md#enable-communication-with-the-database). Verify the user's privileges with the following command:
```bash
hdbsql -U AZACSNAP "select GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE from sys.granted_privileges " | grep -i -e GRANTEE -e azacsnap
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md
Previously updated : 02/21/2023 Last updated : 06/25/2024

# Troubleshoot volume errors for Azure NetApp Files
-This article describes error messages and resolutions that can help you troubleshoot Azure NetApp Files volumes.
+If a volume CRUD operation is performed on a volume that isn't in a terminal state, the operation fails. Automation workflows and portal users should check that the volume is in a terminal state before executing another asynchronous operation on it.
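For example, a workflow might poll the volume's provisioning state with the Azure CLI before issuing the next operation. The resource names below are placeholders:

```shell
# Check that the volume has reached a terminal provisioning state
# (for example, "Succeeded" or "Failed") before the next CRUD operation.
az netappfiles volume show \
  --resource-group myResourceGroup \
  --account-name myAccount \
  --pool-name myPool \
  --name myVolume \
  --query provisioningState \
  --output tsv
```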
## Errors for SMB and dual-protocol volumes
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | | | | |
> | managedClusters | resource group | 1-63 | Alphanumerics, underscores, and hyphens.<br><br>Start and end with alphanumeric. |
> | managedClusters / agentPools | managed cluster | 1-12 for Linux<br>1-6 for Windows | Lowercase letters and numbers.<br><br>Can't start with a number. |
-> | openShiftManagedClusters | resource group | 1-30 | Alphanumerics. |
## Microsoft.CustomerInsights
azure-signalr Signalr Howto Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md
This article shows how to configure your Azure SignalR Service resource and code
The first step is to configure managed identities.
-This example shows you how to configure a system-assigned managed identity on a virtual machine (VM) by using the Azure portal:
+This example shows you how to configure a system-assigned managed identity on an App Service by using the Azure portal:
-1. In the [Azure portal](https://portal.azure.com/), search for and select a VM.
-1. Under **Settings**, select **Identity**.
-1. On the **System assigned** tab, switch **Status** to **On**.
+1. Access your app's settings in the [Azure portal](https://portal.azure.com) under the **Settings** group in the left navigation pane.
+
+1. Select **Identity**.
- ![Screenshot of selections for turning on system-assigned managed identities for a virtual machine.](./media/signalr-howto-authorize-managed-identity/identity-virtual-machine.png)
-1. Select the **Save** button to confirm the change.
+1. Within the **System assigned** tab, switch **Status** to **On**, and then select **Save**.
-To learn how to create user-assigned managed identities, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity).
+ ![Screenshot that shows where to switch Status to On and then select Save.](../app-service/media/app-service-managed-service-identity/system-assigned-managed-identity-in-azure-portal.png)
-To learn more about configuring managed identities, see one of these articles:
+To learn more about other ways to configure managed identities for Azure App Service and Azure Functions, see [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md).
-- [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)-- [Configure managed identities for Azure resources on an Azure VM using PowerShell](../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)-- [Configure managed identities for Azure resources on an Azure VM using the Azure CLI](../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)-- [Configure managed identities for Azure resources on an Azure VM using templates](../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)-- [Configure a VM with managed identities for Azure resources using an Azure SDK](../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)-
-To learn how to configure managed identities for Azure App Service and Azure Functions, see [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md).
+To learn more about configuring managed identities on an Azure VM, see [Configure managed identities on Azure virtual machines (VMs)](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md).
## Add role assignments in the Azure portal
azure-signalr Signalr Howto Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-guide.md
description: Learn how to troubleshoot common issues
Previously updated : 07/18/2022 Last updated : 07/02/2024 ms.devlang: csharp
For **Standard** instances, **concurrent** connection count limit **per unit** i
The connections include both client and server connections. check [here](./signalr-concept-messages-and-connections.md#how-connections-are-counted) for how connections are counted.
-### Too many negotiate requests at the same time
+### NegotiateThrottled
-We suggest having a random delay before reconnecting, check [here](#restart_connection) for retry samples.
+When too many client negotiate requests arrive at the **same** time, they may get throttled. The limit scales with the unit count: more units allow a higher limit. We also suggest adding a random delay before reconnecting; check [here](#restart_connection) for retry samples.
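One common way to implement the suggested random delay is full jitter over an exponentially growing cap. This is an illustrative sketch only; the function name and default values are our own, not part of the SignalR SDK:

```javascript
// Full-jitter backoff: pick a random delay in [0, min(maxMs, baseMs * 2^attempt)).
// Randomizing the delay prevents all clients from renegotiating in lockstep.
function reconnectDelayMs(attempt, baseMs = 1000, maxMs = 30000) {
  const cap = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * cap);
}
```

A client would call `reconnectDelayMs(attempt)` before each reconnect try, incrementing `attempt` on every failure.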
[Having issues or feedback about the troubleshooting? Let us know.](https://aka.ms/asrs/survey/troubleshooting)
azure-vmware Remove Arc Enabled Azure Vmware Solution Vsphere Resources From Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md
# Remove Arc-enabled Azure VMware Solution vSphere resources from Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In this article, learn how to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere. For VMware vSphere environments that you no longer want to manage with Azure Arc-enabled VMware vSphere, use the information in this article to perform the following actions:
azure-web-pubsub Socketio Troubleshoot Admin Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-troubleshoot-admin-ui.md
+
+ Title: Admin UI
+description: This article explains how to use Admin UI when you're using Web PubSub for Socket.IO.
+keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, Socket.IO logging, Socket.IO debugging, socketio, azure socketio
++ Last updated : 07/02/2024++++
+# Azure Socket.IO Admin UI
+
+[Socket.IO Admin UI](https://socket.io/docs/v4/admin-ui/) is a website tool developed by the official Socket.IO team that gives you an overview of the state of your Socket.IO deployment. See how it works and explore its advanced usage in the [Socket.IO Admin UI documentation](https://socket.io/docs/v4/admin-ui/).
+
+[Azure Socket.IO Admin UI](https://github.com/Azure/azure-webpubsub/tree/main/tools/azure-socketio-admin-ui) is a customized version of the tool for Azure Socket.IO.
+
+## Deploy the website
+Azure Socket.IO Admin UI doesn't currently have a hosted version. You need to host the website yourself.
+
+The static website files can be either downloaded from the release page or built from source code:
+
+### Download the released version
+1. Download the released zip file, such as `azure-socketio-admin-ui-0.1.0.zip`, from the [release page](https://github.com/Azure/azure-webpubsub/releases)
+
+2. Extract the zip file
+
+### Build from source code
+1. Clone the repository
+ ```bash
+ git clone https://github.com/Azure/azure-webpubsub.git
+ ```
+
+2. Build the project
+ ```bash
+ cd tools/azure-socketio-admin-ui
+ yarn install
+ yarn build
+ ```
+
+3. Host the static files using any HTTP server. Let's use [a tiny static HTTP server](https://www.npmjs.com/package/http-server) as an example:
+ ```bash
+ cd dist
+ npm install -g http-server
+ http-server
+ ```
+
+    The HTTP server listens on port 8080 by default.
+
+4. Visit `http://localhost:8080` in a browser
+
+## Update server-side code
+1. Install the `@socket.io/admin-ui` package:
+
+ ```bash
+ npm i @socket.io/admin-ui
+ ```
+
+2. Invoke the instrument method on your Socket.IO server:
+
+    ```javascript
+    const azure = require("@azure/web-pubsub-socket.io");
+    const { Server, Namespace } = require("socket.io");
+    const { instrument } = require("@socket.io/admin-ui");
+    // Create the HTTP server; pass your Express app to createServer() if you have one.
+    const httpServer = require("http").createServer();
+
+    const wpsOptions = {
+        hub: "eio_hub",
+        connectionString: process.argv[2] || process.env.WebPubSubConnectionString
+    };
+
+    async function main() {
+        const io = await new Server(httpServer).useAzureSocketIO(wpsOptions);
+        instrument(io, { auth: false, mode: "development" });
+
+        // Note: This override is necessary to make the development mode work
+        Namespace.prototype["fetchSockets"] = async function() {
+            return this.local.fetchSockets();
+        };
+
+        httpServer.listen(3000);
+    }
+
+    main();
+    ```
+
+## Open Admin UI website
+1. Visit `http://localhost:8080` in a browser.
+
+2. You should see the following modal:
++
+3. Fill in your service endpoint and hub name.
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal using Azure Backup
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 04/04/2024 Last updated : 07/02/2024
As one of the [restore options](#restore-options), you can replace an existing V
![Restore configuration wizard Replace Existing](./media/backup-azure-arm-restore-vms/restore-configuration-replace-existing.png)
-## Assign network access settings during restore (preview)
+## Assign network access settings during restore
Azure Backup also allows you to configure the access options for the restored disks once the restore operation is complete. You can set the disk access preferences at the time of initiating the restore. >[!Note]
->This feature is currently in preview and is available only for backed-up VMs that use private endpoint-enabled disks.
+>This feature is generally available for backed-up VMs that use private endpoint-enabled disks.
To enable disk access on restored disks during [VM restore](#choose-a-vm-restore-configuration), choose one of the following options:
To enable disk access on restored disks during [VM restore](#choose-a-vm-restore
:::image type="content" source="./media/backup-azure-arm-restore-vms/restored-disk-access-configuration-options.png" alt-text="Screenshot shows the access configuration options for restored disks." lightbox="./media/backup-azure-arm-restore-vms/restored-disk-access-configuration-options.png":::
+>[!Note]
+>The options to keep the network configuration of the restored disks the same as that of the source disks, or to allow access from specific networks only, are currently not available from Azure PowerShell/Azure CLI.
+ ## Cross Region Restore As one of the [restore options](#restore-options), Cross Region Restore (CRR) allows you to restore Azure VMs in a secondary region, which is an Azure paired region.
backup Backup Azure Policy Supported Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-policy-supported-skus.md
# Supported VM SKUs for Azure Policy > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Azure Backup provides a built-in policy (using Azure Policy) that can be assigned to **all Azure VMs in a specified location within a subscription or resource group**. When this policy is assigned to a given scope, all new VMs created in that scope are automatically configured for backup to an **existing vault in the same location and subscription**. The table below lists all the VM SKUs supported by this policy.
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
# Recover files from Azure virtual machine backup > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Azure Backup provides the capability to restore [Azure virtual machines (VMs) and disks](./backup-azure-arm-restore-vms.md) from Azure VM backups, also known as recovery points. This article explains how to recover files and folders from an Azure VM backup. Restoring files and folders is available only for Azure VMs deployed using the Resource Manager model and protected to a Recovery Services vault.
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about the new features in the Azure Backup service. Previously updated : 05/02/2024 Last updated : 07/02/2024 - ignite-2023
Azure Backup is constantly improving and releasing new features that enhance the
You can learn more about the new releases by bookmarking this page or by [subscribing to updates here](https://azure.microsoft.com/updates/?query=backup). ## Updates summary-
+- July 2024
+ - [Backup and restore of virtual machines with private endpoint enabled disks is now Generally Available](#backup-and-restore-of-virtual-machines-with-private-endpoint-enabled-disks-is-now-generally-available)
- May 2024 - [Migration of Azure VM backups from standard to enhanced policy (preview)](#migration-of-azure-vm-backups-from-standard-to-enhanced-policy-preview) - March 2024
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Backup and restore of virtual machines with private endpoint enabled disks is now Generally Available
+
+Azure Backup now allows you to back up the Azure Virtual Machines that use disks with private endpoints (disk access). This support is extended for Virtual Machines that are backed up using Enhanced backup policies, along with the existing support for those that were backed up using Standard backup policies. While initiating the restore operation, you can specify the network access settings required for the restored disks. You can choose to keep the network configuration of the restored disks the same as that of the source disks, specify the access from specific networks only, or allow public access from all networks.
+
+For more information, see [Assign network access settings during restore](backup-azure-arm-restore-vms.md#assign-network-access-settings-during-restore).
+ ## Migration of Azure VM backups from standard to enhanced policy (preview) Azure Backup now supports migration to the enhanced policy for Azure VM backups using standard policy. The migration of VM backups to enhanced policy enables you to schedule multiple backups per day (up to every 4 hours), retain snapshots for longer duration, and use multi-disk crash consistency for VM backups. Snapshot-tier recovery points (created using enhanced policy) are zonally resilient. The migration of VM backups to enhanced policy also allows you to migrate your VMs to Trusted Launch and use Premium SSD v2 and Ultra-disks for the VMs without disrupting the existing backups.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 Previously updated : 06/06/2024 Last updated : 07/01/2024
# Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## June 2024 Guest OS
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 24-06 | 5039217 | Latest Cumulative Update(LCU) | [6.72] | Jun 11, 2024 |
+| Rel 24-06 | 5039227 | Latest Cumulative Update(LCU) | [7.42] | Jun 11, 2024 |
+| Rel 24-06 | 5039214 | Latest Cumulative Update(LCU) | [5.96] | Jun 11, 2024 |
+| Rel 24-06 | 5036626 | .NET Framework 3.5 Security and Quality Rollup | [2.152] | May 14, 2024 |
+| Rel 24-06 | 5036607 | .NET Framework 4.7.2 Cumulative Update LKG | [2.152] | Apr 9, 2024 |
+| Rel 24-06 | 5036627 | .NET Framework 3.5 Security and Quality Rollup LKG |[4.132] | May 14, 2024 |
+| Rel 24-06 | 5036606 | .NET Framework 4.7.2 Cumulative Update LKG |[4.132] | Apr 9, 2024 |
+| Rel 24-06 | 5036624 | .NET Framework 3.5 Security and Quality Rollup LKG | [3.140] | May 14, 2024 |
+| Rel 24-06 | 5036605 | .NET Framework 4.7.2 Cumulative Update LKG | [3.140] | Apr 9, 2024 |
+| Rel 24-06 | 5036604 | .NET Framework Dot Net | [6.72] | Apr 9, 2024 |
+| Rel 24-06 | 5036613 | .NET Framework 4.8 Security and Quality Rollup LKG | [7.42] | Apr 9, 2024 |
+| Rel 24-06 | 5039289 | Monthly Rollup | [2.152] | Jun 11, 2024 |
+| Rel 24-06 | 5039260 | Monthly Rollup | [3.140] | Jun 11, 2024 |
+| Rel 24-06 | 5039294 | Monthly Rollup | [4.132] | Jun 11, 2024 |
+| Rel 24-06 | 5039342 | Servicing Stack Update | [3.140] | Jun 11, 2024 |
+| Rel 24-06 | 5039340 | Servicing Stack Update | [4.132] | Jun 11, 2024 |
+| Rel 24-06 | 5039334 | Servicing Stack Update | [5.96] | Jun 11, 2024 |
+| Rel 24-06 | 5039339 | Servicing Stack Update LKG | [2.152] | Jun 11, 2024 |
+| Rel 24-06 | 5039335 | Servicing Stack Update | [7.42] | Jun 11, 2024 |
+| Rel 24-06 | 5039343 | Servicing Stack Update | [6.72] | Jun 11, 2024 |
+| Rel 24-06 | 4494175 | January '20 Microcode | [5.96] | Sep 1, 2020 |
+| Rel 24-06 | 4494175 | January '20 Microcode | [6.72] | Sep 1, 2020 |
+
+[5039217]: https://support.microsoft.com/kb/5039217
+[5039227]: https://support.microsoft.com/kb/5039227
+[5039214]: https://support.microsoft.com/kb/5039214
+[5036626]: https://support.microsoft.com/kb/5036626
+[5036607]: https://support.microsoft.com/kb/5036607
+[5036627]: https://support.microsoft.com/kb/5036627
+[5036606]: https://support.microsoft.com/kb/5036606
+[5036624]: https://support.microsoft.com/kb/5036624
+[5036605]: https://support.microsoft.com/kb/5036605
+[5036604]: https://support.microsoft.com/kb/5036604
+[5036613]: https://support.microsoft.com/kb/5036613
+[5039289]: https://support.microsoft.com/kb/5039289
+[5039260]: https://support.microsoft.com/kb/5039260
+[5039294]: https://support.microsoft.com/kb/5039294
+[5039342]: https://support.microsoft.com/kb/5039342
+[5039340]: https://support.microsoft.com/kb/5039340
+[5039334]: https://support.microsoft.com/kb/5039334
+[5039339]: https://support.microsoft.com/kb/5039339
+[5039343]: https://support.microsoft.com/kb/5039343
+[5039335]: https://support.microsoft.com/kb/5039335
+[4494175]: https://support.microsoft.com/kb/4494175
+[2.152]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.140]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.132]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.96]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.72]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.42]: ./cloud-services-guestos-update-matrix.md#family-7-releases
+ ## May 2024 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
cloud-services Cloud Services Python How To Use Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-how-to-use-service-management.md
# Use service management from Python > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
You can find more general guidance on how to set up your service architecture to
You can follow the documentation for [creating request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md). ## Acquiring phone numbers
-Before acquiring a phone number, make sure your subscription meets the [geographic and subscription](./telephony/plan-solution.md) requirements. Otherwise, you can't purchase a phone number. The below limitations apply to purchasing numbers through the [Phone Numbers SDK](./reference.md) and the [Azure portal](https://portal.azure.com/).
+Before acquiring a phone number, make sure your subscription meets the [geographic and subscription](./telephony/plan-solution.md) requirements. Otherwise, you can't purchase a phone number. The following limitations apply to purchasing numbers through the [Phone Numbers SDK](./reference.md) and the [Azure portal](https://portal.azure.com/).
| Operation | Scope | Timeframe | Limit (number of requests) | ||--|--|--|
For more information on the SMS SDK and service, see the [SMS SDK overview](./sm
## Email
-There is a limit on the number of email messages you can send for a given period of time. If you exceed the following limits on your subscription, your requests are rejected. You can attempt these requests again, when the Retry-After time has passed. You can make a request to raise the sending volume limits if needed.
+You can send a limited number of email messages. If you exceed the following limits for your subscription, your requests are rejected. You can attempt these requests again after the Retry-After time passes. If needed, take action before reaching the limit by requesting to raise your sending volume limits.
### Rate Limits
There is a limit on the number of email messages you can send for a given period
|Number of recipients in Email|50 | |Total email request size (including attachments) |10 MB |
+### Send attachments larger than 10 MB
+
+To email file attachments up to 30 MB, complete a [support request](../support.md).
+
+If you need to send email file attachments larger than 30 MB, use this alternative solution: store the files in an Azure Blob Storage account and include a link to them in your email. You can secure the files with a shared access signature (SAS), which provides secure delegated access to resources in your storage account. By using SAS, you have granular control over how clients can access your data.
+
+Benefits of using an Azure Blob Storage account:
+
 - You can handle large-scale files.
+ - You can use SAS keys to precisely manage file access.
+
+For more information, see:
+
+ - [Introduction to Azure Blob Storage](/azure/storage/blobs/storage-blobs-introduction)
+ - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](/azure/storage/common/storage-sas-overview)
+ ### Action to take
-This sandbox setup is to help developers start building the application. Once you have established a sender reputation by sending mails, you can request to increase the sending volume limits. Submit a [support request](https://azure.microsoft.com/support/create-ticket/) to raise your desired email sending limit if you require sending a volume of messages exceeding the rate limits. Email quota increase requests aren't automatically approved. The reviewing team considers your overall sender reputation, which includes factors such as your email delivery failure rates, your domain reputation, and reports of spam and abuse when determining approval status.
+
+To increase your email quota, follow the instructions at [Quota increase for email domains](./email/email-quota-increase.md).
> [!NOTE] > Email quota increase requests may take up to 72 hours to be evaluated and approved, especially for requests that come in on Friday afternoon.
This sandbox setup is to help developers start building the application. Once yo
### Rate Limits | **Operation** | **Scope** | **Limit per 10 seconds** | **Limit per minute** |
-|--|--|--|--|
-|Create chat thread|per User|10|-|
-|Delete chat thread|per User|10|-|
-|Update chat thread|per Chat thread|5|-|
-|Add participants / remove participants|per Chat thread|10|30|
-|Get chat thread / List chat threads|per User|50|-|
-|Get chat message|per User per chat thread|50|-|
-|Get chat message|per Chat thread|250|-|
-|List chat messages|per User per chat thread|50|200|
-|List chat messages|per Chat thread|250|400|
-|Get read receipts (20 participant limit**) |per User per chat thread|5|-|
-|Get read receipts (20 participant limit**) |per Chat thread|100|-|
-|List chat thread participants|per User per chat thread|10|-|
-|List chat thread participants|per Chat thread|250|-|
-|Send message / update message / delete message|per Chat thread|10|30|
-|Send read receipt|per User per chat thread|10|30|
-|Send typing indicator|per User per chat thread|5|15|
-|Send typing indicator|per Chat thread|10|30|
+| | | | |
+| Create chat thread | per User | 10 | - |
+| Delete chat thread | per User | 10 | - |
+| Update chat thread | per Chat thread | 5 | - |
+| Add participants / remove participants | per Chat thread | 10 | 30 |
+| Get chat thread / List chat threads | per User | 50 | - |
+| Get chat message | per User per chat thread | 50 | - |
+| Get chat message | per Chat thread | 250 | - |
+| List chat messages | per User per chat thread | 50 | 200 |
+| List chat messages | per Chat thread | 250 | 400 |
+| Get read receipts (20 participant limit\*) | per User per chat thread | 5 | - |
+| Get read receipts (20 participant limit\*) | per Chat thread | 100 | - |
+| List chat thread participants | per User per chat thread | 10 | - |
+| List chat thread participants | per Chat thread | 250 | - |
+| Send message / update message / delete message | per Chat thread | 10 | 30 |
+| Send read receipt | per User per chat thread | 10 | 30 |
+| Send typing indicator | per User per chat thread | 5 | 15 |
+| Send typing indicator | per Chat thread | 10 | 30 |
> [!NOTE]
-> ** Read receipts and typing indicators are not supported on chat threads with more than 20 participants.
+> \* Read receipts and typing indicators are not supported on chat threads with more than 20 participants.
### Chat storage+ Azure Communication Services stores chat messages according to the retention policy you set when you create a chat thread. [!INCLUDE [public-preview-notice.md](../includes/public-preview-include-document.md)]
If you have strict compliance needs, we recommend that you delete chat threads u
### PSTN Call limitations
-| **Name** | **Scope** | Limit |
-|--|--|--|
-|Default number of outbound* concurrent calls |per Number | 2
+| **Name** | **Scope** | Limit |
+| | | |
+| Default number of outbound* concurrent calls | per Number | 2 |
-*: no limits on inbound concurrent calls. You can also [submit a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) to increase the outbound concurrent calls limit and it will be reviewed by our vetting team.
+> [!NOTE]
+> \* No limits on inbound concurrent calls. You can also [submit a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) to increase the outbound concurrent calls limit, which is reviewed by our vetting team.
### Call maximum limitations
-| **Name** | Limit |
-|--|--|
-|Number of participants | 350
+| **Name** | Limit |
+| | |
+| Number of participants | 350 |
### Calling SDK streaming support The Communication Services Calling SDK supports the following streaming configurations:
-| Limit | Web | Windows/Android/iOS |
-| - | | -- |
-| **Maximum # of outgoing local streams that you can send simultaneously** | one video or one screen sharing | one video + one screen sharing |
-| **Maximum # of incoming remote streams that you can render simultaneously** | 9 videos + one screen sharing | 9 videos + one screen sharing |
+| Limit | Web | Windows/Android/iOS |
+| | | |
+| **Maximum # of outgoing local streams that you can send simultaneously** | one video or one screen sharing | one video + one screen sharing |
+| **Maximum # of incoming remote streams that you can render simultaneously** | nine videos + one screen sharing | nine videos + one screen sharing |
-While the Calling SDK won't enforce these limits, your users may experience performance degradation if they're exceeded.
+The Calling SDK doesn't enforce these limits, but your users might experience performance degradation if you exceed these limits.
### Calling SDK timeouts
The following timeouts apply to the Communication Services Calling SDKs:
### Action to take
-For more information about the voice and video calling SDK and service, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md) page or [known issues](./known-issues.md). You can also [submit a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) to increase some of the limits and then it is reviewed by our vetting team.
+For more information about the voice and video calling SDK and service, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md) page or [known issues](./known-issues.md). You can also [submit a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) to increase some of the limits, pending review by our vetting team.
## Job Router
-When sending or receiving a high volume of requests, you might receive a ```ThrottleLimitExceededException``` error. This error indicates you're hitting the service limitations, and your requests will be dropped until the token of bucket to handle requests is replenished after a certain time.
+When sending or receiving a high volume of requests, you might receive a `ThrottleLimitExceededException` error. This error indicates you're hitting the service limits, and your requests fail until the token bucket that handles requests is replenished after a certain time.
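To picture how a token-bucket limit behaves, here's a minimal sketch (our own illustration, not the service's actual implementation): requests succeed while tokens remain and are rejected until the bucket refills.

```javascript
// Minimal token bucket: each request consumes one token; tokens refill
// continuously at refillPerSecond, up to the bucket's capacity.
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  tryAcquire() {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false; // throttled, analogous to ThrottleLimitExceededException
  }
}
```

When `tryAcquire` returns `false`, a client should back off and retry later rather than immediately resending.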
Rate Limits for Job Router:
Using a Teams interoperability scenario, you'll likely use some Microsoft Graph
Each service offered through Microsoft Graph has different limitations; service-specific limits are [described here](/graph/throttling) in more detail. ### Action to take
-When you implement error handling, use the HTTP error code 429 to detect throttling. The failed response includes the ```Retry-After``` response header. Backing off requests using the ```Retry-After``` delay is the fastest way to recover from throttling because Microsoft Graph continues to log resource usage while a client is being throttled.
+When you implement error handling, use the HTTP error code 429 to detect throttling. The failed response includes the `Retry-After` response header. Backing off requests using the `Retry-After` delay is the fastest way to recover from throttling because Microsoft Graph continues to log resource usage while a client is being throttled.
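As a sketch of that pattern (the helper and parameter names are ours, not from a Microsoft Graph SDK), a caller can retry on HTTP 429 and honor the `Retry-After` delay:

```javascript
// Retry a request, waiting for the Retry-After delay (in seconds) whenever
// the server responds with HTTP 429; fall back to an exponential delay if absent.
async function sendWithBackoff(doRequest, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await doRequest();
    if (response.status !== 429) {
      return response; // not throttled
    }
    const retryAfterSeconds = Number(response.headers["retry-after"]) || 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
  throw new Error("Still throttled (HTTP 429) after retries");
}
```

`doRequest` stands in for whatever function issues the Graph call and resolves to a response with `status` and `headers`.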
You can find more information on Microsoft Graph [throttling](/graph/throttling) limits in the [Microsoft Graph](/graph/overview) documentation. ## Next steps
-See the [help and support](../support.md) options.
+See the [help and support](../support.md) options.
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Azure Communication Services customers can use Azure Event Grid to receive incom
### Can I receive messages from any country/region on toll-free numbers?
-Toll-free numbers are not capable of sending or receiving messages to/from countries/regions outside of US, CA, and PR.
+Toll-free numbers aren't capable of sending or receiving messages to/from countries/regions outside of US, CA, and PR.
### Can I receive messages from any country/region on short codes?
-Short codes are domestic numbers and are not capable of sending or receiving messages to/from outside of the country/region it was registered for. *Example: US short code can only send and receive messages to/from US recipients.*
+Short codes are domestic numbers and aren't capable of sending or receiving messages to/from outside of the country/region it was registered for. *Example: US short code can only send and receive messages to/from US recipients.*
### How are messages sent to landline numbers treated?
-In the United States, Azure Communication Services does not check for landline numbers and attempts to send it to carriers for delivery. Customers are charged for messages sent to landline numbers.
+In the United States, Azure Communication Services doesn't check for landline numbers and attempts to send it to carriers for delivery. Customers are charged for messages sent to landline numbers.
### Can I send messages to multiple recipients?
Yes, you can make one request with multiple recipients. Follow this [quickstart]
### I received an HTTP Status 202 from the Send SMS API but the SMS didn't reach my phone, what do I do now?
-The 202 returned by the service means that your message has been queued to be sent and not delivered. Use this [quickstart](../../quickstarts/sms/handle-sms-events.md) to subscribe to delivery report events and troubleshoot. Once the events are configured, inspect the "deliveryStatus" field of your delivery report to verify delivery success/failure.
+The 202 returned by the service means that your message was queued to be sent and not delivered. Use this [quickstart](../../quickstarts/sms/handle-sms-events.md) to subscribe to delivery report events and troubleshoot. Once the events are configured, inspect the "deliveryStatus" field of your delivery report to verify delivery success/failure.
### How to send shortened URLs in messages?
-Shortened URLs are a good way to keep messages short and readable. However, US carriers prohibit the use of free publicly available URL shortener services. This is because the ΓÇÿfree-publicΓÇÖ URL shorteners are used by bad-actors to evade detection and get their SPAM messages passed through text messaging platforms. When sending messages in US, we encourage using custom URL shorteners to create URLs with dedicated domain that belongs to your brand. Many US carriers block SMS traffic if they contain publicly available URL shorteners.
+Shortened URLs are a good way to keep messages short and readable. However, US carriers prohibit the use of free publicly available URL shortener services. This is because the 'free-public' URL shorteners are used by bad actors to evade detection and get their SPAM messages passed through text messaging platforms. When sending messages in the US, we encourage using custom URL shorteners to create URLs with a dedicated domain that belongs to your brand. Many US carriers block SMS traffic if it contains publicly available URL shorteners.
-Below is a list with examples of common URL shorteners you should avoid to maximize deliverability:
+The following list shows examples of common URL shorteners you should avoid to maximize deliverability:
- bit.ly
- goo.gl
- tinyurl.com
Below is a list with examples of common URL shorteners you should avoid to maxim
## Opt-out handling

### How does Azure Communication Services handle opt-outs for toll-free numbers?
-Opt-outs for US toll-free numbers are mandated and enforced by US carriers and cannot be overridden.
+Opt-outs for US toll-free numbers are mandated and enforced by US carriers and can't be overridden.
- **STOP** - If a text message recipient wishes to opt out, they can send 'STOP' to the toll-free number. The carrier sends the following default response for STOP: *"NETWORK MSG: You replied with the word "stop", which blocks all texts sent from this number. Text back "unstop" to receive messages again."*
- **START/UNSTOP** - If the recipient wishes to resubscribe to text messages from a toll-free number, they can send 'START' or 'UNSTOP' to the toll-free number. The carrier sends the following default response for START/UNSTOP: *"NETWORK MSG: You have replied "unstop" and will begin receiving messages again from this number."*
-- Azure Communication Services detects STOP messages and blocks all further messages to the recipient. The delivery report will indicate a failed delivery with status message as "Sender blocked for given recipient."
-- The STOP, UNSTOP and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications.
+- Azure Communication Services detects STOP messages and blocks all further messages to the recipient. The delivery report indicates a failed delivery with the status message as ΓÇ£Sender blocked for given recipient.ΓÇ¥
+- The STOP, UNSTOP, and START messages are relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message-send attempts are made to recipients who opted out of your communications.
-### How does Azure Communication Services handle opt-outs for short codes?
-Azure communication service offers an opt-out management service for short codes that allows customers to configure responses to mandatory keywords STOP/START/HELP. Prior to provisioning your short code, you are asked for your preference to manage opt-outs. If you opt-in, the opt-out management service automatically uses your responses in the program brief for Opt in/ Opt out/ Help keywords in response to STOP/START/HELP keyword.
+### How does Azure Communication Services handle opt-outs for short codes in the United States?
+Azure Communication Services offers an opt-out management service for short codes in the US that allows customers to configure responses to the mandatory keywords STOP/START/HELP. Before you provision your short code, you're asked for your preference to manage opt-outs. If you opt in, the opt-out management service automatically uses the responses in your program brief for the Opt-in/Opt-out/Help keywords in response to the STOP/START/HELP keywords.
*Example:*
- **STOP** - If a text message recipient wishes to opt out, they can send 'STOP' to the short code. Azure Communication Services sends your configured response for STOP: *"Contoso Alerts: You're opted out and will receive no further messages."*
Azure Communication Services detects STOP messages and blocks all further messag
### How does Azure Communication Services handle opt outs for alphanumeric sender ID?
-Alphanumeric sender ID is not capable of receiving inbound messages or STOP messages. Azure Communication Services does not enforce or manage opt-out lists for alphanumeric sender ID. You must provide customers with instructions to opt out using other channels such as, calling support, providing an opt-out link in the message, or emailing support. See [messaging policy guidelines](./messaging-policy.md#how-we-handle-opt-out-requests-for-sms) for further details.
+Alphanumeric sender ID isn't capable of receiving inbound messages or STOP messages. Azure Communication Services doesn't enforce or manage opt-out lists for alphanumeric sender ID. You must provide customers with instructions to opt out using other channels, such as calling support, providing an opt-out link in the message, or emailing support. See [messaging policy guidelines](./messaging-policy.md#how-we-handle-opt-out-requests-for-sms) for further details.
+
+### How does Azure Communication Services handle opt outs for short codes in Canada and United Kingdom?
+
+Azure Communication Services doesn't control or implement opt-out mechanisms for short codes within Canada and the United Kingdom. Recipients of text messages have the option to text 'STOP' to unsubscribe or 'START' to subscribe to the short code. These requests are relayed as incoming messages to your Event Grid. It's your responsibility to act on these messages by resubscribing recipients or ceasing message delivery accordingly.
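A minimal sketch of acting on those relayed keywords, assuming you track opted-out recipients yourself (the `optOutList` set and the exact keyword handling here are illustrative, not an SDK API):

```javascript
// Hypothetical handler for inbound STOP/START/UNSTOP keywords relayed to you.
// optOutList is any Set-like store of recipients you must not message.
function handleInboundKeyword(messageBody, sender, optOutList) {
  const keyword = messageBody.trim().toUpperCase();
  if (keyword === "STOP") {
    optOutList.add(sender);     // cease message delivery to this recipient
    return "opted-out";
  }
  if (keyword === "START" || keyword === "UNSTOP") {
    optOutList.delete(sender);  // resubscribe the recipient
    return "opted-in";
  }
  return "ignored";             // any other inbound message
}
```

Before each send, you would check the sender against `optOutList` and skip recipients who opted out.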
## Short codes

### What is the eligibility to apply for a short code?
-Short Code availability is currently restricted to paid Azure subscriptions that have a billing address in the United States. Short Codes cannot be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md).
+Short Code availability is currently restricted to paid Azure subscriptions that have a billing address in the United States. Short Codes can't be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md).
### Can you text to a toll-free number from a short code?
-Azure Communication Services toll-free numbers are enabled to receive messages from short codes. However, short codes are not typically enabled to send messages to toll-free numbers. If your messages from short codes to Azure Communication Services toll-free numbers are failing, check with your short code provider if the short code is enabled to send messages to toll-free numbers.
+Azure Communication Services toll-free numbers are enabled to receive messages from short codes. However, short codes aren't typically enabled to send messages to toll-free numbers. If your messages from short codes to Azure Communication Services toll-free numbers are failing, check with your short code provider if the short code is enabled to send messages to toll-free numbers.
### How should a short code be formatted?
-Short codes do not fall under E.164 formatting guidelines and do not have a country code, or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see in your short codes page without any prefix.
+Short codes don't fall under E.164 formatting guidelines and don't have a country code or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see in your short codes page, without any prefix.
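As an illustration of that formatting rule, a small validation helper (a hypothetical utility, not part of any SMS SDK) might look like:

```javascript
// Hypothetical check that a sender value is a 5-6 digit short code:
// digits only, no "+" prefix, no country code.
function isValidShortCode(sender) {
  return /^\d{5,6}$/.test(sender);
}
```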
### How long does it take to get a short code? What happens after a short code program brief application is submitted?
-Once you have submitted the short code program brief application in the Azure portal, the service desk works with the aggregators to get your application approved by each wireless carrier. This process generally takes 8-12 weeks. All updates and the status changes for your applications are communicated via the email you provide in the application. For more questions about your submitted application, please email acstnrequest@microsoft.com.
+After you submit the short code program brief application in the Azure portal, the service desk works with the aggregators to get your application approved by each wireless carrier. This process generally takes 8-12 weeks. All updates and the status changes for your applications are communicated via the email you provide in the application. For more questions about your submitted application, email acstnrequest@microsoft.com.
## Alphanumeric sender ID
-> [!IMPORTANT]
-> Effective **November 30, 2023**, unregistered alphanumeric sender IDs sending messages to Australia and Italy phone numbers will have its traffic blocked. To prevent this from happening, a [registration application](https://forms.office.com/r/pK8Jhyhtd4) needs to be submitted and be in approved status.
> [!IMPORTANT]
> Effective **June 30, 2024**, unregistered alphanumeric sender IDs sending messages to UK phone numbers will have their traffic blocked. To prevent this from happening, a [registration application](https://forms.office.com/r/pK8Jhyhtd4) needs to be submitted and be in approved status.
Once you have submitted the short code program brief application in the Azure po
- Spaces

### Is a number purchase required to use alphanumeric sender ID?
-The use of alphanumeric sender ID does not require purchase of any phone number. Alphanumeric sender ID can be enabled through the Azure portal. See [enable alphanumeric sender ID quickstart](../../quickstarts/sms/enable-alphanumeric-sender-id.md) for instructions.
+The use of alphanumeric sender ID doesn't require purchase of any phone number. Alphanumeric sender ID can be enabled through the Azure portal. See [enable alphanumeric sender ID quickstart](../../quickstarts/sms/enable-alphanumeric-sender-id.md) for instructions.
### Can I send SMS immediately after enabling alphanumeric sender ID?

We recommend waiting for 10 minutes before you start sending messages for best results.

### Why is my alphanumeric sender ID getting replaced by a number?
-Alphanumeric sender ID replacement with a number may occur when a certain wireless carrier does not support alphanumeric sender ID. This is done to ensure high delivery rate.
+Alphanumeric sender ID replacement with a number may occur when a certain wireless carrier doesn't support alphanumeric sender ID. This is done to ensure a high delivery rate.
## Toll-Free Verification
-> [!IMPORTANT]
-> Effective **November 8, 2023**, unverified toll-free numbers sending messages to US phone numbers will have its traffic **blocked**. At this time, there is no change to limits on sending from pending TFNs. To unblock the traffic, a verification application needs to be submitted and be in [verified status](#what-do-the-different-application-statuses-verified-and-unverified-mean).
-
> [!IMPORTANT]
> Effective **January 31, 2024**, only fully verified toll-free numbers will be able to send traffic. Unverified toll-free numbers sending messages to US and CA phone numbers will have their traffic **blocked**.
New limits are as follows:
> [!IMPORTANT]
> Unverified SMS traffic that exceeds the daily limit or is filtered for spam will have a [4010 error code](../troubleshooting-info.md#sms-error-codes) returned for both scenarios.
-#### SMS to Canadian phone numbers
-Effective **October 1, 2022**, unverified toll-free numbers sending messages to Canadian destinations will have its traffic **blocked**. To unblock the traffic, a verification application needs to be submitted and in [verified status](#what-do-the-different-application-statuses-verified-and-unverified-mean).
-
-### What do the different application statuses (verified and unverified) mean?
-- **Verified:** Verified numbers have gone through the toll-free verification process and have been approved. Their traffic is subjected to limited filters. If traffic does trigger any filters, that specific content is blocked but the number is not automatically blocked.
-- **Unverified:** Unverified numbers have either 1) not submitted a verification application, 2) have submitted the verification application and are waiting for a decision, or 3) have had their application denied. These numbers will not be able to send any SMS traffic.
-
### What happens after I submit the toll-free verification form?
-After submission of the form, we will coordinate with our downstream peer to get the application verified by the toll-free messaging aggregator. While we are reviewing your application, we may reach out to you for more information.
+After submission of the form, we'll coordinate with our downstream peer to get the application verified by the toll-free messaging aggregator. While we are reviewing your application, we may reach out to you for more information.
- From Application Submitted to Pending = **1-5 business days**
- From Pending to Verdict (Verified/Rejected/More info needed) = **4-5 weeks**. The toll-free aggregator is currently facing a high volume of applications, due to which applications can take around eight weeks to get approved.

The whole toll-free verification process takes about **5-6 weeks**. These timelines are subject to change depending on the volume of applications to the toll-free messaging aggregator and the [quality](#what-is-considered-a-high-quality-toll-free-verification-application) of your application.
-Updates for changes and the status of your applications will be communicated via the regulatory blade in Azure portal.
+Updates for changes and the status of your applications are communicated via the regulatory blade in Azure portal.
### How do I submit a toll-free verification?

To submit a toll-free verification application, navigate to the Azure Communication Services resource that your toll-free number is associated with in the Azure portal and go to the Phone numbers blade. Select the toll-free verification application link displayed as "Submit Application" in the infobox at the top of the Phone numbers blade. Complete the form.

### What is considered a high quality toll-free verification application?
-The higher the quality of the application the higher chances your application enters [verified state](#what-do-the-different-application-statuses-verified-and-unverified-mean) faster.
+The better the quality of your application, the greater the likelihood of it being approved.
-Pointers to ensure you are submitting a high quality application:
+Pointers to ensure you're submitting a high-quality application:
- Phone number(s) listed is/are Toll-free number(s)
- All required fields completed
-- The use case is not listed on our [Ineligible Use Case](#what-are-the-ineligible-use-cases-for-toll-free-verification) list
+- The use case isn't listed on our [Ineligible Use Case](#what-are-the-ineligible-use-cases-for-toll-free-verification) list
- Opt-in process is documented/detailed
- Opt-in image URL is provided and publicly accessible
- [CTIA guidelines](https://www.ctia.org/the-wireless-industry/industry-commitments/messaging-interoperability-sms-mms) are being followed
This table shows the maximum number of characters that can be sent per SMS segme
|Hello world|Text|GSM Standard|GSM-7|160|
|你好|Unicode|Unicode|UCS-2|70|
-### Can I send/receive long messages (>2048 chars)?
+### Can I send/receive long messages (>2,048 chars)?
Azure Communication Services supports sending and receiving of long messages over SMS. However, some wireless carriers or devices may act differently when receiving long messages. We recommend keeping SMS messages to a length of 320 characters and reducing the use of accents to ensure maximum delivery.
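A rough way to estimate segment counts from the limits in the table above (the 153/67 concatenated-segment sizes reflect standard concatenated-SMS overhead, and the ASCII-based GSM-7 check is a simplification of the real alphabet):

```javascript
// Rough segment estimate: GSM-7 fits 160 characters in one segment
// (153 per segment when concatenated); UCS-2 fits 70 (67 when concatenated).
// Treating "ASCII only" as GSM-7 is a simplification, not the full alphabet.
function estimateSegments(message) {
  const isGsm7 = /^[\x00-\x7F]*$/.test(message); // simplified detection
  const single = isGsm7 ? 160 : 70;
  const multi = isGsm7 ? 153 : 67;
  return message.length <= single ? 1 : Math.ceil(message.length / multi);
}
```

This kind of estimate helps explain why a message with a single non-GSM character can suddenly cost more segments.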
Rate Limits for SMS:
## Carrier Fees ### What are the carrier fees for SMS?
-US and CA carriers charge an added fee for SMS messages sent and/or received from toll-free numbers and short codes. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Azure Communication Services charges a standard carrier fee per message segment. Carrier fees are subject to change by mobile carriers. Refer to [SMS pricing](../sms-pricing.md) for more details.
+US and CA carriers charge an added fee for SMS messages sent and/or received from toll-free numbers and short codes. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Azure Communication Services charges a standard carrier fee per message segment. Carrier fees are subject to change by mobile carriers. For more information, see [SMS pricing](../sms-pricing.md).
### When do we come to know of changes to these surcharges?
-As with similar Azure services, customers are notified at least 30 days prior to the implementation of any price changes. These charges are reflected on our SMS pricing page along with the effective dates.
+As with similar Azure services, customers are notified at least 30 days before the implementation of any price changes. These charges are reflected on our SMS pricing page along with the effective dates.
## Emergency support

### Can a customer use Azure Communication Services for emergency purposes?
-Azure Communication Services does not support text-to-911 functionality in the United States, but itΓÇÖs possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCCΓÇÖs text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you are responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the userΓÇÖs mobile device to deliver 911 texts through the underlying mobile carrier.
+Azure Communication Services doesn't support text-to-911 functionality in the United States, but it's possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCC's text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you're responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the user's mobile device to deliver 911 texts through the underlying mobile carrier.
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
If you want to clean up and remove a Communication Services subscription, you ca
## Toll-free verification
-If you have a new toll-free number and want to send a [high volume of SMS messages](../../concepts/sms/sms-faq.md#what-happens-if-i-dont-verify-my-toll-free-numbers) or send SMS messages to Canadian phone numbers, see [SMS FAQ > How do I submit a toll-free verification](../../concepts/sms/sms-faq.md#how-do-i-submit-a-toll-free-verification) to learn how to verify your toll-free number.
+To send SMS messages from a new toll-free number, you must complete the toll-free verification process. For guidance on how to verify your toll-free number, see the [Quickstart for submitting a toll-free verification](./apply-for-toll-free-verification.md). Only fully verified toll-free numbers are authorized to send SMS traffic. Any SMS traffic from unverified toll-free numbers directed to US and CA phone numbers will be blocked.
## Next steps
communication-services Get Started Teams Interop Group Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop-group-calls.md
The text boxes are used to enter the Teams user IDs planning to call and add in
Hang Up
</button>
</div>
- <script src="./client.js"></script>
+ <script src="./main.js"></script>
</body>
</html>
```
The text boxes are used to enter the Teams user IDs planning to call and add in
Replace the content of the client.js file with the following snippet.

```javascript
-import { CallClient } from "@azure/communication-calling";
-import { Features } from "@azure/communication-calling";
-import { AzureCommunicationTokenCredential } from '@azure/communication-common';
+const { CallClient, Features } = require('@azure/communication-calling');
+const { AzureCommunicationTokenCredential } = require('@azure/communication-common');
+const { AzureLogger, setLogLevel } = require("@azure/logger");
let call; let callAgent;
init();
hangUpButton.addEventListener("click", async () => { await call.hangUp(); hangUpButton.disabled = true;
- teamsMeetingJoinButton.disabled = false;
+ placeInteropGroupCallButton.disabled = false;
callStateElement.innerText = '-'; });
placeInteropGroupCallButton.addEventListener("click", () => {
const participants = teamsIdsInput.value.split(',').map(id => { const participantId = id.replace(' ', ''); return {
- microsoftTeamsUserId: `8:orgid:${participantId}`
+ microsoftTeamsUserId: `${participantId}`
}; })
In results get the "id" field.
Run the following command to bundle your application host on a local webserver:

```console
-npx webpack-dev-server --entry ./client.js --output bundle.js --debug --devtool inline-source-map
+npx webpack serve --config webpack.config.js
```

Open your browser and navigate to http://localhost:8080/. You should see the following screen:
confidential-ledger Create Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-client-certificate.md
# Creating a Client Certificate

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The Azure confidential ledger APIs require client certificate-based authentication. Only those certificates added to an allowlist during ledger creation or a ledger update can be used to call the confidential ledger Functional APIs.
container-instances Container State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-state.md
Title: Azure Container Instances states description: Learn about the states of Azure Container Instances provisioning operations, containers, and container groups.--- Previously updated : 03/25/2021+++++ Last updated : 07/02/2024

# Azure Container Instances states
Azure Container Instances displays several independent state values. This articl
## Where to find state values
-In the Azure portal, state is shown in various locations. All state values are accessible via the JSON definition of the resource. This value can be found under Essentials in the Overview blade, shown below.
+In the Azure portal, state is shown in various locations. All state values are accessible via the JSON definition of the resource. This value can be found under Essentials in the Overview blade, shown in the following image.
:::image type="content" source="./media/container-state/provisioning-state.png" alt-text="The Overview blade in the Azure portal is shown. The link 'JSON view' is highlighted.":::
This value is the state of the deployed container group on the backend.
:::image type="content" source="./media/container-state/container-group-state.png" alt-text="The overview blade for the resource in the Azure portal is shown in a web browser. The text 'Status: Running' is highlighted.":::

-- **Running**: The container group is running and will continue to try to run until a user action or a stop caused by the restart policy occurs.
+- **Running**: The container group is running and continues to run until a user action or a stop caused by the restart policy occurs.
-- **Stopped**: The container group has been stopped and will not be scheduled to run without user action.
+- **Stopped**: The container group is stopped and won't run without user action.
- **Pending**: The container group is waiting to initialize (finish running init containers, mount Azure file volumes if applicable). The container continues to attempt to get to the **Running** state unless a user action (stop/delete) happens.

-- **Succeeded**: The container group has run to completion successfully. Only applicable for *Never* and *On Failure* restart policies.
+- **Succeeded**: The container group ran to completion successfully. Only applicable for *Never* and *On Failure* restart policies.
-- **Failed**: The container group failed to run to completion. Only applicable with a *Never* restart policy. This state indicates either an infrastructure failure (example: incorrect Azure file share credentials) or user application failure (example: application references an environment variable that does not exist).
+- **Failed**: The container group failed to run to completion. Only applicable with a *Never* restart policy. This state indicates either an infrastructure failure (example: incorrect Azure file share credentials) or user application failure (example: application references an environment variable that doesn't exist).
The following table shows what states are applicable to a container group based on the designated restart policy:
The following table shows what states are applicable to a container group based
## Containers
-There are two state values for containers- a current state and a previous state. In the Azure portal, shown below, only current state is displayed. All state values are applicable for any given container regardless of the container group's restart policy.
+There are two state values for containers: a current state and a previous state. In the Azure portal, shown in the following image, only the current state is displayed. All state values are applicable for any given container regardless of the container group's restart policy.
> [!NOTE] > The JSON values of `currentState` and `previousState` contain additional information, such as an exit code or a reason, that is not shown elsewhere in the Azure portal.
There are two state values for containers- a current state and a previous state.
- **Waiting**: The container is waiting to run. This state indicates either init containers are still running, or the container is backing off due to a crash loop.

-- **Terminated**: The container has terminated, accompanied with an exit code value.
+- **Terminated**: The container terminated, accompanied by an exit code value.
## Provisioning
This value is the state of the last operation performed on a container group. Ge
> [!IMPORTANT] > Additionally, users should not create dependencies on non-terminal provisioning states. Dependencies on **Succeeded** and **Failed** states are acceptable.
-In addition to the JSON view, provisioning state can be also be found in the [response body of the HTTP call](/rest/api/container-instances/2022-09-01/container-groups/create-or-update#response).
+In addition to the JSON view, provisioning state can also be found in the [response body of the HTTP call](/rest/api/container-instances/2022-09-01/container-groups/create-or-update#response).
### Create, start, and restart operations
cosmos-db Distance Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/distance-functions.md
+
+ Title: Distance functions
+description: Distance functions overview.
++++ Last updated : 07/01/2024++
+# What are distance functions?
+
+Distance functions are mathematical formulas used to measure the similarity or dissimilarity between vectors (see [vector search](vector-search-overview.md)). Common examples include Manhattan distance, Euclidean distance, cosine similarity, and dot product. These measurements are crucial for determining how closely related two pieces of data are.
+
+## Manhattan distance
+
+This measures the distance between two points by adding up the absolute differences of their coordinates. Imagine walking in a grid-like city, such as many neighborhoods in Manhattan; the distance is the total number of blocks you walk north-south and east-west.
+
+## Euclidean distance
+
+Euclidean distance measures the straight-line distance between two points. It is named after the ancient Greek mathematician Euclid, who is often referred to as the "father of geometry".
+
+## Cosine similarity
+
+Cosine similarity measures the cosine of the angle between two vectors projected in a multidimensional space. Two documents may be far apart by Euclidean distance because of document sizes, but they could still have a smaller angle between them and therefore high cosine similarity.
+
+## Dot product
+
+Two vectors are multiplied to return a single number. It combines the two vectors' magnitudes, as well as the cosine of the angle between them, showing how much one vector goes in the direction of another.
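The four measures above can be sketched in plain Python. This is an illustrative toy, not the implementation a database engine uses:

```python
import math

def manhattan(a, b):
    # Sum of absolute coordinate differences ("city block" distance).
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    # Straight-line distance between two points.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dot(a, b):
    # Combines the vectors' magnitudes and the angle between them.
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # Cosine of the angle between the vectors; insensitive to magnitude.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

u, v = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(manhattan(u, v))          # 6.0
print(dot(u, v))                # 28.0
print(cosine_similarity(u, v))  # 1.0 (same direction despite different lengths)
```

Note how `v`, which points in the same direction as `u` but is twice as long, has nonzero Manhattan and Euclidean distance from `u` yet a cosine similarity of exactly 1, matching the document-size observation above.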
+
+## Related content
+- [VectorDistance system function](../nosql/query/vectordistance.md) in Azure Cosmos DB NoSQL
+- [What is a vector database?](../vector-database.md)
+- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md)
+- [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md)
+- [What is vector search?](vector-search-overview.md)
+- LLM [tokens](tokens.md)
+- Vector [embeddings](vector-embeddings.md)
+- [kNN vs ANN vector search algorithms](knn-vs-ann.md)
cosmos-db Knn Vs Ann https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/knn-vs-ann.md
+
+ Title: kNN vs ANN
+description: Explanation and comparison of kNN and ANN algorithms.
++++ Last updated : 07/01/2024++
+# kNN vs ANN
+
+Two popular vector search algorithms are k-Nearest Neighbors (kNN) and Approximate Nearest Neighbors (ANN, not to be confused with Artificial Neural Network). kNN is precise but computationally intensive, making it less suitable for large datasets. ANN, on the other hand, offers a balance between accuracy and efficiency, making it better suited for large-scale applications.
+
+## How kNN works
+
+1. Vectorization: Each data point in the dataset is represented as a vector in a multi-dimensional space.
+2. Distance Calculation: To classify a new data point (query point), the algorithm calculates the distance between the query point and all other points in the dataset using a [distance function](distance-functions.md).
+3. Finding Neighbors: The algorithm identifies the k closest data points (neighbors) to the query point based on the calculated distances. The value of k (the number of neighbors) is crucial. A small k can be sensitive to noise, while a large k can smooth out details.
+4. Making Predictions:
+ - Classification: For classification tasks, kNN assigns the class label to the query point that is most common among the k neighbors. Essentially, it performs a "majority vote."
+ - Regression: For regression tasks, kNN predicts the value for the query point as the average (or sometimes weighted average) of the values of the k neighbors.
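The four steps above can be sketched in a few lines of Python. This is an illustrative toy for the classification case, not a production implementation:

```python
import math
from collections import Counter

def knn_classify(query, points, labels, k=3):
    # Steps 1-2: compute the distance from the query vector to every stored vector.
    dists = [(math.dist(query, p), lbl) for p, lbl in zip(points, labels)]
    # Step 3: keep the k closest neighbors.
    neighbors = sorted(dists)[:k]
    # Step 4: majority vote among the neighbors' labels.
    return Counter(lbl for _, lbl in neighbors).most_common(1)[0][0]

points = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]]
labels = ["a", "a", "b", "b"]
print(knn_classify([0.2, 0.1], points, labels, k=3))  # "a"
```

Because every stored vector is scanned on every query, the cost grows linearly with the dataset, which is why kNN becomes impractical at large scale.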
+
+## How ANN works
+
+1. Vectorization: Each data point in the dataset is represented as a vector in a multi-dimensional space.
+2. Indexing and Data Structures: ANN algorithms use advanced data structures (e.g., KD-trees, locality-sensitive hashing, or graph-based methods) to index the data points, allowing for faster searches.
+3. Distance Calculation: Instead of calculating the exact distance to every point, ANN algorithms use heuristics to quickly identify regions of the space that are likely to contain the nearest neighbors.
+4. Finding Neighbors: The algorithm identifies a set of data points that are likely to be close to the query point. These neighbors are not guaranteed to be the exact closest points but are close enough for practical purposes.
+5. Making Predictions:
+ - Classification: For classification tasks, ANN assigns the class label to the query point that is most common among the identified neighbors, similar to kNN.
+ - Regression: For regression tasks, ANN predicts the value for the query point as the average (or weighted average) of the values of the identified neighbors.
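The indexing and heuristic-search steps above can be sketched with a toy locality-sensitive-hashing index: vectors are bucketed by the sign pattern of their dot products with a few random hyperplanes, and only the query's bucket is scanned. This is a deliberate simplification, not the IVF, HNSW, or DiskANN algorithms used in practice:

```python
import math
import random
from collections import defaultdict

random.seed(7)

class LSHIndex:
    """Toy approximate index: bucket vectors by the sign pattern of
    their dot products with a few random hyperplanes."""

    def __init__(self, dim, n_planes=4):
        self.planes = [[random.gauss(0, 1) for _ in range(dim)]
                       for _ in range(n_planes)]
        self.buckets = defaultdict(list)

    def _key(self, v):
        # One bit per hyperplane: which side of the plane the vector lies on.
        return tuple(sum(a * b for a, b in zip(p, v)) >= 0 for p in self.planes)

    def add(self, v, label):
        self.buckets[self._key(v)].append((v, label))

    def query(self, q, k=1):
        # Only candidates sharing the query's bucket are scanned, so the
        # search is fast but not guaranteed to find the true nearest point.
        candidates = self.buckets.get(self._key(q), [])
        candidates.sort(key=lambda item: math.dist(q, item[0]))
        return [label for _, label in candidates[:k]]

index = LSHIndex(dim=2)
index.add([1.0, 0.0], "x")
index.add([0.0, 1.0], "y")
print(index.query([1.0, 0.0], k=1))  # ['x']
```

The trade-off is visible in `query`: vectors that hash to a different bucket are never examined, which is exactly the "close enough for practical purposes" behavior described in step 4.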
+
+## Related content
+- [What is a vector database?](../vector-database.md)
+- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md)
+- [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md)
+- [What is vector search?](vector-search-overview.md)
+- LLM [tokens](tokens.md)
+- Vector [embeddings](vector-embeddings.md)
+- [Distance functions](distance-functions.md)
cosmos-db Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/tokens.md
+
+ Title: LLM tokens
+description: Overview of tokens in large language models.
++++ Last updated : 07/01/2024++
+# What are tokens?
+
+Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can either be words or groups of characters, varying in length from a single character to an entire word. For instance, the word hamburger would be divided into tokens such as ham, bur, and ger, while a short and common word like pear would be considered a single token. LLMs like GPT-3.5 or GPT-4 break words into tokens for processing.
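The splitting idea can be illustrated with a toy greedy longest-match tokenizer over a hand-picked vocabulary. Real LLM tokenizers use byte-pair-encoding vocabularies learned from data, so this sketch only shows the subword-splitting concept:

```python
def tokenize(text, vocab):
    """Greedy longest-match subword split over a fixed vocabulary.
    Falls back to single characters for unknown fragments."""
    tokens = []
    for word in text.lower().split():
        i = 0
        while i < len(word):
            # Take the longest vocabulary entry matching at position i;
            # j == i + 1 forces a single-character fallback.
            for j in range(len(word), i, -1):
                if word[i:j] in vocab or j == i + 1:
                    tokens.append(word[i:j])
                    i = j
                    break
    return tokens

vocab = {"ham", "bur", "ger", "pear"}  # made-up vocabulary for illustration
print(tokenize("pear hamburger", vocab))  # ['pear', 'ham', 'bur', 'ger']
```

As in the document's example, `pear` survives as one token while `hamburger` splits into three.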
+
+## Related content
+- [What is a vector database?](../vector-database.md)
+- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md)
+- [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md)
+- [What is vector search?](vector-search-overview.md)
+- Vector [embeddings](vector-embeddings.md)
+- [Distance functions](distance-functions.md)
+- [kNN vs ANN vector search algorithms](knn-vs-ann.md)
cosmos-db Vector Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/vector-embeddings.md
+
+ Title: Vector embeddings
+description: Vector embeddings overview.
++++ Last updated : 07/01/2024++
+# What are vector embeddings?
+
+Vectors, also known as embeddings or vector embeddings, are mathematical representations of data in a high-dimensional space. They represent various types of information, such as text, images, and audio, in a format that machine learning models can process. When an AI model receives text input, it first tokenizes the text into tokens. Each token is then converted into its corresponding embedding. This conversion process can be done using an embedding generation model, such as [Azure OpenAI Embeddings](../../ai-services/openai/how-to/embeddings.md) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure). The model processes these embeddings through multiple layers, capturing complex patterns and relationships within the text. The output embeddings can then be converted back into tokens if needed, generating readable text.
+
+Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar. These high-dimensional representations capture semantic meaning, making it easier to perform tasks like searching, clustering, and classifying.
+
+Here are two examples of texts represented as vectors:
+
+Image source: [OpenAI](https://openai.com/index/introducing-text-and-code-embeddings/)
+
+Each box containing floating-point numbers corresponds to a dimension, and each dimension corresponds to a feature or attribute that may or may not be comprehensible to humans. Large language model text embeddings typically have a few thousand dimensions, while more complex data models may have tens of thousands of dimensions.
+
+Between the two vectors in the above example, some dimensions are similar while other dimensions differ, reflecting the similarities and differences in the meaning of the two phrases.
+
+This image shows the spatial closeness of vectors that are similar, contrasting vectors that are drastically different:
+
+Image source: [OpenAI](https://openai.com/index/introducing-text-and-code-embeddings/)
+
+You can see more examples in this [interactive visualization](https://openai.com/index/introducing-text-and-code-embeddings/#_1Vr7cWWEATucFxVXbW465e) that transforms data into a three-dimensional space.
+
+## Related content
+- [What is a vector database?](../vector-database.md)
+- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md)
+- [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md)
+- [What is vector search?](vector-search-overview.md)
+- LLM [tokens](tokens.md)
+- [Distance functions](distance-functions.md)
+- [kNN vs ANN vector search algorithms](knn-vs-ann.md)
cosmos-db Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/vector-search-overview.md
+
+ Title: Vector search concept overview
+description: Vector search concept overview
++++ Last updated : 07/01/2024++
+# What is vector search?
+
+Vector search is a method that helps you find similar items based on their data characteristics rather than by exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the [vector embeddings](vector-embeddings.md) of your data and query, and then measuring the [distance](distance-functions.md) between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically.
+
+This [interactive visualization](https://openai.com/index/introducing-text-and-code-embeddings/#_1Vr7cWWEATucFxVXbW465e) shows some examples of closeness and distance between vectors.
+
+Two popular types of vector search algorithms are [k-nearest neighbors (kNN) and approximate nearest neighbor (ANN)](knn-vs-ann.md). Some well-known vector search algorithms belonging to these categories include Inverted File (IVF), Hierarchical Navigable Small World (HNSW), and the state-of-the-art DiskANN.
+
+Using an integrated vector search feature in a fully featured database ([as opposed to a pure vector database](../vector-database.md#integrated-vector-database-vs-pure-vector-database)) offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications.
+
+## Related content
+- [What is a vector database?](../vector-database.md)
+- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md)
+- [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md)
+- LLM [tokens](tokens.md)
+- Vector [embeddings](vector-embeddings.md)
+- [Distance functions](distance-functions.md)
+- [kNN vs ANN vector search algorithms](knn-vs-ann.md)
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
When constructing the [REST API authorization header](/rest/api/cosmos-db/access
## Use data explorer
-> [!NOTE]
-> The data explorer exposed in the Azure portal does not support the Azure Cosmos DB role-based access control yet. To use your Microsoft Entra identity when exploring your data, you must use the [Azure Cosmos DB Explorer](https://cosmos.azure.com/?feature.enableAadDataPlane=true) instead.
+The use of Azure Cosmos DB role-based access control within Data Explorer (either exposed in the Azure portal or at [https://cosmos.azure.com](https://cosmos.azure.com)) is governed by the **Enable Entra ID RBAC** setting. You can access this setting via the "wheel" icon at the upper right-hand side of the Data Explorer interface.
+
+The setting has three possible values:
+- **Automatic (default)**: In this mode, role-based access control will be automatically used if the account has [disabled the use of keys](#disable-local-auth). Otherwise, Data Explorer will use account keys for data requests.
+
+- **True**: In this mode, role-based access will always be used for Data Explorer data requests. If the account has not been enabled for role-based access, then the requests will fail.
-When you access the [Azure Cosmos DB Explorer](https://cosmos.azure.com/?feature.enableAadDataPlane=true) with the specific `?feature.enableAadDataPlane=true` query parameter and sign in, the following logic is used to access your data:
+- **False**: In this mode, account keys will always be used for Data Explorer data requests. If the account has disabled the use of keys, then the requests will fail.
-1. A request to fetch the account's primary key is attempted on behalf of the identity signed in. If this request succeeds, the primary key is used to access the account's data.
-1. If the identity signed in isn't allowed to fetch the account's primary key, this identity is directly used to authenticate data access. In this mode, the identity must be [assigned with proper role definitions](#role-assignments) to ensure data access.
+When using modes that enable role-based access in the Azure portal Data Explorer, you must select the **Login for Entra ID RBAC** button (located on the Data Explorer command bar) before making any data requests. This isn't necessary when using the Cosmos Explorer at cosmos.azure.com. Ensure that the signed-in identity has been [assigned with proper role definitions](#role-assignments) to enable data access.
+
+Also note that changing the mode to one that uses account keys may trigger a request to fetch the primary key on behalf of the identity that is signed in.
+
+> [!NOTE]
+> Previously, role-based access was only supported in Cosmos Explorer using `https://cosmos.azure.com/?feature.enableAadDataPlane=true`. This is still supported and will override the value of the **Enable Entra ID RBAC** setting. Using this query parameter is equivalent to using the 'Automatic' mode mentioned above.
## Audit data requests
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/vector-search.md
Vector indexing and search in Azure Cosmos DB for NoSQL has some limitations whi
- [Vector index overview](../index-overview.md#vector-indexes) - [Vector index policies](../index-policy.md#vector-indexes) - [Manage index](how-to-manage-indexing-policy.md#vector-indexing-policy-examples)
+- Integrations:
+ - [LangChain, Python](https://python.langchain.com/v0.2/docs/integrations/vectorstores/azure_cosmos_db_no_sql/)
+ - [Semantic Kernel, .NET](https://github.com/microsoft/semantic-kernel/tree/main/dotnet/src/Connectors/Connectors.Memory.AzureCosmosDBNoSQL)
+ - [Semantic Kernel, Python](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/memory/azure_cosmosdb_no_sql)
cost-management-billing Mca Section Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-section-invoice.md
Once you customized your billing account based on your needs, you can link subsc
### Link existing subscriptions and products
-If you have existing Azure subscriptions or other products such as Azure Marketplace and App source resources, you can move them from their existing invoice section to another invoice section to reorganize your costs. However, you can't change the invoice section for a reservation or savings plan.
+If you have existing Azure subscriptions or other products such as Azure Marketplace and AppSource resources, you can move them from their existing invoice section to another invoice section to reorganize your costs. However, you can't change the invoice section for a reservation, savings plan, or seat-based subscription.
1. Sign in to the [Azure portal](https://portal.azure.com).
data-factory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new-archive.md
This archive page retains updates from older months.
Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly updates.
+## August 2023
+
+### Change Data Capture
+
+- Azure Synapse Analytics target availability in top-level CDC resource [Learn more](concepts-change-data-capture-resource.md#azure-synapse-analytics-as-target)
+- Snowflake connector in Mapping Data Flows support for Change Data Capture in public preview [Learn more](connector-snowflake.md?tabs=data-factory#mapping-data-flow-properties)
+
+### Data flow
+
+- Integer type available for pipeline variables [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/integer-type-available-for-pipeline-variables/ba-p/3902472)
+- Snowflake CDC source connector available in top-level CDC resource [Learn more](concepts-change-data-capture-resource.md)
+- Native UI support of parameterization for more linked services [Learn more](parameterize-linked-services.md?tabs=data-factory#supported-linked-service-types)
+
+### Data movement
+
+- Managed private endpoints support for Application Gateway and MySQL Flexible Server [Learn more](managed-virtual-network-private-endpoint.md#time-to-live)
+- Managed virtual network time-to-live (TTL) general availability [Learn more](managed-virtual-network-private-endpoint.md#time-to-live)
+
+### Integration runtime
+
+Self-hosted integration runtime now supports self-contained interactive authoring (Preview) [Learn more](create-self-hosted-integration-runtime.md?tabs=data-factory#self-contained-interactive-authoring-preview)
+ ## July 2023 ### Change Data Capture
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly. For older months' update
Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos.
+## June 2024
+
+### Data movement
+
+The new ServiceNow connector provides improved native support in Copy and Lookup activities. [Learn more](connector-servicenow.md)
+ ## April 2024 ### Data flow
Azure Data Factory is generally available in Poland Central [Learn more](https:/
Added support for metadata driven pipelines for dynamic full and incremental processing in Azure SQL [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/metadata-driven-pipelines-for-dynamic-full-and-incremental/ba-p/3925362)
-## August 2023
-
-### Change Data Capture
--- Azure Synapse Analytics target availability in top-level CDC resource [Learn more](concepts-change-data-capture-resource.md#azure-synapse-analytics-as-target)-- Snowflake connector in Mapping Data Flows support for Change Data Capture in public preview [Learn more](connector-snowflake.md?tabs=data-factory#mapping-data-flow-properties)-
-### Data flow
--- Integer type available for pipeline variables [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/integer-type-available-for-pipeline-variables/ba-p/3902472)-- Snowflake CDC source connector available in top-level CDC resource [Learn more](concepts-change-data-capture-resource.md)-- Native UI support of parameterization for more linked services [Learn more](parameterize-linked-services.md?tabs=data-factory#supported-linked-service-types)-
-### Data movement
--- Managed private endpoints support for Application Gateway and MySQL Flexible Server [Learn more](managed-virtual-network-private-endpoint.md#time-to-live)-- Managed virtual network time-to-live (TTL) general availability [Learn more](managed-virtual-network-private-endpoint.md#time-to-live)-
-### Integration runtime
-
-Self-hosted integration runtime now supports self-contained interactive authoring (Preview) [Learn more](create-self-hosted-integration-runtime.md?tabs=data-factory#self-contained-interactive-authoring-preview)
- ## Related content - [What's new archive](whats-new-archive.md)
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
# Use Azure Marketplace image to create VM image for your Azure Stack Edge Pro GPU > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
[!INCLUDE [applies-to-gpu-pro-pro2-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-pro-2-pro-r-sku.md)]
databox Data Box Deploy Copy Data Via Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-rest.md
> The information contained within this section applies to orders placed after April 1, 2024. > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This tutorial describes procedures to connect to Azure Data Box Blob storage via REST APIs over *http* or *https*. Once connected, the steps required to copy the data to Data Box Blob storage and prepare the Data Box to ship, are also described.
databox Data Box Disk Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-copy-data.md
After the disks are connected and unlocked, you can copy data from your source d
> The information contained within this section applies to orders placed after April 1, 2024. > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly.
This tutorial describes how to copy data from your host computer and generate checksums to verify data integrity.
databox Data Box Disk Deploy Set Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-set-up.md
> Azure Data Box disk with hardware encryption requires a SATA III connection. All other connections, including USB, are not supported. > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This tutorial describes how to unpack, connect, and unlock your Azure Data Box Disk.
databox Data Box Disk System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-system-requirements.md
# Azure Data Box Disk system requirements > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes the important system requirements for your Microsoft Azure Data Box Disk solution and for the clients connecting to the Data Box Disk. We recommend that you review the information carefully before you deploy your Data Box Disk, and then refer back to it as necessary during the deployment and subsequent operation.
databox Data Box Heavy Deploy Copy Data Via Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-deploy-copy-data-via-rest.md
# Tutorial: Copy data to Azure Data Box Blob storage via REST APIs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This tutorial describes procedures to connect to Azure Data Box Blob storage via REST APIs over *http* or *https*. Once connected, the steps required to copy the data to Data Box Blob storage are described.
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
ExpressRoute virtual network gateway is designed to exchange network routes and route network traffic. FastPath is designed to improve the data path performance between your on-premises network and your virtual network. When enabled, FastPath sends network traffic directly to virtual machines in the virtual network, bypassing the gateway. + ## Requirements ### Circuits
expressroute Configure Expressroute Private Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/configure-expressroute-private-peering.md
In this tutorial, you learn how to:
* **ExpressRoute circuit**: Specify the ExpressRoute circuit that you wish to connect with the virtual network gateway. * **Redeem authorization**: Leave this box unchecked since you're connecting to a circuit in the same subscription. * **Routing weight**: Leave the default value of **0**. This weight value is used to determine which ExpressRoute circuit is preferred when multiple ExpressRoute circuits are linked to the same virtual network gateway.
- * **FathPath**: Leave this box unchecked. FastPath is a feature that improves data path performance between on-premises and Azure by bypassing the Azure VPN gateway for data traffic. For more information, see [FastPath](about-fastpath.md).
+ * **FastPath**: Leave this box unchecked. FastPath is a feature that improves data path performance between on-premises and Azure by bypassing the Azure VPN gateway for data traffic. For more information, see [FastPath](about-fastpath.md).
1. Select **Create** to create the connection. The connection is created and the virtual network gateway is linked to the ExpressRoute circuit.
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
The ExpressRoute virtual network gateway facilitates connectivity to private end
> [!IMPORTANT] > * Throughput and control plane capacity may be half compared to connectivity to non-private-endpoint resources. > * During a maintenance period, you may experience intermittent connectivity issues to private endpoint resources.
-> * Customers need to ensure their on-premises configuration/router/firewall are correctly configured to ensure that packets for the IP 5-tuple transits via a single next hop (Microsoft Enterprise Edge router - MSEE) unless there is a maintenance event. If a customer's on-premises firewall or router configuration is causing the same IP 5-tuple to frequently switch next hops, then the customer will experience connectivity issues.
+> * Customers need to ensure their on-premises configuration, including router and firewall settings, is set up correctly so that packets for the IP 5-tuple transit via a single next hop (Microsoft Enterprise Edge router - MSEE) unless there is a maintenance event. If a customer's on-premises firewall or router configuration is causing the same IP 5-tuple to frequently switch next hops, then the customer will experience connectivity issues.
### Private endpoint connectivity and planned maintenance events
expressroute Expressroute Howto Add Gateway Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-gateway-portal-resource-manager.md
The steps for this tutorial use the values in the following configuration refere
1. The **Name** for your subnet is automatically filled in with the value 'GatewaySubnet'. This value is required in order for Azure to recognize the subnet as the gateway subnet. Adjust the autofilled **Address range** values to match your configuration requirements. **You need to create the GatewaySubnet with a /27 or larger** (/26, /25, and so on.). /28 or smaller subnets are not supported for new deployments. If you plan on connecting 16 ExpressRoute circuits to your gateway, you **must** create a gateway subnet of /26 or larger.
- If you're using a dual stack virtual network and plan to use IPv6-based private peering over ExpressRoute, select **Add IP6 address space** and enter **IPv6 address range** values.
+ If you're using a dual stack virtual network and plan to use IPv6-based private peering over ExpressRoute, select **Add IPv6 address space** and enter **IPv6 address range** values.
Then, select **OK** to save the values and create the gateway subnet.
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
Previously updated : 06/06/2024 Last updated : 07/02/2024 # Configure Azure Firewall rules
-You can configure NAT rules, network rules, and applications rules on Azure Firewall using either classic rules or Firewall Policy. Azure Firewall denies all traffic by default, until rules are manually configured to allow traffic.
+You can configure NAT rules, network rules, and application rules on Azure Firewall using either classic rules or Firewall Policy. Azure Firewall denies all traffic by default, until rules are manually configured to allow traffic. The rules are terminating, so rule processing stops on a match.
## Rule processing using classic rules
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
# Understanding Azure Machine Configuration

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Azure Policy's machine configuration feature provides native capability to audit or configure operating system settings as code for machines running in Azure and hybrid
governance Attestation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md
Title: Details of the Azure Policy attestation structure description: Describes the components of the Azure Policy attestation JSON object. Previously updated : 09/23/2022 Last updated : 07/01/2024 + # Azure Policy attestation structure
-Attestations are used by Azure Policy to set compliance states of resources or scopes targeted by [manual policies](effects.md#manual). They also allow users to provide additional metadata or link to evidence which accompanies the attested compliance state.
+Attestations are used by Azure Policy to set compliance states of resources or scopes targeted by [manual policies](effect-manual.md). They also allow users to provide more metadata or link to evidence that accompanies the attested compliance state.
> [!NOTE]
-> Attestations can be created and managed only through Azure Policy [Azure Resource Manager (ARM) API](/rest/api/policy/attestations), [PowerShell](/powershell/module/az.policyinsights) or [Azure CLI](/cli/azure/policy/attestation).
+> Attestations can be created and managed only through the Azure Policy [Azure Resource Manager (ARM) API](/rest/api/policy/attestations), [PowerShell](/powershell/module/az.policyinsights), or [Azure CLI](/cli/azure/policy/attestation).
## Best practices
-Attestations can be used to set the compliance state of an individual resource for a given manual policy. This means that each applicable resource requires one attestation per manual policy assignment. For ease of management, manual policies should be designed to target the scope which defines the boundary of resources whose compliance state needs to be attested.
+Attestations can be used to set the compliance state of an individual resource for a given manual policy. Each applicable resource requires one attestation per manual policy assignment. For ease of management, manual policies should be designed to target the scope that defines the boundary of resources whose compliance state needs to be attested.
-For example, suppose an organization divides teams by resource group, and each team is required to attest to development of procedures for handling resources within that resource group. In this scenario, the conditions of the policy rule should specify that type equals `Microsoft.Resources/resourceGroups`. This way, one attestation is required for the resource group, rather than for each individual resource within. Similarly, if the organization divides teams by subscriptions, the policy rule should target `Microsoft.Resources/subscriptions`.
+For example, suppose an organization divides teams by resource group, and each team is required to attest to development of procedures for handling resources within that resource group. In this scenario, the conditions of the policy rule should specify that type equals `Microsoft.Resources/resourceGroups`. This way, one attestation is required for the resource group, rather than for each individual resource within. Similarly, if the organization divides teams by subscriptions, the policy rule should target `Microsoft.Resources/subscriptions`.
-Typically, the provided evidence should correspond with relevant scopes of the organizational structure. This pattern prevents the need to duplicate evidence across many attestations. Such duplications would make manual policies difficult to manage, and indicate that the policy definition targets the wrong resource(s).
+Typically, the provided evidence should correspond with relevant scopes of the organizational structure. This pattern prevents the need to duplicate evidence across many attestations. Such duplications would make manual policies difficult to manage, and indicate that the policy definition targets the wrong resources.
## Example attestation
-Below is an example of creating a new attestation resource which sets the compliance state for a resource group targeted by a manual policy assignment:
+The following example creates a new attestation resource that sets the compliance state for a resource group targeted by a manual policy assignment:
```http PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.PolicyInsights/attestations/{name}?api-version=2019-10-01
## Request body
-Below is a sample attestation resource JSON object:
+The following code is a sample attestation resource JSON object:
```json
"properties": {
- "policyAssignmentId": "/subscriptions/{subscriptionID}/providers/microsoft.authorization/policyassignments/{assignmentID}",
- "policyDefinitionReferenceId": "{definitionReferenceID}",
- "complianceState": "Compliant",
- "expiresOn": "2023-07-14T00:00:00Z",
- "owner": "{AADObjectID}",
- "comments": "This subscription has passed a security audit. See attached details for evidence",
- "evidence": [
- {
- "description": "The results of the security audit.",
- "sourceUri": "https://gist.github.com/contoso/9573e238762c60166c090ae16b814011"
- },
- {
- "description": "Description of the attached evidence document.",
- "sourceUri": "https://storagesamples.blob.core.windows.net/sample-container/contingency_evidence_adendum.docx"
- },
- ],
- "assessmentDate": "2022-11-14T00:00:00Z",
- "metadata": {
- "departmentId": "{departmentID}"
- }
+ "policyAssignmentId": "/subscriptions/{subscriptionID}/providers/microsoft.authorization/policyassignments/{assignmentID}",
+ "policyDefinitionReferenceId": "{definitionReferenceID}",
+ "complianceState": "Compliant",
+ "expiresOn": "2023-07-14T00:00:00Z",
+ "owner": "{AADObjectID}",
+ "comments": "This subscription has passed a security audit. See attached details for evidence",
+ "evidence": [
+ {
+ "description": "The results of the security audit.",
+ "sourceUri": "https://gist.github.com/contoso/9573e238762c60166c090ae16b814011"
+ },
+ {
+ "description": "Description of the attached evidence document.",
+ "sourceUri": "https://contoso.blob.core.windows.net/contoso-container/contoso_file.docx"
+ }
+ ],
+ "assessmentDate": "2022-11-14T00:00:00Z",
+ "metadata": {
+ "departmentId": "{departmentID}"
+ }
}
```
-|Property |Description |
-|||
-|`policyAssignmentId` |Required assignment ID for which the state is being set. |
-|`policyDefinitionReferenceId` |Optional definition reference ID, if within a policy initiative. |
-|`complianceState` |Desired state of the resources. Allowed values are `Compliant`, `NonCompliant`, and `Unknown`. |
-|`expiresOn` |Optional date on which the compliance state should revert from the attested compliance state to the default state |
-|`owner` |Optional Azure AD object ID of responsible party. |
-|`comments` |Optional description of why state is being set. |
-|`evidence` |Optional array of links to attestation evidence. |
-|`assessmentDate` |Date at which the evidence was assessed. |
-|`metadata` |Optional additional information about the attestation. |
+| Property | Description |
+| - | - |
+| `policyAssignmentId` | Required assignment ID for which the state is being set. |
+| `policyDefinitionReferenceId` | Optional definition reference ID, if within a policy initiative. |
+| `complianceState` | Desired state of the resources. Allowed values are `Compliant`, `NonCompliant`, and `Unknown`. |
+| `expiresOn` | Optional date on which the compliance state should revert from the attested compliance state to the default state. |
+| `owner` | Optional Microsoft Entra ID object ID of responsible party. |
+| `comments` | Optional description of why state is being set. |
+| `evidence` | Optional array of links to attestation evidence. |
+| `assessmentDate` | Date at which the evidence was assessed. |
+| `metadata` | Optional additional information about the attestation. |
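A small helper can assemble and validate a request body like the sample above before sending the PUT. The property names mirror the sample JSON; the function itself (`build_attestation`) is a hypothetical convenience for illustration and performs no API call.

```python
import json
from typing import Optional

# Allowed values per the complianceState property description above.
ALLOWED_STATES = {"Compliant", "NonCompliant", "Unknown"}

def build_attestation(policy_assignment_id: str,
                      compliance_state: str,
                      comments: Optional[str] = None,
                      evidence: Optional[list] = None) -> str:
    """Build the JSON body for a PUT on an attestation resource (sketch)."""
    if compliance_state not in ALLOWED_STATES:
        raise ValueError(f"complianceState must be one of {sorted(ALLOWED_STATES)}")
    properties = {
        "policyAssignmentId": policy_assignment_id,  # required
        "complianceState": compliance_state,
    }
    if comments is not None:
        properties["comments"] = comments
    if evidence is not None:
        properties["evidence"] = evidence  # list of {description, sourceUri}
    return json.dumps({"properties": properties}, indent=2)

body = build_attestation(
    "/subscriptions/{subscriptionID}/providers/microsoft.authorization"
    "/policyassignments/{assignmentID}",
    "Compliant",
    comments="This subscription has passed a security audit.",
)
print(body)
```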
-Because attestations are a separate resource from policy assignments, they have their own lifecycle. You can PUT, GET and DELETE attestations using the ARM API. Attestations are removed if the related manual policy assignment or policyDefinitionReferenceId are deleted, or if a resource unique to the attestation is deleted. See the [Policy REST API Reference](/rest/api/policy) for more details.
+Because attestations are a separate resource from policy assignments, they have their own lifecycle. You can PUT, GET, and DELETE attestations using the Azure Resource Manager API. Attestations are removed if the related manual policy assignment or `policyDefinitionReferenceId` are deleted, or if a resource unique to the attestation is deleted. For more information, see the [Policy REST API Reference](/rest/api/policy).
## Next steps

-- Review [Understanding policy effects](effects.md).
-- Study the [initiative definition structure](./initiative-definition-structure.md)
-- Review examples at [Azure Policy samples](../samples/index.md).
+- [Azure Policy definitions effect basics](effect-basics.md).
+- [Azure Policy initiative definition structure](./initiative-definition-structure.md).
+- [Azure Policy samples](../samples/index.md).
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
Title: Azure Policy applicability logic description: Describes the rules Azure Policy uses to determine whether the policy is applied to its assigned resources. Previously updated : 09/22/2022 Last updated : 07/01/2024 + # What is applicability in Azure Policy?
-When a policy definition is assigned to a scope, Azure Policy determines which resources in that scope should be considered for compliance evaluation. A resource will only be assessed for compliance if it's considered **applicable** to the given policy assignment.
+When a policy definition is assigned to a scope, Azure Policy determines which resources in that scope should be considered for compliance evaluation. A resource is only assessed for compliance if it's considered **applicable** to the given policy assignment.
-Applicability is determined by several factors:
+Several factors determine applicability:
- **Conditions** in the `if` block of the [policy rule](../concepts/definition-structure-policy-rule.md#conditions).
- **Mode** of the policy definition.
- **Excluded scopes** specified in the assignment.
- **Resource selectors** specified in the assignment.
- **Exemptions** of resources or resource hierarchies.
-Condition(s) in the `if` block of the policy rule are evaluated for applicability in slightly different ways based on the effect.
+Conditions in the `if` block of the policy rule are evaluated for applicability in slightly different ways based on the effect.
> [!NOTE]
> Applicability is different from compliance, and the logic used to determine each is different. If a resource is **applicable** that means it is relevant to the policy. If a resource is **compliant** that means it adheres to the policy. Sometimes only certain conditions from the policy rule impact applicability, while all conditions of the policy rule impact compliance state.

## Resource Manager modes
-### -IfNotExists policy effects
+### ifNotExists policy effects
The applicability of `AuditIfNotExists` and `DeployIfNotExists` policies is based on the entire `if` condition of the policy rule. When the `if` evaluates to false, the policy isn't applicable.
Azure Policy evaluates only `type`, `name`, and `kind` conditions in the policy
Following are special cases to the previously described applicability logic:
-|Scenario |Result |
-|||
-|Any invalid aliases in the `if` conditions |The policy isn't applicable |
-|When the `if` conditions consist of only `kind` conditions |The policy is applicable to all resources |
-|When the `if` conditions consist of only `name` conditions |The policy is applicable to all resources |
-|When the `if` conditions consist of only `type` and `kind` conditions |Only `type` conditions are considered when deciding applicability |
-|When the `if` conditions consist of only `type` and `name` conditions |Only `type` conditions are considered when deciding applicability |
-|When the `if` conditions consist of `type`, `kind`, and other conditions |Both `type` and `kind` conditions are considered when deciding applicability |
-|When the `if` conditions consist of `type`, `name`, and other conditions |Both `type` and `name` conditions are considered when deciding applicability |
-|When any conditions (including deployment parameters) include a `location` condition |Won't be applicable to subscriptions |
+| Scenario | Result |
+| - | - |
+| Any invalid aliases in the `if` conditions | The policy isn't applicable |
+| When the `if` conditions consist of only `kind` conditions | The policy is applicable to all resources |
+| When the `if` conditions consist of only `name` conditions | The policy is applicable to all resources |
+| When the `if` conditions consist of only `type` and `kind` conditions | Only `type` conditions are considered when deciding applicability |
+| When the `if` conditions consist of only `type` and `name` conditions | Only `type` conditions are considered when deciding applicability |
+| When the `if` conditions consist of `type`, `kind`, and other conditions | Both `type` and `kind` conditions are considered when deciding applicability |
+| When the `if` conditions consist of `type`, `name`, and other conditions | Both `type` and `name` conditions are considered when deciding applicability |
+| When any conditions (including deployment parameters) include a `location` condition | Isn't applicable to subscriptions |
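The special cases in the table above can be encoded as a small decision function. Here `conditions` names the kinds of fields used in the `if` block, with `"other"` standing for any condition besides `type`, `name`, or `kind`; this encoding is an illustrative reading of the table, not an official API.

```python
def applicability_fields(conditions: set) -> set:
    """Return which condition fields decide applicability, per the table above.

    An empty result means the policy is applicable to all resources.
    """
    if conditions in ({"kind"}, {"name"}):
        return set()  # applicable to all resources
    if conditions in ({"type", "kind"}, {"type", "name"}):
        return {"type"}  # only type conditions are considered
    if conditions == {"type", "kind", "other"}:
        return {"type", "kind"}
    if conditions == {"type", "name", "other"}:
        return {"type", "name"}
    # General rule: only type, name, and kind conditions affect applicability.
    return conditions & {"type", "name", "kind"}

print(applicability_fields({"kind"}))                   # set(): applies to all
print(applicability_fields({"type", "kind"}))           # {'type'}
print(applicability_fields({"type", "name", "other"}))  # type and name considered
```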
## Resource provider modes
The applicability of `Microsoft.Kubernetes.Data` policies is based off the entir
### Microsoft.KeyVault.Data, Microsoft.ManagedHSM.Data, Microsoft.DataFactory.Data, and Microsoft.MachineLearningServices.v2.Data
-Policies with these RP Modes are applicable if the `type` condition of the policy rule evaluates to true. The `type` refers to component type.
+Policies with these resource provider modes are applicable if the `type` condition of the policy rule evaluates to true. The `type` refers to component type.
Key Vault component types:
-- Microsoft.KeyVault.Data/vaults/certificates
-- Microsoft.KeyVault.Data/vaults/keys
-- Microsoft.KeyVault.Data/vaults/secrets
+- `Microsoft.KeyVault.Data/vaults/certificates`
+- `Microsoft.KeyVault.Data/vaults/keys`
+- `Microsoft.KeyVault.Data/vaults/secrets`
-Managed HSM component type:
-- Microsoft.ManagedHSM.Data/managedHsms/keys
+Managed Hardware Security Module (HSM) component type:
+- `Microsoft.ManagedHSM.Data/managedHsms/keys`
Azure Data Factory component type:
-- Microsoft.DataFactory.Data/factories/outboundTraffic
+- `Microsoft.DataFactory.Data/factories/outboundTraffic`
Azure Machine Learning component type:
-- Microsoft.MachineLearningServices.v2.Data/workspaces/deployments
+- `Microsoft.MachineLearningServices.v2.Data/workspaces/deployments`
### Microsoft.Network.Data

Policies with mode `Microsoft.Network.Data` are applicable if the `type` and `name` conditions of the policy rule evaluate to true. The `type` refers to component type:
-- Microsoft.Network/virtualNetworks
+- `Microsoft.Network/virtualNetworks`
## Not Applicable Resources

There could be situations in which resources are applicable to an assignment based on conditions or scope, but they shouldn't be applicable due to business reasons. At that time, it would be best to apply [exclusions](./assignment-structure.md#excluded-scopes) or [exemptions](./exemption-structure.md). To learn more on when to use either, review [scope comparison](./scope.md#scope-comparison).

> [!NOTE]
-> By design, Azure Policy does not evaluate resources under the `Microsoft.Resources` resource provider (RP) from
-policy evaluation, except for subscriptions and resource groups.
+> By design, Azure Policy excludes resources under the `Microsoft.Resources` resource provider from policy evaluation, except for subscriptions and resource groups.
## Next steps
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 06/21/2024 Last updated : 07/02/2024
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 06/21/2024 Last updated : 07/02/2024
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Cis Azure 1 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Cis Azure 2 0 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-2-0-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 2.0.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 2.0.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government) description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 (Azure Government) description: Details of the NIST SP 800-171 R2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 (Azure Government) description: Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Gov Soc 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-soc-2.md
Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 (Azure Government) description: Details of the System and Organization Controls (SOC) 2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Guest Configuration Baseline Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-docker.md
# Docker security baseline

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article details the configuration settings for Docker hosts as applicable in the following implementations:
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-linux.md
# Linux security baseline

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article details the configuration settings for Linux guests as applicable in the following implementations:
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Mcfs Baseline Confidential https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-confidential.md
Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Confidential Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Confidential Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Mcfs Baseline Global https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-global.md
Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Global Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Global Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Nl Bio Cloud Theme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nl-bio-cloud-theme.md
Title: Regulatory Compliance details for NL BIO Cloud Theme description: Details of the NL BIO Cloud Theme Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Pci Dss 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-4-0.md
Title: Regulatory Compliance details for PCI DSS v4.0 description: Details of the PCI DSS v4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Rbi Itf Banks 2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md
Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Rbi Itf Nbfc 2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md
Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Soc 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/soc-2.md
Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 description: Details of the System and Organization Controls (SOC) 2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Spain Ens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/spain-ens.md
Title: Regulatory Compliance details for Spain ENS description: Details of the Spain ENS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Swift Csp Cscf 2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2021.md
Title: Regulatory Compliance details for SWIFT CSP-CSCF v2021 description: Details of the SWIFT CSP-CSCF v2021 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Swift Csp Cscf 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2022.md
Title: Regulatory Compliance details for SWIFT CSP-CSCF v2022 description: Details of the SWIFT CSP-CSCF v2022 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 06/20/2024 Last updated : 07/02/2024
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
::: moniker range="=iotedge-1.4" > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
::: moniker-end [!INCLUDE [iot-edge-version-1.4-or-1.5](includes/iot-edge-version-1-4-or-1-5.md)]
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
# Solutions to common issues for Azure IoT Edge > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
[!INCLUDE [iot-edge-version-all-supported](includes/iot-edge-version-all-supported.md)]
machine-learning Concept Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-catalog.md
Llama-3-70B-Instruct <br> Llama-3-8B-Instruct | [Microsoft Managed Countries](/p
Llama-2-7b <br> Llama-2-13b <br> Llama-2-70b | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, East US, West US 3, West US, North Central US, South Central US | West US 3 Llama-2-7b-chat <br> Llama-2-13b-chat <br> Llama-2-70b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, East US, West US 3, West US, North Central US, South Central US | Not available Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
-Mistral-Large | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, France Central, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
+Mistral-Large | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Japan | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available TimeGEN-1 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available jais-30b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
machine-learning Ubuntu Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/ubuntu-upgrade.md
Last updated 05/08/2024
# Upgrade your Data Science Virtual Machine to Ubuntu 20.04 > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
If you have a Data Science Virtual Machine (DSVM) that runs an older release, such as Ubuntu 18.04 or CentOS, you should migrate your DSVM to Ubuntu 20.04. This migration ensures that you get the latest operating system patches, drivers, preinstalled software, and library versions. This document tells you how to migrate from either older Ubuntu versions or from CentOS.
machine-learning How To Create Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md
As an administrator, you can create a compute instance on behalf of a data scien
* [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance). For details on how to find the TenantID and ObjectID needed in this template, see [Find identity object IDs for authentication configuration](../healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md). You can also find these values in the Microsoft Entra admin center.
+To further enhance security, single sign-on (SSO) is disabled when you create a compute instance on behalf of a data scientist and assign it to them.
+After the compute instance is assigned, the assigned user must enable SSO themselves by updating the SSO setting on the compute instance.
+ ## Assign managed identity You can assign a system- or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account.
machine-learning How To Deploy Models Mistral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-mistral.md
Certain models in the model catalog can be deployed as a service with pay-as-you
- An Azure Machine Learning workspace. If you don't have a workspace, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one. > [!IMPORTANT]
- > The pay-as-you-go model deployment offering for eligible models in the Mistral family is only available in workspaces created in the **East US 2** and **Sweden Central** regions. For _Mistral Large_, the pay-as-you-go offering is also available in the **France Central** region.
+ > The pay-as-you-go model deployment offering for eligible models in the Mistral family is only available in workspaces created in the **East US 2** and **Sweden Central** regions.
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
The following steps demonstrate the deployment of Mistral Large, but you can use
To create a deployment: 1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).
-1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2**, **Sweden Central**, or **France Central** region.
+1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **Sweden Central** region.
1. Choose the model (Mistral-large) that you want to deploy from the [model catalog](https://ml.azure.com/model/catalog). Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**.
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-datasets.md
# Train models with Azure Machine Learning datasets > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
# Migration and modernization: Common questions > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article answers common questions about the Migration and modernization tool. If you have other questions, check these resources:
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
# Assessment overview (migrate to Azure VMs) > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article provides an overview of assessments in the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. The tool can assess on-premises servers in VMware and Hyper-V environments, and physical servers, for migration to Azure.
migrate Deploy Appliance Script Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script-government.md
ms. Previously updated : 02/06/2024 Last updated : 07/02/2024
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | a551f3552fee62ca5c7ea11648960a09a89d226659febd26314e222a37c7d857
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
### Run the script
Check that the zipped file is secure, before you deploy it.
- Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` 3. Verify the latest appliance version and hash value:
-includes/security-hash-value.md
-)]
> [!NOTE] > The same script can be used to set up Physical appliance for Azure Government cloud with either public or private endpoint connectivity.
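The `CertUtil -HashFile` verification step above can also be scripted. As a minimal sketch (the file path and expected hash below are hypothetical placeholders, not the published values), the same SHA-256 check in Python:

```python
import hashlib


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def matches(path: str, expected: str) -> bool:
    # Published hash values may appear in upper- or lowercase,
    # so compare case-insensitively.
    return sha256_of_file(path).lower() == expected.strip().lower()


if __name__ == "__main__":
    # Hypothetical example: verify the downloaded installer before running it.
    print(matches("AzureMigrateInstaller.zip", "07783A31D1E6..."))
```

If the function returns `False`, download the installer again from the portal before proceeding, as the note above advises.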
migrate Deploy Appliance Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script.md
ms. Previously updated : 02/06/2024 Last updated : 07/02/2024
migrate Discover And Assess Using Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md
Previously updated : 02/12/2024 Last updated : 07/02/2024
migrate How To Scale Out For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-scale-out-for-migration.md
In **Download Azure Migrate appliance**, click **Download**. You need to downlo
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]``` - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` > 3. Download the [latest version](https://go.microsoft.com/fwlink/?linkid=2191847) of the scale-out appliance installer from the portal if the computed hash value doesn't match this string:
-a551f3552fee62ca5c7ea11648960a09a89d226659febd26314e222a37c7d857
+07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
### 3. Run the Azure Migrate installer script
migrate How To Set Up Appliance Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-physical.md
ms. Previously updated : 02/06/2024 Last updated : 07/02/2024
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
# Support matrix for Hyper-V migration > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes support settings and limitations for migrating Hyper-V VMs with [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool). If you're looking for information about assessing Hyper-V VMs for migration to Azure, review the [assessment support matrix](migrate-support-matrix-hyper-v.md).
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md
# Prepare on-premises machines for migration to Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to prepare on-premises machines before you migrate them to Azure using the [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) tool.
migrate Troubleshoot Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-appliance.md
# Troubleshoot the Azure Migrate appliance > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article helps you troubleshoot issues when you deploy the [Azure Migrate](migrate-services-overview.md) appliance and use the appliance to discover on-premises servers.
migrate Tutorial App Containerization Java App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md
Last updated 04/03/2024
# Java web app containerization and migration to Azure App Service > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In this article, you'll learn how to containerize Java web applications (running on Apache Tomcat) and migrate them to [Azure App Service](https://azure.microsoft.com/services/app-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure App Service.
migrate Tutorial App Containerization Java Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-kubernetes.md
Last updated 04/03/2024
# Java web app containerization and migration to Azure Kubernetes Service > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In this article, you'll learn how to containerize Java web applications (running on Apache Tomcat) and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS).
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
Check that the zipped file is secure, before you deploy it.
**Scenario** | **Download*** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | a551f3552fee62ca5c7ea11648960a09a89d226659febd26314e222a37c7d857
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
- For Azure Government: **Scenario** | **Download*** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | a551f3552fee62ca5c7ea11648960a09a89d226659febd26314e222a37c7d857
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
### 3. Run the Azure Migrate installer script
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
ms. Previously updated : 04/05/2024 Last updated : 07/02/2024 #Customer intent: As a server admin I want to discover my GCP instances.
Check that the zipped file is secure before you deploy it.
**Scenario** | **Download** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | a551f3552fee62ca5c7ea11648960a09a89d226659febd26314e222a37c7d857
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
- For Azure Government: **Scenario** | **Download** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | a551f3552fee62ca5c7ea11648960a09a89d226659febd26314e222a37c7d857
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
### 3. Run the Azure Migrate installer script
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
Hash value is:
**Hash** | **Value** |
-SHA256 | 0dd9d0e2774bb8b33eb7ef7d97d44a90a7928a4b1a30686c5b01ebd867f3bd68
+SHA256 | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
### Create an account to access servers
Check that the zipped file is secure, before you deploy it.
**Scenario** | **Download** | **SHA256** | |
- Hyper-V (8.91 GB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191848) | 952e493a63a45f97ecdc0945807d504f4bd2f0f4f8248472b784c3e6bd25eb13
+ Hyper-V (8.91 GB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191848) | 952e493a63a45f97ecdc0945807d504f4bd2f0f4f8248472b784c3e6bd25eb13
- For Azure Government: **Scenario*** | **Download** | **SHA256** | |
- Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | a551f3552fee62ca5c7ea11648960a09a89d226659febd26314e222a37c7d857
+ Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
### 3. Create an appliance
migrate Tutorial Discover Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md
# Tutorial: Build a business case or assess servers using an imported CSV file > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
As part of your migration journey to Azure, you discover your on-premises inventory and workloads.
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
ms. Previously updated : 04/05/2024 Last updated : 07/02/2024 #Customer intent: As a server admin I want to discover my on-premises server inventory.
migrate How To Set Up Appliance Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/how-to-set-up-appliance-vmware.md
Before you deploy the OVA file, verify that the file is secure:
**Algorithm** | **Download** | **SHA256** | |
- VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | a551f3552fee62ca5c7ea11648960a09a89d226659febd26314e222a37c7d857
+ VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
#### Create the appliance server
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware-migration.md
This article summarizes support settings and limitations for migrating VMware vSphere VMs with [Migration and modernization](../migrate-services-overview.md#migration-and-modernization-tool). If you're looking for information about assessing VMware vSphere VMs for migration to Azure, review the [assessment support matrix](migrate-support-matrix-vmware.md).

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
## Migration options
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/prepare-for-agentless-migration.md
This article provides an overview of the changes performed when you [migrate VMware VMs to Azure via the agentless migration](./tutorial-migrate-vmware.md) method using the Migration and modernization tool.

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Before you migrate your on-premises VM to Azure, you may require a few changes to make the VM ready for Azure. These changes are important to ensure that the migrated VM can boot successfully in Azure and connectivity to the Azure VM can be established. Azure Migrate automatically handles these configuration changes for the following operating system versions for both Linux and Windows. This process is called *Hydration*.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-discover-vmware.md
ms. Previously updated : 04/11/2024 Last updated : 07/02/2024 #Customer intent: As an VMware admin, I want to discover my on-premises servers running in a VMware environment.
Before you deploy the OVA file, verify that the file is secure:
**Algorithm** | **Download** | **SHA256** | |
- VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | a551f3552fee62ca5c7ea11648960a09a89d226659febd26314e222a37c7d857
+ VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
#### Create the appliance server
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/concepts-migrate-mydumper-myloader.md
# Migrate large databases to Azure Database for MySQL using mydumper/myloader

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
[!INCLUDE [applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)]
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md
Last updated 05/03/2023
# Quickstart: Use Node.js to connect and query data in Azure Database for MySQL

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
The networking services in Azure provide various networking capabilities that can be used together or separately. Select any of the following key capabilities to learn more about them:

- [**Connectivity services**](#connect): Connect Azure resources and on-premises resources using any or a combination of these networking services in Azure - Virtual Network (VNet), Virtual WAN, ExpressRoute, VPN Gateway, NAT Gateway, Azure DNS, Peering service, Azure Virtual Network Manager, Route Server, and Azure Bastion.
- [**Application protection services**](#protect): Protect your applications using any or a combination of these networking services in Azure - Load Balancer, Private Link, DDoS protection, Firewall, Network Security Groups, Web Application Firewall, and Virtual Network Endpoints.
-- [**Application delivery services**](#deliver): Deliver applications in the Azure network using any or a combination of these networking services in Azure - Content Delivery Network (CDN), Azure Front Door Service, Traffic Manager, Application Gateway, Internet Analyzer, and Load Balancer.
+- [**Application delivery services**](#deliver): Deliver applications in the Azure network using any or a combination of these networking services in Azure - Azure Front Door Service, Traffic Manager, Application Gateway, Internet Analyzer, and Load Balancer.
- [**Network monitoring**](#monitor): Monitor your network resources using any or a combination of these networking services in Azure - Network Watcher, ExpressRoute Monitor, Azure Monitor, or VNet Terminal Access Point (TAP).

## <a name="connect"></a>Connectivity services
The following diagram shows url path-based routing with Application Gateway.
:::image type="content" source="./media/networking-overview/figure1-720.png" alt-text="Application Gateway example":::
-### <a name="cdn"></a>Content Delivery Network
-
-[Azure Content Delivery Network (CDN)](../../cdn/cdn-overview.md). offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world.
-
## <a name="monitor"></a>Network monitoring services

This section describes networking services in Azure that help monitor your network resources - Azure Network Watcher, Azure Monitor Network Insights, Azure Monitor, and ExpressRoute Monitor.
open-datasets Dataset Chicago Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-chicago-safety.md
description: Learn how to use the Chicago Safety Data dataset in Azure Open Data
Previously updated : 04/16/2021+ Last updated : 06/17/2024 # Chicago Safety Data
-311 service requests from the city of Chicago, including historical sanitation code complaints, pot holes reported, and street light issues
+Chicago 311 service request call data covers:
-All open sanitation code complaints made to 311 and all requests completed since January 1, 2011. The Department of Streets and Sanitation investigates and remedies reported violations of Chicago's sanitation code. Residents may request service for violations such as overflowing dumpsters and garbage in the alley. 311 sometimes receives duplicate sanitation code complaints. Requests that have been labeled as duplicates are in the same geographic area as a previous request and have been entered into 311's Customer Service Request (CSR) system at around the same time. Duplicate complaints are labeled as such in the status field, as either "Open - Dup" or "Completed - Dup."
+- all open sanitation code complaints made to 311
+- all requests completed since January 1, 2011
+- historical sanitation code complaints
+- pot hole reports
+- street light issues
+
+The Department of Streets and Sanitation investigates and remedies reported violations of Chicago's sanitation code. Residents can request service for overflowing dumpster and alley garbage violations, for example. The Chicago 311 service sometimes receives duplicate sanitation code complaints. Identified duplicate service requests are located in the same geographic area as a previous request, and were entered into the 311 Customer Service Request (CSR) system at about the same time. Duplicate complaints are labeled as such in the status field, as either "Open - Dup" or "Completed - Dup."
[!INCLUDE [Open Dataset usage notice](./includes/open-datasets-usage-note.md)]
-The Chicago Department of Transportation (CDOT) oversees the patching of potholes on over 4,000 miles of arterial and residential streets in Chicago. CDOT receives reports of potholes through the 311 call center. CDOT uses a mapping and tracking system to identify pothole locations and schedule crews.
+The Chicago Department of Transportation (CDOT) oversees pothole repair for 4,000 miles of arterial and residential streets in Chicago. CDOT receives pothole reports through the 311 call center. CDOT uses a mapping and tracking system to identify pothole locations and schedule crews.
+
+One call to 311 can generate multiple pothole repairs. When a crew arrives to repair a pothole based on a 311 call, that crew fills all the other potholes found on that block. Pothole repairs are completed within seven days from the first report of a pothole to 311. Weather conditions, frigid temperatures, and precipitation can influence the time needed to complete a pothole repair. On days of cooperative weather and no precipitation, crews can fill several thousand potholes.
-One call to 311 can generate multiple pothole repairs. When a crew arrives to repair a 311 pothole, it fills all the other potholes within the block. Pothole repairs are completed within seven days from the first report of a pothole to 311. Weather conditions, frigid temps, and precipitation, influence how long a repair takes. On days when weather is cooperative and there's no precipitation, crews can fill several thousand potholes.
+If a previous request is already open within a buffer of four addresses, the new request receives a "Duplicate (Open)" status. For example, if an existing CSR is open for address **6535 N Western** and 311 receives a new CSR for address **6531 N Western**, the new request receives a "Duplicate (Open)" status because it falls within four addresses of the original CSR. Once the crews repair the street, the CSR status reads "Completed" for the original request, and "Duplicate (Closed)" for any duplicate requests. A service request also receives the status of "Completed" when the reported address is inspected but no potholes are found or they were already filled. If another issue is found with the street, such as a "cave-in" or "failed utility cut", then the issue is directed to the appropriate department or contractor.
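The four-address buffer rule above can be sketched as a small check. This is illustrative only: the `house_number` and `is_duplicate` helpers are assumptions made for the sketch, not part of the 311 system, and they treat the buffer as a simple difference in house numbers.

```python
def house_number(address: str) -> int:
    """Extract the leading house number from an address like '6535 N WESTERN'."""
    return int(address.split()[0])

def is_duplicate(new_addr: str, open_addr: str, buffer: int = 4) -> bool:
    """True when a new CSR falls within the four-address buffer of an open CSR."""
    return abs(house_number(new_addr) - house_number(open_addr)) <= buffer

print(is_duplicate("6531 N WESTERN", "6535 N WESTERN"))  # True  -> "Duplicate (Open)"
print(is_duplicate("6600 N WESTERN", "6535 N WESTERN"))  # False -> new request stays open
```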
-If a previous request is already open for a buffer of four addresses, the request is given the status of "Duplicate (Open)". For example, if there's an existing CSR for 6535 N Western and a new request is received for 6531 N Western (which is within four addresses of the original CSR) then the new request is given a status of "Duplicate (Open)". Once the street is repaired, the status in CSR will read "Completed" for the original request and "Duplicate (Closed)" for any duplicate requests. A service request also receives the status of "Completed" when the reported address is inspected but no potholes are found or have already been filled. If another issue is found with the street, such as a "cave-in" or "failed utility cut", then it's directed to the appropriate department or contractor.
+Open reports made to 311 of street light outages involving three or more lights are defined as "Street Lights - All Out." The Chicago Department of Transportation (CDOT) oversees approximately 250,000 street lights that illuminate arterial and residential streets in Chicago. CDOT performs repairs and bulb replacements in response to residents' reports of street light outages. Whenever CDOT receives a report of an "All Out", the electrician assigned to make the repair looks at the lights in that circuit (each circuit has 8-16 lights) to make sure they all operate properly. If a second request of lights out in the same circuit is made within four calendar days of the original request, the newest request is automatically given the status of "Duplicate (Open)." Since the CDOT electrician looks at the lights in a circuit to verify their operation, any "Duplicate (Open)" address is automatically observed and repaired. Once the street lights are repaired, the status in CSR reads "Completed" for the original request and "Duplicate (Closed)" for any duplicate requests. A service request also receives the status of "Completed" when
-All open reports of "Street Lights - All Out" (an outage of three or more lights) made to 311 and all requests completed since January 1, 2011. The Chicago Department of Transportation (CDOT) oversees approximately 250,000 street lights that illuminate arterial and residential streets in Chicago. CDOT performs repairs and bulb replacements in response to residents' reports of street light outages. Whenever CDOT receives a report of an "All Out" the electrician assigned to make the repair looks at the lights in that circuit (each circuit has 8-16 lights) to make sure they're working properly. If a second request of lights out in the same circuit is made within four calendar days of the original request, the newest request is automatically given the status of "Duplicate (Open)." Since CDOT's electrician will be looking at the lights in a circuit to verify they're working, any "Duplicate (Open)" address will automatically be observed and repaired. Once the street lights are repaired, the status in CSR will read "Completed" for the original request and "Duplicate (Closed)" for any duplicate requests. A service request also receives the status of "Completed" when the reported lights are inspected but found to be in good repair and functioning; when the service request is for a non-existent address; or when the lights are maintained by a contractor. Data is updated daily.
+- the reported lights are inspected but found to be in good repair and functioning
+- the service request is for a nonexistent address
+- a contractor maintains the lights
+
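The four-calendar-day circuit rule described above can be sketched as follows. The `label_street_light_request` helper is a hypothetical name invented for this sketch, not part of the 311 system.

```python
from datetime import date

def label_street_light_request(new_date: date, prior_date: date, same_circuit: bool) -> str:
    """Apply the duplicate rule for 'All Out' reports: a second report on the
    same circuit within four calendar days is labeled 'Duplicate (Open)'."""
    if same_circuit and (new_date - prior_date).days <= 4:
        return "Duplicate (Open)"
    return "Open"

print(label_street_light_request(date(2017, 10, 11), date(2017, 10, 9), True))  # Duplicate (Open)
print(label_street_light_request(date(2017, 10, 20), date(2017, 10, 9), True))  # Open
```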
+The data resource receives daily updates.
## Volume and retention
-This dataset is stored in Parquet format. It is updated daily, and contains about 1M rows (80 MB) in total as of 2018.
+This dataset is stored in Parquet format. It receives daily updates, and contains about 1M rows (80 MB) in total as of 2019.
This dataset contains historical records accumulated from 2011 to 2018. You can use parameter settings in our SDK to fetch data within a specific time range.
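Once the data is loaded into a pandas DataFrame, you can also narrow it to a time range with ordinary boolean indexing. A sketch with stand-in rows (the sample frame below is invented for illustration; it only mimics the dataset's `dateTime` and `category` columns):

```python
import pandas as pd

# Stand-in frame mimicking two columns of the dataset.
df = pd.DataFrame({
    "dateTime": pd.to_datetime(["2015-04-30", "2015-06-15", "2016-02-01"]),
    "category": ["Pothole in Street", "Sanitation Code Violation", "Street Lights - All/Out"],
})

# Keep rows within [start, end), matching the SDK-style date range.
start, end = pd.Timestamp("2015-05-01"), pd.Timestamp("2016-01-01")
in_range = df[(df["dateTime"] >= start) & (df["dateTime"] < end)]
print(in_range["category"].tolist())  # ['Sanitation Code Violation']
```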
This dataset is stored in the East US Azure region. Allocating compute resources
This dataset is sourced from city of Chicago government.
-Reference here for the terms of using this dataset. Email dataportal@cityofchicago.org if you have any questions about the data source.
+Reference here for the terms of use for this dataset resource. Email [dataportal@cityofchicago.org](mailto:dataportal@cityofchicago.org) with questions about the data source.
## Columns
-| Name | Data type | Unique | Values (sample) | Description |
-|-|-|-|-|-|
-| address | string | 140,612 | \" \" 1 City Hall Plz Boston MA 02108 | Location. |
-| category | string | 54 | Street Cleaning Sanitation | Reason of the service request. |
-| dataSubtype | string | 1 | 311_All | ΓÇ£311_AllΓÇ¥ |
-| dataType | string | 1 | Safety | ΓÇ£SafetyΓÇ¥ |
-| dateTime | timestamp | 1,529,075 | 2015-07-23 10:51:00 2015-07-23 10:47:00 | Open date and time of the service request. |
-| latitude | double | 1,622 | 42.3594 42.3603 | This is the latitude value. Lines of latitude are parallel to the equator. |
-| longitude | double | 1,806 | -71.0587 -71.0583 | This is the longitude value. Lines of longitude run perpendicular to lines of latitude, and all pass through both poles. |
-| source | string | 7 | Constituent Call Citizens Connect App | Original source of the case. |
-| status | string | 2 | Closed Open | Case status. |
-| subcategory | string | 209 | Parking Enforcement Requests for Street Cleaning | Type of the service request. |
-
-## Preview
-
-| dataType | dataSubtype | dateTime | category | subcategory | status | address | latitude | longitude | source | extendedProperties |
-|-|-|-|-|-|-|-|-|-|-|-|
-| Safety | 311_All | 4/25/2021 11:55:04 PM | Street Light Out Complaint | null | Open | 4800 W WASHINGTON BLVD | 41.882148426 | -87.74556256 | null | |
-| Safety | 311_All | 4/25/2021 11:54:31 PM | 311 INFORMATION ONLY CALL | null | Completed | 2111 W Lexington ST | | | null | |
-| Safety | 311_All | 4/25/2021 11:52:11 PM | 311 INFORMATION ONLY CALL | null | Completed | 2111 W Lexington ST | | | null | |
-| Safety | 311_All | 4/25/2021 11:49:56 PM | 311 INFORMATION ONLY CALL | null | Completed | 2111 W Lexington ST | | | null | |
-| Safety | 311_All | 4/25/2021 11:48:53 PM | Garbage Cart Maintenance | null | Open | 3409 E 106TH ST | 41.702545562 | -87.540917602 | null | |
-| Safety | 311_All | 4/25/2021 11:46:01 PM | 311 INFORMATION ONLY CALL | null | Completed | 2111 W Lexington ST | | | null | |
-| Safety | 311_All | 4/25/2021 11:45:46 PM | Aircraft Noise Complaint | null | Completed | 10510 W ZEMKE RD | | | null | |
-| Safety | 311_All | 4/25/2021 11:45:02 PM | 311 INFORMATION ONLY CALL | null | Completed | 2111 W Lexington ST | | | null | |
-| Safety | 311_All | 4/25/2021 11:44:24 PM | Sewer Cave-In Inspection Request | null | Open | 7246 W THORNDALE AVE | 41.987984339 | -87.808702917 | null | |
+311 Service Requests - Street Lights - All Out - Historical
+
+| Name | Data type | Values (sample) | Description |
+|-|-|-|-|
+| Creation Date | Floating Timestamp | 10/9/2017 | Request creation date |
+| Status | Text | Completed - Dup | Request status |
+| Completion Date | Floating Timestamp | 10/11/2017 | Request completion date |
+| Service Request Number | Text | 17-06773249 | Service number of the request |
+| Type of Service Request | Text | Street Lights - All/Out | Service request type |
+| Street Address | Text | 2826 N TALMAN AVE | Address of request |
+| ZIP Code | Number | 60618 | ZIP code value of request address |
+| X Coordinate | Number | 1158230.1582963 | X Coordinate value |
+| Y Coordinate | Number | 1918676.90199051 | Y Coordinate value |
+| Ward | Number | 33 | Ward Number value |
+| Police District | Number | 14 | Police District number |
+| Community Area | Number | 21 | Community Area number |
+| Latitude | Number | 41.93259686594802 | The request location latitude value. Latitude lines are parallel to the equator. |
+| Longitude | Number | -87.6939355144751 | The request location longitude value. Longitude lines run perpendicular to lines of latitude, and all pass through both poles. |
+| Location | Location | (41.932596865948, -87.693935514475) | Combined latitude and longitude values for the address |
+
+Preview
+
+| Creation Date | Status | Completion Date | Service Request Number | Type of Service Request | Street Address | ZIP Code | X Coordinate | Y Coordinate | Ward | Police District | Community Area | Latitude | Longitude | Location |
+|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
+| 10/9/2017 | Completed - Dup | 10/11/2017 | 17-06773249 | Street Lights - All/Out | 2826 N TALMAN AVE | 60618 | 1158230.158 | 1918676.902 | 33 | 14 | 21 | 41.93259686594802 | -87.69393551 | (41.932596865948, -87.693935514475) |
+| 10/11/2017 | Completed | 10/11/2017 | 17-06816558 | Street Lights - All/Out | 6200 S LAKE SHORE DR | 60637 | 1190863.778 | 1864244.283 | 5 | 3 | 42 | 41.78250135027194 | -87.57577731 | (41.782501350272, -87.575777307852) |
+| 3/20/2014 | Completed - Dup | 8/4/2017 | 14-00400272 | Street Lights - All/Out | 5730 N KINGSDALE AVE | 60646 | 1143691.393 | 1937640.891 | 39 | 17 | 12 | 41.984920748899164 | -87.74688744 | (41.984920748899, -87.746887444765) |
+| 10/9/2017 | Completed | 10/11/2017 | 17-06772762 | Street Lights - All/Out | 5246 S LUNA AVE | 60638 | 1140255.697 | 1869109.118 | 14 | 8 | 56 | 41.79692498298546 | -87.7612044 | (41.796924982985, -87.761204398005) |
+| 10/10/2017 | Completed | 10/11/2017 | 17-06786335 | Street Lights - All/Out | 954 E 111TH ST | 60628 | 1184652.066 | 1831465.656 | 9 | 5 | 50 | 41.69270116620948 | -87.59957553 | (41.692701166209, -87.599575527098) |
+| 10/8/2017 | Completed | 10/11/2017 | 17-06752801 | Street Lights - All/Out | 4399 N DAMEN AVE | 60618 | 1162224.952 | 1929224.823 | 47 | 19 | 5 | 41.961458246672315 | -87.67895915 | (41.961458246672, -87.67895914919) |
+| 10/6/2017 | Completed | 10/11/2017 | 17-06696916 | Street Lights - All/Out | 4730 N BROADWAY | 60640 | 1167596.292 | 1931650.772 | 46 | 19 | 3 | 41.968000877697875 | -87.65914105 | (41.968000877698, -87.659141052722) |
+| 10/7/2017 | Completed | 10/11/2017 | 17-06734666 | Street Lights - All/Out | 6449 S VERNON AVE | 60637 | 1180358.718 | 1862347.753 | 20 | 3 | 42 | 41.77754460257851 | -87.61434958 | (41.777544602579, -87.61434958023) |
+
+311 Service Requests - Pot Holes Reported - Historical
+
+| Name | Data type | Values (sample) | Description |
+|-|-|-|-|
+| Creation Date | Floating Timestamp | 4/25/2018 | Request creation date |
+| Status | Text | Completed | Request status |
+| Completion Date | Floating Timestamp | 4/26/2018 | Request completion date |
+| Service Request Number | Text | 18-01325016 | Service number of the request |
+| Type of Service Request | Text | Pothole in Street | Service request type |
+| Current Activity | Text | Final Outcome | Latest Activity Description |
+| Most Recent Action | Text | No Potholes Found | Latest action taken |
+| Number of Potholes Filled on Block | Number | 0 | Count of repaired potholes |
+| Street Address | Text | 5100 S LAWLER AVE | Address of request |
+| ZIP Code | Number | 60638 | ZIP code value of request address |
+| X Coordinate | Number | 1143556.31919224 | X Coordinate value |
+| Y Coordinate | Number | 1870339.26041166 | Y Coordinate value |
+| Ward | Number | 14 | Ward Number value |
+| Police District | Number | 8 | Police District number |
+| Community Area | Number | 56 | Community Area number |
+| SSA | Number | 26 | |
+| Latitude | Number | 41.80014700738077 | The request location latitude value. Latitude lines are parallel to the equator. |
+| Longitude | Number | -87.7492147421616 | The request location longitude value. Longitude lines run perpendicular to lines of latitude, and all pass through both poles. |
+| Location | Location | (41.80014700738077, -87.7492147421616) | Combined latitude and longitude values for the address |
+
+Preview
+
+| Creation Date | Status | Completion Date | Service Request Number | Type of Service Request | Current Activity | Most Recent Action | Number of Potholes Filled on Block | Street Address | ZIP Code | X Coordinate | Y Coordinate | Ward | Police District | Community Area | SSA | Latitude | Longitude | Location |
+|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
+| 6/13/2012 | Completed | 6/18/2012 | 12-01071965 | Pot Hole in Street | Dispatch Crew | Pothole Patched | 14 | 7040 N FRANCISCO AVE | 60645 | 1155793.815 | 1946654.94 | 50 | 24 | 2 | | 42.0096087 | -87.70227538 | (42.009608698109, -87.702275384338) |
+| 6/15/2017 | Completed | 6/29/2017 | 17-03958579 | Pothole in Street | Final Outcome | WM Sewer Cave In Inspection Transfer Outcome | 0 | 4216 W CORTEZ ST | 60651 | 1148081.734 | 1906667.21 | 37 | 11 | 23 | | 41.89994838998482 | -87.73187935 | (41.899948389985, -87.731879353699) |
+| 1/13/2014 | Completed | 1/24/2014 | 14-00052283 | Pot Hole in Street | Final Outcome | Pothole Patched | 5 | 1200 S CANAL ST | 60607 | 1173311.531 | 1894981.335 | 2 | 1 | 28 | | 41.86717512472001 | -87.63937581 | (41.86717512472, -87.639375812581) |
+| 10/13/2015 | Completed | 11/24/2015 | 15-05364068 | Pothole in Street | Final Outcome | Pothole Patched | 3 | 6318 N WESTERN AVE | 60659 | 1159171.947 | 1941877.852 | 50 | 24 | 2 | 43 | 41.996507559859445 | -87.68999022 | (41.996507559859, -87.689990223964) |
+| 2/23/2014 | Completed | 4/8/2014 | 14-00256448 | Pot Hole in Street | Final Outcome | Pothole Patched | 12 | 6800 N KEDZIE AVE | 60645 | 1153845.564 | 1944905.48 | 50 | 0 | 2 | | 42.004746892817465 | -87.70949265 | (42.004746892817, -87.709492653059) |
+| 10/16/2015 | Completed | 11/24/2015 | 15-05416322 | Pothole in Street | Final Outcome | CDOT Asphalt Top Off Restoration Transfer Outcome | 0 | 6430 N KEDZIE AVE | 60645 | 1153862.527 | 1942457.906 | 50 | 0 | 2 | 43 | 41.998328665333965 | -87.70950507 | (41.998328665334, -87.7095050747) |
+| 4/1/2013 | Completed | 7/30/2013 | 13-00360362 | Pot Hole in Street | Final Outcome | Pothole Patched | 40 | 3738 N TRIPP AVE | 60641 | 1147334.393 | 1924509.895 | 38 | 17 | 16 | | 41.94928712004567 | -87.73398704 | (41.949287120046, -87.733987044713) |
+
+Sanitation Code Complaints
+
+| Name | Data type | Values (sample) | Description |
+|-|-|-|-|
+| Creation Date | Floating Timestamp | 9/17/2017 | Request creation date |
+| Status | Text | Completed | Request status |
+| Completion Date | Floating Timestamp | 10/11/2017 | Request completion date |
+| Service Request Number | Text | 17-06208608 | Service number of the request |
+| Type of Service Request | Text | Sanitation Code Violation | Service request type |
+| What is the Nature of this Code Violation? | Text | Overflowing carts | Latest Activity Description |
+| Street Address | Text | 6327 S KENNETH AVE | Address of request |
+| ZIP Code | Number | 60629 | ZIP code value of request address |
+| X Coordinate | Number | 1147796.475 | X Coordinate value |
+| Y Coordinate | Number | 1862216.771 | Y Coordinate value |
+| Ward | Number | 13 | Ward Number value |
+| Police District | Number | 8 | Police District number |
+| Community Area | Number | 65 | Community Area number |
+| Latitude | Number | 41.77787022898461 | The request location latitude value. Latitude lines are parallel to the equator. |
+| Longitude | Number | -87.73372735 | The request location longitude value. Longitude lines run perpendicular to lines of latitude, and all pass through both poles. |
+| Location | Location | (41.77787022898461, -87.73372735) | Combined latitude and longitude values for the address |
+
+Preview
+
+| Creation Date | Status | Completion Date | Service Request Number | Type of Service Request | What is the Nature of this Code Violation? | Street Address | ZIP Code | X Coordinate | Y Coordinate | Ward | Police District | Community Area | Latitude | Longitude | Location |
+|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
+| 9/17/2017 | Completed | 10/11/2017 | 17-06208608 | Sanitation Code Violation | Overflowing carts | 6327 S KENNETH AVE | 60629 | 1147796.475 | 1862216.771 | 13 | 8 | 65 | 41.77787022898461 | -87.73372735 | (41.777870228985, -87.733727348463) |
+| 10/5/2017 | Completed | 10/11/2017 | 17-06678788 | Sanitation Code Violation | Garbage in alley | 3020 W MONTROSE AVE | 60618 | 1155359.487 | 1929084.561 | 33 | 17 | 14 | 41.961214454744535 | -87.70420422 | (41.961214454745, -87.704204220358) |
+| 8/21/2017 | Completed | 10/11/2017 | 17-05591233 | Sanitation Code Violation | Garbage in yard | 1500 S DAMEN AVE | 60608 | 1163279.962 | 1892714.436 | 28 | 12 | 28 | 41.86124902532175 | -87.67610892 | (41.861249025322, -87.676108920835) |
+| 9/23/2017 | Completed | 10/11/2017 | 17-06370432 | Sanitation Code Violation | Construction Site Cleanliness/Fence | 6442 S CENTRAL AVE | 60638 | 1140196.719 | 1861187.963 | 13 | 8 | 64 | 41.77518903 | -87.76161383 | (41.775189032012, -87.761613831651) |
+| 8/1/2017 | Completed - Dup | 8/4/2017 | 17-05101063 | Sanitation Code Violation | Garbage in alley | 3016 W MONTROSE AVE | 60618 | 1155405.587 | 1929085.161 | 33 | 17 | 14 | 41.96121517 | -87.70403472 | (41.961215172275, -87.704034715236) |
+| 9/26/2017 | Completed | 10/11/2017 | 17-06440193 | Sanitation Code Violation | Other | 8830 S WABASH AVE | 60619 | 1178255.291 | 1846460.484 | 9 | 6 | 44 | 41.733996131384714 | -87.6225419 | (41.733996131385, -87.622541895911) |
+| 10/10/2017 | Completed | 10/11/2017 | 17-06786539 | Sanitation Code Violation | Other | 4523 N LAWNDALE AVE | 60625 | 1150908.388 | 1929805.543 | 35 | 17 | 14 | 41.963281388376565 | -87.72054998 | (41.963281388377, -87.7205499839) |
+| 5/31/2017 | Completed | 8/4/2017 | 17-03559234 | Sanitation Code Violation | Other | 3359 W 19TH ST | 60623 | 1154204.655 | 1890509.209 | 24 | 10 | 29 | 41.85538344067419 | -87.70948151 | (41.855383440674, -87.709481507782) |
## Data access
Reference here for the terms of using this dataset. Email dataportal@cityofchica
<!-- nbstart https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureNotebooks&package=azureml-opendatasets&registryId=city_safety_chicago --> - ```
-# This is a package in preview.
+# This is a package in preview.
from azureml.opendatasets import ChicagoSafety
from datetime import datetime
from dateutil import parser
-
end_date = parser.parse('2016-01-01')
start_date = parser.parse('2015-05-01')
safety = ChicagoSafety(start_date=start_date, end_date=end_date)
safety.info()
<!-- nbstart https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureNotebooks&package=azure-storage&registryId=city_safety_chicago --> - ```python # Pip install packages import os, sys
Sample not available for this platform/package combination.
<!-- nbstart https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureDatabricks&package=azureml-opendatasets&registryId=city_safety_chicago --> - ``` # This is a package in preview. # You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import ChicagoSafety
from datetime import datetime
from dateutil import parser
-
end_date = parser.parse('2016-01-01')
start_date = parser.parse('2015-05-01')
safety = ChicagoSafety(start_date=start_date, end_date=end_date)
Sample not available for this platform/package combination.
<!-- nbstart https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureDatabricks&package=pyspark&registryId=city_safety_chicago --> - ```python # Azure storage access info blob_account_name = "azureopendatastorage"
display(spark.sql('SELECT * FROM source LIMIT 10'))
<!-- nbstart https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureSynapse&package=azureml-opendatasets&registryId=city_safety_chicago --> - ```python # This is a package in preview. from azureml.opendatasets import ChicagoSafety
Sample not available for this platform/package combination.
<!-- nbstart https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureSynapse&package=pyspark&registryId=city_safety_chicago --> - ```python # Azure storage access info blob_account_name = "azureopendatastorage"
display(spark.sql('SELECT * FROM source LIMIT 10'))
- See the [City Safety Analytics](https://github.com/scottcounts/CitySafety) example on GitHub.

## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](./dataset-catalog.md).
open-datasets How To Create Azure Machine Learning Dataset From Open Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md
# Create Azure Machine Learning datasets from Azure Open Datasets

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Plan your use accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In this article, you learn how to bring curated enrichment data into your local or remote machine learning experiments with [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) datasets and [Azure Open Datasets](./index.yml).
operator-nexus Reference Nexus Kubernetes Cluster Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-kubernetes-cluster-supported-versions.md
Note the following important changes to make before you upgrade to any of the av
| 1.25.6 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
| 1.25.6 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
| 1.25.6 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.6 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
+| 1.25.6 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
| 1.25.6 | 5 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.6 | 6 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.6 | 7 |Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
| 1.25.11 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.11 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
+| 1.25.11 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
| 1.25.11 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.11 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.11 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
| 1.26.3 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
| 1.26.3 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
| 1.26.3 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.3 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
+| 1.26.3 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
| 1.26.3 | 5 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.26.3 | 6 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.26.3 | 7 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
| 1.26.6 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.6 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
+| 1.26.6 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
| 1.26.6 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.26.6 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.26.6 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
+| 1.26.12 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted and cluster nodes are Azure Arc-enabled |
| 1.27.1 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
| 1.27.1 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
| 1.27.1 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
-| 1.27.1 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
+| 1.27.1 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
| 1.27.1 | 5 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.27.1 | 6 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.27.1 | 7 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
| 1.27.3 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
-| 1.27.3 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
+| 1.27.3 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
| 1.27.3 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.28.0 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.28.0 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
+| 1.27.3 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.27.3 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
+| 1.27.9 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted and cluster nodes are Azure Arc-enabled |
+| 1.28.0 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
+| 1.28.0 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
| 1.28.0 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.28.0 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.28.0 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
+| 1.28.9 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted and cluster nodes are Azure Arc-enabled |
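One easy mistake when reading the table above: patch versions such as 1.26.12 and 1.26.6 compare numerically, not lexicographically (as strings, "1.26.12" sorts *before* "1.26.6"). A minimal sketch of comparing them correctly:

```python
def parse_version(version: str) -> tuple:
    """Split a Kubernetes version like '1.26.12' into comparable integers."""
    return tuple(int(part) for part in version.split('.'))

# Lexicographic string comparison gets this wrong...
print('1.26.12' < '1.26.6')                              # → True
# ...numeric tuple comparison gets it right.
print(parse_version('1.26.12') > parse_version('1.26.6'))  # → True
```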
## Upgrading Kubernetes versions
postgresql Concepts Read Replicas Geo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas-geo.md
You can have a primary server in any [Azure Database for PostgreSQL flexible ser
- **Microsoft Azure operated by 21Vianet regions**:
  - China North 3
  - China East 3
+ - China North 2
+ - China East 2
-> [!NOTE]
-> [Virtual endpoints](concepts-read-replicas-virtual-endpoints.md) and [promote to primary server features](concepts-read-replicas-promote.md) are not currently supported in the special regions listed above.
## Paired regions for disaster recovery purposes
postgresql Concepts Troubleshooting Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-troubleshooting-guides.md
[!INCLUDE [applies-to-postgresql-flexible-server](~/reusable-content/ce-skilling/azure/includes/postgresql/includes/applies-to-postgresql-flexible-server.md)]
-The troubleshooting guides for Azure Database for PostgreSQL flexible server are designed to help you quickly identify and resolve common challenges you may encounter while using Azure Database for PostgreSQL flexible server. Integrated directly into the Azure portal, the troubleshooting guides provide actionable insights, recommendations, and data visualizations to assist you in diagnosing and addressing issues related to common performance problems. With these guides at your disposal, you'll be better equipped to optimize your Azure Database for PostgreSQL flexible server experience and ensure a smoother, more efficient database operation.
+The troubleshooting guides for Azure Database for PostgreSQL flexible server are designed to help you quickly identify and resolve common challenges you may encounter while using Azure Database for PostgreSQL flexible server. Integrated directly into the Azure portal, the troubleshooting guides provide actionable insights, recommendations, and data visualizations to assist you in diagnosing and addressing issues related to common performance problems. With these guides at your disposal, you are better equipped to optimize your Azure Database for PostgreSQL flexible server experience and ensure a smoother, more efficient database operation.
## Overview

The troubleshooting guides available in Azure Database for PostgreSQL flexible server provide you with the necessary tools to analyze and troubleshoot prevalent performance issues, including:
-* High CPU Usage,
-* High Memory Usage,
-* High IOPS Usage,
-* High Temporary Files,
-* Autovacuum Monitoring,
-* Autovacuum Blockers.
+* CPU
+* Memory
+* IOPS
+* Temporary files
+* Autovacuum monitoring
+* Autovacuum blockers
:::image type="content" source="./media/concepts-troubleshooting-guides/overview-troubleshooting-guides.jpg" alt-text="Screenshot of multiple Troubleshooting Guides combined." lightbox="./media/concepts-troubleshooting-guides/overview-troubleshooting-guides.jpg":::
The troubleshooting guides are directly integrated into the Azure portal and you
The troubleshooting guides consist of the following components:

-- **High CPU Usage**
+- **CPU**
- * CPU Utilization
- * Workload Details
- * Transaction Trends and Counts
- * Long Running Transactions
- * Top CPU Consuming queries
- * Total User Only Connections
+ * CPU
+ * Workload
+ * Transactions
+ * Long running transactions
+ * Queries
+ * User connections
+ * Locking and blocking
-- **High Memory Usage**
+- **Memory**
- * Memory Utilization
- * Workload Details
- * Long Running Sessions
- * Top Queries by Data Usage
- * Total User only Connections
- * Guidelines for configuring parameters
+ * Memory
+ * Workload
+ * Sessions
+ * Queries
+ * User connections
+ * Memory parameters
-- **High IOPS Usage**
+- **IOPS**
- * IOPS Usage
- * Workload Details
- * Session Details
- * Top Queries by IOPS
- * IO Wait Events
- * Checkpoint Details
- * Storage Usage
+ * IOPS
+ * Workload
+ * Sessions
+ * Queries
+ * Waits
+ * Checkpoints
+ * Storage
-- **High Temporary Files**
+- **Temporary files**
- * Storage Utilization
- * Temporary Files Generated
- * Workload Details
- * Top Queries by Temporary Files
+ * Storage
+ * Temporary files
+ * Workload
+ * Queries
-- **Autovacuum Monitoring**
+- **Autovacuum monitoring**
- * Bloat Ratio
- * Tuple Counts
- * Tables Vacuumed & Analyzed Execution Counts
- * Autovacuum Workers Execution Counts
+ * Bloat
+ * Tuples
+ * Vacuum and analyze
+ * Autovacuum workers
+ * Autovacuum per table
+ * Enhanced metrics
-- **Autovacuum Blockers**
+- **Autovacuum blockers**
- * Emergency AV and Wraparound
- * Autovacuum Blockers
+ * Emergency autovacuum and wraparound
+ * Autovacuum blockers
-Before using any troubleshooting guide, it is essential to ensure that all prerequisites are in place. For a detailed list of prerequisites, please refer to the [Use Troubleshooting Guides](how-to-troubleshooting-guides.md) article.
+Before using any troubleshooting guide, it's essential to ensure that all prerequisites are in place. For a detailed list of prerequisites, refer to the article [Use troubleshooting guides](how-to-troubleshooting-guides.md).
### Limitations
-* Troubleshooting Guides are not available for [read replicas](concepts-read-replicas.md).
-* Please be aware that enabling Query Store on the Burstable pricing tier can lead to a negative impact on performance. As a result, it is generally not recommended to use Query Store with this particular pricing tier.
+* Troubleshooting guides aren't available for [read replicas](concepts-read-replicas.md).
+* Be aware that enabling Query Store on the Burstable pricing tier can lead to a negative impact on performance. As a result, it's not recommended to use Query Store with this particular pricing tier.
## Next steps
-* Learn more about [How to use Troubleshooting Guides](how-to-troubleshooting-guides.md).
+* Learn more about [How to use troubleshooting guides](how-to-troubleshooting-guides.md).
* Learn more about [Troubleshoot high CPU utilization](how-to-high-cpu-utilization.md). * Learn more about [High memory utilization](how-to-high-memory-utilization.md). * Learn more about [Troubleshoot high IOPS utilization](how-to-high-io-utilization.md).
postgresql How To Troubleshooting Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshooting-guides.md
description: Learn how to use troubleshooting guides for Azure Database for Post
Previously updated : 04/27/2024 Last updated : 06/25/2024
In this article, you learn how to use troubleshooting guides for Azure Database
## Prerequisites
-To effectively troubleshoot specific issue, you need to make sure you have all the necessary data in place.
-Each troubleshooting guide requires a specific set of data, which is sourced from three separate features: [Diagnostic settings](how-to-configure-and-access-logs.md), [Query Store](concepts-query-store.md), and [Enhanced Metrics](concepts-monitoring.md#enabling-enhanced-metrics).
-All troubleshooting guides require logs to be sent to the Log Analytics workspace, but the specific category of logs to be captured may vary depending on the particular guide.
+To effectively troubleshoot a specific issue, you need to make sure that you have all the necessary data in place.
+Each troubleshooting guide requires a specific set of data, which is sourced from three separate features: [Diagnostic settings](how-to-configure-and-access-logs.md), [Query Store](concepts-query-store.md), and [Enhanced metrics](concepts-monitoring.md#enabling-enhanced-metrics).
+All troubleshooting guides require logs to be sent to a Log Analytics workspace, but the specific category of logs to be captured may vary depending on the particular guide.
-Please follow the steps described in [Configure and Access Logs - Azure Database for PostgreSQL - Flexible Server](howto-configure-and-access-logs.md) to configure diagnostic settings and send the logs to the Log Analytics workspace.
-Query Store, and Enhanced Metrics are configured via the Server Parameters. Please follow the steps described in the configure server parameters in Azure Database for PostgreSQL flexible server articles for [Azure portal](howto-configure-server-parameters-using-portal.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
+Please follow the steps described in [Configure and Access Logs - Azure Database for PostgreSQL - Flexible Server](howto-configure-and-access-logs.md) to configure diagnostic settings and send the logs to a Log Analytics workspace.
-The table below provides information on the required log categories for each troubleshooting guide, as well as the necessary Query Store, Enhanced Metrics and Server Parameters prerequisites.
+Query Store and Enhanced metrics are configured via Server parameters. Please follow the steps described in the configure server parameters in Azure Database for PostgreSQL flexible server articles for [Azure portal](howto-configure-server-parameters-using-portal.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
-| Troubleshooting guide | Diagnostic settings log categories | Query Store | Enhanced Metrics | Server Parameters |
-|:-|:--|-|-|-|
-| Autovacuum Blockers | Azure Database for PostgreSQL flexible server Sessions, Azure Database for PostgreSQL flexible server Database Remaining Transactions | N/A | N/A | N/A |
-| Autovacuum Monitoring | Azure Database for PostgreSQL flexible server Logs, PostgreSQL Tables Statistics, Azure Database for PostgreSQL flexible server Database Remaining Transactions | N/A | N/A | log_autovacuum_min_duration |
-| High CPU Usage | Azure Database for PostgreSQL flexible server Logs, Azure Database for PostgreSQL flexible server Sessions, AllMetrics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
-| High IOPS Usage | Azure Database for PostgreSQL flexible server Query Store Runtime, Azure Database for PostgreSQL flexible server Logs, Azure Database for PostgreSQL flexible server Sessions, Azure Database for PostgreSQL flexible server Query Store Wait Statistics | pgms_wait_sampling.query_capture_mode to ALL | metrics.collector_database_activity | track_io_timing to ON |
-| High Memory Usage | Azure Database for PostgreSQL flexible server Logs, Azure Database for PostgreSQL flexible server Sessions, Azure Database for PostgreSQL flexible server Query Store Runtime | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
-| High Temporary Files | Azure Database for PostgreSQL flexible server Sessions, Azure Database for PostgreSQL flexible server Query Store Runtime, Azure Database for PostgreSQL flexible server Query Store Wait Statistics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
+The table below provides information on the required log categories for each troubleshooting guide, as well as the necessary Query Store, Enhanced metrics, and Server parameters prerequisites.
+
+| Troubleshooting guide | Diagnostic settings log categories and metrics | Query Store | Enhanced metrics | Server parameters |
+|:-|:-|--|-|--|
+| CPU | PostgreSQL Server Logs<br/>PostgreSQL Server Sessions data<br/>PostgreSQL Server Query Store Runtime<br/>AllMetrics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
+| Memory | PostgreSQL Server Logs<br/>PostgreSQL Server Sessions data<br/>PostgreSQL Server Query Store Runtime | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
+| IOPS | PostgreSQL Server Query Store Runtime<br/>PostgreSQL Server Logs<br/>PostgreSQL Server Sessions data<br/>PostgreSQL Server Query Store Wait Statistics | pg_qs.query_capture_mode to TOP or ALL<br/>pgms_wait_sampling.query_capture_mode to ALL | metrics.collector_database_activity | track_io_timing to ON |
+| Temporary files | PostgreSQL Server Sessions data<br/>PostgreSQL Server Query Store Runtime<br/>PostgreSQL Server Query Store Wait Statistics | pg_qs.query_capture_mode to TOP or ALL<br/>pgms_wait_sampling.query_capture_mode to ALL | metrics.collector_database_activity | N/A |
+| Autovacuum monitoring | PostgreSQL Server Logs<br/>PostgreSQL Autovacuum and schema statistics<br/>PostgreSQL remaining transactions | N/A | N/A | log_autovacuum_min_duration |
+| Autovacuum blockers | PostgreSQL Server Sessions data<br/>PostgreSQL remaining transactions | N/A | N/A | N/A |
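The table above is effectively a small prerequisite matrix. As an illustrative sketch only (the helper and its data structure are hypothetical, not part of any Azure tooling; the parameter names and values are taken from the table), a checker that flags which server parameters still need changing for a given guide:

```python
# Illustrative mapping distilled from the prerequisites table above.
REQUIRED_PARAMETERS = {
    "CPU": {"pg_qs.query_capture_mode": {"TOP", "ALL"}},
    "Memory": {"pg_qs.query_capture_mode": {"TOP", "ALL"}},
    "IOPS": {
        "pg_qs.query_capture_mode": {"TOP", "ALL"},
        "pgms_wait_sampling.query_capture_mode": {"ALL"},
        "track_io_timing": {"ON"},
    },
    "Temporary files": {
        "pg_qs.query_capture_mode": {"TOP", "ALL"},
        "pgms_wait_sampling.query_capture_mode": {"ALL"},
    },
}

def missing_prerequisites(guide: str, current: dict) -> list:
    """Return the parameters whose current value doesn't satisfy the guide."""
    return [
        name
        for name, allowed in REQUIRED_PARAMETERS.get(guide, {}).items()
        if current.get(name) not in allowed
    ]

print(missing_prerequisites("IOPS", {"pg_qs.query_capture_mode": "TOP"}))
# → ['pgms_wait_sampling.query_capture_mode', 'track_io_timing']
```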
> [!NOTE]
> Please note that if you have recently enabled diagnostic settings, query store, enhanced metrics or server parameters, it may take some time for the data to be populated. Additionally, if there has been no activity on the database within a certain time frame, the charts might appear empty. In such cases, try changing the time range to capture relevant data. Be patient and allow the system to collect and display the necessary data before proceeding with your troubleshooting efforts.
-## Using Troubleshooting guides
+## Using the troubleshooting guides
-To use troubleshooting guides, follow these steps:
+To use the troubleshooting guides, follow these steps:
1. Open the Azure portal and find an Azure Database for PostgreSQL flexible server instance that you want to examine.
-2. From the left-side menu, open Help > Troubleshooting guides.
+2. From the left-side menu, under the *Monitoring* section, select *Troubleshooting guides*.
3. Navigate to the top of the page where you will find a series of tabs, each representing one of the six problems you may wish to resolve. Click on the relevant tab.

   :::image type="content" source="./media/how-to-troubleshooting-guides/portal-blade-overview.png" alt-text="Screenshot of Troubleshooting guides - tabular view.":::
-4. Select the time range during which the problem occurred.
+4. Select the period of time which you want to analyze.
:::image type="content" source="./media/how-to-troubleshooting-guides/time-range.png" alt-text="Screenshot of time range picker.":::

5. Follow the step-by-step instructions provided by the guide. Pay close attention to the charts and data visualizations plotted within the troubleshooting steps, as they can help you identify any inaccuracies or anomalies. Use this information to effectively diagnose and resolve the problem at hand.
-### Retrieving the Query Text
+### Retrieving the text of queries collected by query store
Due to privacy considerations, certain information such as query text and usernames may not be displayed within the Azure portal.
-To retrieve the query text, you need to log in to your Azure Database for PostgreSQL flexible server instance.
-Access the `azure_sys` database using the PostgreSQL client of your choice, where query store data is stored.
+To retrieve the text of those queries collected by query store, you need to log in to your Azure Database for PostgreSQL flexible server instance.
+Using the PostgreSQL client of your choice, access the `azure_sys` database where query store data is stored.
Once connected, query the `query_store.query_texts_view` view to retrieve the desired query text.
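For example, a lookup composed for a specific query might look like the following. This is a sketch only: the value 12345 stands in for a query text identifier read from a troubleshooting-guide chart, and the column name assumed here should be verified against the view's actual schema on your server.

```python
# Hypothetical: 12345 is an illustrative query text ID, not a real one.
query_text_id = 12345

sql = (
    "SELECT query_sql_text "
    "FROM query_store.query_texts_view "
    f"WHERE query_text_id = {query_text_id};"
)
print(sql)
```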
-In the example shown below, we utilize Azure Cloud Shell and the `psql` tool to accomplish this task:
- :::image type="content" source="./media/how-to-troubleshooting-guides/retrieve-query-text.png" alt-text="Screenshot of retrieving the Query Text.":::
-### Retrieving the Username
+### Retrieving the name of a user or role
For privacy reasons, the Azure portal displays the role ID from the PostgreSQL metadata (pg_catalog) rather than the actual username. To retrieve the username, you can query the `pg_roles` view or use the query shown below in your PostgreSQL client of choice, such as Azure Cloud Shell and the `psql` tool:
To retrieve the username, you can query the `pg_roles` view or use the query sho
SELECT 'UserID'::regrole;
```
+In the following example, you retrieve the name of the user or role whose identifier is 24776.
+
+```sql
+SELECT '24776'::regrole;
+```
+ :::image type="content" source="./media/how-to-troubleshooting-guides/retrieve-username.png" alt-text="Screenshot of retrieving the Username.":::
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
* Support for new [minor versions](concepts-supported-versions.md) 16.3, 15.7, 14.12, 13.15, and 12.19 <sup>$</sup> * General availability of [IOPS scaling](./concepts-storage.md#iops-scaling) on Azure Database for PostgreSQL flexible server. * CMK support for LTR in Public preview [long-term backup retention](concepts-backup-restore.md).
+* Support for [built-in Azure Policy definitions](concepts-security.md#azure-policy-support).
## Release: May 2024

* General availability of Postgres [azure_ai](generative-ai-azure-overview.md) extension.
postgresql Automigration Single To Flexible Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/automigration-single-to-flexible-postgresql.md
+
+ Title: Automigration
+description: This tutorial describes how to configure notifications, review migration details, and find answers to FAQs for an Azure Database for PostgreSQL Single Server instance scheduled for automigration to Flexible Server.
+++ Last updated : 06/04/2024++++
+ - mvc
+ - mode-api
++
+# Automigration from Azure Database for PostgreSQL – Single Server to Flexible Server
++
+**Automigration** from Azure Database for PostgreSQL - Single Server to Flexible Server is a service-initiated migration during a planned downtime window. It applies to Single Server instances running PostgreSQL 11 and database workloads with **Basic, General Purpose, or Memory Optimized SKU**, data storage used **<= 5 GiB**, and **no complex features (CMK, Microsoft Entra ID, Read Replica, Private Link) enabled**. The service identifies eligible servers and sends them advance notifications detailing steps to review migration details and make modifications if necessary.
+
+The automigration provides a highly resilient and self-healing offline migration experience during a planned migration window, with up to **20 minutes** of downtime. The migration service is a hosted solution that uses the [pgcopydb](https://github.com/dimitri/pgcopydb) binary to provide a fast and efficient way of copying databases from the source PostgreSQL instance to the target. This migration removes the overhead of manually migrating your server. Post migration, you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. The key phases of the migration are:
+
+- **Target Flexible Server is deployed** and matches your Single server SKU in terms of performance and cost, inheriting all firewall rules from source Single Server.
+
+- **Data is migrated** during the migration window chosen by the service or elected by you. If the window is chosen by the service, it's typically outside business hours of the specific region the server is hosted in. The source Single Server is set to read-only, and the data & schema are migrated from the source Single Server to the target Flexible Server. User roles, privileges, and ownership of all database objects are also migrated to the Flexible Server.
+
+- **DNS switch and cutover** are performed within the planned migration window with minimal downtime, allowing usage of the same connection string post-migration. Client applications seamlessly connect to the target Flexible Server without any user-driven manual updates or changes. In addition to both connection string formats (Single and Flexible Server) being supported on the migrated Flexible Server, both username formats - username@server_name and username - are also supported.
+
+- The **migrated Flexible Server is online** and can now be managed via Azure portal/CLI.
+
+- The **updated connection strings** to connect to your old single server are shared with you by email. The connection strings can be used to log in to the Single server if you want to copy any settings to your new Flexible server.
+
+- The **legacy Single Server** is deleted **seven days** after the migration.
+
+## Nomination Eligibility
+
+If you own a Single Server workload with no complex features (CMK, Microsoft Entra ID, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for automigration. Submit your server details through this [form](https://forms.office.com/r/4pF55L8TxY).
+
+## Configure migration alerts and review migration schedule
+
+Servers eligible for automigration are sent an advance notification by the service.
+
+You can check and configure automigration notifications in the following ways:
+
+- Subscription owners for Single Servers scheduled for automigration receive an email notification.
+- Configure **service health alerts** to receive automigration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification).
+- Check the automigration **notification on the Azure portal** by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal).
+
+Once you receive the automigration notification, you can review your migration schedule in the following ways:
+
+> [!NOTE]
+> The migration schedule will be locked 7 days prior to the scheduled migration window during which you'll be unable to reschedule.
+
+- The **Single Server overview page** for your instance displays a portal banner with information about your migration schedule.
+- For Single Servers scheduled for automigration, the **Overview** page is updated with the relevant information. You can review the migration schedule by navigating to the Overview page of your Single Server instance.
+- If you wish to defer the migration, you can defer by a month at a time on the Azure portal. You can reschedule the migration by selecting another migration window within a month.
+
+> [!NOTE]
+> Typically, candidate servers short-listed for automigration don't use cross-region or geo-redundant backups, and these features can only be enabled at create time for a PostgreSQL Flexible Server. If you plan to use any of these features, it's recommended to opt out of the automigration schedule and migrate your server manually.
+
+## Prerequisite checks for automigration
+
+Review the following prerequisites to ensure a successful automigration:
+
+- The Single Server instance should be in **ready state** during the planned migration window for automigration to take place.
+- For Single Server instances with **SSL enabled**, ensure you have all certificates (**[DigiCertGlobalRootG2 Root CA](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and [DigiCertGlobalRootCA Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)**) available in the trusted root store. Additionally, if you have a certificate pinned in the connection string, create a combined CA certificate from these certificates before the scheduled automigration to ensure business continuity post-migration.
+- If your source Azure Database for PostgreSQL Single Server has firewall rule names exceeding 80 characters, rename them so that each name is 80 characters or fewer. (The firewall rule name length supported on Flexible Server is 80 characters, whereas on Single Server the allowed length is 128 characters.)
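To spot offending rule names before the migration window, one option is the Azure CLI with a JMESPath filter; a minimal sketch, assuming a hypothetical resource group `my-rg` and server `my-single-server`:

```shell
# List Single Server firewall rules whose names exceed the 80-character
# limit enforced on Flexible Server (Single Server allows up to 128).
az postgres server firewall-rule list \
  --resource-group my-rg \
  --server-name my-single-server \
  --query '[?length(name) > `80`].name' \
  --output tsv
```

Any names returned should be renamed (recreated with a shorter name) before the scheduled automigration.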
+
+## How is the target PostgreSQL Flexible Server provisioned?
+
+The compute tier and SKU for the target Flexible Server are provisioned based on the source Single Server's pricing tier and vCores, as shown below.
+
+| Single Server Pricing Tier | Single Server VCores | Flexible Server Tier | Flexible Server SKU Name |
+| --- | --- | :---: | :---: |
+| Basic | 1 | Burstable | B1ms |
+| Basic | 2 | Burstable | B2s |
+| General Purpose | 2 | GeneralPurpose | Standard_D2s_v3 |
+| General Purpose | 4 | GeneralPurpose | Standard_D4s_v3 |
+| General Purpose | 8 | GeneralPurpose | Standard_D8s_v3 |
+| General Purpose | 16 | GeneralPurpose | Standard_D16s_v3 |
+| General Purpose | 32 | GeneralPurpose | Standard_D32s_v3 |
+| General Purpose | 64 | GeneralPurpose | Standard_D64s_v3 |
+| Memory Optimized | 2 | MemoryOptimized | Standard_E2s_v3 |
+| Memory Optimized | 4 | MemoryOptimized | Standard_E4s_v3 |
+| Memory Optimized | 8 | MemoryOptimized | Standard_E8s_v3 |
+| Memory Optimized | 16 | MemoryOptimized | Standard_E16s_v3 |
+| Memory Optimized | 32 | MemoryOptimized | Standard_E32s_v3 |
+
+- The PostgreSQL version, region, connection string, subscription, and resource group for the target Flexible Server remain the same as those of the source Single Server.
+- For Single Servers with less than 20 GiB of storage, the storage size is set to 32 GiB, as that is the minimum storage limit on Azure Database for PostgreSQL - Flexible Server.
+- For Single Servers with a greater storage requirement, storage equivalent to 1.25 times (that is, 25% more than) the storage being used on the Single Server is allocated. During the initial base copy of data, multiple insert statements are executed on the target, which generates WALs (Write Ahead Logs). Until these WALs are archived, the logs consume storage at the target, hence the margin of safety.
+- Both username formats - username@server_name (Single Server) and username (Flexible Server) - are supported on the migrated Flexible Server.
+- Both connection string formats - Single Server and Flexible Server - are supported on the migrated Flexible Server.
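To illustrate the two supported username formats, here's a minimal sketch using `psql`, assuming a hypothetical server `my-server` and admin user `pgadmin`:

```shell
# Flexible Server format: plain username.
psql "host=my-server.postgres.database.azure.com port=5432 dbname=postgres user=pgadmin sslmode=require"

# Single Server format: username@server_name also keeps working post-migration.
psql "host=my-server.postgres.database.azure.com port=5432 dbname=postgres user=pgadmin@my-server sslmode=require"
```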
+
+## Post-migration steps
+
+Here's what you need to know post-automigration:
+
+- The server parameters in Flexible Server are tuned to community standards. If you want to retain the same server parameter values as your Single Server, you can log in via PowerShell and run the script [here](https://github.com/hariramt/auto-migration/tree/main) to copy the parameter values.
+- To enable [query perf insights](../flexible-server/concepts-query-performance-insight.md), you need to enable query store on the Flexible Server, which isn't enabled by default.
+- If [High Availability](../../reliability/reliability-postgresql-flexible-server.md) is needed, you can enable it with zero downtime.
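Query store is controlled through the `pg_qs.query_capture_mode` server parameter; a minimal sketch of turning it on with the Azure CLI, assuming a hypothetical resource group `my-rg` and server `my-flex-server`:

```shell
# Enable query store on the migrated Flexible Server so that
# Query Performance Insight has data to report on.
az postgres flexible-server parameter set \
  --resource-group my-rg \
  --server-name my-flex-server \
  --name pg_qs.query_capture_mode \
  --value TOP
```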
+
+## Frequently Asked Questions (FAQs)
+
+**Q. Why am I being auto-migrated?**
+
+**A.** Your Azure Database for PostgreSQL - Single Server instance is eligible for automigration to our flagship offering, Azure Database for PostgreSQL - Flexible Server. This automigration removes the overhead of manually migrating your server. You can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows.
+
+**Q. How does the automigration take place? What does it migrate?**
+
+**A.** The Flexible Server is provisioned to closely match the vCores and storage of your Single Server. Next, the source Single Server is put into a read-only state, and the schema and data are copied to the target Flexible Server. The DNS switch is performed to route all existing connections to the target, and the target Flexible Server is brought online. The automigration migrates the databases (including schema, data, users/roles, and privileges). The migration is offline, with downtime of up to 20 minutes.
+
+**Q. How can I set up or view automigration alerts?**
+
+**A.** You can set up alerts in the following ways:
+
+- Configure service health alerts to receive automigration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification).
+- Check the automigration notification on the Azure portal by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal).
+
+**Q. How can I defer the scheduled migration of my Single Server?**
+
+**A.** You can review the migration schedule by navigating to the Overview page of your Single Server instance on the Azure portal. If you wish to defer the migration, you can defer it by a month at a time by selecting another migration window within the following month. The migration details are locked seven days before the scheduled migration window, after which you're unable to reschedule. The automigration can be deferred monthly until 30 March 2025.
+
+**Q. How can I opt out of a scheduled automigration of my Single Server?**
+
+**A.** If you wish to opt out of the automigration, you can raise a support ticket for this purpose.
+
+**Q. What username and connection string formats are supported for the migrated Flexible Server?**
+
+**A.** Both username formats - username@server_name (Single Server format) and username (Flexible Server format) - are supported for the migrated Flexible Server, so you aren't required to update them to maintain application continuity post-migration. Additionally, both connection string formats (Single and Flexible Server format) are also supported for the migrated Flexible Server.
+
+**Q. Why do I see a pricing difference on my potential move from PostgreSQL Basic Single Server to PostgreSQL Flexible Server?**
+
+**A.** A few servers might see a small price revision after migration, as the minimum storage limit on the two offerings differs (5 GiB on Single Server and 32 GiB on Flexible Server). Storage cost for Flexible Server is marginally higher than Single Server. Any price increase is offset through better throughput and performance compared to Single Server. For more information, see [Flexible Server pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
+
+## Related content
+
+- [Manage an Azure Database for PostgreSQL - Flexible Server using the Azure portal.](../flexible-server/how-to-manage-server-portal.md)
reliability Asm Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/asm-retirement.md
Below is a list of classic resources being retired, their retirement dates, and
|[Classic Resource Providers](https://azure.microsoft.com/updates/azure-classic-resource-providers-will-be-retired-on-31-august-2024/) | Aug 2024 |[Migrate Classic Resource Providers to ARM](/azure/azure-resource-manager/management/deployment-models?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | | |[Integration Services Environment](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) | Aug 2024 |[Migrate Integration Services Environment to ARM](/azure/logic-apps/export-from-ise-to-standard-logic-app?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | [ISE Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%2265e73690-23aa-be68-83be-a6b9bd188345%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%224401dcbe-4183-6d63-7b0c-313ce7c4a496%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D)| |[Microsoft HPC Pack](/powershell/high-performance-computing/burst-to-cloud-services-retirement-guide?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |Aug 2024| [Migrate Microsoft HPC Pack to ARM](/powershell/high-performance-computing/burst-to-cloud-services-retirement-guide)|[HPC Pack Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22e00b1ed8-fc24-fef4-6f4c-36d963708ae1%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%22b0d0a49b-0eff-12cd-a955-7e9d6cd809d4%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
-|[Virtual WAN](/azure/virtual-wan/virtual-wan-faq#update-router?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | Aug 2024 | [Migrate Virtual WAN to ARM](/azure/virtual-wan/virtual-wan-faq?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#update-router) |[Virtual WAN Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22d3b69052-33aa-55e7-6d30-ebb7040f9766%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%229fce0565-284f-2521-c1ac-6c80f954b323%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Virtual WAN hubs on Cloud Services](/azure/virtual-wan/virtual-wan-faq#update-router?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | July 2024 | [Migrate Virtual WAN to ARM](/azure/virtual-wan/virtual-wan-faq?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#update-router) |[Virtual WAN Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22d3b69052-33aa-55e7-6d30-ebb7040f9766%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%229fce0565-284f-2521-c1ac-6c80f954b323%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
|[Classic Storage](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/) | Aug 2024 | [Migrate Classic Storage to ARM](/azure/storage/common/classic-account-migration-overview?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Classic Storage](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%226a9c20ed-85c7-c289-d5e2-560da8f2a7c8%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%2212adcfc2-182a-874a-066e-dda77370890a%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) | |[Classic Virtual Network](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 2024 | [Migrate Classic Virtual Network to ARM]( /azure/virtual-network/migrate-classic-vnet-powershell?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Virtual network Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22b25271d3-6431-dfbc-5f12-5693326809b3%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%227b487f07-f200-85b5-f3e1-0a2d40b71fef%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D)| |[Classic Application Gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 2024 | [Migrate Classic Application Gateway to ARM](/azure/application-gateway/classic-to-resource-manager?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |[Application Gateway 
Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22101732bb-31af-ee61-7c16-d4ad77c86a50%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%228b2086bf-19da-8ab5-41dc-ad9eadc6e9b3%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D)|
resource-mover Support Matrix Move Region Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-azure-vm.md
# Support for moving Azure VMs between Azure regions > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes support and prerequisites when you move virtual machines and related network resources across Azure regions using Resource Mover.
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
- ignite-2023 - build-2024 Previously updated : 06/11/2024 Last updated : 07/02/2024 # Configure a search service to connect using a managed identity in Azure AI Search
A search service uses Azure Storage as an indexer data source and as a data sink
<sup>1</sup> For connectivity between search and storage, your network security configuration imposes constraints on which type of managed identity you can use. Only a system managed identity can be used for a same-region connection to storage via the trusted service exception or resource instance rule. See [Access to a network-protected storage account](search-indexer-securing-resources.md#access-to-a-network-protected-storage-account) for details.
-<sup>2</sup> For enrichment caching in Azure table storage, the search service currently can't connect to tables on a storage account that has [shared key access turned off](../storage/common/shared-key-authorization-prevent.md).
+<sup>2</sup> The Azure AI Search service currently can't connect to tables on a storage account that has [shared key access turned off](../storage/common/shared-key-authorization-prevent.md).
<sup>3</sup> Connections to Azure OpenAI or Azure AI include: [Custom skill](cognitive-search-custom-skill-interface.md), [Custom vectorizer](vector-search-vectorizer-custom-web-api.md), [Azure OpenAI embedding skill](cognitive-search-skill-azure-openai-embedding.md), [Azure OpenAI vectorizer](vector-search-how-to-configure-vectorizer.md), [AML skill](cognitive-search-aml-skill.md), [Azure AI Studio model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md), [Azure AI Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md), [Azure AI Vision vectorizer](vector-search-vectorizer-ai-services-vision.md).
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
Last updated 06/28/2024
-# Connect to Azure AI Search using key authentication
+# Connect to Azure AI Search using keys
-Azure AI Search offers key-based authentication that you can use on connections to your search service. An API key is a unique string composed of 52 randomly generated numbers and letters. A request made to a search service endpoint is accepted if both the request and the API key are valid.
+Azure AI Search offers key-based authentication for connections to your search service. An API key is a unique string composed of 52 randomly generated numbers and letters. In your source code, you can specify it as an [environment variable](/azure/ai-services/cognitive-services-environment-variables) or as an app setting in your project, and then reference the variable on the request. A request made to a search service endpoint is accepted if both the request and the API key are valid.
-Key-based authentication is the default. You can replace it with [role-based access](search-security-enable-roles.md), which eliminates the need for hardcoded keys in your code.
+Key-based authentication is the default.
+
+You can replace it with [role-based access](search-security-enable-roles.md), which eliminates the need for hardcoded keys in your codebase.
## Types of API keys
search Search Security Enable Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-enable-roles.md
Last updated 06/18/2024
# Enable or disable role-based access control in Azure AI Search
-If you want to use roles for authorized access to Azure AI Search, this article explains how to enable role-based access control for your search service.
+Before you can assign roles for authorized access to Azure AI Search, enable role-based access control on your search service.
Role-based access for data plane operations is optional, but recommended as the more secure option. The alternative is [key-based authentication](search-security-api-keys.md), which is the default.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Last updated 06/03/2024
-# Connect to Azure AI Search using role-based access controls
+# Connect to Azure AI Search using roles
Azure provides a global authentication and [role-based authorization system](../role-based-access-control/role-assignments-portal.yml) for all services running on the platform. In Azure AI Search, you can assign Azure roles for:
service-bus-messaging Service Bus Dead Letter Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dead-letter-queues.md
Deferred messages won't be purged and moved to the dead-letter queue after they
## Errors while processing subscription rules
-If you enable dead-lettering on filter evaluation exceptions, any errors that occur while a subscription's SQL filter rule executes are captured in the DLQ along with the offending message. Don't use this option in a production environment in which not all message types have subscribers.
+If you enable dead-lettering on filter evaluation exceptions, any errors that occur while a subscription's SQL filter rule executes are captured in the DLQ along with the offending message. Don't use this option in a production environment where some message types sent to the topic have no subscribers, as this can result in a large load of DLQ messages. Ensure that all messages sent to the topic have at least one matching subscription.
## Application-level dead-lettering
service-connector Tutorial Python Aks Openai Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-openai-connection-string.md
+ Last updated 05/07/2024
service-connector Tutorial Python Aks Openai Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-openai-workload-identity.md
+ Last updated 05/07/2024
site-recovery Azure Stack Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-stack-site-recovery.md
# Replicate Azure Stack VMs to Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article shows you how to set up disaster recovery Azure Stack VMs to Azure, using the [Azure Site Recovery service](site-recovery-overview.md).
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
# Support matrix for Azure VM disaster recovery between Azure regions > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes support and prerequisites for disaster recovery of Azure VMs from one Azure region to another, using the [Azure Site Recovery](site-recovery-overview.md) service.
site-recovery Azure To Azure Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
# Troubleshoot Azure-to-Azure VM replication errors > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to troubleshoot common errors in Azure Site Recovery during replication and recovery of [Azure virtual machines](azure-to-azure-tutorial-enable-replication.md) (VM) from one region to another. For more information about supported configurations, see the [support matrix for replicating Azure VMs](azure-to-azure-support-matrix.md).
site-recovery Azure Vm Disaster Recovery With Accelerated Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking.md
# Accelerated Networking with Azure virtual machine disaster recovery > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types. The following picture shows communication between two VMs with and without accelerated networking:
site-recovery How To Enable Replication Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
Last updated 04/29/2024
# Replicate virtual machines running in a proximity placement group to another region > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to replicate, fail over, and fail back Azure virtual machines (VMs) running in a proximity placement group to a secondary region.
site-recovery Site Recovery Failover To Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-failover-to-azure-troubleshoot.md
# Troubleshoot errors when failing over VMware VM or physical machine to Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
You may receive one of the following errors while doing failover of a virtual machine to Azure. To troubleshoot, use the described steps for each error condition.
site-recovery Site Recovery Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new-archive.md
Last updated 12/27/2023
# Archive for What's new in Site Recovery > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article contains information on older features and updates in the Azure Site Recovery service. The primary [What's new in Azure Site Recovery](./site-recovery-whats-new.md) article contains the latest updates.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
# What's new in Site Recovery

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The [Azure Site Recovery](site-recovery-overview.md) service is updated and improved on an ongoing basis. To help you stay up-to-date, this article provides you with information about the latest releases, new features, and new content. This page is updated regularly.
site-recovery Vmware Azure Disaster Recovery Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-disaster-recovery-powershell.md
# Set up disaster recovery of VMware VMs to Azure with PowerShell

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In this article, you see how to replicate and fail over VMware virtual machines to Azure using Azure PowerShell.
site-recovery Vmware Azure Install Linux Master Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md
Last updated 03/07/2024
# Install a Linux master target server for failback

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
After you fail over your virtual machines to Azure, you can fail back the virtual machines to the on-premises site. To fail back, you need to reprotect the virtual machine from Azure to the on-premises site. For this process, you need an on-premises master target server to receive the traffic.
site-recovery Vmware Azure Install Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-mobility-service.md
# Prepare source machine for push installation of mobility agent

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
When you set up disaster recovery for VMware VMs and physical servers using [Azure Site Recovery](site-recovery-overview.md), you install the [Site Recovery Mobility service](vmware-physical-mobility-service-overview.md) on each on-premises VMware VM and physical server. The Mobility service captures data writes on the machine, and forwards them to the Site Recovery process server.
site-recovery Vmware Azure Mobility Install Configuration Mgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-mobility-install-configuration-mgr.md
Last updated 05/02/2022
# Automate Mobility Service installation

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to automate installation and updates for the Mobility Service agent in [Azure Site Recovery](site-recovery-overview.md).
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
# Support matrix for disaster recovery of VMware VMs and physical servers to Azure

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes supported components and settings for disaster recovery of VMware VMs and physical servers to Azure using [Azure Site Recovery](site-recovery-overview.md).
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Last updated 03/07/2024
# Manage the Mobility agent

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
You set up the Mobility agent on your server when you use Azure Site Recovery for disaster recovery of VMware VMs and physical servers to Azure. The Mobility agent coordinates communications between your protected machine and the configuration server/scale-out process server, and manages data replication. This article summarizes common tasks for managing the Mobility agent after it's deployed.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
# About the Mobility service for VMware VMs and physical servers

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
When you set up disaster recovery for VMware virtual machines (VMs) and physical servers using [Azure Site Recovery](site-recovery-overview.md), you install the Site Recovery Mobility service on each on-premises VMware VM and physical server. The Mobility service captures data writes on the machine and forwards them to the Site Recovery process server. The Mobility service is installed by the Mobility service agent software that you can deploy using the following methods:
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-with-custom-container-image.md
Last updated 06/27/2024
# Deploy an application with a custom container image

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
> [!NOTE]
> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
# Mount SMB Azure file shares on Linux clients

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Azure file shares can be mounted in Linux distributions using the [SMB kernel client](https://wiki.samba.org/index.php/LinuxCIFS).
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/partner-overview.md
This article highlights Microsoft partner companies integrated with Azure Storag
|![Atempo](./media/atempo-logo.png) |**Atempo**<br>Atempo Miria empowers you to manage complex file workflows including migration, backup, archive, and synchronization in heterogenous environments. Atempo Miria has a compatibility guide allowing you to implement efficient data workflows between NAS, parallel FS, object, tape, and optical disk. The association of Azure and Atempo Miria allows customers to deploy any file workflow from on-premises to Azure or from cloud to Azure. |[Partner page](https://www.atempo.com/products/miria-for-archiving-large-file-sets/)|
|![Cirrus company logo](./media/cirrus-logo.jpg) |**Cirrus Data**<br>Cirrus Data Solutions is a block storage data migration solution for both on-premises and cloud environments. An end-to-end approach allows you to migrate your data from on-premises to the cloud, between storage tiers within the cloud, and seamlessly migrate between public clouds. |[Partner Page](https://www.cirrusdata.com/cloud-migration/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/cirrusdatasolutionsinc1618222951068.cirrus-migrate-cloud)|
|![Commvault company logo](./media/commvault-logo.jpg) |**Commvault**<br>Optimize, protect, migrate, and index your data using Microsoft infrastructure with Commvault. Take control of your data with Commvault Complete Data Protection, the Microsoft-centric and Azure-centric data management solution. Commvault provides the tools you need to manage, migrate, access, and recover your data no matter where it resides, while reducing cost and risk.|[Partner Page](https://www.commvault.com/complete-data-protection)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/commvault.commvault)|
-|![Data Dynamics company logo](./media/datadyn-logo.png) |**Data Dynamics**<br>Data Dynamics provides enterprise solutions to manage unstructured data for hybrid and multi-cloud environments. Their Unified Unstructured Data Management Platform uses analytics and automation to help you intelligently and efficiently move data from heterogenous storage environments (SMB, NFS, or S3 Object) into Azure. The platform provides seamless integration, enterprise scale, and performance that enables the efficient management of data for hybrid and multi-cloud environments. Use cases include: intelligent cloud migration, disaster recovery, archive, backup, and infrastructure optimization and data management. |[Partner page](https://www.datadynamicsinc.com/partners-2/)|
-![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi can optimize your unstructured storage environments. DobiMigrate is enterprise-class software that gets your file and object data ΓÇô safely, quickly, easily, and cost effectively ΓÇô to Azure. Focus on value-added activities instead of time-consuming migration tasks. Grow your storage footprint without CAPEX investments.|[Partner page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview)|
+|![Data Dynamics company logo](./media/datadyn-logo.png) |**Data Dynamics**<br>Data Dynamics provides enterprise solutions to manage unstructured data for hybrid and multicloud environments. Their Unified Unstructured Data Management Platform uses analytics and automation to help you intelligently and efficiently move data from heterogenous storage environments (SMB, NFS, or S3 Object) into Azure. The platform provides seamless integration, enterprise scale, and performance that enables the efficient management of data for hybrid and multicloud environments. Use cases include: intelligent cloud migration, disaster recovery, archive, backup, and infrastructure optimization and data management. |[Partner page](https://www.datadynamicsinc.com/partners-2/)|
+|![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi can optimize your unstructured storage environments. DobiMigrate is enterprise-class software that gets your file and object data to Azure safely, quickly, easily, and cost effectively. Focus on value-added activities instead of time-consuming migration tasks. Grow your storage footprint without CAPEX investments.|[Partner page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview)|
|![Informatica company logo](./media/informatica-logo.png) |**Informatica**<br>Informatica's enterprise-scale, cloud-native data management platform automates and accelerates the discovery, delivery, quality, and governance of enterprise data on Azure. AI-powered, metadata-driven data integration, and data quality and governance capabilities enable you to modernize analytics and accelerate your move to a data warehouse or to a data lake on Azure.|[Partner page](https://www.informatica.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.annualiics?tab=Overview)|
-|![Komprise company logo](./media/komprise-logo.png) |**Komprise**<br>Komprise enables visibility across silos to manage file and object data and save costs. Komprise Intelligent Data Management software lets you consistently analyze, move, and manage data across clouds.<br><br>Komprise helps you to analyze data growth across any network attached storage (NAS) and object storage to identify significant cost savings. You can also archive cold data to Azure, and runs data migrations, transparent data archiving, and data replications to Azure Files and Blob storage. Patented Komprise Transparent Move Technology enables you to archive files without changing user access. Global search and tagging enables virtual data lakes for AI, big data, and machine learning applications. |[Partner page](https://www.komprise.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview)
-|![Peer company logo](./media/peer-logo.png) |**Peer Software**<br>Peer Software provides real-time file management solutions for hybrid and multi-cloud environments. Key use cases include high availability for user and application data across branch offices, Azure regions and availability zones, file sharing with version integrity, and migration to file or object storage with minimal cutover downtime. |[Partner page](https://go.peersoftware.com/azure_file_management_solutions)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/peer-software-inc.peergfs?tab=overview)
-|![Privacera company logo](./media/privacera-logo.png) |**Privacera**<br>Privacera provides a unified system for data governance and security across multiple cloud services and analytical platforms. Privacera enables IT and data platform teams to democratize data for analytics, while ensuring compliance with privacy regulations.  |[Partner page](https://privacera.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/globaltenetincdbaprivacera1585932150924.privacera_platform)
-|![Tiger Technology company logo.](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure data & storage management software solutions. It enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through an on-premises-first hybrid model. Specializes in migrating mission-critical workflows, application servers, and NAS/tape-to-cloud migrations. Tiger Bridge is a non-proprietary, software-only data management solution. It blends a file system and multi-tier cloud storage into a single space and enables hybrid workflows. Tiger Bridge addresses several data management challenges: file server and application server extension, migration, disaster recovery, backup and archive, and multi-site sync. It also offers continuous data protection and ransomware protection capabilities. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tiger-technology.tiger_bridge_saas_soft_only)|
+|![Komprise company logo](./media/komprise-logo.png) |**Komprise**<br>Komprise enables visibility across silos to manage file and object data and save costs. Komprise Intelligent Data Management software lets you consistently analyze, move, and manage data across clouds.<br><br>Komprise helps you to analyze data growth across any network attached storage (NAS) and object storage to identify significant cost savings. You can also archive cold data to Azure, and run data migrations, transparent data archiving, and data replications to Azure Files and Blob storage. Patented Komprise Transparent Move Technology enables you to archive files without changing user access. Global search and tagging enable virtual data lakes for AI, big data, and machine learning applications. |[Partner page](https://www.komprise.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview)|
+|![Peer company logo](./media/peer-logo.png) |**Peer Software**<br>Peer Software provides real-time file management solutions for hybrid and multicloud environments. Key use cases include high availability for user and application data across branch offices, Azure regions and availability zones, file sharing with version integrity, and migration to file or object storage with minimal cutover downtime. |[Partner page](https://go.peersoftware.com/azure_file_management_solutions)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/peer-software-inc.peergfs?tab=overview)
+|![Privacera company logo](./media/privacera-logo.png) |**Privacera**<br>Privacera provides a unified system for data governance and security across multiple cloud services and analytical platforms. Privacera enables IT and data platform teams to democratize data for analytics, while ensuring compliance with privacy regulations.  |[Partner page](https://privacera.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/globaltenetincdbaprivacera1585932150924.privacera_platform)|
+|![Tape Ark company logo.](./media/tapeark-logo.png)|**Tape Ark**<br>Tape Ark is focused on migration of data from physical tapes to Azure Storage, allowing customers to break free from the management and costs of legacy tapes. Tape Ark offers a fully managed SaaS service allowing companies to go completely tape free. Retrieve the legacy tapes from offsite storage, send them to Tape Ark, and Tape Ark will migrate the media (irrespective of age, type, or format) to Azure Storage. This provides a fully cloud-based tape management and restore platform. Tape Ark has built mass data ingest facilities in Australia, India, the UK, the USA, and Canada.|[Partner page](https://www.tapeark.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/globaldataanalyticsptyltdtatapeark1636285238780.tapearkrestoresaas?tab=Overview)|
+|![Tiger Technology company logo.](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure data & storage management software solutions. It enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through an on-premises-first hybrid model. Specializes in migrating mission-critical workflows, application servers, and NAS/tape-to-cloud migrations. Tiger Bridge is a non-proprietary, software-only data management solution. It blends a file system and multi-tier cloud storage into a single space and enables hybrid workflows. Tiger Bridge addresses several data management challenges: file server and application server extension, migration, disaster recovery, backup and archive, and multi-site sync. It also offers continuous data protection and ransomware protection capabilities. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tiger_bridge_saas_soft_only)|
-Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).
+Are you a storage partner but your solution isn't listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).
## Next steps To learn more about some of our other partners, see:
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
There are some limitations that you might see in Delta Lake support in serverles
- Serverless SQL pools don't support time travel queries. Use Apache Spark pools in Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
- Serverless SQL pools don't support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Synapse Analytics to [update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).
- You can't [store query results to storage in Delta Lake format](create-external-table-as-select.md) by using the CETAS command. The CETAS command supports only Parquet and CSV as the output formats.
-- Serverless SQL pools in Synapse Analytics are compatible with Delta reader version 1. The Delta features that require Delta readers with version 2 or higher (for example [column mapping](https://github.com/delta-io/delt#reader-requirements-for-column-mapping)) are not supported in the serverless SQL pools.
+- Serverless SQL pools in Synapse Analytics are compatible with **Delta reader version 1**.
- Serverless SQL pools in Synapse Analytics don't support the datasets with the [BLOOM filter](/azure/databricks/optimizations/bloom-filters). The serverless SQL pool ignores the BLOOM filters.
- Delta Lake support isn't available in dedicated SQL pools. Make sure that you use serverless SQL pools to query Delta Lake files.
- For more information about known issues with serverless SQL pools, see [Azure Synapse Analytics known issues](../known-issues.md).
-### Column rename in Delta table is not supported
+### Serverless SQL pools support Delta Lake version 1
+
+Serverless SQL pools can read only Delta Lake version 1. Serverless SQL pools are a [Delta reader with level 1](https://github.com/delta-io/delt#reader-version-requirements) and don't support the following features:
+- Column mappings are ignored; serverless SQL pools return the original column names.
+- Delete vectors are ignored, and the old versions of deleted or updated rows are returned (possibly producing wrong results).
+- The following Delta Lake features are not supported: [V2 checkpoints](https://github.com/delta-io/delt#vacuum-protocol-check).
+
+#### Delete vectors are ignored
+
+If your Delta Lake table is configured to use Delta writer version 7, it stores deleted rows and old versions of updated rows in delete vectors (DVs). Because serverless SQL pools are level 1 Delta readers, they ignore the delete vectors and can produce **wrong results** when reading an unsupported Delta Lake version.
+
+#### Column rename in a Delta table isn't supported
The serverless SQL pool doesn't support querying Delta Lake tables with [renamed columns](https://docs.delta.io/latest/delta-batch.html#rename-columns). The serverless SQL pool can't read data from a renamed column.
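The minimum reader version a Delta table requires is recorded in the `protocol` action of its transaction log (`_delta_log`). As a rough way to check whether a table is still readable by a level 1 reader such as serverless SQL pools, you can scan the log; this is a minimal sketch assuming local access to the table directory (the helper name is illustrative, not part of any Azure tooling):

```python
import json
from pathlib import Path

def delta_protocol_versions(table_path):
    """Scan the Delta transaction log for the latest protocol action
    and return (minReaderVersion, minWriterVersion)."""
    log_dir = Path(table_path) / "_delta_log"
    reader, writer = None, None
    # Commit files are zero-padded, so lexicographic order is commit order.
    for commit in sorted(log_dir.glob("*.json")):
        for line in commit.read_text().splitlines():
            action = json.loads(line)
            if "protocol" in action:
                proto = action["protocol"]
                reader = proto.get("minReaderVersion")
                writer = proto.get("minWriterVersion")
    return reader, writer
```

If the reported `minReaderVersion` is greater than 1, the table relies on features (such as column mapping or delete vectors) that a level 1 reader ignores, so serverless SQL pool query results can be wrong or incomplete.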
update-manager Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md
# Support matrix for Azure Update Manager

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly.
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly.
This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Azure Update Manager. The article includes the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure virtual machines (VMs) or machines managed by Azure Arc-enabled servers.
virtual-desktop Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-recommendations.md
description: Learn about recommendations for helping keep your Azure Virtual Des
Previously updated : 01/09/2024 Last updated : 06/03/2024
# Security recommendations for Azure Virtual Desktop
The following table summarizes our recommendations for each scenario.
| Trust level scenario | Recommended solution |
|--|--|
-| Users from one organization with standard privileges | Use a Windows Enterprise multi-session OS. |
+| Users from one organization with standard privileges | Use a Windows Enterprise multi-session operating system (OS). |
| Users require administrative privileges | Use a personal host pool and assign each user their own session host. |
| Users from different organizations connecting | Separate Azure tenant and Azure subscription |
By restricting operating system capabilities, you can strengthen the security of
## Trusted launch
-Trusted launch are Gen2 Azure VMs with enhanced security features aimed to protect against bottom-of-the-stack threats through attack vectors such as rootkits, boot kits, and kernel-level malware. The following are the enhanced security features of trusted launch, all of which are supported in Azure Virtual Desktop. To learn more about trusted launch, visit [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
+Trusted launch VMs are Azure VMs with enhanced security features that protect against persistent attack techniques, such as bottom-of-the-stack threats delivered through attack vectors like rootkits, boot kits, and kernel-level malware. Trusted launch allows for secure deployment of VMs with verified boot loaders, OS kernels, and drivers, and also protects keys, certificates, and secrets in the VMs. Learn more about trusted launch at [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
-### Enable trusted launch as default
+When you add session hosts using the Azure portal, the default security type is **Trusted virtual machines**. This ensures that your VM meets the mandatory requirements for Windows 11. For more information about these requirements, see [Virtual machine support](/windows/whats-new/windows-11-requirements#virtual-machine-support).
-Trusted launch protects against advanced and persistent attack techniques. This feature also allows for secure deployment of VMs with verified boot loaders, OS kernels, and drivers. Trusted launch also protects keys, certificates, and secrets in the VMs. Learn more about trusted launch at [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
+## Azure confidential computing virtual machines
-When you add session hosts using the Azure portal, the security type automatically changes to **Trusted virtual machines**. This ensures that your VM meets the mandatory requirements for Windows 11. For more information about these requirements, see [Virtual machine support](/windows/whats-new/windows-11-requirements#virtual-machine-support).
+Azure Virtual Desktop support for [Azure confidential computing](../confidential-computing/overview.md) virtual machines ensures a user's virtual desktop is encrypted in memory, protected in use, and backed by a hardware root of trust.
-## Azure Confidential computing virtual machines
+Deploying confidential virtual machines with Azure Virtual Desktop gives users access to Microsoft 365 and other applications on session hosts that use hardware-based isolation, which hardens isolation from other virtual machines, the hypervisor, and the host OS. Memory encryption keys are generated and safeguarded by a dedicated secure processor inside the CPU that can't be read from software. For more information, including the VM sizes available, see the [Azure confidential computing overview](../confidential-computing/overview.md).
-Azure Virtual Desktop support for Azure Confidential computing virtual machines ensures a userΓÇÖs virtual desktop is encrypted in memory, protected in use, and backed by hardware root of trust. Azure Confidential computing VMs for Azure Virtual Desktop are compatible with [supported operating systems](prerequisites.md#operating-systems-and-licenses). Deploying confidential VMs with Azure Virtual Desktop gives users access to Microsoft 365 and other applications on session hosts that use hardware-based isolation, which hardens isolation from other virtual machines, the hypervisor, and the host OS. These virtual desktops are powered by the latest Third-generation (Gen 3) Advanced Micro Devices (AMD) EPYCΓäó processor with Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP) technology. Memory encryption keys are generated and safeguarded by a dedicated secure processor inside the AMD CPU that can't be read from software. For more information, see the [Azure Confidential computing overview](../confidential-computing/overview.md).
+The following operating systems are supported for use as session hosts with confidential virtual machines on Azure Virtual Desktop, for versions that are in active support. For support dates, see [Microsoft Lifecycle Policy](/lifecycle/).
-The following operating systems are supported for use as session hosts with confidential VMs on Azure Virtual Desktop:
-
-- Windows 11 Enterprise, version 22H2
-- Windows 11 Enterprise multi-session, version 22H2
+- Windows 11 Enterprise
+- Windows 11 Enterprise multi-session
+- Windows 10 Enterprise
+- Windows 10 Enterprise multi-session
- Windows Server 2022
- Windows Server 2019
-You can create session hosts using confidential VMs when you [create a host pool](create-host-pool.md) or [add session hosts to a host pool](add-session-hosts-host-pool.md).
+You can create session hosts using confidential virtual machines when you [deploy Azure Virtual Desktop](deploy-azure-virtual-desktop.md) or [add session hosts to a host pool](add-session-hosts-host-pool.md).
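For orientation, a confidential VM session host can also be sketched with the Azure CLI. This is a minimal illustration, not the documented portal flow: the resource group, VM name, image URN, and DCasv5 size are placeholder assumptions, while `--security-type ConfidentialVM` is the flag that requests a confidential VM.

```shell
# Sketch only: resource names, image URN, and VM size are illustrative assumptions.
az vm create \
  --resource-group myAvdResourceGroup \
  --name myAvdSessionHost \
  --size Standard_DC4as_v5 \
  --image MicrosoftWindowsDesktop:windows-11:win11-22h2-avd:latest \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type VMGuestStateOnly \
  --enable-secure-boot true \
  --enable-vtpm true \
  --admin-username azureuser
```

A session host created this way still needs to be registered with the host pool, which the linked articles cover.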
-### OS disk encryption
+## Operating system disk encryption
-Encrypting the operating system disk is an extra layer of encryption that binds disk encryption keys to the Confidential computing VM's Trusted Platform Module (TPM). This encryption makes the disk content accessible only to the VM. Integrity monitoring allows cryptographic attestation and verification of VM boot integrity and monitoring alerts if the VM didn't boot because attestation failed with the defined baseline. For more information about integrity monitoring, see [Microsoft Defender for Cloud Integration](../virtual-machines/trusted-launch.md#microsoft-defender-for-cloud-integration). You can enable confidential compute encryption when you create session hosts using confidential VMs when you [create a host pool](create-host-pool.md) or [add session hosts to a host pool](add-session-hosts-host-pool.md).
+Encrypting the operating system disk is an extra layer of encryption that binds disk encryption keys to the confidential computing VM's Trusted Platform Module (TPM). This encryption makes the disk content accessible only to the VM. Integrity monitoring allows cryptographic attestation and verification of VM boot integrity, and raises monitoring alerts if the VM didn't boot because attestation failed against the defined baseline. For more information about integrity monitoring, see [Microsoft Defender for Cloud Integration](../virtual-machines/trusted-launch.md#microsoft-defender-for-cloud-integration). You can enable confidential compute encryption when you create session hosts using confidential VMs as you [create a host pool](create-host-pool.md) or [add session hosts to a host pool](add-session-hosts-host-pool.md).
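As a hedged sketch of how this option surfaces in the Azure CLI (all resource names, the image URN, and the size are placeholder assumptions), confidential OS disk encryption corresponds to the `DiskWithVMGuestState` value of `--os-disk-security-encryption-type`, which binds the OS disk keys to the VM's guest state, in contrast to `VMGuestStateOnly`, which leaves the OS disk unencrypted by this mechanism.

```shell
# Sketch: opt into confidential OS disk encryption at creation time (placeholder names).
az vm create \
  --resource-group myAvdResourceGroup \
  --name myEncryptedSessionHost \
  --size Standard_DC4as_v5 \
  --image MicrosoftWindowsDesktop:windows-11:win11-22h2-avd:latest \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type DiskWithVMGuestState \
  --enable-secure-boot true \
  --enable-vtpm true
```

Note that this choice is made at creation time; it can't be toggled on an existing disk.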
-### Secure Boot
+## Secure Boot
Secure Boot is a mode that platform firmware supports that protects your firmware from malware-based rootkits and boot kits. This mode only allows signed operating systems and drivers to boot.
-### Monitor boot integrity using Remote Attestation
+## Monitor boot integrity using Remote Attestation
Remote attestation is a great way to check the health of your VMs. Remote attestation verifies that Measured Boot records are present, genuine, and originate from the Virtual Trusted Platform Module (vTPM). As a health check, it provides cryptographic certainty that a platform started up correctly.
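One way to wire up boot integrity monitoring is the guest attestation VM extension described in the trusted launch documentation. A minimal sketch follows, assuming an existing Windows session host with Secure Boot and vTPM enabled; the resource names are placeholders, and the extension and publisher names are taken from the trusted launch docs rather than this article.

```shell
# Sketch: install the Windows guest attestation extension on an existing session host.
az vm extension set \
  --resource-group myAvdResourceGroup \
  --vm-name myAvdSessionHost \
  --name GuestAttestation \
  --publisher Microsoft.Azure.Security.WindowsAttestation
```

Once installed, attestation results feed the Microsoft Defender for Cloud integration mentioned above.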
-### vTPM
+## vTPM
A vTPM is a virtualized version of a hardware Trusted Platform Module (TPM), with a virtual instance of a TPM per VM. vTPM enables remote attestation by performing integrity measurement of the entire boot chain of the VM (UEFI, OS, system, and drivers).
We recommend enabling vTPM to use remote attestation on your VMs. With vTPM enab
> [!NOTE]
> BitLocker shouldn't be used to encrypt the specific disk where you're storing your FSLogix profile data.
-### Virtualization-based Security
+## Virtualization-based Security
Virtualization-based Security (VBS) uses the hypervisor to create and isolate a secure region of memory that's inaccessible to the OS. Hypervisor-Protected Code Integrity (HVCI) and Windows Defender Credential Guard both use VBS to provide increased protection from vulnerabilities.
-#### Hypervisor-Protected Code Integrity
+### Hypervisor-Protected Code Integrity
HVCI is a powerful system mitigation that uses VBS to protect Windows kernel-mode processes against injection and execution of malicious or unverified code.
-#### Windows Defender Credential Guard
+### Windows Defender Credential Guard
Enable Windows Defender Credential Guard. Windows Defender Credential Guard uses VBS to isolate and protect secrets so that only privileged system software can access them. This prevents unauthorized access to these secrets and credential theft attacks, such as Pass-the-Hash attacks. For more information, see [Credential Guard overview](/windows/security/identity-protection/credential-guard/).
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-portal.md
# Quickstart: Create a Virtual Machine Scale Set in the Azure portal

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
virtual-machine-scale-sets Spot Priority Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md
# Spot Priority Mix for high availability and cost savings

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Flexible scale sets
virtual-machine-scale-sets Tutorial Autoscale Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
# Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
When you create a scale set, you define the number of VM instances that you wish to run. As your application demand changes, you can automatically increase or decrease the number of VM instances. The ability to autoscale lets you keep up with customer demand or respond to application performance changes throughout the lifecycle of your app. In this tutorial you learn how to:
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
# Automatic Guest Patching for Azure Virtual Machines and Scale Sets

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
virtual-machines Compiling Scaling Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/compiling-scaling-applications.md
# Scaling HPC applications

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/configure.md
# Configure and optimize VMs

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
# OS Images Supported with Remote NVMe

> [!NOTE]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The following lists provide up-to-date information on which OS images are tagged as NVMe supported. These lists will be updated when new OS images are made available with remote NVMe support.
virtual-machines Diagnostics Linux V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux-v3.md
ms.devlang: azurecli
# Use Linux diagnostic extension 3.0 to monitor metrics and logs

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This document describes version 3.0 and newer of the Linux diagnostic extension (LAD).
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
ms.devlang: azurecli
# Use the Linux diagnostic extension 4.0 to monitor metrics and logs

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes the latest versions of the Linux diagnostic extension (LAD).
virtual-machines Enable Infiniband https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/enable-infiniband.md
# Enable InfiniBand

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hpc Compute Infiniband Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md
# InfiniBand Driver Extension for Linux

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This extension installs InfiniBand OFED drivers on InfiniBand and SR-IOV-enabled ('r' sizes) [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs running Linux. Depending on the VM family, the extension installs the appropriate drivers for the Connect-X NIC. It does not install the InfiniBand ND drivers on the non-SR-IOV enabled [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs.
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
# NVIDIA GPU Driver Extension for Linux

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This extension installs NVIDIA GPU drivers on Linux N-series virtual machines (VMs). Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers by using this extension, you're accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://www.nvidia.com/en-us/data-center/products/nvidia-ai-enterprise/eula/). During the installation process, the VM might reboot to complete the driver setup.
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
# Manage Network Watcher Agent virtual machine extension for Linux

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](../workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](../workloads/centos/centos-end-of-life.md).
The Network Watcher Agent virtual machine extension is a requirement for some of Azure Network Watcher features that capture network traffic to diagnose and monitor Azure virtual machines (VMs). For more information, see [What is Azure Network Watcher?](../../network-watcher/network-watcher-overview.md)
virtual-machines Stackify Retrace Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/stackify-retrace-linux.md
ms.devlang: azurecli
# Stackify Retrace Linux Agent Extension

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
## Overview
virtual-machines Update Linux Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/update-linux-agent.md
Last updated 02/03/2023
# How to update the Azure Linux Agent on a VM

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
To update your [Azure Linux Agent](https://github.com/Azure/WALinuxAgent) on a Linux VM in Azure, you must already have:
virtual-machines Fsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/fsv2-series.md
# Fsv2-series

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-hc-known-issues.md
# Known issues with HB-series and N-series VMs

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hb Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series-overview.md
# HB-series virtual machines overview

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hbv2 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series-overview.md
# HBv2 series virtual machine overview

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets.
virtual-machines Hbv3 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series-overview.md
# HBv3-series virtual machine overview

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
# Create an image definition and an image version

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
An [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Images can be created from a VM, VHD, snapshot, managed image, or another image version.
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
# Find Azure Marketplace image information using the Azure CLI

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
virtual-machines Cloud Init Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloud-init-troubleshooting.md
# Troubleshooting VM provisioning with cloud-init

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Cloudinit Configure Swapfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-configure-swapfile.md
# Use cloud-init to configure a swap partition on a Linux VM

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Cloudinit Update Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-update-vm.md
# Use cloud-init to update and install packages in a Linux VM in Azure

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
# Prepare Linux for imaging in Azure

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Disk Encryption Isolated Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-isolated-network.md
Last updated 02/20/2024
# Azure Disk Encryption on an isolated network

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets.
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
# Azure Disk Encryption for Linux VMs

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
# Azure Disk Encryption sample scripts for Linux VMs

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Endorsed Distros https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/endorsed-distros.md
# Endorsed Linux distributions on Azure

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Imaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/imaging.md
# Bringing and creating Linux images in Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
# Install NVIDIA GPU drivers on N-series VMs running Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs
virtual-machines Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/storage-performance.md
# Optimize performance on Lsv3, Lasv3, and Lsv2-series Linux VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Uniform scale sets
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/time-sync.md
# Time sync for Linux VMs in Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Tutorial Manage Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-manage-vm.md
# Tutorial: Create and Manage Linux VMs with the Azure CLI > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Using Cloud Init https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/using-cloud-init.md
# cloud-init support for virtual machines in Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly.
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly.
> **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines M Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/m-series.md
# M-series > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
Last updated 09/19/2023
# NC A100 v4-series > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Ndm A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ndm-a100-v4-series.md
Last updated 03/13/2023
# NDm A100 v4-series > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
# NP-series > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/setup-mpi.md
# Set up Message Passing Interface for HPC > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Share Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery.md
If you share gallery resources to someone outside of your Azure tenant, they wil
1. On the page for your gallery, in the menu on the left, select **Access control (IAM)**. 1. Under **Add**, select **Add role assignment**. The **Add role assignment** page will open.
-1. Under **Role**, select **Reader**.
+1. Under **Role**, select **Contributor**.
1. Ensure that the user is selected in the Members tab. For **Assign access to**, keep the default of **User, group, or service principal**. 1. Click **Select** members and choose a user account from the page that opens on the right. 1. If the user is outside of your organization, you'll see the message **This user will be sent an email that enables them to collaborate with Microsoft.** Select the user with the email address and then click **Save**.
Use the object ID as a scope, along with an email address and [az role assignmen
```azurecli-interactive az role assignment create \
- --role "Reader" \
+ --role "Contributor" \
--assignee <email address> \ --scope <gallery ID> ```
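For context, the updated CLI sequence above could be assembled end to end as in the following sketch. The resource group name (`myGalleryRG`), gallery name (`myGallery`), and email address are placeholder values for illustration, not values from the source:

```azurecli-interactive
# Look up the gallery ID to use as the role assignment scope (names are placeholders)
galleryId=$(az sig show \
    --resource-group myGalleryRG \
    --gallery-name myGallery \
    --query id --output tsv)

# Grant the Contributor role on the gallery to the user
az role assignment create \
    --role "Contributor" \
    --assignee alinne_montes@contoso.com \
    --scope "$galleryId"
```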
$user = Get-AzADUser -StartsWith alinne_montes@contoso.com
# Grant access to the user for our gallery New-AzRoleAssignment ` -ObjectId $user.Id `
- -RoleDefinitionName Reader `
+ -RoleDefinitionName Contributor `
-ResourceName $gallery.Name ` -ResourceType Microsoft.Compute/galleries ` -ResourceGroupName $resourceGroup.ResourceGroupName
virtual-machines Trusted Launch Existing Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-existing-vmss.md
Azure Virtual machine Scale sets supports enabling Trusted launch on existing [U
## Limitations - Enabling Trusted launch on existing [virtual machine Scale sets with data disks attached](../virtual-machine-scale-sets/virtual-machine-scale-sets-attached-disks.md) is currently not supported.-
- - To validate if scale is configured with data disk, navigate to scale set -> **Disks** under **Settings** menu -> check under heading **Data disks**
-
+ - To validate if scale set is configured with data disk, navigate to scale set -> **Disks** under **Settings** menu -> check under heading **Data disks**
:::image type="content" source="./media/trusted-launch/00-vmss-with-data-disks.png" alt-text="Screenshot of the scale set with data disks."::: - Enabling Trusted launch on existing [virtual machine Scale sets Flex](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md) is currently not supported.
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Azure offers Trusted Launch as a seamless way to improve the security of [Genera
> [!IMPORTANT] > > - Trusted Launch is selected as the default state for newly created Azure VMs. If your new VM requires features that aren't supported by Trusted Launch, see the [Trusted Launch FAQs](trusted-launch-faq.md).
-> - Existing [VMs](overview.md) can have Trusted Launch enabled after being created. For more information, see [Enable Trusted Launch on existing VMs](trusted-launch-existing-vm.md).
-> - Existing [virtual machine scale sets](../virtual-machine-scale-sets/overview.md) can have Trusted Launch enabled after being created. For more information, see [Enable Trusted Launch on existing scale sets](trusted-launch-existing-vmss.md).
+> - Existing [virtual machines (VMs)](overview.md) can have Trusted Launch enabled after being created. For more information, see [Enable Trusted Launch on existing VMs](trusted-launch-existing-vm.md).
+> - Existing [virtual machine scale sets (VMSS)](../virtual-machine-scale-sets/overview.md) can have Trusted Launch enabled after being created. For more information, see [Enable Trusted Launch on existing scale sets](trusted-launch-existing-vmss.md).
## Benefits
Azure offers Trusted Launch as a seamless way to improve the security of [Genera
| [High Performance Compute](sizes-hpc.md) |[HB-series](hb-series.md), [HBv2-series](hbv2-series.md), [HBv3-series](hbv3-series.md), [HBv4-series](hbv4-series.md), [HC-series](hc-series.md), [HX-series](hx-series.md) | All sizes supported. | > [!NOTE]
+>
> - Installation of the *CUDA & GRID drivers on Secure Boot-enabled Windows VMs* doesn't require any extra steps. > - Installation of the *CUDA driver on Secure Boot-enabled Ubuntu VMs* requires extra steps. For more information, see [Install NVIDIA GPU drivers on N-series VMs running Linux](./linux/n-series-driver-setup.md#install-cuda-drivers-on-n-series-vms). Secure Boot should be disabled for installing CUDA drivers on other Linux VMs. > - Installation of the *GRID driver* requires Secure Boot to be disabled for Linux VMs.
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/cli-ps-findimage.md
# Find and use Azure Marketplace VM images with Azure PowerShell > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Centos End Of Life https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/centos/centos-end-of-life.md
There are also several companies offering extended support for CentOS 7, which m
- OpenLogic: [Enterprise Linux Support](https://www.openlogic.com/solutions/enterprise-linux-support/centos) - TuxCare: [Extended Lifecycle Support](https://docs.tuxcare.com/extended-lifecycle-support/)
+- CIQ: [CIQ Bridge - Extending the life of CentOS 7](https://ciq.com/products/ciq-bridge/)
+ See the [Endorsed Distribution](../../linux/endorsed-distros.md) page for details on Azure endorsed distributions and images. ## CentOS compatible distributions
See the [Endorsed Distribution](../..//linux/endorsed-distros.md) page for detai
> [!CAUTION] > If you perform an in-place major version update following a migration (e.g. CentOS 7 -> RHEL 7 -> RHEL 8) there will be a disconnection between the data plane and the **[control plane](/azure/architecture/guide/multitenant/considerations/control-planes)** of the virtual machine (VM). Azure capabilities such as **[Auto guest patching](/azure/virtual-machines/automatic-vm-guest-patching)**, **[Auto OS image upgrades](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade)**, **[Hotpatching](/windows-server/get-started/hotpatch?toc=%2Fazure%2Fvirtual-machines%2Ftoc.json)**, and **[Azure Update Manager](/azure/update-manager/overview)** won't be available. To utilize these features, it's recommended to create a new VM using your preferred operating system instead of performing an in-place upgrade.
-> > [!NOTE]
+
+> [!NOTE]
> - "Binary compatible" (Application Binary Interface or ABI) means based on the same upstream distribution (Fedora). There is no guarantee of bug for bug compatibility. - For a full list of endorsed Linux Distributions on Azure see: [Linux distributions endorsed on Azure - Azure Virtual Machines | Microsoft Learn](../../linux/endorsed-distros.md) - For details on Red Hat & Microsoft Integrated Support see: Microsoft and Red Hat Partner and Deliver Integrated Support, a Unique Offering in the IT World | Microsoft Learn
virtual-machines Install Openframe Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
# Install TmaxSoft OpenFrame on Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Learn how to set up an OpenFrame environment on Azure suitable for development, demos, testing, or production workloads. This tutorial walks you through each step.
virtual-machines Weblogic Server Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/weblogic-server-azure-virtual-machine.md
description: Shows how to quickly stand up WebLogic Server on Azure Virtual Mach
Previously updated : 05/29/2024 Last updated : 07/01/2024 # Quickstart: Deploy WebLogic Server on Azure Virtual Machines (VMs)
-This article shows you how to quickly deploy WebLogic Server (WLS) on Azure Virtual Machine (VM) with the simplest possible set of configuration choices using the Azure portal. For a more full featured tutorial, including the use of Azure Application Gateway to make WLS cluster on VM securely visible on the public internet, see [Tutorial: Migrate a WebLogic Server cluster to Azure with Azure Application Gateway as a load balancer](/azure/developer/java/migration/migrate-weblogic-with-app-gateway?toc=/azure/developer/java/ee/toc.json&bc=/azure/developer/java/breadcrumb/toc.json).
+This article shows you how to quickly deploy WebLogic Server (WLS) on an Azure Virtual Machine (VM) with the simplest possible set of configuration choices using the Azure portal. In this quickstart, you learn how to:
-In this quickstart, you will learn how to:
-
-- Deploy WebLogic Server with Administration Server on a VM using the Azure portal.
-- Deploy a Java EE sample application with WebLogic Server Administration Console portal.
-
-This quickstart assumes a basic understanding of WebLogic Server concepts. For more information, see [Oracle WebLogic Server](https://www.oracle.com/java/weblogic/).
+- Deploy WebLogic Server with Administration Server enabled on a VM using the Azure portal.
+- Deploy a sample Java application with the WebLogic Server Administration Console.
+- Connect to the VM running WebLogic using SSH.
If you're interested in providing feedback or working closely on your migration scenarios with the engineering team developing WebLogic on Azure solutions, fill out this short [survey on WebLogic migration](https://aka.ms/wls-on-azure-survey) and include your contact information. The team of program managers, architects, and engineers will promptly get in touch with you to initiate close collaboration.
If you're interested in providing feedback or working closely on your migration
## Deploy WebLogic Server with Administration Server on a VM
-The steps in this section direct you to deploy WebLogic Server on VM in the simplest possible way: using the [single instance with an admin server](https://aka.ms/wls-vm-admin) offer. Other offers are available to meet different scenarios, including: [single instance without an admin server](https://aka.ms/wls-vm-singlenode), [static cluster](https://aka.ms/wls-vm-cluster), and [dynamic cluster](https://aka.ms/wls-vm-dynamic-cluster). For more information, see [What are solutions for running Oracle WebLogic Server on Azure Virtual Machines?](/azure/virtual-machines/workloads/oracle/oracle-weblogic?toc=/azure/developer/java/ee/toc.json&bc=/azure/developer/java/breadcrumb/toc.json).
-
+The following steps show you how to deploy WebLogic Server on a VM using the [single instance with an admin server](https://aka.ms/wls-vm-admin) offer on the Azure portal. Other offers address different scenarios, such as [WebLogic cluster on multiple VMs](https://aka.ms/wls-vm-cluster).
-The following steps show you how to find the WebLogic Server with Admin Server offer and fill out the **Basics** pane:
-
-1. In the search bar at the top of the portal, enter *weblogic*. In the autosuggested search results, in the **Marketplace** section, select **Oracle WebLogic Server With Admin Server**.
+1. In the search bar at the top of the portal, enter *weblogic*. In the autosuggested search results, in the **Marketplace** section, select **WebLogic Server with Admin Console on VM**. You can also go directly to the offer using the portal link.
:::image type="content" source="media/weblogic-server-azure-virtual-machine/search-weblogic-admin-offer-from-portal.png" alt-text="Screenshot of the Azure portal that shows WebLogic Server in the search results." lightbox="media/weblogic-server-azure-virtual-machine/search-weblogic-admin-offer-from-portal.png":::
- You can also go directly to the offer with this [portal link](https://aka.ms/wls-vm-admin).
+1. On the offer page, select **Create**. You then see the **Basics** pane.
-1. On the offer page, select **Create**.
+ :::image type="content" source="media/weblogic-server-azure-virtual-machine/portal-start-experience.png" alt-text="Screenshot of the Azure portal that shows the Create WebLogic Server With Admin console on Azure VM page." lightbox="media/weblogic-server-azure-virtual-machine/portal-start-experience.png":::
1. On the **Basics** pane, ensure the value shown in the **Subscription** field is the same one that you used to sign in to the Azure portal. 1. The offer must be deployed in an empty resource group. In the **Resource group** field, select **Create new** and fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, *ejb0802wls*.
-1. Under **Instance details**, select the region for the deployment. For a list of Azure regions how and where VMs operate, see [Regions for virtual machines in Azure](/azure/virtual-machines/regions).
+1. Under **Instance details**, select the region for the deployment.
1. Accept the default value in **Oracle WebLogic Image**.
The following steps show you how to find the WebLogic Server with Admin Server o
1. Fill in *wlsVmCluster2022* for the **Password for WebLogic Administrator**. Use the same value for the confirmation.
-1. Select **Review + create**. Ensure the green **Validation Passed** message appears at the top. If not, fix any validation problems and select **Review + create** again.
+1. Select **Review + create**.
+
+1. Ensure the green **Validation Passed** message appears at the top. If it doesn't, fix any validation problems and select **Review + create** again.
1. Select **Create**. 1. Track the progress of the deployment in the **Deployment is in progress** page.
-Depending on network conditions and other activity in your selected region, the deployment may take up to 30 minutes to complete.
+Depending on network conditions and other activity in your selected region, the deployment might take up to 30 minutes to complete.
## Examine the deployment output
If you navigated away from the **Deployment is in progress** page, the following
1. In the box with the text **Filter for any field**, enter the first few characters of the resource group you created previously. If you followed the recommended convention, enter your initials, then select the appropriate resource group. 1. In the left navigation pane, in the **Settings** section, select **Deployments**. You can see an ordered list of the deployments to this resource group, with the most recent one first. 1. Scroll to the oldest entry in this list. This entry corresponds to the deployment you started in the preceding section. Select the oldest deployment, as shown in the following screenshot:-
:::image type="content" source="media/weblogic-server-azure-virtual-machine/resource-group-deployments.png" alt-text="Screenshot of the Azure portal that shows the resource group deployments list." lightbox="media/weblogic-server-azure-virtual-machine/resource-group-deployments.png":::
-1. In the left panel, select **Outputs**. This list shows the output values from the deployment. Useful information is included in the outputs.
-1. The **sshCommand** value is the fully qualified, SSH command to connect the VM that runs WebLogic Server. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
-1. The **adminConsoleURL** value is the fully qualified, public internet visible link to the WebLogic Server admin console. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
+1. In the left panel, select **Outputs**. This list shows useful output values from the deployment.
+1. The **sshCommand** value is the fully qualified SSH command to connect to the VM that runs WebLogic Server. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
+1. The **adminConsoleURL** value is the fully qualified public internet visible link to the WebLogic Server admin console. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
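If you prefer the CLI to the portal steps above, the same deployment outputs can be read with `az deployment group`. The resource group name `ejb0802wls` follows the example convention earlier in the article, and `<deployment-name>` is a placeholder you fill in from the list output:

```azurecli-interactive
# List deployments in the resource group to identify the oldest (initial) one
az deployment group list \
    --resource-group ejb0802wls \
    --query "[].{name:name, state:properties.provisioningState}" --output table

# Read the outputs (sshCommand, adminConsoleURL) from that deployment
az deployment group show \
    --resource-group ejb0802wls \
    --name <deployment-name> \
    --query properties.outputs
```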
-## Deploy a Java EE application from Administration Console portal
+## Deploy a Java application from Administration Console
-Use the following steps to run a sample application in the WebLogic Server:
+Use the following steps to run a sample application on the WebLogic Server:
1. Download a sample application as a *.war* or *.ear* file. The sample app should be self contained and not have any database, messaging, or other external connection requirements. The sample app from the WebLogic Kubernetes Operator documentation is a good choice. You can download it from [Oracle](https://aka.ms/wls-aks-testwebapp). Save the file to your local filesystem.
Use the following steps to run a sample application in the WebLogic Server:
1. Under **Locate deployment to install and prepare for deployment**, select **Upload your file(s)**. 1. Under **Upload a deployment to the Administration Server**, select **Choose File** and upload your sample application. Select **Next**.
- 1. Select **Finish**.
+ 1. Accept the defaults in the next few screens and select **Finish**.
+ 1. On the application configuration screen, select **Save**.
1. Under **Change Center** on the top left corner, select **Activate Changes**. You can see the message **All changes have been activated. No restarts are necessary**.
If you want to manage the VM, you can connect to it with SSH command. Before acc
Use the following steps to enable port 22:
-1. Navigate back to your working resource group. In the overview page, you can find a network security group named **wls-nsg**. Select **wls-nsg**.
+1. Navigate back to your working resource group in the Azure portal. In the overview page, you can find a network security group named **wls-nsg**. Select **wls-nsg**.
1. In the left panel, select **Settings**, then **Inbound security rules**. If there's a rule to allow port `22`, then you can jump to step 4. 1. In the top of the page, select **Add**.
Use the following steps to enable port 22:
After the deployment completes, you can SSH to the VM.
-1. Connect the VM with the value of **sshCommand** and your password (this article uses *wlsVmCluster2022*).
+1. Connect to the VM with the value of **sshCommand** and your password (this article uses *wlsVmCluster2022*).
## Clean up resources
-If you're not going to continue to use the WebLogic Server, navigate back to your working resource group. At the top of the page, under the text **Resource group**, select the resource group. Then, select **Delete resource group**.
+If you're not going to continue to use the WebLogic Server, navigate back to your working resource group in the Azure portal. At the top of the page, under the text **Resource group**, select **Delete resource group**.
## Next steps Continue to explore options to run WebLogic Server on Azure. * [WebLogic Server on virtual machines](/azure/virtual-machines/workloads/oracle/oracle-weblogic?toc=/azure/developer/java/ee/toc.json&bc=/azure/developer/java/breadcrumb/toc.json)
-* [WebLogic Server on AKS](/azure/virtual-machines/workloads/oracle/weblogic-aks?toc=/azure/developer/java/ee/toc.json&bc=/azure/developer/java/breadcrumb/toc.json)
-* [Migrate WebLogic Server applications to Azure Kubernetes Service](/azure/developer/java/migration/migrate-weblogic-to-virtual-machines?toc=/azure/developer/java/ee/toc.json&bc=/azure/developer/java/breadcrumb/toc.json)
-* [Explore options for day 2 and beyond](https://aka.ms/wls-vms-day2)
+* [WebLogic Server on Azure Kubernetes Service](/azure/virtual-machines/workloads/oracle/weblogic-aks?toc=/azure/developer/java/ee/toc.json&bc=/azure/developer/java/breadcrumb/toc.json)
For more information about the Oracle WebLogic offers at Azure Marketplace, see [Oracle WebLogic Server on Azure](https://aka.ms/wls-contact-me). These offers are all _Bring-Your-Own-License_. They assume that you already have the appropriate licenses with Oracle and are properly licensed to run offers in Azure.
virtual-network-manager How To Deploy Hub Spoke Topology With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-deploy-hub-spoke-topology-with-azure-firewall.md
+
+ Title: How to deploy hub and spoke topology with Azure Firewall
+description: Learn how to deploy a hub and spoke topology with Azure Firewall using Virtual Network Manager.
++++ Last updated : 06/04/2024++
+# How to deploy hub and spoke topology with Azure Firewall
+
+In this article, you learn how to deploy a hub and spoke topology with Azure Firewall using Azure Virtual Network Manager (AVNM). You create a network manager instance, and implement network groups for trusted and untrusted traffic. Next, you deploy a connectivity configuration for defining your hub and spoke topology. When deploying the connectivity configuration, you have a choice of adding [direct connectivity](concept-connectivity-configuration.md#direct-connectivity) for direct, trusted communication between spoke virtual networks, or requiring spokes to communicate through the hub network. You finish by deploying a routing configuration to route all traffic to Azure Firewall, except the traffic within the same virtual network when the virtual networks are trusted.
+
+Many organizations use Azure Firewall to protect their virtual networks from threats and unwanted traffic, and they route all traffic to Azure Firewall except trusted traffic within the same virtual network. Traditionally, setting up such a scenario is cumbersome because new user-defined routes (UDRs) need to be created for each new subnet, and all route tables have different UDRs. UDR management in Azure Virtual Network Manager can help you easily achieve this scenario by creating a routing rule that routes all traffic to Azure Firewall, except the traffic within the same virtual network.
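The "except the traffic within the same virtual network" behavior follows from longest-prefix-match route selection: the virtual network's own address prefix is more specific than the **0.0.0.0/0** default route, so intra-network traffic keeps its local route while everything else goes to the firewall. A minimal sketch of that selection logic (the prefixes and next-hop names below are illustrative, not Azure API values):

```python
import ipaddress

def next_hop(dest, routes):
    """Pick the route with the longest matching prefix, as Azure route selection does."""
    matches = [(net, hop) for net, hop in routes if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "AzureFirewall"),   # catch-all routing rule
    (ipaddress.ip_network("10.1.0.0/16"), "VnetLocal"),     # spoke's own address space
]

print(next_hop(ipaddress.ip_address("10.1.2.4"), routes))    # intra-VNet destination
print(next_hop(ipaddress.ip_address("203.0.113.9"), routes)) # internet-bound destination
```

Because the /16 prefix is longer than /0, traffic within the virtual network never reaches the firewall, without needing a separate exclusion rule.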
+
+## Prerequisites
+
+- An Azure subscription with permissions to create resources in the subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- Three virtual networks with subnets in the same region. One virtual network is the hub virtual network, and the other two virtual networks are the spoke virtual networks.
+ - For this example, the hub virtual network is named **hub-vnet**, and the spoke virtual networks are **spoke-vnet-1** and **spoke-vnet-2**.
+ - The hub virtual network requires a subnet for the Azure Firewall named **AzureFirewallSubnet**.
+- An Azure Virtual Network Manager instance with user-defined routing and connectivity configurations enabled.
+- All virtual networks configured in a hub and spoke topology.
+- An Azure Firewall in the hub virtual network. For more information, see [Deploy and configure Azure Firewall and policy using the Azure portal](../firewall/tutorial-firewall-deploy-portal-policy.md).
+## Create a routing configuration and rule collection
+
+In this task, you create a routing configuration and rule collection that includes your spoke network group. Routing configurations define the routing rules for traffic between virtual networks.
+
+1. In the network manager instance, select **Configurations** under **Settings**, then select **Create** > **Routing configuration**.
+2. On the **Create a routing configuration** page, enter the routing configuration **Name** and **Description** on the **Basics** tab then select **Next: Rule collection >**.
+3. Select **Add** on the **Rule collections** tab.
+4. In the **Add a rule collection** window, enter or select the following settings for the rule collection:
+
+ | **Setting** | **Value** |
+ |||
+ | **Name** | Enter a name for your rule collection. |
+ | **Description** | (Optional) Enter a description for your rule collection. |
+ | **Local route setting** | Select **Direct routing within virtual network**. |
+ | **Enable BGP route propagation** | (Optional) Select **Enable BGP route propagation** if you want to enable BGP route propagation. |
+ | **Target network group** | Select your spoke network group. |
+
+1. Under **Routing rules**, select **Add** to create a new routing rule.
+2. In the **Add a routing rule** window, enter or select the following settings for the routing rule:
+
+ | **Setting** | **Value** |
+ |||
+ | **Name** | Enter a name for your routing rule. |
+ | **Destination** | |
+ | **Destination type** | Select **IP Address**. |
+ | **Destination IP Addresses/CIDR ranges** | Enter **0.0.0.0/0**. |
+ | **Next hop** | |
+ | **Next hop type** | Select **Virtual Appliance**.<br> Select **Import Azure firewall private IP address**. |
+ | **Azure firewalls** | Select your Azure firewall then choose **Select**. |
+
+3. Select **Add** to add the routing rule to the rule collection.
+4. Select **Add** to add the rule collection to the routing configuration.
+
+ :::image type="content" source="media/how-to-deploy-hub-spoke-topology-with-azure-firewall/add-routing-rule.png" alt-text="Screenshot of Add a routing rule window with firewall as next hop.":::
+
+5. Select **Review + create** then select **Create**.
+
+## Deploy the routing configuration
+
+In this task, you deploy the routing configuration to create the routing rules for the hub and spoke topology.
+
+1. In the network manager instance, select **Deployments** under **Settings**.
+2. Select **Deploy configurations** then select **Routing configuration - Preview**.
+3. In the **Deploy a configuration** window, select the routing configuration you created, and select the **Target Regions** you wish to deploy the configuration to.
+1. Select **Next** or **Review + deploy** to review the deployment then select **Deploy**.
+
+## Delete all resources
+
+If you no longer need the resources created in this article, you can delete them to avoid incurring more costs.
+
+1. In the Azure portal, search for and select **Resource groups**.
+2. Select the resource group that contains the resources you want to delete, then select **Delete resource group** and confirm the deletion.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about User defined routes (UDRs)](../virtual-network/virtual-networks-udr-overview.md)
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
# Accelerated Networking overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes the benefits, constraints, and supported configurations of Accelerated Networking. Accelerated Networking enables [single root I/O virtualization (SR-IOV)](/windows-hardware/drivers/network/overview-of-single-root-i-o-virtualization--sr-iov-) on supported virtual machine (VM) types, greatly improving networking performance. This high-performance data path bypasses the host, which reduces latency, jitter, and CPU utilization for the most demanding network workloads.
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
Previously updated : 08/23/2023 Last updated : 07/01/2024 # Create a virtual network peering - Resource Manager, different subscriptions and Microsoft Entra tenants
-In this tutorial, you learn to create a virtual network peering between virtual networks created through Resource Manager. The virtual networks exist in different subscriptions that may belong to different Microsoft Entra tenants. Peering two virtual networks enables resources in different virtual networks to communicate with each other with the same bandwidth and latency as though the resources were in the same virtual network. Learn more about [Virtual network peering](virtual-network-peering-overview.md).
+In this tutorial, you learn to create a virtual network peering between virtual networks created through Resource Manager. The virtual networks exist in different subscriptions that might belong to different Microsoft Entra tenants. Peering two virtual networks enables resources in different virtual networks to communicate with each other with the same bandwidth and latency as though the resources were in the same virtual network. Learn more about [Virtual network peering](virtual-network-peering-overview.md).
Depending on whether the virtual networks are in the same or different subscriptions, the steps to create a virtual network peering differ. The steps to peer networks created with the classic deployment model are also different. For more information about deployment models, see [Azure deployment model](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
Learn how to create a virtual network peering in other scenarios by selecting th
A virtual network peering can't be created between two virtual networks deployed through the classic deployment model. If you need to connect virtual networks that were both created through the classic deployment model, you can use an Azure [VPN Gateway](../vpn-gateway/tutorial-site-to-site-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) to connect the virtual networks.
-This tutorial peers virtual networks in the same region. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region). It's recommended that you familiarize yourself with the [peering requirements and constraints](virtual-network-manage-peering.md#requirements-and-constraints) before peering virtual networks.
+This tutorial peers virtual networks in the same region. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region). Familiarize yourself with the [peering requirements and constraints](virtual-network-manage-peering.md#requirements-and-constraints) before peering virtual networks.
## Prerequisites # [**Portal**](#tab/create-peering-portal) -- An Azure account(s) with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account or accounts with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
This tutorial peers virtual networks in the same region. You can also peer virtu
# [**PowerShell**](#tab/create-peering-powershell) -- An Azure account(s) with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account or accounts with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
This tutorial peers virtual networks in the same region. You can also peer virtu
- Azure PowerShell installed locally or Azure Cloud Shell. -- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+- Sign in to Azure PowerShell and select the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name Az.Network`.
If you choose to install and use PowerShell locally, this article requires the A
# [**Azure CLI**](#tab/create-peering-cli) -- An Azure account(s) with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account or accounts with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
In this section, you sign in as **user-2** and create a virtual network for the
# [**Portal**](#tab/create-peering-portal)
-Repeat the steps in the [previous section](#create-virtual-network) to create a second virtual network with the following values:
+Create a second virtual network with the following values by repeating the steps in the [previous section](#create-virtual-network).
| Setting | Value | | | |
You need the **Resource ID** for **vnet-2** from the previous steps to set up th
| Setting | Value | | - | -- |
- | **This virtual network** | |
- | Peering link name | Enter **vnet-1-to-vnet-2**. |
- | Allow 'vnet-1' to access 'vnet-2' | Leave the default of selected. |
- | Allow 'vnet-1' to receive forwarded traffic from 'vnet-2' | Select the checkbox. |
- | Allow gateway in 'vnet-1' to forward traffic to 'vnet-2' | Leave the default of cleared. |
- | Enable 'vnet-1' to use 'vnet-2' remote gateway | Leave the default of cleared. |
- | Use remote virtual network gateway or route server | Leave the default of cleared. |
- | **Remote virtual network** | |
- | Peering link name | Leave blank. |
- | Virtual network deployment model | Select **Resource manager**. |
- | Select the box for **I know my resource ID**. | |
- | Resource ID | Enter or paste the **Resource ID** for **vnet-2**. |
-
-1. In the pull-down box, select the **Directory** that corresponds with **vnet-2** and **user-2**.
-
-1. Select **Authenticate**.
-
- :::image type="content" source="./media/create-peering-different-subscriptions/vnet-1-to-vnet-2-peering.png" alt-text="Screenshot of peering from vnet-1 to vnet-2.":::
+ | **Remote virtual network summary** | |
+ | Peering link name | **vnet-2-to-vnet-1** |
+ | Virtual network deployment model | **Resource Manager** |
+ | I know my resource ID | **Select the box** |
+ | Resource ID | **Enter the Resource ID for vnet-2** |
+ | Directory | Select the Microsoft Entra ID directory that corresponds with **vnet-2** and **user-2** |
+ | **Remote virtual network peering settings** | |
+ | Allow 'the peered virtual network' to access 'vnet-1' | Leave the default of **Enabled** |
+ | Allow 'the peered virtual network' to receive forwarded traffic from 'vnet-1' | **Select the box** |
+ | **Local virtual network summary** | |
+ | Peering link name | **vnet-1-to-vnet-2** |
+ | **Local virtual network peering settings** | |
+ | Allow 'vnet-1' to access 'the peered virtual network' | Leave the default of **Enabled** |
+ | Allow 'vnet-1' to receive forwarded traffic from 'the peered virtual network' | **Select the box** |
1. Select **Add**.
+
+ :::image type="content" source="./media/create-peering-different-subscriptions/vnet-1-to-vnet-2-peering.png" alt-text="Screenshot of peering from vnet-1 to vnet-2.":::
1. Sign out of the portal as **user-1**.
Connect-AzAccount
### Change to subscription-1 (optional)
-You may have to switch back to **subscription-1** to continue with the actions in **subscription-1**.
+You might have to switch back to **subscription-1** to continue with the actions in **subscription-1**.
Change context to **subscription-1**.
az login
### Change to subscription-1 (optional)
-You may have to switch back to **subscription-1** to continue with the actions in **subscription-1**.
+You might have to switch back to **subscription-1** to continue with the actions in **subscription-1**.
Change context to **subscription-1**.
You need the **Resource IDs** for **vnet-1** from the previous steps to set up t
| Setting | Value | | - | -- |
- | **This virtual network** | |
- | Peering link name | Enter **vnet-2-to-vnet-1**. |
- | Allow 'vnet-2' to access 'vnet-1' | Leave the default of selected. |
- | Allow 'vnet-2' to receive forwarded traffic from 'vnet-1' | Select the checkbox. |
- | Allow gateway in 'vnet-2' to forward traffic to 'vnet-1' | Leave the default of cleared. |
- | Enable 'vnet-2' to use 'vnet-1's' remote gateway | Leave the default of cleared. |
- | **Remote virtual network** | |
- | Peering link name | Leave blank. |
- | Virtual network deployment model | Select **Resource manager**. |
- | Select the box for **I know my resource ID**. | |
- | Resource ID | Enter or paste the **Resource ID** for **vnet-1**. |
+ | **Remote virtual network summary** | |
+ | Peering link name | **vnet-1-to-vnet-2** |
+ | Virtual network deployment model | **Resource Manager** |
+ | I know my resource ID | **Select the box** |
+ | Resource ID | **Enter the Resource ID for vnet-1** |
+ | Directory | Select the Microsoft Entra ID directory that corresponds with **vnet-1** and **user-1** |
+ | **Remote virtual network peering settings** | |
+ | Allow 'the peered virtual network' to access 'vnet-2' | Leave the default of **Enabled** |
+ | Allow 'the peered virtual network' to receive forwarded traffic from 'vnet-2' | **Select the box** |
+ | **Local virtual network summary** | |
+ | Peering link name | **vnet-2-to-vnet-1** |
+ | **Local virtual network peering settings** | |
+ | Allow 'vnet-2' to access 'the peered virtual network' | Leave the default of **Enabled** |
+ | Allow 'vnet-2' to receive forwarded traffic from 'the peered virtual network' | **Select the box** |
+
+1. Select **Add**.
+
Connect-AzAccount
### Change to subscription-2 (optional)
-You may have to switch back to **subscription-2** to continue with the actions in **subscription-2**.
+You might have to switch back to **subscription-2** to continue with the actions in **subscription-2**.
Change context to **subscription-2**.
az login
### Change to subscription-2 (optional)
-You may have to switch back to **subscription-2** to continue with the actions in **subscription-2**.
+You might have to switch back to **subscription-2** to continue with the actions in **subscription-2**.
Change context to **subscription-2**.
az network vnet peering list \
```
-The peering is successfully established after you see **Connected** in the **Peering status** column for both virtual networks in the peering. Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. If you're using subnet-1 Azure name resolution for the virtual networks, the resources in the virtual networks aren't able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS server or use Azure DNS.
+The peering is successfully established after you see **Connected** in the **Peering status** column for both virtual networks in the peering. Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. If you're using Azure name resolution for the virtual networks, the resources in the virtual networks aren't able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS (Domain Name System) server or use Azure DNS.
+
+> [!IMPORTANT]
+> If you update the address space in one of the peered virtual networks, you must resync the connection to reflect the address space changes. For more information, see [Update the address space for a peered virtual network using the Azure portal](/azure/virtual-network/update-virtual-network-peering-address-space#modify-the-address-range-prefix-of-an-existing-address-range).
For more information about using your own DNS for name resolution, see, [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
# Use Azure CLI to create a Windows or Linux VM with Accelerated Networking > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to create a Linux or Windows virtual machine (VM) with Accelerated Networking (AccelNet) enabled by using the Azure CLI command-line interface. The article also discusses how to enable and manage Accelerated Networking on existing VMs.
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
Previously updated : 08/24/2023-
- - FY23 content-maintenance
- - ignite-2023
Last updated : 07/02/2024 # Default outbound access in Azure
Examples of explicit outbound connectivity for virtual machines are:
* Created within a subnet associated to a NAT gateway.
-* In the backend pool of a standard load balancer with outbound rules defined.
+* Deployed in the backend pool of a standard load balancer with outbound rules defined.
-* In the backend pool of a basic public load balancer.
+* Deployed in the backend pool of a basic public load balancer.
* Virtual machines with public IP addresses explicitly associated to them.
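As a sketch of the NAT gateway option above, using the Azure CLI (all resource and region names here are placeholders):

```shell
# Create a public IP and a NAT gateway, then attach the gateway to a subnet
# so VMs in that subnet get explicit outbound connectivity (names are examples).
az network public-ip create --resource-group my-rg --name natgw-pip --sku Standard
az network nat gateway create --resource-group my-rg --name my-natgw \
    --public-ip-addresses natgw-pip
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
    --name my-subnet --nat-gateway my-natgw
```

Once the subnet is associated with the NAT gateway, its VMs no longer rely on default outbound access.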
If you deploy a virtual machine in Azure and it doesn't have explicit outbound c
* Customers don't own the default outbound access IP. This IP might change, and any dependency on it could cause issues in the future. Some examples of configurations that won't work when using default outbound access:-- When you have multiple NICs on the same VM, note that default outbound IPs won't consistently be the same across all NICs.-- When scaling up/down Virtual Machine Scale sets, default outbound IPs assigned to individual instances can and will often change.
+- When you have multiple NICs on the same VM, default outbound IPs won't consistently be the same across all NICs.
+- When scaling a Virtual Machine Scale Set up or down, default outbound IPs assigned to individual instances can and often do change.
- Similarly, default outbound IPs aren't consistent or contiguous across VM instances in a Virtual Machine Scale Set. ## How can I transition to an explicit method of public connectivity (and disable default outbound access)?
There are multiple ways to turn off default outbound access. The following secti
* Existing subnets can't currently be converted to Private.
-* In configurations using a User Defined Route (UDR) with a default route (0/0) that sends traffic to an upstream firewall/network virtual appliance, any traffic that bypasses this route (e.g. to Service Tagged destinations) will break in a Private subnet.
+* In configurations using a User Defined Route (UDR) with a default route (0/0) that sends traffic to an upstream firewall/network virtual appliance, any traffic that bypasses this route (for example, to Service Tagged destinations) breaks in a Private subnet.
### Add an explicit outbound connectivity method
There are multiple ways to turn off default outbound access. The following secti
### Use Flexible orchestration mode for Virtual Machine Scale Sets
-* Flexible scale sets are secure by default. Any instances created via Flexible scale sets don't have the default outbound access IP associated with them, so an explicit outbound method is required. For more information, see [Flexible orchestration mode for Virtual Machine Scale Sets](../../virtual-machines/flexible-virtual-machine-scale-sets.md)
+* Flexible scale sets are secure by default. Any instances created via Flexible scale sets don't have the default outbound access IP associated with them, so an explicit outbound method is required. For more information, see [Flexible orchestration mode for Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#what-has-changed-with-flexible-orchestration-mode)
>[!Important] > When a load balancer backend pool is configured by IP address, it will use default outbound access due to an ongoing known issue. For secure by default configuration and applications with demanding outbound needs, associate a NAT gateway to the VMs in your load balancer's backend pool to secure traffic. See more on existing [known issues](../../load-balancer/whats-new.md#known-issues).
virtual-network Setup Dpdk Mana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk-mana.md
The setup procedure for MANA DPDK is outlined in the [example code.](#example-te
Legacy Azure Linux VMs rely on the mlx4 or mlx5 drivers and the accompanying hardware for accelerated networking. Azure DPDK users would select specific interfaces to include or exclude by passing bus addresses to the DPDK EAL. The setup procedure for MANA DPDK differs slightly, since the assumption of one bus address per Accelerated Networking interface no longer holds true. Rather than using a PCI bus address, the MANA PMD uses the MAC address to determine which interface it should bind to. ## MANA DPDK EAL Arguments
-The MANA PMD probes all devices and ports on the system when no `--vdev` argument is present; the `--vdev` argument is not mandatory. In testing environments it's often desirable to leave one (primary) interface available for servicing the SSH connection to the VM. To use DPDK with a subset of the available VFs, users should pass both the bus address of the MANA device and the MAC address of the interfaces in the `--vdev` argument. For more detail, example code is available to demonstrate [DPDK EAL initialization on MANA](#example-testpmd-setup-and-netvsc-test).
+The MANA PMD probes all devices and ports on the system when no `--vdev` argument is present; the `--vdev` argument isn't mandatory. In testing environments, it's often desirable to leave one (primary) interface available for servicing the SSH connection to the VM. To use DPDK with a subset of the available VFs, users should pass both the bus address of the MANA device and the MAC address of the interfaces in the `--vdev` argument. For more detail, example code is available to demonstrate [DPDK EAL initialization on MANA](#example-testpmd-setup-and-netvsc-test).
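To illustrate the MAC-based selection, the `--vdev` string can be assembled from `ip -br link` output; the interface name, MAC address, and bus address below are hypothetical:

```shell
# Parse sample `ip -br link` output to find the MAC of the interface to bind
# to DPDK (eth1 here is an assumption) and build the EAL --vdev argument.
sample='lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP>
eth0             UP             60:45:bd:00:11:22 <BROADCAST,MULTICAST,UP>
eth1             UP             00:0d:3a:76:3b:d0 <BROADCAST,MULTICAST,UP>'
MANA_MAC=$(printf '%s\n' "$sample" | awk '$1 == "eth1" { print $3 }')
BUS_INFO="7870:00:00.0"   # PCI bus address of the MANA device (example value)
echo "--vdev=$BUS_INFO,mac=$MANA_MAC"
```

On a live VM, replace the sample text with the real `ip -br link` output and confirm the chosen interface isn't the one servicing your SSH session.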
For general information about the DPDK Environment Abstraction Layer (EAL): - [DPDK EAL Arguments for Linux](https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html#eal-in-a-linux-userland-execution-environment)
MANA DPDK requires the following set of drivers:
1. [DPDK MANA poll-mode driver](https://github.com/DPDK/dpdk/tree/main/drivers/net/mana) (DPDK 22.11 and later) 1. [Libmana user-space drivers](https://github.com/linux-rdma/rdma-core/tree/master/providers/mana) (rdma-core v44 and later)
+### Supported Marketplace Images
+A nonexhaustive list of images with backported patches for DPDK with MANA:
+- Red Hat Enterprise Linux 8.9
+- Red Hat Enterprise Linux 9.4
+- Canonical Ubuntu Server 20.04 (5.15.0-1045-azure)
+- Canonical Ubuntu Server 22.04 (5.15.0-1045-azure)
+ >[!NOTE] >MANA DPDK is not available for Windows; it will only work on Linux VMs.
popd
Note the following example code for running DPDK with MANA. The direct-to-vf 'netvsc' configuration on Azure is recommended for maximum performance with MANA. >[!NOTE]
->DPDK requires either 2MB or 1GB hugepages to be enabled
+>DPDK requires either 2MB or 1GB hugepages to be enabled.
+>This example assumes an Azure VM with two Accelerated Networking NICs attached.
```bash # Enable 2MB hugepages.
echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind
dpdk-testpmd -l 1-3 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --txd=128 --rxd=128 --stats 2 # MANA multiple queue test (example assumes > 6 cores)
-dpdk-testpmd -l 1-9 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --nb-cores=8 --txd=128 --rxd=128 --txq=8 --rxq=8 --stats 2
+dpdk-testpmd -l 1-6 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --nb-cores=4 --txd=128 --rxd=128 --txq=8 --rxq=8 --stats 2
```
dpdk-testpmd -l 1-9 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --
### Fail to set interface down. Failure to set the MANA-bound device to DOWN can result in low or zero packet throughput. Failure to release the device can result in EAL error messages related to transmit queues.
-```
+```log
mana_start_tx_queues(): Failed to create qp queue index 0 mana_dev_start(): failed to start tx queues -19 ```
mana_dev_start(): failed to start tx queues -19
### Failure to enable huge pages. Try enabling huge pages and ensuring the information is visible in meminfo.
-```
+```log
EAL: No free 2048 kB hugepages reported on node 0 EAL: FATAL: Cannot get hugepage information. EAL: Cannot get hugepage information.
EAL: Error - exiting with code: 1
Cause: Cannot init EAL: Permission denied ```
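A quick check that hugepages are actually available (Linux only; a sketch):

```shell
# HugePages_Free must be nonzero for the DPDK EAL to initialize;
# Hugepagesize shows whether 2MB or 1GB pages are configured.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```

If the counters are zero, re-run the hugepage setup from the example above and verify the values persist.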
-### Low throughput with use of --vdev="net_vdev_netvsc0,iface=eth1"
+### Low throughput with use of `--vdev="net_vdev_netvsc0,iface=eth1"`
Failover configuration of either the `net_failsafe` or `net_vdev_netvsc` poll-mode-drivers isn't recommended for high performance on Azure. The netvsc configuration with DPDK version 20.11 or higher may give better results. For optimal performance, ensure your Linux kernel, rdma-core, and DPDK packages meet the listed requirements for DPDK and MANA.+
+### Version mismatch for rdma-core
+Mismatches between rdma-core and the Linux kernel can occur at any time; they often occur when a user builds some combination of rdma-core, DPDK, and the Linux kernel from source. This type of version mismatch can cause a failed probe of the MANA virtual function (VF).
+
+```log
+EAL: Probe PCI driver: net_mana (1414:ba) device: 7870:00:00.0 (socket 0)
+mana_arg_parse_callback(): key=mac value=00:0d:3a:76:3b:d0 index=0
+mana_init_once(): MP INIT PRIMARY
+mana_pci_probe_mac(): Probe device name mana_0 dev_name uverbs0 ibdev_path /sys/class/infiniband/mana_0
+mana_probe_port(): device located port 2 address 00:0D:3A:76:3B:D0
+mana_probe_port(): ibv_alloc_parent_domain failed port 2
+mana_pci_probe_mac(): Probe on IB port 2 failed -12
+EAL: Requested device 7870:00:00.0 cannot be used
+EAL: Bus (pci) probe failed.
+hn_vf_attach(): Couldn't find port for VF
+hn_vf_add(): RNDIS reports VF but device not found, retrying
+
+```
+This likely results from using a kernel with backported patches for mana_ib with a newer version of rdma-core. The root cause is an interaction between the kernel RDMA drivers and user space rdma-core libraries.
+
+The Linux kernel uapi for RDMA has a list of RDMA provider IDs. In backported versions of the kernel, this ID value can differ from the value in the rdma-core libraries.
+> [!NOTE]
+> Example snippets are from [Ubuntu 5.15.0-1045 linux-azure](https://git.launchpad.net/~canonical-kernel/ubuntu/+source/linux-azure/+git/focal/tree/include/uapi/rdma/ib_user_ioctl_verbs.h?h=azure-5.15-next) and [rdma-core v46.0](https://github.com/linux-rdma/rdma-core/blob/4cce53f5be035137c9d31d28e204502231a56382/kernel-headers/rdma/ib_user_ioctl_verbs.h#L220).
+```c
+// Linux kernel header
+// include/uapi/rdma/ib_user_ioctl_verbs.h
+enum rdma_driver_id {
+ RDMA_DRIVER_UNKNOWN,
+ RDMA_DRIVER_MLX5,
+ RDMA_DRIVER_MLX4,
+ RDMA_DRIVER_CXGB3,
+ RDMA_DRIVER_CXGB4,
+ RDMA_DRIVER_MTHCA,
+ RDMA_DRIVER_BNXT_RE,
+ RDMA_DRIVER_OCRDMA,
+ RDMA_DRIVER_NES,
+ RDMA_DRIVER_I40IW,
+ RDMA_DRIVER_IRDMA = RDMA_DRIVER_I40IW,
+ RDMA_DRIVER_VMW_PVRDMA,
+ RDMA_DRIVER_QEDR,
+ RDMA_DRIVER_HNS,
+ RDMA_DRIVER_USNIC,
+ RDMA_DRIVER_RXE,
+ RDMA_DRIVER_HFI1,
+ RDMA_DRIVER_QIB,
+ RDMA_DRIVER_EFA,
+ RDMA_DRIVER_SIW,
+ RDMA_DRIVER_MANA, //<- MANA added as last member of enum after backporting
+};
+
+// Example mismatched rdma-core ioctl verbs header
+// on github: kernel-headers/rdma/ib_user_ioctl_verbs.h
+// or in release tar.gz: include/rdma/ib_user_ioctl_verbs.h
+enum rdma_driver_id {
+ RDMA_DRIVER_UNKNOWN,
+ RDMA_DRIVER_MLX5,
+ RDMA_DRIVER_MLX4,
+ RDMA_DRIVER_CXGB3,
+ RDMA_DRIVER_CXGB4,
+ RDMA_DRIVER_MTHCA,
+ RDMA_DRIVER_BNXT_RE,
+ RDMA_DRIVER_OCRDMA,
+ RDMA_DRIVER_NES,
+ RDMA_DRIVER_I40IW,
+ RDMA_DRIVER_IRDMA = RDMA_DRIVER_I40IW,
+ RDMA_DRIVER_VMW_PVRDMA,
+ RDMA_DRIVER_QEDR,
+ RDMA_DRIVER_HNS,
+ RDMA_DRIVER_USNIC,
+ RDMA_DRIVER_RXE,
+ RDMA_DRIVER_HFI1,
+ RDMA_DRIVER_QIB,
+ RDMA_DRIVER_EFA,
+ RDMA_DRIVER_SIW,
+ RDMA_DRIVER_ERDMA, // <- This upstream has two additional providers
+ RDMA_DRIVER_MANA, // <- So MANA's ID in the enum does not match
+};
+```
+
+This mismatch causes the MANA provider code to fail to load. Use `gdb` to trace the execution of `dpdk-testpmd` and confirm that the ERDMA provider is loaded instead of the MANA provider. The MANA `driver_id` must be identical in the kernel and in rdma-core; the MANA PMD loads correctly when the two IDs match.
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
# Set up DPDK in a Linux virtual machine
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Data Plane Development Kit (DPDK) on Azure offers a faster user-space packet processing framework for performance-intensive applications. This framework bypasses the virtual machine's kernel network stack.
virtual-network Virtual Network Bandwidth Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-bandwidth-testing.md
# Test VM network throughput by using NTTTCP
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to use the free NTTTCP tool from Microsoft to test network bandwidth and throughput performance on Azure Windows or Linux virtual machines (VMs). A tool like NTTTCP targets the network for testing and minimizes the use of other resources that could affect performance.
virtual-network Virtual Network Optimize Network Bandwidth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-optimize-network-bandwidth.md
# Optimize network throughput for Azure virtual machines
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Azure Virtual Machines (VMs) have default network settings that can be further optimized for network throughput. This article describes how to optimize network throughput for Microsoft Azure Windows and Linux VMs, including major distributions such as Ubuntu, CentOS, and Red Hat.
virtual-network Virtual Network Test Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-test-latency.md
# Test network latency between Azure VMs
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to test network latency between Azure virtual machines (VMs) by using the publicly available tools [Latte](https://github.com/microsoft/latte) for Windows or [SockPerf](https://github.com/mellanox/sockperf) for Linux.
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
# Name resolution for resources in Azure virtual networks
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Azure can be used to host IaaS, PaaS, and hybrid solutions. To facilitate communication between the virtual machines (VMs) and other resources deployed in a virtual network, it may be necessary to allow them to communicate with each other. Using easily remembered, unchanging names simplifies this communication, rather than relying on IP addresses.
vpn-gateway Vpn Gateway Validate Throughput To Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-validate-throughput-to-vnet.md
# How to validate VPN throughput to a virtual network
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
A VPN gateway connection enables you to establish secure, cross-premises connectivity between your Virtual Network within Azure and your on-premises IT infrastructure.
web-application-firewall Migrate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/migrate-policy.md
function createNewTopLevelWafPolicy ($subscriptionId, $resourceGroupName, $appli
if ($appgw.FirewallPolicy) {
    $customRulePolicyId = $appgw.FirewallPolicy.Id
- $rg = Get-AzResourceGroup -Id $customRulePolicyId
+ $rg = Get-AzResourceGroup -Name $resourceGroupName
    $crPolicyName = $customRulePolicyId.Substring($customRulePolicyId.LastIndexOf("/") + 1)
    $customRulePolicy = Get-AzApplicationGatewayFirewallPolicy -ResourceGroupName $rg.ResourceGroupName -Name $crPolicyName
    $wafPolicy = New-AzApplicationGatewayFirewallPolicy -ResourceGroupName $rg.ResourceGroupName -Name $wafPolicyName -CustomRule $customRulePolicy.CustomRules -ManagedRule $managedRule -PolicySetting $policySetting -Location $appgw.Location