Updates from: 08/16/2024 01:05:56
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md
Starting with version `2024-07-31-preview`, you can train your custom neural mod
You can choose to spend all 10 free hours on a single model build with a large set of data, or spread them across multiple builds by adjusting the maximum duration for the `build` operation with `maxTrainingHours`:

```bash
+POST https://{endpoint}/documentintelligence/documentModels:build?api-version=2024-07-31-preview
-POST /documentModels:build
{
+  "modelId": "string",
+  "description": "string",
+  "buildMode": "neural",
  ...,
  "maxTrainingHours": 10
}
```
> * If you would like to train additional neural models or train models for a longer time period that **exceeds 10 hours**, billing charges apply. For details on the billing charges, refer to the [pricing page](https://azure.microsoft.com/pricing/details/ai-document-intelligence/).
> * You can opt in to this paid training service by setting `maxTrainingHours` to the desired maximum number of hours. API calls with no budget but with `maxTrainingHours` set to more than 10 hours will fail.
> * Because each build takes a different amount of time depending on the type and size of the training dataset, billing is calculated for the actual time spent training the neural model, with a minimum of 30 minutes per training job.
-> * This paid billing structure enables you to train larger data sets for longer durations with flexibility in the training hours.
+> * This paid training feature enables you to train larger data sets for longer durations with flexibility in the training hours.
```bash
GET /documentModels/{myCustomModel}
```
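For reference, the request above can be sent fully qualified with `curl`. This is a minimal sketch that assumes key-based authentication; the endpoint, model ID, and key values are placeholders.

```bash
# Placeholder endpoint, model ID, and key — replace with your own values.
ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
MODEL_ID="myCustomModel"
KEY="<your-document-intelligence-key>"

# Retrieve the model's details, including its build metadata.
curl -s "${ENDPOINT}/documentintelligence/documentModels/${MODEL_ID}?api-version=2024-07-31-preview" \
  -H "Ocp-Apim-Subscription-Key: ${KEY}"
```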
## Billing
-For Document Intelligence versions `v3.1 (2023-07-31) and v3.0 (2022-08-31)`, you receive a maximum 30 minutes of training duration per model, and a maximum of 20 trainings for free per month. If you would like to train more than 20 model instances, you can create an [Azure support ticket](service-limits.md#create-and-submit-support-request) to increase in the training limit. For the Azure support ticket, enter in the `summary` section a phrase such as `Increase Document Intelligence custom neural training (TPS) limit`. A ticket can only apply at a resource-level, not a subscription level. You can request a training limit increase for a single Document Intelligence resource by specifying your resource ID and region in the support ticket.
+For Document Intelligence versions `v3.1 (2023-07-31) and v3.0 (2022-08-31)`, you receive a maximum of 30 minutes of training duration per model, and a maximum of 20 trainings for free per month. If you would like to train more than 20 model instances, you can create an [Azure support ticket](service-limits.md#create-and-submit-support-request) to increase the training limit. For the Azure support ticket, enter the following in the `summary` field: `Increase Document Intelligence custom neural training (TPS) limit`.
+
+> [!IMPORTANT]
+> * When increasing the training limit, note that two custom neural model training sessions are counted as one training hour. For details on the pricing for increasing the number of training sessions, refer to the [pricing page](https://azure.microsoft.com/pricing/details/ai-document-intelligence/).
+> * An Azure support ticket for a training limit increase applies only at the **resource level**, not the subscription level. You can request a training limit increase for a single Document Intelligence resource by specifying your resource ID and region in the support ticket.
If you want to train models for longer than 30 minutes, we support **paid training** with our newest version, `v4.0 (2024-07-31-preview)`. Using the latest version, you can train your model for a longer duration to process larger documents. For more information about paid training, *see* [Billing v4.0](service-limits.md#billing).
If you want to train models for longer durations than 30 minutes, we support **p
## Billing
-For Document Intelligence versions `v3.1 (2023-07-31) and v3.0 (2022-08-31)`, you receive a maximum 30 minutes of training duration per model, and a maximum of 20 trainings for free per month. If you would like to train more than 20 model instances, you can create an [Azure support ticket](service-limits.md#create-and-submit-support-request) to increase in the training limit. For the Azure support ticket, enter in the `summary` section a phrase such as `Increase Document Intelligence custom neural training (TPS) limit`. A ticket can only apply at a resource-level, not a subscription level. You can request a training limit increase for a single Document Intelligence resource by specifying your resource ID and region in the support ticket.
+For Document Intelligence versions `v3.1 (2023-07-31) and v3.0 (2022-08-31)`, you receive a maximum of 30 minutes of training duration per model, and a maximum of 20 trainings for free per month. If you would like to train more than 20 model instances, you can create an [Azure support ticket](service-limits.md#create-and-submit-support-request) to increase the training limit. For the Azure support ticket, enter the following in the `summary` field: `Increase Document Intelligence custom neural training (TPS) limit`.
+
+> [!IMPORTANT]
+> * When increasing the training limit, note that two custom neural model training sessions are counted as one training hour. For details on the pricing for increasing the number of training sessions, refer to the [pricing page](https://azure.microsoft.com/pricing/details/ai-document-intelligence/).
+> * An Azure support ticket for a training limit increase applies only at the **resource level**, not the subscription level. You can request a training limit increase for a single Document Intelligence resource by specifying your resource ID and region in the support ticket.
If you want to train models for longer than 30 minutes, we support **paid training** with our newest version, `v4.0 (2024-07-31)`. Using the latest version, you can train your model for a longer duration to process larger documents. For more information about paid training, *see* [Billing v4.0](service-limits.md#billing).
ai-services Provisioned Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-get-started.md
When you deploy a specified number of provisioned throughput units (PTUs), a set
PTU deployment utilization = (PTUs consumed in the time period) / (PTUs deployed in the time period)
-You can find the utilization measure in the Azure-Monitor section for your resource. To access the monitoring dashboards sign-in to [https://portal.azure.com](https://portal.azure.com), go to your Azure OpenAI resource and select the Metrics page from the left nav. On the metrics page, select the 'Provisioned-managed utilization' measure. If you have more than one deployment in the resource, you should also split the values by each deployment by clicking the 'Apply Splitting' button.
+You can find the utilization measure in the Azure Monitor section for your resource. To access the monitoring dashboards, sign in to [https://portal.azure.com](https://portal.azure.com), go to your Azure OpenAI resource, and select the Metrics page from the left nav. On the metrics page, select the 'Provisioned-managed utilization V2' metric. If you have more than one deployment in the resource, you should also split the values by each deployment by selecting the 'Apply Splitting' button.
:::image type="content" source="../media/provisioned/azure-monitor-utilization.jpg" alt-text="Screenshot of the provisioned managed utilization on the resource's metrics blade in the Azure portal." lightbox="../media/provisioned/azure-monitor-utilization.jpg":::
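If you prefer to query the same utilization data from the command line, the sketch below uses the Azure CLI. The metric name (`AzureOpenAIProvisionedManagedUtilizationV2`) and the splitting dimension (`ModelDeploymentName`) are assumptions; confirm the exact names with the metric-definitions command first.

```bash
# Placeholder resource ID for your Azure OpenAI resource.
RESOURCE_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<aoai-resource>"

# List available metric definitions to confirm the exact metric and dimension names.
az monitor metrics list-definitions --resource "$RESOURCE_ID" --output table

# Query the provisioned-managed utilization metric, averaged per minute and split by deployment.
az monitor metrics list \
  --resource "$RESOURCE_ID" \
  --metric "AzureOpenAIProvisionedManagedUtilizationV2" \
  --interval PT1M \
  --aggregation Average \
  --filter "ModelDeploymentName eq '*'"
```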
ai-services Use Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md
Previously updated : 05/09/2024 Last updated : 08/09/2024 recommendations: false # Use the Azure OpenAI web app
-Along with Azure OpenAI Studio, APIs, and SDKs, you can use the available standalone web app to interact with Azure OpenAI Service models by using a graphical user interface. You can deploy the app by using either Azure OpenAI Studio or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT).
+Along with Azure AI Studio, Azure OpenAI Studio, APIs, and SDKs, you can use the customizable standalone web app to interact with Azure OpenAI models by using a graphical user interface. Key features include:
+* Connectivity with multiple data sources to support rich querying and retrieval-augmented generation, including Azure AI Search, Prompt Flow, and more.
+* Conversation history and user feedback collection through Cosmos DB.
+* Authentication with role-based access control via Microsoft Entra ID.
+* Customization of the user interface, data sources, and features using environment variables (no-code via Azure portal).
+* Support for modifying the underlying web application source code as an open-source repository.
+
+You can deploy the app by using [Azure AI Studio](/azure/ai-studio/tutorials/deploy-chat-web-app) or [Azure OpenAI Studio](/azure/ai-services/openai/use-your-data-quickstart), or manually through the Azure portal or the Azure Developer CLI from your local machine ([instructions are available in the repository](https://github.com/microsoft/sample-app-aoai-chatGPT)). Depending on your deployment channel, you can preload a data source to chat with via the web application; this can be changed after deployment.
+
+For Azure OpenAI beginners who want to chat with their data through the web application, [Azure AI Studio](/azure/ai-studio/tutorials/deploy-chat-web-app) is the recommended starting point for initial deployment and data source configuration.
![Screenshot that shows the web app interface.](../media/use-your-data/web-app.png)

## Important considerations

-- Publishing creates an Azure App Service instance in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) that you select. When finished with your app, you can delete it from the Azure portal.
-- GPT-4 Turbo with Vision models aren't supported.
+- This web application and many of its features are in preview, meaning that bugs might occur and that not all features might be complete. If you find a bug or require assistance, raise an issue in the associated [GitHub repository](https://github.com/microsoft/sample-app-aoai-chatGPT).
+- Publishing a web app creates an Azure App Service instance in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) that you select. When you're done with your app, you can delete it and any associated resources from the Azure portal.
+- GPT-4 Turbo with Vision models are not currently supported.
- By default, the app is deployed with the Microsoft identity provider already configured. The identity provider restricts access to the app to members of your Azure tenant. To add or modify authentication:

  1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name that you specified during publishing. Select the web app, and then select **Authentication** on the left menu. Then select **Add identity provider**.
Along with Azure OpenAI Studio, APIs, and SDKs, you can use the available standa
1. Select Microsoft as the identity provider. The default settings on this page restrict the app to your tenant only, so you don't need to change anything else here. Select **Add**.
- Now users are asked to sign in with their Microsoft Entra account to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying that the user is a member of your tenant.
+Now users will be asked to sign in with their Microsoft Entra account to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying that the user is a member of your tenant. For more information on managing authentication, view this [quickstart on authentication for web apps on Azure App Service.](/azure/app-service/scenario-secure-app-authentication-app-service)
-## Web app customization
+## Customizing the application using environment variables
You can customize the app's front-end and back-end logic. The app provides several [environment variables](https://github.com/microsoft/sample-app-aoai-chatGPT#common-customization-scenarios-eg-updating-the-default-chat-logo-and-headers) for common customization scenarios such as changing the icon in the app.
+These environment variables can be modified through the Azure portal after you deploy the web application, as described in the following steps; a command-line alternative is sketched after the steps.
+1. In the Azure portal, search for and select the **App Services** page.
+2. Select the web app that you just deployed.
+3. In the left menu of the app, select **Settings** > **Environment variables**.
+4. To modify an existing environment variable, select its name.
+5. To add a single new environment variable, select **Add** in the panel's top menu bar.
+6. To use the JSON-based editor to manage environment variables, select **Advanced edit**.
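As a command-line alternative to the portal steps above, the same settings can be managed with the Azure CLI. This is a sketch with placeholder app and resource group names; the two variables shown are examples drawn from the UI variables described later in this article.

```bash
# Placeholder names for the web app and resource group created during deployment.
APP_NAME="<your-web-app-name>"
RESOURCE_GROUP="<your-resource-group>"

# Set (or update) one or more environment variables as App Service app settings.
az webapp config appsettings set \
  --name "$APP_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --settings UI_TITLE="Contoso chat" UI_SHOW_SHARE_BUTTON="False"

# List the current settings to verify the change.
az webapp config appsettings list --name "$APP_NAME" --resource-group "$RESOURCE_GROUP" --output table
```

Changing app settings restarts the web app, so expect a brief interruption after each update.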
When you're customizing the app, we recommend:
- Clearly communicating how each setting that you implement affects the user experience.
When you're customizing the app, we recommend:
Sample source code for the web app is available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT). Source code is provided "as is" and as a sample only. Customers are responsible for all customization and implementation of their web apps.
-## Updating the web app
+## Modifying the application user interface
+
+The environment variables relevant to user interface customization are:
+- `UI_CHAT_DESCRIPTION`: This is the smaller paragraph text shown below the `UI_CHAT_TITLE` in the center of the page upon loading.
+ - Data type: text
+- `UI_CHAT_LOGO`: This is the large image shown in the center of the page upon loading.
+ - Data type: URL to image
+- `UI_CHAT_TITLE`: This is the large text shown in the center of the page upon loading.
+ - Data type: text
+- `UI_FAVICON`: This is the favicon shown on the browser window/tab.
+ - Data type: URL to image
+- `UI_LOGO`: This is the logo that appears in the top left of the page, to the left of the title.
+ - Data type: URL to image
+- `UI_TITLE`: This is the title shown on the browser window/tab. It also appears in the top left of the page by the logo.
+ - Data type: text
+- `UI_SHOW_SHARE_BUTTON`: This button appears on the top right of the page and allows users to share a URL linking to the web app.
+ - Data type: Boolean; must be either True or False. Defaults to True if left blank or unspecified.
+- `UI_SHOW_CHAT_HISTORY_BUTTON`: This button appears on the top right of the page, to the left of the `UI_SHOW_SHARE_BUTTON` button.
+ - Data type: Boolean; must be either True or False. Defaults to True if left blank or unspecified.
+
+To modify the application user interface, follow the instructions in the previous section to open the environment variables page for your web app. Then, use **Advanced edit** to open the JSON-based editor. At the top of the JSON (after the `[` character), paste the following code block and customize the values accordingly:
+```json
+ {
+ "name": "UI_CHAT_DESCRIPTION",
+ "value": "This is an example of a UI Chat Description. Chatbots can make mistakes. Check important info and sensitive info.",
+ "slotSetting": false
+ },
+ {
+ "name": "UI_CHAT_LOGO",
+ "value": "https://learn-bot.azurewebsites.net/assets/Contoso-ff70ad88.svg",
+ "slotSetting": false
+ },
+ {
+ "name": "UI_CHAT_TITLE",
+ "value": "This is an example of a UI Chat Title. Start chatting",
+ "slotSetting": false
+ },
+ {
+ "name": "UI_FAVICON",
+ "value": "https://learn-bot.azurewebsites.net/assets/Contoso-ff70ad88.svg",
+ "slotSetting": false
+ },
+ {
+ "name": "UI_LOGO",
+ "value": "https://learn-bot.azurewebsites.net/assets/Contoso-ff70ad88.svg",
+ "slotSetting": false
+ },
+ {
+ "name": "UI_TITLE",
+ "value": "This is an example of a UI Title",
+ "slotSetting": false
+ },
+```
+
+## Enabling chat history using Cosmos DB
+
+You can turn on chat history for your users of the web app. When you turn on the feature, users have access to their individual previous queries and responses.
+
+To turn on chat history, deploy or redeploy your model as a web app by using [Azure OpenAI Studio](https://oai.azure.com/portal) or [Azure AI Studio](https://ai.azure.com/) and select **Enable chat history and user feedback in the web app**.
++
+> [!IMPORTANT]
+> Turning on chat history creates an [Azure Cosmos DB](/azure/cosmos-db/introduction) instance in your resource group, and it incurs [additional charges](https://azure.microsoft.com/pricing/details/cosmos-db/autoscale-provisioned/) for the storage that you use beyond any free tiers.
+
+After you turn on chat history, your users can show and hide it in the upper-right corner of the app. When users show chat history, they can rename or delete conversations. You can modify whether users can access this function using the environment variable `UI_SHOW_CHAT_HISTORY_BUTTON` as specified in the previous section. Because the users are signed in to the app, conversations are automatically ordered from newest to oldest. Conversations are named based on the first query in the conversation.
+
+> [!NOTE]
+> Popular Azure regions such as East US can experience periods of high demand where it might not be possible to deploy a new instance of Cosmos DB. In that case, opt to deploy to an alternative region such as East US 2, or retry your deployment until it succeeds. If the deployment of Cosmos DB fails, your app will be available at its specified URL, but chat history won't be available. Enabling conversation history also enables the view conversation history button in the top-right.
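If you want to confirm from the command line whether the Cosmos DB account was deployed successfully, the following Azure CLI sketch checks its provisioning state; the account and resource group names are placeholders.

```bash
# Check the provisioning state of the chat-history Cosmos DB account (placeholder names).
az cosmosdb show \
  --name "<your-cosmosdb-account-name>" \
  --resource-group "<your-resource-group>" \
  --query "provisioningState" \
  --output tsv
```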
+
+Deploying with the chat history option selected automatically populates the following environment variables, so there's no need to modify them unless you want to switch Cosmos DB instances:
+- `AZURE_COSMOSDB_ACCOUNT`: This is the name of the Cosmos DB account that is deployed along with your web app.
+ - Data type: text
+- `AZURE_COSMOSDB_ACCOUNT_KEY`: This is an alternative environment variable that is used only when permissions are not granted via Microsoft Entra ID and key-based authentication is used instead.
+ - Data type: text. This variable is normally not present or populated.
+- `AZURE_COSMOSDB_DATABASE`: This is the name of the database object within Cosmos DB that is deployed along with your web app.
+ - Data type: text, should be `db_conversation_history`
+- `AZURE_COSMOSDB_CONTAINER`: This is the name of the database container object within Cosmos DB that is deployed along with your web app.
+ - Data type: text, should be `conversations`
++
+### Collecting user feedback
+
+To collect user feedback, you can enable a set of 'thumbs up' and 'thumbs down' icons that appear on each of the chatbot's responses. This will allow users to evaluate a response's quality, and indicate where errors occur using a 'provide negative feedback' modal window.
+
+To enable this feature, set the following environment variable to True:
+- `AZURE_COSMOSDB_ENABLE_FEEDBACK`: This enables the user feedback feature ('thumbs up' and 'thumbs down' icons) on the chatbot's responses.
+ - Data type: Boolean; must be either True or False
+
+This can be accomplished using the Advanced edit or simple Edit options as previously explained. The JSON to paste in the Advanced edit JSON editor is:
+```json
+ {
+ "name": "AZURE_COSMOSDB_ENABLE_FEEDBACK",
+ "value": "True",
+ "slotSetting": false
+ },
+```
+
+## Connecting to Azure AI Search and uploaded files as a data source
+
+### Using Azure AI Studio
+
+Follow [this tutorial on integrating Azure AI Search with AI Studio](/azure/ai-studio/tutorials/deploy-chat-web-app#add-your-data-and-try-the-chat-model-again) and redeploy your application.
+
+### Using Azure OpenAI Studio
+
+Follow [this tutorial on integrating Azure AI Search with OpenAI Studio](/azure/ai-services/openai/use-your-data-quickstart#add-your-data-using-azure-openai-studio) and redeploy your application.
+
+### Using environment variables
+
+To connect to Azure AI Search without redeploying your app, you can modify the following mandatory environment variables by using any of the editing options described previously (a CLI sketch follows the list).
+- `DATASOURCE_TYPE`: This determines which data source to use when answering a user's queries.
+ - Data type: text. Should be set to `AzureCognitiveSearch` (former name for Azure AI Search)
+- `AZURE_SEARCH_SERVICE`: This is the name of your Azure AI Search instance.
+ - Data type: text
+- `AZURE_SEARCH_INDEX`: This is the name of the index in your Azure AI Search instance.
+ - Data type: text
+- `AZURE_SEARCH_KEY`: This is the authentication key of your Azure AI Search instance. Optional if using Microsoft Entra ID for authentication.
+ - Data type: text
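As a sketch of the environment-variable route, the four mandatory settings can be applied in a single Azure CLI call; all names and values here are placeholders.

```bash
# Placeholder values — substitute your web app, resource group, and Azure AI Search details.
az webapp config appsettings set \
  --name "<your-web-app-name>" \
  --resource-group "<your-resource-group>" \
  --settings \
    DATASOURCE_TYPE="AzureCognitiveSearch" \
    AZURE_SEARCH_SERVICE="<search-service-name>" \
    AZURE_SEARCH_INDEX="<index-name>" \
    AZURE_SEARCH_KEY="<search-key>"
```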
+
+### Further customization scenarios using environment variables
+
+- `AZURE_SEARCH_USE_SEMANTIC_SEARCH`: Indicates whether to use semantic search in Azure AI Search.
+ - Data type: boolean, should be set to `False` if not using semantic search.
+- `AZURE_SEARCH_SEMANTIC_SEARCH_CONFIG`: Specifies the name of the semantic search configuration to use if semantic search is enabled.
+ - Data type: text, defaults to `azureml-default`.
+- `AZURE_SEARCH_INDEX_TOP_K`: Defines the number of top documents to retrieve from Azure AI Search.
+ - Data type: integer, should be set to `5`.
+- `AZURE_SEARCH_ENABLE_IN_DOMAIN`: Limits responses to queries related only to your data.
+ - Data type: boolean, should be set to `True`.
+- `AZURE_SEARCH_CONTENT_COLUMNS`: Specifies the list of fields in your Azure AI Search index that contain the text content of your documents, used when formulating a bot response.
+ - Data type: text, defaults to `content` if deployed from Azure AI Studio or Azure OpenAI Studio.
+- `AZURE_SEARCH_FILENAME_COLUMN`: Specifies the field from your Azure AI Search index that provides a unique identifier of the source data to display in the UI.
+ - Data type: text, defaults to `filepath` if deployed from Azure AI Studio or Azure OpenAI Studio.
+- `AZURE_SEARCH_TITLE_COLUMN`: Specifies the field from your Azure AI Search index that provides a relevant title or header for your data content to display in the UI.
+ - Data type: text, defaults to `title` if deployed from Azure AI Studio or Azure OpenAI Studio.
+- `AZURE_SEARCH_URL_COLUMN`: Specifies the field from your Azure AI Search index that contains a URL for the document.
+ - Data type: text, defaults to `url` if deployed from Azure AI Studio or Azure OpenAI Studio.
+- `AZURE_SEARCH_VECTOR_COLUMNS`: Specifies the list of fields in your Azure AI Search index that contain vector embeddings of your documents, used when formulating a bot response.
+ - Data type: text, defaults to `contentVector` if deployed from Azure AI Studio or Azure OpenAI Studio.
+- `AZURE_SEARCH_QUERY_TYPE`: Specifies the query type to use: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, or `vectorSemanticHybrid`. This setting takes precedence over `AZURE_SEARCH_USE_SEMANTIC_SEARCH`.
+ - Data type: text, we recommend testing with `vectorSemanticHybrid`.
+- `AZURE_SEARCH_PERMITTED_GROUPS_COLUMN`: Specifies the field from your Azure AI Search index that contains Microsoft Entra group IDs, determining document-level access control.
+ - Data type: text
+- `AZURE_SEARCH_STRICTNESS`: Specifies the strictness level for the model limiting responses to your data.
+ - Data type: integer, should be set between `1` and `5`, with `3` being recommended.
+- `AZURE_OPENAI_EMBEDDING_NAME`: Specifies the name of your embedding model deployment if using vector search.
+ - Data type: text
+
+The JSON to paste in the Advanced edit JSON editor is:
+```json
+{
+ "name": "AZURE_SEARCH_CONTENT_COLUMNS",
+ "value": "",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_ENABLE_IN_DOMAIN",
+ "value": "true",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_FILENAME_COLUMN",
+ "value": "",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_INDEX",
+ "value": "",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_KEY",
+ "value": "",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_PERMITTED_GROUPS_COLUMN",
+ "value": "",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_QUERY_TYPE",
+ "value": "vectorSemanticHybrid",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_SEMANTIC_SEARCH_CONFIG",
+ "value": "azureml-default",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_SERVICE",
+ "value": "",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_STRICTNESS",
+ "value": "3",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_TITLE_COLUMN",
+ "value": "",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_TOP_K",
+ "value": "5",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_URL_COLUMN",
+ "value": "",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_USE_SEMANTIC_SEARCH",
+ "value": "true",
+ "slotSetting": false
+ },
+ {
+ "name": "AZURE_SEARCH_VECTOR_COLUMNS",
+ "value": "contentVector",
+ "slotSetting": false
+ },
+```
+
+## Connecting to Prompt Flow as a data source
+
+[Prompt flows](/azure/ai-studio/how-to/flow-develop) allow you to define highly customizable retrieval-augmented generation (RAG) and processing logic for a user's queries.
+
+### Creating and deploying your prompt flow in Azure AI Studio
+
+Follow [this tutorial](/azure/ai-studio/tutorials/deploy-copilot-ai-studio) to create, test, and deploy an inferencing endpoint for your prompt flow in Azure AI Studio.
+
+### Enable underlying citations from your prompt flow
+
+When you configure your prompt flow to display citations when it's integrated with this web application, it must return two key outputs: one called `documents` (your citations) and one called `reply` (your natural language answer).
+1. `documents` is a JSON object that should contain the following elements. `citations` is a list that can contain multiple items following the same schema. The `documents` object should be generated and populated based on your selected RAG pattern.
+```json
+{
+ "citations": [
+ {
+ "content": "string",
+ "id": 12345,
+ "title": "string",
+ "filepath": "string",
+ "url": "string",
+ "metadata": "string",
+ "chunk_id": null,
+ "reindex_id": null,
+ "part_index": null
+ }
+ ],
+ "intent": "Your_string_here"
+}
+```
+++
+2. `reply` consists of a returned string that represents the final natural language answer to a given user query. Your `reply` must contain references to each of the documents (sources) in the following format: `[doc1], [doc2]`, and so on. The web application parses `reply` and processes the references, replacing all instances of `[doc1]` with small superscript numeric indicators that link directly to the ordered `documents` that are returned. Hence, you must prompt the LLM that generates the final natural language answer to include these references, and the references should also be passed in your LLM call so that they align correctly. For example:
+```text
+system:
+You are a helpful chat assistant that answers a user's question based on the information retrieved from a data source.
+
+YOU MUST ALWAYS USE CITATIONS FOR ALL FACTUAL RESPONSES. YOU MUST INCLUDE CITATIONS IN YOUR ANSWER IN THE FORMAT [doc1], [doc2], ... AND SO FORTH WHEN YOU ARE USING INFORMATION RELATING TO SAID SOURCE. THIS MUST BE RETURNED IN YOUR ANSWER.
+
+Provide short and concise answers with details directly related to the query.
+
+## Conversation history for context
+{% for item in chat_history %}
+user:
+{{item.inputs.query}}
+
+assistant:
+{{item.outputs.reply}}
+{% endfor %}
+
+## Current question
+user:
+### HERE ARE SOME CITED SOURCE INFORMATION FROM A MOCKED API TO ASSIST WITH ANSWERING THE QUESTION BELOW. ANSWER ONLY BASED ON THE TRUTHS PRESENTED HERE.
+{{your_input_name_for_documents}}
+FOR EACH OF THE CITATIONS ABOVE, YOU MUST INCLUDE IN YOUR ANSWER [doc1], [doc2], ... AND SO FORTH WHEN YOU ARE USING INFORMATION RELATING TO SAID SOURCE. THIS MUST BE RETURNED IN YOUR ANSWER.
+### HERE IS THE QUESTION TO ANSWER.
+{{question}}
+
+```
+
+### Configuring environment variables to integrate prompt flow
+
+The environment variables to modify are as follows (a CLI sketch for setting them appears after the list):
+- `AZURE_OPENAI_STREAM`: This determines whether the answer is loaded in a streaming (incremental load) format. This is not supported for prompt flow and thus must be set to `False` to use this feature.
+ - Data type: boolean, set to `True` if not using prompt flow, `False` if using prompt flow
+- `USE_PROMPTFLOW`: Indicates whether to use an existing Prompt flow deployed endpoint. If set to `True`, both `PROMPTFLOW_ENDPOINT` and `PROMPTFLOW_API_KEY` must be set.
+ - Data type: boolean, should be set to `False` if not using Prompt flow.
+- `PROMPTFLOW_ENDPOINT`: Specifies the URL of the deployed Prompt flow endpoint.
+ - Data type: text, for example `https://pf-deployment-name.region.inference.ml.azure.com/score`
+- `PROMPTFLOW_API_KEY`: The authentication key for the deployed Prompt flow endpoint. Note: only Key-based authentication is supported.
+ - Data type: text
+- `PROMPTFLOW_RESPONSE_TIMEOUT`: Defines the timeout value in seconds for the Prompt flow endpoint to respond.
+ - Data type: integer, should be set to `120`.
+- `PROMPTFLOW_REQUEST_FIELD_NAME`: The default field name to construct the Prompt flow request. Note: `chat_history` is automatically constructed based on the interaction. If your API expects other mandatory fields, you will need to change the request parameters under the `promptflow_request` function.
+ - Data type: text, should be set to `query`.
+- `PROMPTFLOW_RESPONSE_FIELD_NAME`: The default field name to process the response from the Prompt flow request.
+ - Data type: text, should be set to `reply`.
+- `PROMPTFLOW_CITATIONS_FIELD_NAME`: The default field name to process the citations output from the Prompt flow request.
+ - Data type: text, should be set to `documents`.
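A sketch of applying these prompt flow settings with the Azure CLI follows; the web app, resource group, endpoint URL, and key are placeholders.

```bash
# Placeholder values — substitute your web app, resource group, and prompt flow endpoint details.
az webapp config appsettings set \
  --name "<your-web-app-name>" \
  --resource-group "<your-resource-group>" \
  --settings \
    AZURE_OPENAI_STREAM="False" \
    USE_PROMPTFLOW="True" \
    PROMPTFLOW_ENDPOINT="https://<pf-deployment-name>.<region>.inference.ml.azure.com/score" \
    PROMPTFLOW_API_KEY="<endpoint-key>" \
    PROMPTFLOW_RESPONSE_TIMEOUT="120" \
    PROMPTFLOW_REQUEST_FIELD_NAME="query" \
    PROMPTFLOW_RESPONSE_FIELD_NAME="reply" \
    PROMPTFLOW_CITATIONS_FIELD_NAME="documents"
```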
++
+## Connecting to other data sources
+
+Other data sources are supported, including:
+- Azure Cosmos DB
+- Elasticsearch
+- Azure SQL Server
+- Pinecone
+- Azure Machine Learning Index
+
+For further instructions on enabling these data sources, see the [GitHub repository](https://github.com/microsoft/sample-app-aoai-chatGPT).
++
+## Updating the web app to include the latest changes
> [!NOTE]
> As of February 1, 2024, the web app requires the app startup command to be set to `python3 -m gunicorn app:app`. When you're updating an app that was published before February 1, 2024, you need to manually add the startup command from the **App Service Configuration** page.
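If you're updating from the command line, the startup command can be set with the Azure CLI. This sketch assumes a Linux App Service plan and uses placeholder names.

```bash
# Set the required startup command on an app published before February 1, 2024 (placeholder names).
az webapp config set \
  --name "<your-web-app-name>" \
  --resource-group "<your-resource-group>" \
  --startup-file "python3 -m gunicorn app:app"
```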
If you customized or changed the app's source code, you need to update your app'
- If your app is hosted on GitHub, push your code changes to your repo, and then use the preceding synchronization steps.
- If you're redeploying the app manually (for example, by using the Azure CLI), follow the steps for your deployment strategy.
-## Chat history
-
-You can turn on chat history for your users of the web app. When you turn on the feature, users have access to their individual previous queries and responses.
-
-To turn on chat history, deploy or redeploy your model as a web app by using [Azure OpenAI Studio](https://oai.azure.com/portal) and select **Enable chat history in the web app**.
--
-> [!IMPORTANT]
-> Turning on chat history creates an [Azure Cosmos DB](/azure/cosmos-db/introduction) instance in your resource group, and it incurs [additional charges](https://azure.microsoft.com/pricing/details/cosmos-db/autoscale-provisioned/) for the storage that you use.
-
-After you turn on chat history, your users can show and hide it in the upper-right corner of the app. When users show chat history, they can rename or delete conversations. Because the users are signed in to the app, conversations are automatically ordered from newest to oldest. Conversations are named based on the first query in the conversation.
## Deleting your Cosmos DB instance

Deleting your web app doesn't delete your Cosmos DB instance automatically. To delete your Cosmos DB instance along with all stored chats, you need to go to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option selected on subsequent updates from the Azure OpenAI Studio, the application notifies the user of a connection error. However, the user can continue to use the web app without access to the chat history.
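If you prefer the command line over the portal, the following Azure CLI sketch deletes the Cosmos DB account and all stored chats; the account and resource group names are placeholders.

```bash
# Delete the chat-history Cosmos DB account and all stored conversations (placeholder names).
az cosmosdb delete \
  --name "<your-cosmosdb-account-name>" \
  --resource-group "<your-resource-group>" \
  --yes
```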
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whispe
- Ability to [filter access to sensitive documents](./concepts/use-your-data.md#document-level-access-control).
- [Automatically refresh your index on a schedule](./concepts/use-your-data.md#schedule-automatic-index-refreshes).
- [Vector search and semantic search options](./concepts/use-your-data.md#search-types).
-- [View your chat history in the deployed web app](./how-to/use-web-app.md#chat-history)
+- [View your chat history in the deployed web app](./how-to/use-web-app.md#enabling-chat-history-using-cosmos-db)
## July 2023
ai-services Translate Text Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translate-text-parameters.md
Previously updated : 04/29/2024 Last updated : 08/14/2024
The body of the request is a JSON array. Each array element is a JSON object wit
The following limitations apply:
* The array can have at most 100 elements.
-* The entire text included in the request can't exceed 10,000 characters including spaces.
+* The entire text included in the request can't exceed 50,000 characters including spaces.
## Response body
This feature works the same way with `textType=text` or with `textType=html`. Th
## Request limits
-Each translate request is limited to 10,000 characters, across all the target languages you're translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3000x3 = 9,000 characters, which satisfy the request limit. You're charged per character, not by the number of requests. We recommended sending shorter requests.
+Each translate request is limited to 50,000 characters, across all the target languages you're translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 x 3 = 9,000 characters, which satisfies the request limit. You're charged per character, not by the number of requests. We recommend sending shorter requests.
The following table lists array element and character limits for the Translator **translation** operation.

| Operation | Maximum size of array element | Maximum number of array elements | Maximum request size (characters) |
|:-|:-|:-|:-|
-| translate | 10,000 | 100 | 10,000 |
+| translate | 10,000 | 100 | 50,000 |
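To illustrate the arithmetic above, here's a hedged `curl` sketch against a locally running Translator container. The host and port are assumptions for a typical container setup, and each `to` parameter multiplies the billed character count, so a 3,000-character body sent to three languages counts as 9,000 characters.

```bash
# Translate one element to three target languages; host and port assume a local container mapping.
curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=de&to=fr&to=es" \
  -H "Content-Type: application/json" \
  -d '[{"Text": "<up to 3,000 characters of source text>"}]'
```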
## Use docker compose: Translator with supporting containers
api-management High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/high-availability.md
Title: Ensure reliability of your Azure API Management instance
-description: Learn how to use Azure reliability features including availability zones and multiregion deployments to make your Azure API Management service instance resilient to cloud failures.
+description: Learn about features including availability zones and multiregion deployments to make your Azure API Management instance resilient to cloud failures.
- Previously updated : 03/08/2024+ Last updated : 08/14/2024
[!INCLUDE [api-management-availability-premium](../../includes/api-management-availability-premium.md)]
-This article introduces service capabilities and considerations to ensure that your API Management instance continues to serve API requests if Azure outages occur.
+This article is an overview of service capabilities to ensure that your API Management instance continues to serve API requests if Azure outages occur.
-API Management supports the following key service capabilities that are recommended for [reliable and resilient](../reliability/overview.md) Azure solutions. Use them individually, or together, to improve the availability of your API Management solution:
+API Management offers the following capabilities for [reliable and resilient](../reliability/overview.md) Azure solutions. Use them individually or together to enhance availability:
-* **Availability zones**, to provide resilience to datacenter-level outages
+* **Availability zones**: Resilience to datacenter-level outages
-* **Multi-region deployment**, to provide resilience to regional outages
+* **Multi-region deployment**: Resilience to regional outages
> [!NOTE]
-> API Management supports availability zones and multi-region deployment in the **Premium** service tier.
+> * Availability zones and multi-region deployment are supported in the **Premium** tier.
+> * For configuration, see [Migrate API Management to availability zone support](/azure/reliability/migrate-api-mgt?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=%2Fazure%2Fapi-management%2Fbreadcrumb%2Ftoc.json) and [Deploy API Management in multiple regions](api-management-howto-deploy-multi-region.md).
-## Availability zones
-Azure [availability zones](../reliability/availability-zones-overview.md) are physically separate locations within an Azure region that are tolerant to datacenter-level failures. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. To ensure resiliency, a minimum of 3 separate availability zones are present in all availability zone-enabled regions.
+## Availability zones
+Azure availability zones are physically separate locations within an Azure region that are tolerant to datacenter-level failures. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. To ensure resiliency, a minimum of 3 separate availability zones are present in all availability zone-enabled regions. [Learn more](../reliability/availability-zones-overview.md)
-Enabling [zone redundancy](../reliability/migrate-api-mgt.md) for an API Management instance in a supported region provides redundancy for all [service components](api-management-key-concepts.md#api-management-components): gateway, management plane, and developer portal. Azure automatically replicates all service components across the zones that you select. Zone redundancy is only available in the Premium service tier.
+Enabling [zone redundancy](../reliability/migrate-api-mgt.md) for an API Management instance in a supported region provides redundancy for all [service components](api-management-key-concepts.md#api-management-components): gateway, management plane, and developer portal. Azure automatically replicates all service components across the zones that you select.
When you enable zone redundancy in a region, consider the number of API Management scale [units](upgrade-and-scale.md) that need to be distributed. Minimally, configure the same number of units as the number of availability zones, or a multiple so that the units are distributed evenly across the zones. For example, if you select 3 availability zones in a region, you could have 3 units so that each zone hosts one unit.

> [!NOTE]
-> Use the [capacity](api-management-capacity.md) metric and your own testing to decide on the number of scale units that will provide the gateway performance for your needs. Learn more about [scaling and upgrading](upgrade-and-scale.md) your service instance.
+> Use the [capacity](api-management-capacity.md) metric and your own testing to decide the number of scale units that will provide the gateway performance for your needs. Learn more about [scaling and upgrading](upgrade-and-scale.md) your service instance.
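To inspect the capacity metric from the command line, the following Azure CLI sketch can help; the resource ID is a placeholder, and the metric name `Capacity` should be confirmed against your instance's metric definitions.

```bash
# Placeholder resource ID for your API Management instance.
APIM_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim-name>"

# Query average capacity per hour; confirm the metric name with `az monitor metrics list-definitions`.
az monitor metrics list \
  --resource "$APIM_ID" \
  --metric "Capacity" \
  --interval PT1H \
  --aggregation Average
```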
## Multi-region deployment
-With [multi-region deployment](api-management-howto-deploy-multi-region.md), you can add regional API gateways to an existing API Management instance in one or more supported Azure regions. Multi-region deployment helps reduce request latency perceived by geographically distributed API consumers and improves service availability if one region goes offline. Multi-region deployment is only available in the Premium service tier.
+With [multi-region deployment](api-management-howto-deploy-multi-region.md), you can add regional API gateways to an existing API Management instance in one or more supported Azure regions. Multi-region deployment helps reduce request latency perceived by geographically distributed API consumers and improves service availability if one region goes offline.
[!INCLUDE [api-management-multi-region-concepts](../../includes/api-management-multi-region-concepts.md)]
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
The `validate-azure-ad-token` policy enforces the existence and validity of a JSON web token (JWT) that was provided by the Microsoft Entra (formerly called Azure Active Directory) service for a specified set of principals in the directory. The JWT can be extracted from a specified HTTP header, query parameter, or value provided using a policy expression or context variable.

> [!NOTE]
-> To validate a JWT that was provided by another identity provider, API Management also provides the generic [`validate-jwt`](validate-jwt-policy.md) policy.
+> To validate a JWT that was provided by an identity provider other than Microsoft Entra, API Management also provides the generic [`validate-jwt`](validate-jwt-policy.md) policy.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
```xml
<validate-azure-ad-token
- tenant-id="tenant ID or URL (for example, "contoso.onmicrosoft.com") of the Azure Active Directory service"
+ tenant-id="tenant ID or URL (for example, "https://contoso.onmicrosoft.com") of the Microsoft Entra ID tenant"
    header-name="name of HTTP header containing the token (alternatively, use query-parameter-name or token-value attribute to specify token)"
    query-parameter-name="name of query parameter used to pass the token (alternative, use header-name or token-value attribute to specify token)"
    token-value="expression returning the token as a string (alternatively, use header-name or query-parameter attribute to specify token)"
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
    failed-validation-error-message="error message to return on failure"
    output-token-variable-name="name of a variable to receive a JWT object representing successfully validated token">
    <client-application-ids>
- <application-id>Client application ID from Azure Active Directory</application-id>
+ <application-id>Client application ID from Microsoft Entra</application-id>
        <!-- If there are multiple client application IDs, then add additional application-id elements -->
    </client-application-ids>
    <backend-application-ids>
- <application-id>Backend application ID from Azure Active Directory</application-id>
+ <application-id>Backend application ID from Microsoft Entra</application-id>
        <!-- If there are multiple backend application IDs, then add additional application-id elements -->
    </backend-application-ids>
    <audiences>
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
| Attribute | Description | Required | Default |
| --------- | ----------- | -------- | ------- |
-| tenant-id | Tenant ID or URL of the Microsoft Entra service. Policy expressions are allowed.| Yes | N/A |
+| tenant-id | Tenant ID or URL of the Microsoft Entra ID tenant, or one of the following well-known tenants:<br/><br/> - `organizations` or `https://login.microsoftonline.com/organizations` - to allow tokens from accounts in any organizational directory (any Microsoft Entra directory)<br/>- `common` or `https://login.microsoftonline.com/common` - to allow tokens from accounts in any organizational directory (any Microsoft Entra directory) and from personal Microsoft accounts (for example, Skype, XBox)<br/><br/>Policy expressions are allowed.| Yes | N/A |
| header-name | The name of the HTTP header holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | `Authorization` |
| query-parameter-name | The name of the query parameter holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
| token-value | Expression returning a string containing the token. You must not return `Bearer` as part of the token value. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
The following policy is the minimal form of the `validate-azure-ad-token` policy
### Validate that audience and claim are correct
-The following policy checks that the audience is the hostname of the API Management instance and that the `ctry` claim is `US`. The hostname is provided using a policy expression, and the Microsoft Entra tenant ID and client application ID are provided using named values. The decoded JWT is provided in the `jwt` variable after validation.
+The following policy checks that the audience is the hostname of the API Management instance and that the `ctry` claim is `US`. The Microsoft tenant ID is the well-known `organizations` tenant, which allows tokens from accounts in any organizational directory. The hostname is provided using a policy expression, and the client application ID is provided using a named value. The decoded JWT is provided in the `jwt` variable after validation.
For more details on optional claims, read [Provide optional claims to your app](../active-directory/develop/active-directory-optional-claims.md).

```xml
-<validate-azure-ad-token tenant-id="{{aad-tenant-id}}" output-token-variable-name="jwt">
+<validate-azure-ad-token tenant-id="organizations" output-token-variable-name="jwt">
    <client-application-ids>
        <application-id>{{aad-client-application-id}}</application-id>
    </client-application-ids>
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
App Service Environment v3 is available in the following regions:
| France South | ✅ | | ✅ |
| Germany North | ✅ | | ✅ |
| Germany West Central | ✅ | ✅ | ✅ |
+| Israel Central | ✅ | ✅ | |
| Italy North | ✅ | ✅** | |
| Japan East | ✅ | ✅ | ✅ |
| Japan West | ✅ | | ✅ |
-| Jio India West | | | ✅ |
+| Jio India Central | ✅** | | |
+| Jio India West | ✅** | | ✅ |
| Korea Central | ✅ | ✅ | ✅ |
-| Korea South | ✅ | | ✅ |
+| Korea South | ✅ | | ✅ |
+| Mexico Central | ✅ | ✅** | |
| North Central US | ✅ | | ✅ |
| North Europe | ✅ | ✅ | ✅ |
| Norway East | ✅ | ✅ | ✅ |
App Service Environment v3 is available in the following regions:
| South Central US | ✅ | ✅ | ✅ |
| South India | ✅ | | ✅ |
| Southeast Asia | ✅ | ✅ | ✅ |
+| Spain Central | ✅ | ✅** | |
| Sweden Central | ✅ | ✅ | |
| Switzerland North | ✅ | ✅ | ✅ |
| Switzerland West | ✅ | | ✅ |
app-service Samples Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-bicep.md
To learn about the Bicep syntax and properties for App Services resources, see [
| [App with a database, managed identity, and monitoring](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-managed-identity-sql-db)| Deploys an App Service App with a database, managed identity, and monitoring. |
| [Two apps in separate regions with Azure Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-multi-region-front-door) | Deploys two identical web apps in separate regions with Azure Front Door to direct traffic. |
|**App Service Environment**| **Description** |
-| [Create an App Service environment v2](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asev2-create) | Creates an App Service environment v2 in your virtual network. |
+| [Create an App Service environment v3](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asp-app-on-asev3-create) | Creates an App Service environment v3, App Service plan, App Service, and all associated networking resources. |
app-service Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-cli.md
The following table includes links to bash scripts built using the Azure CLI.
| [Backup and restore app](./scripts/cli-backup-schedule-restore.md) | Creates an App Service app and creates a one-time backup for it, creates a backup schedule for it, and then restores an App Service app from a backup. |
|**Monitor app**||
| [Monitor an app with web server logs](./scripts/cli-monitor.md) | Creates an App Service app, enables logging for it, and downloads the logs to your local machine. |
-| | |
app-service Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-powershell.md
The following table includes links to PowerShell scripts built using the Azure P
| [Restore an app from backup](./scripts/powershell-backup-restore.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Restores an app from a previously completed backup. |
| [Restore a backup across subscriptions](./scripts/powershell-backup-restore-diff-sub.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Restores a web app from a backup in another subscription. |
|**Monitor app**||
-| [Monitor an app with web server logs](./scripts/powershell-monitor.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Creates an App Service app, enables logging for it, and downloads the logs to your local machine. |
-| | |
+| [Monitor an app with web server logs](./scripts/powershell-monitor.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Creates an App Service app, enables logging for it, and downloads the logs to your local machine. |
app-service Samples Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-resource-manager-templates.md
To learn about the JSON syntax and properties for App Services resources, see [M
| [App connected to a backend webapp with staging slots](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-secure-ntier)| Deploys two web apps (frontend and backend) with staging slots securely connected together with VNet injection and Private Endpoint. |
| [Two apps in separate regions with Azure Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-multi-region-front-door) | Deploys two identical web apps in separate regions with Azure Front Door to direct traffic. |
|**App Service Environment**| **Description** |
-| [Create an App Service environment v2](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asev2-create) | Creates an App Service environment v2 in your virtual network. |
-| [Create an App Service environment v2 with an ILB address](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asev2-ilb-create) | Creates an App Service environment v2 in your virtual network with a private internal load balancer address. |
-| [Configure the default SSL certificate for an ILB App Service environment or an ILB App Service environment v2](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-ase-ilb-configure-default-ssl) | Configures the default TLS/SSL certificate for an ILB App Service environment or an ILB App Service environment v2. |
-| | |
+| [Create an App Service environment v3](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asp-app-on-asev3-create) | Creates an App Service environment v3, App Service plan, App Service, and all associated networking resources. |
app-service Samples Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-terraform.md
The following table includes links to Terraform scripts.
|**Create app**||
| [Create two apps and connect securely with Private Endpoint and VNet integration](./scripts/terraform-secure-backend-frontend.md)| Creates two App Service apps and connects them with Private Endpoint and VNet integration. |
| [Provision App Service and use slot swap to deploy](/azure/developer/terraform/provision-infrastructure-using-azure-deployment-slots)| Provision App Service infrastructure with Azure deployment slots. |
-| [Create an Azure Windows web app with a backup](./scripts/terraform-backup.md)| Create an Azure Windows web app with a backup schedule. |
-| | |
+| [Create an Azure Windows web app with a backup](./scripts/terraform-backup.md)| Create an Azure Windows web app with a backup schedule. |
app-service Webjobs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md
adobe-target-content: ./webjobs-create-ieux
# Run background tasks with WebJobs in Azure App Service
-=======
-> [!NOTE]
-> WebJobs for **Windows container**, **Linux code**, and **Linux container** is in preview. WebJobs for Windows code is generally available and not in preview.
Deploy WebJobs by using the [Azure portal](https://portal.azure.com) to upload an executable or script. You can run background tasks in Azure App Service. If you're using Visual Studio to develop and deploy WebJobs instead, see [Deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md).
+> [!NOTE]
+> WebJobs for **Windows container**, **Linux code**, and **Linux container** is in preview. WebJobs for Windows code is generally available and not in preview.
## Overview

WebJobs is a feature of [Azure App Service](index.yml) that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs.

You can use the Azure WebJobs SDK with WebJobs to simplify many programming tasks. For more information, see [What is the WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki). Azure Functions provides another way to run programs and scripts. For a comparison between WebJobs and Functions, see [Choose between Flow, Logic Apps, Functions, and WebJobs](../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md).

## WebJob types

### <a name="acceptablefiles"></a>Supported file types for scripts or programs
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
For a conceptual overview of this feature, see [Azure RBAC on Azure Arc-enabled
- [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.

> [!NOTE]
-> You can't set up this feature for Red Hat OpenShift, or for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](/azure/aks/manage-azure-rbac) and doesn't require the AKS cluster to be connected to Azure Arc.
-
+> Azure RBAC is not available for Red Hat OpenShift or for managed Kubernetes offerings where user access to the API server is restricted (for example, Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE)).
+>
+> Azure RBAC does not currently support Kubernetes clusters operating on ARM64 architecture. Use [Kubernetes RBAC](identity-access-overview.md#kubernetes-rbac-authorization) to manage access control for ARM64-based Kubernetes clusters.
+>
+> For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](/azure/aks/manage-azure-rbac) and doesn't require the AKS cluster to be connected to Azure Arc.
## Enable Azure RBAC on the cluster 1. Get the cluster MSI identity by running the following command:
azure-fluid-relay Rotate Fluid Relay Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/rotate-fluid-relay-access-keys.md
+
+description: Learn how to rotate Azure Fluid Relay access keys.
+ Title: Rotate Azure Fluid Relay access keys
Last updated : 08/13/2024++++
+# How to rotate Fluid Relay Server access keys
+This article provides an overview of managing access keys (tenant keys) in Azure Fluid Relay Service. Microsoft recommends that you regularly rotate your keys for better security.
+
+## Primary / Secondary keys
+Customers use the access keys to sign the access tokens that are used to access Azure Fluid Relay Services. Azure Fluid Relay uses the keys to validate the tokens.
+
+Two keys are associated with each Azure Fluid Relay Service: a primary key and secondary key. The purpose of dual keys is to let you regenerate, or roll, keys, providing continuous access to your account and data.
+
+## View your access keys
+
+### [Azure portal](#tab/azure-portal)
+To see your access keys, search for your Azure Fluid Relay Service in the Azure portal. On the left menu of the Azure Fluid Relay Service page, select **Settings**. Then, select **Access Keys**. Select the **Copy** button to copy the selected key.
+
+[![Screenshot that shows the Access Keys page.](../images/rotate-tenant-keys.png)](../images/rotate-tenant-keys.png#lightbox)
+
+### [PowerShell](#tab/azure-powershell)
+To retrieve your access keys with PowerShell, you need to install the [Azure Fluid Relay module](/powershell/module/az.fluidrelay) first.
+
+```azurepowershell
+Install-Module Az.FluidRelay
+```
+
+Then call the [Get-AzFluidRelayServerKey](/powershell/module/az.fluidrelay/get-azfluidrelayserverkey) command.
+
+```azurepowershell
+Get-AzFluidRelayServerKey -FluidRelayServerName <Fluid Relay Service name> -ResourceGroup <resource group> -SubscriptionId <subscription id>
+```
+
+### [Azure CLI](#tab/azure-cli)
+To retrieve your access keys with Azure CLI, you need to install the [fluid-relay](/cli/azure/fluid-relay) extension first. For installation steps, see the [Azure CLI extensions overview](/cli/azure/azure-cli-extensions-overview).
+
+Then use the [az fluid-relay server list-key](/cli/azure/fluid-relay/server?view=azure-cli-latest&preserve-view=true#az-fluid-relay-server-list-key) command to list the access keys.
+
+```azurecli
+az fluid-relay server list-key --resource-group <resource group> --server-name <Fluid Relay Service name>
+```
+++
+## Rotate your access keys
+Two access keys are assigned so that your Azure Fluid Relay Service doesn't have to be taken offline when you rotate a key. Having two keys ensures that your application maintains access to Azure Fluid Relay throughout the process. Rotate only one key at a time to avoid service interruptions.
+
+The process of rotating primary and secondary keys is the same. The following steps are for primary keys.
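For reference, the whole rotation flow condenses to a few Azure CLI calls. This is only a sketch built from the commands shown in the tabs below; the resource names are placeholders and `<key name>` is whichever key you're rotating.

```azurecli
# 1. List the current keys and point your application at the secondary key.
az fluid-relay server list-key --resource-group MyResourceGroup --server-name MyFluidRelay

# 2. Regenerate the primary key once nothing is signing tokens with it.
az fluid-relay server regenerate-key --resource-group MyResourceGroup --server-name MyFluidRelay --key-name <key name>

# 3. Confirm the new primary key, then switch your application back to it.
az fluid-relay server list-key --resource-group MyResourceGroup --server-name MyFluidRelay
```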
+
+### [Azure portal](#tab/azure-portal)
+To rotate your Azure Fluid Relay primary key in the Azure portal:
+
+1. Update your application code to use the secondary access key for Azure Fluid Relay.
+
+2. Navigate to your Fluid Relay Service in the Azure portal.
+
+3. Under **Settings**, select **Access key**.
+
+4. To regenerate the primary access key for your Azure Fluid Relay Service, select the **Regenerate Primary Key** button above the Access Information.
+
+5. Update the primary key in your code to reference the new primary access key.
+
+### [PowerShell](#tab/azure-powershell)
+To rotate your Fluid Relay primary key with PowerShell, you need to install the [Azure Fluid Relay module](/powershell/module/az.fluidrelay) first.
+
+```azurepowershell
+Install-Module Az.FluidRelay
+```
+
+Then follow the steps below:
+
+1. Update your application code to use the secondary access key for Azure Fluid Relay.
+
+2. Call the [New-AzFluidRelayServerKey](/powershell/module/az.fluidrelay/new-azfluidrelayserverkey) command to regenerate the primary access key, as shown in the following example:
++
+```azurepowershell
+New-AzFluidRelayServerKey -FluidRelayServerName <Fluid Relay Service name> -ResourceGroup <resource group> -KeyName <key name>
+```
+
+3. Update the primary key in your code to reference the new primary access key.
+
+### [Azure CLI](#tab/azure-cli)
+To rotate your Fluid Relay primary key with Azure CLI, you need to install the [fluid-relay](/cli/azure/fluid-relay) extension first. For installation steps, see the [Azure CLI extensions overview](/cli/azure/azure-cli-extensions-overview).
+
+Then follow the steps below:
+
+1. Update your application code to use the secondary access key for Azure Fluid Relay.
+
+2. Call the [az fluid-relay server regenerate-key](/cli/azure/fluid-relay/server?view=azure-cli-latest&preserve-view=true#az-fluid-relay-server-regenerate-key) command to regenerate the primary access key, as shown in the following example:
+
+```azurecli
+az fluid-relay server regenerate-key --resource-group <resource group> --server-name <Fluid Relay Service name> --key-name <key name>
+```
+
+3. Update the primary key in your code to reference the new primary access key.
++
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
public static class QueueFunctions
+For an end-to-end example of how to configure an output binding to Queue storage, see one of these articles:
+++ [Connect functions to Azure Storage using Visual Studio](functions-add-output-binding-storage-queue-vs.md)++ [Connect functions to Azure Storage using Visual Studio Code](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp)++ [Connect functions to Azure Storage using command line tools](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-csharp) ::: zone-end ::: zone pivot="programming-language-java"
The following example shows a Java function that creates a queue message for whe
In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@QueueOutput` annotation on parameters whose value would be written to Queue storage. The parameter type should be `OutputBinding<T>`, where `T` is any native Java type of a POJO.
+For an end-to-end example of how to configure an output binding to Queue storage, see one of these articles:
+++ [Connect functions to Azure Storage using Visual Studio Code](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-java)++ [Connect functions to Azure Storage using command line tools](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-java) ::: zone-end ::: zone pivot="programming-language-typescript"
To output multiple messages, return an array instead of a single object. For exa
:::code language="javascript" source="~/azure-functions-nodejs-v4/js/src/functions/storageQueueOutput2.js" id="displayInDocs" :::
+For an end-to-end example of how to configure an output binding to Queue storage, see one of these articles:
+++ [Connect functions to Azure Storage using Visual Studio Code](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-javascript)++ [Connect functions to Azure Storage using command line tools](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-javascript)+ # [Model v3](#tab/nodejs-v3) The following example shows an HTTP trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function creates a queue item for each HTTP request received.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
}) ```
+For an end-to-end example of how to configure an output binding to Queue storage, see one of these articles:
+++ [Connect functions to Azure Storage using Visual Studio Code](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-powershell)++ [Connect functions to Azure Storage using command line tools](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-powershell) ::: zone-end ::: zone pivot="programming-language-python"
def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
logging.info(f'name: {name}') return 'OK' ```+
+For an end-to-end example of how to configure an output binding to Queue storage, see one of these articles:
+++ [Connect functions to Azure Storage using Visual Studio Code](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-python)++ [Connect functions to Azure Storage using command line tools](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-python)
+
# [v1](#tab/python-v1) A Storage queue binding is defined in *function.json* where *type* is set to `queue`.
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
Azure Functions supports C# and C# script programming languages. If you're looki
### Updating to target .NET 8
-> [!NOTE]
-> Targeting .NET 8 with the in-process model is not yet enabled for apps in sovereign clouds. Updates will be communicated on [this tracking thread on GitHub](https://github.com/Azure/azure-functions-host/issues/9951).
- Apps using the in-process model can target .NET 8 by following the steps outlined in this section. However, if you choose to exercise this option, you should still begin planning your [migration to the isolated worker model](./migrate-dotnet-to-isolated-model.md) in advance of [support ending for the in-process model on November 10, 2026](https://aka.ms/azure-functions-retirements/in-process-model). Many apps can change the configuration of the function app in Azure without updates to code or redeployment. To run .NET 8 with the in-process model, three configurations are required:
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 06/26/2024 Last updated : 08/12/2024 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [App Service](../../app-service/index.yml) | &#x2705; | &#x2705; | | [Application Gateway](../../application-gateway/index.yml) | &#x2705; | &#x2705; | | [Automation](../../automation/index.yml) | &#x2705; | &#x2705; |
-| [Microsoft Entra ID (Free)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; |
+| [Microsoft Entra ID (Free)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) **&ast;**| &#x2705; | &#x2705; |
| [Microsoft Entra ID (P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | | [Azure Active Directory B2C](../../active-directory-b2c/index.yml) | &#x2705; | &#x2705; | | [Microsoft Entra Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | | [Azure Cosmos DB](/azure/cosmos-db/) | &#x2705; | &#x2705; | | [Azure Container Apps](../../container-apps/index.yml) | &#x2705; | &#x2705; |
-| [Azure Database for MariaDB](/azure/mariadb/) | &#x2705; | &#x2705; |
| [Azure Database for MySQL](/azure/mysql/) | &#x2705; | &#x2705; | | [Azure Database for PostgreSQL](/azure/postgresql/) | &#x2705; | &#x2705; | | [Azure Databricks](/azure/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; | | [Azure Sphere](/azure-sphere/) | &#x2705; | &#x2705; | | [Azure Spring Apps](../../spring-apps/index.yml) | &#x2705; | &#x2705; |
-| [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; |
+| [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;&ast;&ast;** | &#x2705; | &#x2705; |
| [Azure Stack HCI](/azure-stack/hci/) | &#x2705; | &#x2705; | | [Azure Static WebApps](../../static-web-apps/index.yml) | &#x2705; | &#x2705; | | [Azure Video Indexer](/azure/azure-video-indexer/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** | | [Traffic Manager](../../traffic-manager/index.yml) | &#x2705; | &#x2705; | | [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | &#x2705; | &#x2705; |
-| [Virtual Machines](../../virtual-machines/index.yml) (incl. [Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)) | &#x2705; | &#x2705; |
+| [Virtual Machines](../../virtual-machines/index.yml) | &#x2705; | &#x2705; |
| [Virtual Network](../../virtual-network/index.yml) | &#x2705; | &#x2705; | | [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | &#x2705; | &#x2705; | | [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; | | [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | &#x2705; | &#x2705; |
-**&ast;** FedRAMP High authorization for edge devices (such as Azure Data Box and Azure Stack Edge) applies only to Azure services that support on-premises, customer-managed devices. For example, FedRAMP High authorization for Azure Data Box covers datacenter infrastructure services and Data Box pod and disk service, which are the online software components supporting your Data Box hardware appliance. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
+**&ast;** FedRAMP High and DoD SRG Impact Level 2 authorization for Microsoft Entra ID applies to Microsoft Entra External ID. To learn more about Microsoft Entra External ID, see the [Microsoft Entra documentation](/entra/).
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative.
+**&ast;&ast;&ast;** FedRAMP High authorization for edge devices (such as Azure Data Box and Azure Stack Edge) applies only to Azure services that support on-premises, customer-managed devices. For example, FedRAMP High authorization for Azure Data Box covers datacenter infrastructure services and Data Box pod and disk service, which are the online software components supporting your Data Box hardware appliance. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
+ ## Azure Government services by audit scope *Last updated: June 2024*
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Cosmos DB](/azure/cosmos-db/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure CXP Nomination Portal](https://cxp.azure.com/nominationportal/nominationform/fasttrack) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Database for MariaDB](/azure/mariadb/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Database for MySQL](/azure/mysql/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Database for PostgreSQL](/azure/postgresql/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Databricks](/azure/databricks/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | | | | [Traffic Manager](../../traffic-manager/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Virtual Machines](../../virtual-machines/index.yml) (incl. [Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Virtual Machines](../../virtual-machines/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Virtual Network](../../virtual-network/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-government Documentation Government Overview Dod https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-dod.md
recommendations: false Previously updated : 05/09/2023 Last updated : 08/15/2024 # Department of Defense (DoD) in Azure Government
The following services are in scope for DoD IL5 PA in US DoD regions (US DoD Cen
- [Microsoft Stream](/stream/overview) - [Network Watcher](https://azure.microsoft.com/services/network-watcher/) - [Power Apps](/powerapps/powerapps-overview)-- [Power Apps portal](https://powerapps.microsoft.com/portals/)
+- [Power Pages](https://powerapps.microsoft.com/portals/)
- [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow) - [Power BI](https://powerbi.microsoft.com/) - [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/)
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommended to always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| August 2024 | **Windows**<ul><li>Added columns to the Windows Event table: Keywords, UserName, Opcode, Correlation, ProcessId, ThreadId, EventRecordId.</li><li>AMA: Support AMA Client Installer for selected partners.</li></ul>**Linux Features**<ul><li>Enable Dynamic Linking of OpenSSL 1.1 in all regions</li><li>Add Computer field to Custom Logs</li><li>Add EventHub upload support for Custom Logs </li><li>Reliability improvement for upload task scheduling</li><li>Added support for SUSE15 SP5, and AWS 3 distributions</li></ul>**Linux Fixes**<ul><li>Fix Direct upload to storage for perf counters when no other destination is configured. You don't see perf counters If storage was the only configured destination for perf counters, they wouldn't see perf counters in their blob or table.</li><li>Fluent-Bit updated to version 3.0.7. This fixes the issue with Fluent-Bit creating junk files in the root directory on process shutdown.</li><li>Fix proxy for system-wide proxy using http(s)_proxy env var </li><li>Support for syslog hostnames that are up to 255characters</li><li>Stop sending rows longer than 1MB. This exceeds ingestion limits and destabilizes the agent. Now the row is gracefully dropped and a diagnostic message is written.</li><li>Set max disk space used for rsyslog spooling to 1GB. There was no limit before which could lead to high memory usage.</li><li>Use random available TCP port when there is a port conflict with AMA port 28230 and 28330 . This resolved issues where port 28230 and 28330 were already in uses by the customer which prevented data upload to Azure.</li></ul>| 1.29 | 1.32.5 |
+| August 2024 | **Windows**<ul><li>Added columns to the SecurityEvent table: Keywords, Opcode, Correlation, ProcessId, ThreadId, EventRecordId.</li><li>AMA: Support AMA Client Installer for selected partners.</li></ul>**Linux Features**<ul><li>Enable Dynamic Linking of OpenSSL 1.1 in all regions</li><li>Add Computer field to Custom Logs</li><li>Add Event Hubs upload support for Custom Logs</li><li>Reliability improvement for upload task scheduling</li><li>Added support for SUSE 15 SP5 and AWS 3 distributions</li></ul>**Linux Fixes**<ul><li>Fix direct upload to storage for performance counters when no other destination is configured. Previously, if storage was the only configured destination for performance counters, the counters didn't appear in the blob or table.</li><li>Fluent-Bit updated to version 3.0.7. This fixes the issue with Fluent-Bit creating junk files in the root directory on process shutdown.</li><li>Fix proxy for system-wide proxy using the http(s)_proxy environment variable</li><li>Support for syslog hostnames that are up to 255 characters</li><li>Stop sending rows longer than 1 MB, which exceed ingestion limits and destabilize the agent. The row is now gracefully dropped and a diagnostic message is written.</li><li>Set the maximum disk space used for rsyslog spooling to 1 GB. There was no limit before, which could lead to high memory usage.</li><li>Use a random available TCP port when there's a port conflict with AMA ports 28230 and 28330. This change resolves issues where ports 28230 and 28330 were already in use by the customer, which prevented data upload to Azure.</li></ul>| 1.29 | 1.32.5 |
| June 2024 |**Windows**<ul><li>Fix encoding issues with Resource ID field.</li><li>AMA: Support new ingestion endpoint for GovSG environment.</li><li>Upgrade AzureSecurityPack version to 4.33.0.1.</li><li>Upgrade Metrics Extension version to 2.2024.517.533.</li><li>Upgrade Health Extension version to 2024.528.1.</li></ul>**Linux**<ul><li>Coming Soon</li></ul>| 1.28.2 | | | May 2024 |**Windows**<ul><li>Upgraded Fluent-bit version to 3.0.5. This Fix resolves as security issue in fluent-bit (NVD - CVE-2024-4323 (nist.gov)</li><li>Disabled Fluent-bit logging that caused disk exhaustion issues for some customers. Example error is Fluentbit log with "[C:\projects\fluent-bit-2e87g\src\flb_scheduler.c:72 errno=0] No error" fills up the entire disk of the server.</li><li>Fixed AMA extension getting stuck in deletion state on some VMs that are using Arc. This fix improves reliability.</li><li>Fixed AMA not using system proxy, this issue is a bug introduced in 1.26.0. The issue was caused by a new feature that uses the Arc agentΓÇÖs proxy settings. When the system proxy as set as None the proxy was broken in 1.26.</li><li>Fixed Windows Firewall Logs log file rollover issues</li></ul>| 1.27.0 | | | April 2024 |**Windows**<ul><li>In preparation for the May 17 public preview of Firewall Logs, the agent completed the addition of a profile filter for Domain, Public, and Private Logs. </li><li>AMA running on an Arc enabled server will default to using the Arc proxy settings if available.</li><li>The AMA VM extension proxy settings override the Arc defaults.</li><li>Bug fix in MSI installer: Symptom - If there are spaces in the fluent-bit config path, AMA wasn't recognizing the path properly. AMA now adds quotes to configuration path in fluent-bit.</li><li>Bug fix for Container Insights: Symptom - custom resource ID weren't being honored.</li><li>Security issue fix: skip the deletion of files and directory whose path contains a redirection (via Junction point, Hard links, Mount point, OB Symlinks etc.).</li><li>Updating MetricExtension package to 2.2024.328.1744.</li></ul>**Linux**<ul><li>AMA 1.30 now available in Arc.</li><li>New distribution support Debian 12, RHEL CIS L2.</li><li>Fix for mdsd version 1.30.3 in persistence mode, which converted positive integers to float/double values ("3.0", "4.0") to type ulong which broke Azure stream analytics.</li></ul>| 1.26.0 | 1.31.1 |
azure-monitor Data Collection Log Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-log-text.md
Use the following ARM template to create or modify a DCR for collecting text log
"type": "string", "metadata": { "description": "Unique name for the DCR. "
- },
+ }
}, "location": { "type": "string", "metadata": { "description": "Region for the DCR. Must be the same location as the Log Analytics workspace. "
- },
+ }
}, "filePatterns": { "type": "string", "metadata": { "description": "Path on the local disk for the log file to collect. May include wildcards.Enter multiple file patterns separated by commas (AMA version 1.26 or higher required for multiple file patterns on Linux)."
- },
+ }
}, "tableName": { "type": "string", "metadata": { "description": "Name of destination table in your Log Analytics workspace. "
- },
+ }
}, "workspaceResourceId": { "type": "string", "metadata": { "description": "Resource ID of the Log Analytics workspace with the target table."
- },
+ }
} }, "variables": {
- "tableOutputStream": "['Custom-',concat(parameters('tableName'))]"
+ "tableOutputStream": "[concat('Custom-', parameters('tableName'))]"
}, "resources": [ {
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
In general, there's no cost to ingest standard metrics (platform metrics) into a
Custom metrics are retained for the [same amount of time as platform metrics](../essentials/data-platform-metrics.md#retention-of-metrics). > [!NOTE]
-> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data.
+> To provide a better experience, custom metrics sent to Azure Monitor from the Application Insights Classic API (SDKs) are always stored in both Log Analytics and the Metrics Store. Your cost to store these metrics is only based on the volume ingested by Log Analytics. There is no additional cost for data stored in the Metrics Store.
## Custom metric definitions
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-overview.md
Title: Analyze application performance traces with Application Insights Profiler
description: Identify the hot path in your web server code with a low-footprint profiler. ms.contributor: charles.weininger Previously updated : 07/11/2024 Last updated : 08/15/2024
If you've enabled Profiler but aren't seeing traces, see the [Troubleshooting gu
- **Profiling web apps**: - Although you can use Profiler at no extra cost, your web app must be hosted in the basic tier of the Web Apps feature of Azure App Service, at minimum. - You can attach only one profiler to each web app.
+ - Codeless installation of Profiler is supported only on Windows-based web apps.
## Next steps Learn how to enable Profiler on your Azure service:
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler.md
Title: Enable Profiler for Azure App Service apps | Microsoft Docs description: Profile live apps on Azure App Service with Application Insights Profiler. Previously updated : 09/21/2023 Last updated : 08/15/2024
[Application Insights Profiler](./profiler-overview.md) is preinstalled as part of the Azure App Service runtime. You can run Profiler on ASP.NET and ASP.NET Core apps running on App Service by using the Basic service tier or higher. Follow these steps, even if you included the Application Insights SDK in your application at build time.
-To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps instructions](profiler-aspnetcore-linux.md).
+Codeless installation of Application Insights Profiler:
+- Follows [the .NET Core support policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+- Is only supported on *Windows-based* web apps.
-> [!NOTE]
-> Codeless installation of Application Insights Profiler follows the .NET Core support policy. For more information about supported runtime, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps instructions](profiler-aspnetcore-linux.md).
## Prerequisites
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
Title: Enable Snapshot Debugger for .NET apps in Azure App Service | Microsoft Docs description: Enable Snapshot Debugger for .NET apps in Azure App Service-+
# Enable Snapshot Debugger for .NET apps in Azure App Service
-Snapshot Debugger currently supports ASP.NET and ASP.NET Core apps that are running on Azure App Service on Windows service plans.
- > [!NOTE]
-> We recommend that you run your application on the Basic service tier, or higher, when using Snapshot Debugger. For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots. The Consumption tier is not currently available for Snapshot Debugger.
+> If you're using a preview version of .NET Core, or your application references Application Insights SDK (directly or indirectly via a dependent assembly), follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md) to include the [`Microsoft.ApplicationInsights.SnapshotCollector`](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package with the application.
-## <a id="installation"></a> Enable Snapshot Debugger
+Snapshot Debugger currently supports ASP.NET and ASP.NET Core apps running on Azure App Service on Windows service plans.
-Snapshot Debugger is pre-installed as part of the App Services runtime, but you need to turn it on to get snapshots for your App Service app. To enable Snapshot Debugger for an app, follow the instructions below:
+We recommend that you run your application on the Basic or higher service tiers when using Snapshot Debugger. For most applications:
+- The Free and Shared service tiers don't have enough memory or disk space to save snapshots.
+- The Consumption tier isn't currently available for Snapshot Debugger.
-> [!NOTE]
-> If you're using a preview version of .NET Core, or your application references Application Insights SDK (directly or indirectly via a dependent assembly), follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md) to include the [`Microsoft.ApplicationInsights.SnapshotCollector`](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package with the application.
+Although Snapshot Debugger is preinstalled as part of the App Services runtime, you need to turn it on to get snapshots for your App Service app. Codeless installation of Snapshot Debugger follows [the .NET Core support policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
-> [!NOTE]
-> Codeless installation of Application Insights Snapshot Debugger follows the .NET Core support policy.
-> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+# [Azure portal](#tab/portal)
-After you've deployed your .NET app:
+After you've deployed your .NET App Services web app:
-1. Go to the Azure control panel for your App Service.
-1. Go to the **Settings** > **Application Insights** page.
+1. Navigate to your App Service in the Azure portal.
+1. In the left-side menu, select **Settings** > **Application Insights**.
:::image type="content" source="./media/snapshot-debugger/application-insights-app-services.png" alt-text="Screenshot showing the Enable App Insights on App Services portal.":::
-1. Either follow the instructions on the page to create a new resource or select an existing App Insights resource to monitor your app.
-1. Switch Snapshot Debugger toggles to **On**.
+1. Click **Turn on Application Insights**.
+ - If you have an existing Application Insights resource you'd rather use, select that option under **Change your resource**.
+1. Under **Instrument your application**, select the **.NET** tab.
+1. Switch both Snapshot Debugger toggles to **On**.
:::image type="content" source="./media/snapshot-debugger/enablement-ui.png" alt-text="Screenshot showing how to add App Insights site extension.":::
-1. Snapshot Debugger is now enabled using an App Services App Setting.
+1. Snapshot Debugger is now enabled.
:::image type="content" source="./media/snapshot-debugger/snapshot-debugger-app-setting.png" alt-text="Screenshot showing App Setting for Snapshot Debugger.":::
-If you're running a different type of Azure service, here are instructions for enabling Snapshot Debugger on other supported platforms:
-
-* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Virtual Machines and Virtual Machine Scale Sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-
-## Enable Snapshot Debugger for other clouds
-
-Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Microsoft Azure operated by 21Vianet](/azure/china/resources-developer-guide) through the Application Insights Connection String.
-
-|Connection String Property | US Government Cloud | China Cloud |
-|||-|
-|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
-
-For more information about other connection overrides, see [Application Insights documentation](../app/sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
-
-<a name='enable-azure-active-directory-authentication-for-snapshot-ingestion'></a>
-
-## Enable Microsoft Entra authentication for snapshot ingestion
-
-Application Insights Snapshot Debugger supports Microsoft Entra authentication for snapshot ingestion. This means, for all snapshots of your application to be ingested, your application must be authenticated and provide the required application settings to the Snapshot Debugger agent.
-
-As of today, Snapshot Debugger only supports Microsoft Entra authentication when you reference and configure Microsoft Entra ID using the Application Insights SDK in your application.
-
-To turn-on Microsoft Entra ID for snapshot ingestion:
-
-1. Create and add the managed identity you want to use to authenticate against your Application Insights resource to your App Service.
-
- 1. For System-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity).
-
- 1. For User-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity).
-
-1. Configure and turn on Microsoft Entra ID in your Application Insights resource. For more information, see the following [documentation](../app/azure-ad-authentication.md?tabs=net#configure-and-enable-azure-ad-based-authentication)
-1. Add the following application setting, used to let Snapshot Debugger agent know which managed identity to use:
-
-For System-Assigned Identity:
-
-|App Setting | Value |
-||-|
-|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AD |
-
-For User-Assigned Identity:
-
-|App Setting | Value |
-||-|
-|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AD;ClientID={Client ID of the User-Assigned Identity} |
- ## Disable Snapshot Debugger
-To disable Snapshot Debugger, repeat the [steps for enabling](#installation). However, switch the Snapshot Debugger toggles to **Off**.
+To disable Snapshot Debugger for your App Services resource:
+1. Navigate to your App Service in the Azure portal.
+1. In the left-side menu, select **Settings** > **Application Insights**.
+1. Switch the Snapshot Debugger toggles to **Off**.
-## Azure Resource Manager template
+# [Azure Resource Manager](#tab/arm)
-For an Azure App Service, you can set app settings within the Azure Resource Manager template to enable Snapshot Debugger and Profiler. For example:
+You can also set app settings within the Azure Resource Manager template to enable Snapshot Debugger and Profiler. For example:
```json {
- "apiVersion": "2015-08-01",
+ "apiVersion": "2023-12-01",
"name": "[parameters('webSiteName')]", "type": "Microsoft.Web/sites", "location": "[resourceGroup().location]",
For an Azure App Service, you can set app settings within the Azure Resource Man
], "properties": { "APPINSIGHTS_INSTRUMENTATIONKEY": "[reference(resourceId('Microsoft.Insights/components', concat('AppInsights', parameters('webSiteName'))), '2014-04-01').InstrumentationKey]",
- "APPINSIGHTS_PROFILERFEATURE_VERSION": "1.0.0",
+ // "Turn on" a Snapshot Debugger version
"APPINSIGHTS_SNAPSHOTFEATURE_VERSION": "1.0.0", "DiagnosticServices_EXTENSION_VERSION": "~3", "ApplicationInsightsAgent_EXTENSION_VERSION": "~2"
For an Azure App Service, you can set app settings within the Azure Resource Man
}, ```
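As an alternative sketch, the same app settings can be applied directly with the Azure CLI; the app and resource group names are placeholders, and the instrumentation key or connection string setting still needs to reference your Application Insights resource.

```azurecli
# Apply the Snapshot Debugger app settings shown in the template above.
az webapp config appsettings set \
  --resource-group MyResourceGroup \
  --name MyWebApp \
  --settings "APPINSIGHTS_SNAPSHOTFEATURE_VERSION=1.0.0" \
             "DiagnosticServices_EXTENSION_VERSION=~3" \
             "ApplicationInsightsAgent_EXTENSION_VERSION=~2"
```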
-## Not Supported Scenarios
++
+Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
+
+## Enable Snapshot Debugger for other cloud regions
+
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Microsoft Azure operated by 21Vianet](/azure/china/resources-developer-guide) through the Application Insights Connection String.
+
+|Connection String Property | US Government Cloud | China Cloud |
+|||-|
+|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+For more information about other connection overrides, see [Application Insights documentation](../app/sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
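For illustration, a connection string with an explicit snapshot endpoint override for the US Government cloud might look like the following sketch; the instrumentation key and ingestion endpoint are placeholders.

```bash
# Placeholder connection string with an explicit SnapshotEndpoint override (US Government cloud).
APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=<your-ikey>;IngestionEndpoint=<your-ingestion-endpoint>;SnapshotEndpoint=https://snapshot.monitor.azure.us"
```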
+
+## Configure Snapshot Debugger
+
+### Enable Microsoft Entra authentication for snapshot ingestion
+
+Snapshot Debugger supports Microsoft Entra authentication for snapshot ingestion. For all snapshots of your application to be ingested, your application must be authenticated and provide the required application settings to the Snapshot Debugger agent.
+
+As of today, Snapshot Debugger only supports Microsoft Entra authentication when you reference and configure Microsoft Entra ID using the Application Insights SDK in your application.
+
+To turn on Microsoft Entra ID for snapshot ingestion in your App Services resource:
+
+1. Add the managed identity that authenticates against your Application Insights resource to your App Service. You can add either:
+
+ - [Add a System-Assigned Managed identity](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity).
+ - [Add a User-Assigned Managed identity](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity).
+
+1. Configure and turn on Microsoft Entra ID in your Application Insights resource. For more information, see [Microsoft Entra authentication for Application Insights](../app/azure-ad-authentication.md?tabs=net#configure-and-enable-azure-ad-based-authentication).
+
+1. Add the following application setting. This setting tells the Snapshot Debugger agent which managed identity to use:
+
+For System-Assigned Identity:
+
+|App Setting | Value |
+||-|
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AD |
+
+For User-Assigned Identity:
+
+|App Setting | Value |
+||-|
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AD;ClientID={Client ID of the User-Assigned Identity} |
+
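As a command-line sketch of these steps for a system-assigned identity (resource names are placeholders; the Microsoft Entra configuration on the Application Insights resource still needs to be completed as described above):

```azurecli
# Turn on the system-assigned managed identity for the App Service app.
az webapp identity assign --resource-group MyResourceGroup --name MyWebApp

# Tell the Snapshot Debugger agent to authenticate with that identity.
az webapp config appsettings set \
  --resource-group MyResourceGroup \
  --name MyWebApp \
  --settings "APPLICATIONINSIGHTS_AUTHENTICATION_STRING=Authorization=AD"
```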
+## Unsupported scenarios
Below you can find scenarios where Snapshot Collector isn't supported: |Scenario | Side Effects | Recommendation | ||--|-|
-|You're using the Snapshot Collector SDK in your application directly (*.csproj*) and have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost and no Snapshots will be available. <br/> Your application could crash at startup with `System.ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.` <br/> [Learn more about the Application Insights feature "Interop".](../app/azure-web-apps-net-core.md#troubleshooting) | If you're using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal). |
+|You're using the Snapshot Collector SDK in your application directly (*.csproj*) and enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) is lost and no snapshots are available. <br/> Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.` <br/> [Learn more about the Application Insights feature "Interop".](../app/azure-web-apps-net-core.md#troubleshooting) | If you're using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal). |
## Next steps
-* Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
-* See [snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
-* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md).
+- View [snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#access-debug-snapshots-in-the-portal) in the Azure portal.
+- [Troubleshoot Snapshot Debugger issues](snapshot-debugger-troubleshoot.md).
[Enablement UI]: ./media/snapshot-debugger/enablement-ui.png [snapshot-debugger-app-setting]:./media/snapshot-debugger/snapshot-debugger-app-setting.png
azure-monitor Snapshot Debugger Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-data.md
reviewer: cweining-+ Last updated 11/17/2023 # View Application Insights Snapshot Debugger data
-Snapshots appear on [**Exceptions**](../app/asp-net-exceptions.md) in the Application Insights pane of the Azure portal.
+Snapshots appear as [**Exceptions**](../app/asp-net-exceptions.md) in the Application Insights pane of the Azure portal. View debug snapshots in the portal to examine the call stack and inspect variables at each call stack frame.
-You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To get a more powerful debugging experience with source code, open snapshots with Visual Studio Enterprise. You can also [set SnapPoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
+For a more powerful debugging experience with source code, open snapshots with Visual Studio Enterprise. You can also [set SnapPoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
-## View Snapshots in the Portal
+## Prerequisites
-After an exception has occurred in your application and a snapshot is created, you can view snapshots in the Azure portal within 5 to 10 minutes. To view snapshots, in the **Failure** pane, either:
+Snapshots might include sensitive information. You can only view snapshots if you are assigned the `Application Insights Snapshot Debugger` role.
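For example, an owner of the Application Insights resource can grant this role with the Azure CLI; this is a sketch with placeholder names.

```azurecli
# Grant a user permission to view debug snapshots on one Application Insights resource.
az role assignment create \
  --assignee user@contoso.com \
  --role "Application Insights Snapshot Debugger" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Insights/components/MyAppInsights"
```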
-* Select the **Operations** button when viewing the **Operations** tab, or
-* Select the **Exceptions** button when viewing the **Exceptions** tab.
+## Access debug snapshots in the portal
+After an exception has occurred in your application and a snapshot is created, you can view snapshots in the Azure portal within 5 to 10 minutes.
-Select an operation or exception in the right pane to open the **End-to-End Transaction Details** pane, then select the exception event.
-- If a snapshot is available for the given exception, select the **Open debug snapshot** button appears on the right pane with details for the [exception](../app/asp-net-exceptions.md). -- [If you do not see this button, no snapshot may be available. See the troubleshooting guide.](./snapshot-debugger-troubleshoot.md#use-the-snapshot-health-check)
+1. In your Application Insights resource, select **Investigate** > **Failures** from the left-side menu.
+1. In the **Failures** pane, select either:
+ - The **Operations** tab, or
+ - The **Exceptions** tab.
-In the Debug Snapshot view, you see a call stack and a variables pane. When you select frames of the call stack in the call stack pane, you can view local variables and parameters for that function call in the variables pane.
+1. Select the **[x] Samples** in the center column of the page to generate a list of sample operations or exceptions to the right.
+ :::image type="content" source="./media/snapshot-debugger/failures-page.png" alt-text="Screenshot showing the Failures Page in Azure portal.":::
-Snapshots might include sensitive information. By default, you can only view snapshots if you are assigned the `Application Insights Snapshot Debugger` role.
+1. From the list of samples, select an operation or exception to open the **End-to-End Transaction Details** page. From here, select the exception event you'd like to investigate.
+ - If a snapshot is available for the given exception, select the **Open debug snapshot** button in the right pane to view the **Debug Snapshot** page.
+ - [If you do not see this button, no snapshot may be available. See the troubleshooting guide.](./snapshot-debugger-troubleshoot.md#use-the-snapshot-health-check)
-## View Snapshots in Visual Studio 2017 Enterprise or greater
+ :::image type="content" source="./media/snapshot-debugger/e2e-transaction-page.png" alt-text="Screenshot showing the Open Debug Snapshot button on exception.":::
-1. Click the **Download Snapshot** button to download a `.diagsession` file, which can be opened by Visual Studio Enterprise.
+1. In the **Debug Snapshot** page, you see a call stack with a local variables pane. Select a call stack frame to view local variables and parameters for that function call in the variables pane.
-1. To open the `.diagsession` file, you need to have the Snapshot Debugger Visual Studio component installed. The Snapshot Debugger component is a required component of the ASP.NET workload in Visual Studio and can be selected from the Individual Component list in the Visual Studio installer. If you're using a version of Visual Studio before Visual Studio 2017 version 15.5, you need to install the extension from the [Visual Studio Marketplace](https://aka.ms/snapshotdebugger).
+ :::image type="content" source="./media/snapshot-debugger/open-snapshot-portal.png" alt-text="Screenshot showing the Open debug snapshot highlighted in the Azure portal.":::
-1. After you open the snapshot file, the Minidump Debugging page in Visual Studio appears. Click **Debug Managed Code** to start debugging the snapshot. The snapshot opens to the line of code where the exception was thrown so that you can debug the current state of the process.
+## Download snapshots to view in Visual Studio
+
+To view snapshots in Visual Studio 2017 Enterprise or greater:
+
+1. Click the **Download Snapshot** button in the **Debug Snapshot** page to download a `.diagsession` file, which can be opened by Visual Studio Enterprise.
+
+1. In Visual Studio, make sure you have the Snapshot Debugger Visual Studio component installed.
+ - **For Visual Studio 2017 Enterprise and greater:** The required Snapshot Debugger component can be selected from the **Individual Component** list in the Visual Studio installer.
+ - **For a version older than Visual Studio 2017 version 15.5:** Install the extension from the [Visual Studio Marketplace](https://aka.ms/snapshotdebugger).
+
+1. Open the `.diagsession` file. The Minidump Debugging page in Visual Studio appears.
+
+1. Click **Debug Managed Code** to start debugging the snapshot. The snapshot opens to the line of code where the exception was thrown.
:::image type="content" source="./media/snapshot-debugger/open-snapshot-visual-studio.png" alt-text="Screenshot showing the debug snapshot in Visual Studio.":::
-The downloaded snapshot includes any symbol files that were found on your web application server. These symbol files are required to associate snapshot data with source code. For App Service apps, make sure to enable symbol deployment when you publish your web apps.
+The downloaded snapshot includes any symbol files found on your web application server. These symbol files are required to associate snapshot data with source code. For App Service apps, make sure to enable symbol deployment when you publish your web apps.
## Next steps
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-function-app.md
reviewer: cweining-+ Last updated 11/17/2023
Snapshot Debugger currently works for ASP.NET and ASP.NET Core apps that are running on Azure Functions on Windows service plans.
-We recommend that you run your application on the Basic service tier or higher when you use Snapshot Debugger.
+We recommend that you run your application on the Basic or higher service tiers when using Snapshot Debugger. For most applications:
+- The Free and Shared service tiers don't have enough memory or disk space to save snapshots.
+- The Consumption tier isn't currently available for Snapshot Debugger.
-For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots.
+Snapshot Debugger is preinstalled as part of the Azure Functions runtime, so you don't need to add extra NuGet packages or application settings.
## Prerequisite
-[Enable Application Insights monitoring in your Functions app](../../azure-functions/configure-monitoring.md#add-to-an-existing-function-app)
+[Enable Application Insights monitoring in your Functions app](../../azure-functions/configure-monitoring.md#new-function-app-in-the-portal).
## Enable Snapshot Debugger
To enable Snapshot Debugger in your Functions app, add the `snapshotConfiguratio
} } ```
+Generate traffic to your application that can trigger an exception. Then wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
-Snapshot Debugger is preinstalled as part of the Azure Functions runtime and is disabled by default. Because it's included in the runtime, you don't need to add extra NuGet packages or application settings.
-
-In the simple .NET Core Function app example that follows, `.csproj`, `{Your}Function.cs`, and `host.json` have Snapshot Debugger enabled:
+You can verify that Snapshot Debugger is enabled by checking your .NET function app files. For example, in the following simple .NET function app, the `.csproj`, `{Your}Function.cs`, and `host.json` files show Snapshot Debugger enabled:
`Project.csproj`
To disable Snapshot Debugger in your Functions app, update your `host.json` file
} ```
-We recommend that you have Snapshot Debugger enabled on all your apps to ease diagnostics of application exceptions.
- ## Next steps
-* Generate traffic to your application that can trigger an exception. Then wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
-* [View snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
-* Customize Snapshot Debugger configuration based on your use case on your Functions app. For more information, see [Snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration).
-* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md).
+- [View snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#access-debug-snapshots-in-the-portal) in the Azure portal.
+- Customize Snapshot Debugger configuration based on your use case on your Functions app. For more information, see [Snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration).
+- [Troubleshoot Snapshot Debugger issues](snapshot-debugger-troubleshoot.md).
azure-monitor Snapshot Debugger Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-upgrade.md
- Title: Upgrade Application Insights Snapshot Debugger
-description: Learn how to upgrade the Snapshot Debugger for .NET apps to the latest version on Azure App Services or via NuGet packages.
---
-reviewer: cweining
- Previously updated : 11/17/2023---
-# Upgrade the Snapshot Debugger
-
-> [!IMPORTANT]
-> [Microsoft is moving away from TLS 1.0 and TLS 1.1](/lifecycle/announcements/transport-layer-security-1x-disablement) due to vulnerabilities. If you're using an older version of the site extension, you need to upgrade your instance of Snapshot Debugger to the latest version.
-
-Depending on how you enabled the Snapshot Debugger, you can follow two primary upgrade paths:
--- Via site extension-- Via an SDK/NuGet added to your application-
-# [Site extension](#tab/site-ext)
-
-> [!IMPORTANT]
-> Older versions of Application Insights used a private site extension called *Application Insights extension for Azure App Service*.
-> The current Application Insights experience is enabled by setting App Settings to light up a preinstalled site extension.
-> To avoid conflicts, which might cause your site to stop working, delete the private site extension first. See step 4 in the following procedure.
-
-If you enabled the Snapshot Debugger by using the site extension, you can upgrade by following these steps:
-
-1. Sign in to the Azure portal.
-1. Go to your resource that has Application Insights and Snapshot Debugger enabled. For example, for a web app, go to the Azure App Service resource.
-
- :::image type="content" source="./media/snapshot-debugger-upgrade/app-service-resource.png" alt-text="Screenshot that shows an individual App Service resource named DiagService01.":::
-
-1. Select the **Extensions** pane. Wait for the list of extensions to populate.
-
- :::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-site-extension-to-be-deleted.png" alt-text="Screenshot that shows App Service Extensions showing the Application Insights extension for Azure App Service installed.":::
-
-1. If any version of **Application Insights extension for Azure App Service** is installed, select it and select **Delete**. Confirm **Yes** to delete the extension. Wait for the delete process to finish before you move to the next step.
-
- :::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-site-extension-delete.png" alt-text="Screenshot that shows App Service Extensions showing Application Insights extension for Azure App Service with the Delete button.":::
-
-1. Go to the **Overview** pane of your resource and select **Application Insights**.
-
- :::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-button.png" alt-text="Screenshot that shows selecting the Application Insights button.":::
-
-1. If this is the first time you've viewed the **Application Insights** pane for this app service, you're prompted to turn on Application Insights. Select **Turn on Application Insights**.
-
- :::image type="content" source="./media/snapshot-debugger-upgrade/turn-on-application-insights.png" alt-text="Screenshot that shows the Turn on Application Insights button.":::
-
-1. On the **Application Insights settings** pane, switch the Snapshot Debugger setting toggles to **On** and select **Apply**.
-
- If you decide to change *any* Application Insights settings, the **Apply** button is activated.
-
- :::image type="content" source="./media/snapshot-debugger-upgrade/view-application-insights-data.png" alt-text="Screenshot that shows Application Insights App Service Configuration page with the Apply button highlighted.":::
-
-1. After you select **Apply**, you're asked to confirm the changes.
-
- > [!NOTE]
- > The site restarts as part of the upgrade process.
-
- :::image type="content" source="./media/snapshot-debugger-upgrade/apply-monitoring-settings.png" alt-text="Screenshot that shows the App Service Apply monitoring settings prompt.":::
-
-1. Select **Yes** to apply the changes and wait for the process to finish.
-
-The site is now upgraded and is ready to use.
--
-# [SDK/NuGet](#tab/sdk-nuget)
-
-If your application is using a version of `Microsoft.ApplicationInsights.SnapshotCollector` earlier than version 1.3.1, upgrade it to a [newer version](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) to continue working.
---
-## Next steps
-
-- [Learn how to view snapshots](./snapshot-debugger-data.md)
-- [Troubleshoot issues you encounter in Snapshot Debugger](./snapshot-debugger-troubleshoot.md)
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
reviewer: cweining-+ Last updated 11/17/2023
If your ASP.NET or ASP.NET Core application runs in Azure App Service and requir
If your application runs in Azure Service Fabric, Azure Cloud Services, Azure Virtual Machines, or on-premises machines, you can skip enabling Snapshot Debugger on App Service and follow the guidance in this article.
-## Before you begin
+## Prerequisites
-- [Enable Application Insights in your web app](../app/asp-net.md).
+- [Enable Application Insights in your .NET resource](../app/asp-net.md).
- Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.4.2 or above in your app.
+- Understand that snapshots may take 10 to 15 minutes to be sent to the Application Insights instance after an exception has been triggered.
## Configure snapshot collection for ASP.NET applications
-When you add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package to your application, the `SnapshotCollectorTelemetryProcessor` should be added automatically to the `TelemetryProcessors` section of [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md).
+When you add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package to your application, the `SnapshotCollectorTelemetryProcessor` is added automatically to the `TelemetryProcessors` section of [`ApplicationInsights.config`](../app/configuration-with-applicationinsights-config.md).
-If you don't see `SnapshotCollectorTelemetryProcessor` in ApplicationInsights.config, or if you want to customize the Snapshot Debugger configuration, you may edit it by hand. However, these edits may get overwritten if you later upgrade to a newer version of the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package.
+If you don't see `SnapshotCollectorTelemetryProcessor` in `ApplicationInsights.config`, or if you want to customize the Snapshot Debugger configuration, you can edit it manually.
-The following example shows a configuration equivalent to the default configuration:
+> [!NOTE]
+> Any manual configurations may get overwritten when upgrading to a newer version of the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package.
+
+Snapshot Collector's default configuration looks similar to the following example:
```xml <TelemetryProcessors>
Add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/
### Update the services collection
-In your application's startup code, where services are configured, add a call to the `AddSnapshotCollector` extension method. It's a good idea to add this line immediately after the call to `AddApplicationInsightsTelemetry`. For example:
+In your application's startup code, where services are configured, add a call to the `AddSnapshotCollector` extension method. We suggest adding this line immediately after the call to `AddApplicationInsightsTelemetry`. For example:
+
```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddApplicationInsightsTelemetry();
builder.Services.AddSnapshotCollector();
```
-### Configure the Snapshot Collector
-For most situations, the default settings are sufficient. If not, customize the settings by adding the following code before the call to `AddSnapshotCollector()`
+### Customize the Snapshot Collector
+
+For most scenarios, Snapshot Collector's default settings are sufficient. However, you can customize the settings by adding the following code before the call to `AddSnapshotCollector()`:
+
```csharp
using Microsoft.ApplicationInsights.SnapshotCollector;
...
builder.Services.Configure<SnapshotCollectorConfiguration>(builder.Configuration.GetSection("SnapshotCollector"));
```
-Next, add a `SnapshotCollector` section to _appsettings.json_ where you can override the defaults. The following example shows a configuration equivalent to the default configuration:
+Next, add a `SnapshotCollector` section to _`appsettings.json`_ where you can override the defaults.
+
+Snapshot Collector's default `appsettings.json` configuration looks similar to the following example:
```json {
Next, add a `SnapshotCollector` section to _appsettings.json_ where you can over
```

If you need to customize the Snapshot Collector's behavior manually, without using _appsettings.json_, use the overload of `AddSnapshotCollector` that takes a delegate. For example:

```csharp
builder.Services.AddSnapshotCollector(config => config.IsEnabledInDeveloperMode = true);
```

## Configure snapshot collection for other .NET applications
-Snapshots are collected only on exceptions that are reported to Application Insights. For ASP.NET and ASP.NET Core applications, the Application Insights SDK automatically reports unhandled exceptions that escape a controller method or endpoint route handler. For other applications, you might need to modify your code to report them. The exception handling code depends on the structure of your application. Here's an example:
+Snapshots are collected only on exceptions that are reported to Application Insights.
+
+For ASP.NET and ASP.NET Core applications, the Application Insights SDK automatically reports unhandled exceptions that escape a controller method or endpoint route handler.
+
+For other applications, you might need to modify your code to report them. The exception handling code depends on the structure of your application. For example:
```csharp using Microsoft.ApplicationInsights;
internal class LoggerExample
} ```
-> [!NOTE]
-> By default, the Application Insights Logger (`ApplicationInsightsLoggerProvider`) forwards exceptions to the Snapshot Debugger via `TelemetryClient.TrackException`. This behavior is controlled via the `TrackExceptionsAsExceptionTelemetry` property on the `ApplicationInsightsLoggerOptions` class. If you set `TrackExceptionsAsExceptionTelemetry` to `false` when configuring the Application Insights Logger, then the preceding example will not trigger the Snapshot Debugger. In this case, modify your code to call `TrackException` manually.
+By default, the Application Insights Logger (`ApplicationInsightsLoggerProvider`) forwards exceptions to the Snapshot Debugger via `TelemetryClient.TrackException`. This behavior is controlled via the `TrackExceptionsAsExceptionTelemetry` property on the `ApplicationInsightsLoggerOptions` class.
+
+If you set `TrackExceptionsAsExceptionTelemetry` to `false` when configuring the Application Insights Logger, the preceding example will not trigger the Snapshot Debugger. In this case, modify your code to call `TrackException` manually.
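
As a minimal illustration of that manual path (not part of the original article), the following sketch reports a caught exception with `TelemetryClient.TrackException` so that it becomes eligible for snapshot collection. The `TelemetryClient` is assumed to come from dependency injection, which `AddApplicationInsightsTelemetry` sets up; the class and method names are placeholders.

```csharp
using System;
using Microsoft.ApplicationInsights;

// Hypothetical worker class; TelemetryClient is resolved from DI,
// which AddApplicationInsightsTelemetry registers.
public class OrderProcessor
{
    private readonly TelemetryClient _telemetryClient;

    public OrderProcessor(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    public void Process(string orderId)
    {
        try
        {
            // Application logic that might throw.
            ProcessCore(orderId);
        }
        catch (Exception ex)
        {
            // Reporting the exception explicitly makes it visible to the Snapshot Debugger.
            _telemetryClient.TrackException(ex);
            throw;
        }
    }

    private static void ProcessCore(string orderId) =>
        throw new InvalidOperationException($"Order {orderId} failed.");
}
```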
[!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)]

## Next steps

-- Generate traffic to your application that can trigger an exception. Then wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
-- See [snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
-- [Troubleshoot](snapshot-debugger-troubleshoot.md) Snapshot Debugger problems.
+- View [snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#access-debug-snapshots-in-the-portal) in the Azure portal.
+- [Troubleshoot Snapshot Debugger issues](snapshot-debugger-troubleshoot.md).
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
reviewer: cweining Previously updated : 11/17/2023 Last updated : 08/14/2024 # Debug exceptions in .NET applications using Snapshot Debugger
-With Snapshot Debugger, you can automatically collect a debug snapshot when an exception occurs in your live .NET application. Debug snapshots collected show the state of source code and variables at the moment the exception was thrown.
-
-The Snapshot Debugger in [Application Insights](../app/app-insights-overview.md):
+When enabled, Snapshot Debugger automatically collects a debug snapshot of the source code and variables when an exception occurs in your live .NET application. The Snapshot Debugger in [Application Insights](../app/app-insights-overview.md):
- Monitors system-generated logs from your web app.
- Collects snapshots on your top-throwing exceptions.
Snapshot collection is available for:
The following environments are supported:
-* [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Functions](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running OS family 4 or later
-* [Azure Service Fabric](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running on Windows Server 2012 R2 or later
-* [Azure Virtual Machines and Azure Virtual Machine Scale Sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later
-* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later or Windows 8.1 or later
+- [Azure App Service](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json)
+- [Azure Functions](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
+- [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running OS family 4 or later
+- [Azure Service Fabric](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running on Windows Server 2012 R2 or later
+- [Azure Virtual Machines and Azure Virtual Machine Scale Sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later
+- [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later or Windows 8.1 or later
> [!NOTE] > Client applications (for example, WPF, Windows Forms, or UWP) aren't supported.
The Snapshot Debugger is implemented as an [Application Insights telemetry proce
### Snapshot Debugger process
-The Snapshot Debugger process starts and ends with the `TrackException` method. A process snapshot is a suspended clone of the running process, so that your users experience little to no interruption.
+The Snapshot Debugger process starts and ends with the `TrackException` method. A process snapshot is a suspended clone of the running process, so that your users experience little to no interruption. In a typical scenario:
1. Your application calls the [`TrackException`](../app/asp-net-exceptions.md#exceptions) method.
The Snapshot Debugger process starts and ends with the `TrackException` method.
### Snapshot Uploader process
-While the Snapshot Debugger process continues to run and serve traffic to users with little interruption, the snapshot is handed off to the Snapshot Uploader process. The Snapshot Uploader:
+While the Snapshot Debugger process continues to run and serve traffic to users with little interruption, the snapshot is handed off to the Snapshot Uploader process. In a typical scenario, the Snapshot Uploader:
1. Creates a minidump.
While the Snapshot Debugger process continues to run and serve traffic to users
If you enabled the Snapshot Debugger but you aren't seeing snapshots, see the [Troubleshooting guide](snapshot-debugger-troubleshoot.md).
+## Upgrading Snapshot Debugger
+
+Snapshot Debugger auto-upgrades via the built-in, preinstalled Application Insights site extension.
+
+Manually adding an Application Insights site extension to keep Snapshot Debugger up-to-date is deprecated.
+
## Overhead

The Snapshot Debugger is designed for use in production environments. The default settings include rate limits to minimize the impact on your applications.
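
To connect those rate limits to the configuration surface shown earlier, here's a minimal sketch using the delegate overload of `AddSnapshotCollector`. The specific property names (`ThresholdForSnapshotting`, `SnapshotsPerDayLimit`) are assumptions about `SnapshotCollectorConfiguration` and should be checked against the NuGet package version you're running.

```csharp
using Microsoft.ApplicationInsights.SnapshotCollector;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApplicationInsightsTelemetry();

// Tighten the collector's limits; the property names are assumptions and should be
// verified against the SnapshotCollectorConfiguration type in your package version.
builder.Services.AddSnapshotCollector(config =>
{
    config.ThresholdForSnapshotting = 5; // require more occurrences of an exception before snapshotting
    config.SnapshotsPerDayLimit = 10;    // cap how many snapshots are collected per day
});

var app = builder.Build();
app.Run();
```

Lowering these values trades diagnostic coverage for reduced overhead in high-traffic services.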
azure-netapp-files Azure Netapp Files Troubleshoot Resource Provider Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-troubleshoot-resource-provider-errors.md
Previously updated : 02/09/2022 Last updated : 08/15/2024 # Troubleshoot Azure NetApp Files Resource Provider errors
This error occurs when the export policy isn't defined with a unique index. When
* Cause: The defined export policy doesn't meet the requirement for export policy rules. You must have one export policy rule at the minimum and five export policy rules at the maximum. * Solution:
-Make sure that the index isn't already used and that it is in the range from 1 to 5.
+Make sure that the index isn't already used and that it's in the range from 1 to 5.
* Workaround: Use a different index for the rule that you're trying to set.
This error occurs when the file path contains unsupported characters, for exampl
* Cause: The file path contains unsupported characters, for example, a period ("."), comma (","), underscore ("_"), or dollar sign ("$"). * Solution:
-Remove characters that are not alphabetical letters, numbers, or hyphens ("-") from the file path you entered.
+Remove all characters other than alphabetical letters, numbers, or hyphens ("-") from the file path you entered.
* Workaround: You can replace an underscore with a hyphen or use capitalization instead of spaces to indicate the beginning of new words. For example, use "NewVolume" instead of "new volume".
Wait a few minutes and check if the problem persists.
***Not allowed to mix protocol types CIFS and NFS***
-This error occurs when you're trying to create a Volume and there are both the CIFS (SMB) and NFS protocol types in the volume properties.
+This error occurs when you're trying to create a volume that has both the CIFS (SMB) and NFS protocol types in the volume properties.
* Cause: Both the CIFS (SMB) and NFS protocol types are used in the volume properties.
This error indicates that the operation isn't available for the active subscript
* Cause: The operation isn't available for the subscription or resource. * Solution:
-Make sure that the operation is entered correctly and that it is available for the resource and subscription that you're using.
+Make sure that the operation is entered correctly. The operation should be available for the resource and subscription that you're using.
***OwnerId cannot be changed***
This error occurs when you try to create a volume and the capacity pool in which
* Cause: The capacity pool where the volume is being created isn't found. * Solution:
-Most likely the pool was not fully created or was already deleted at the time of the volume creation.
+Most likely the pool wasn't fully created or was already deleted at the time of the volume creation.
***Patch operation is not supported for this resource type.*** This error occurs when you try to change the mount target or snapshot. * Cause:
-The mount target is defined when it is created, and it can't be changed subsequently.
+The mount target is defined when it's created, and it can't be changed subsequently.
The snapshots don't contain any properties that can be changed. * Solution: None. Those resources don't have any properties that can be changed.
This error occurs when you try to reference a nonexistent resource, for example,
* Cause: You're trying to reference a nonexistent resource (for example, a volume or snapshot) that has already been deleted or has a misspelled resource name. * Solution:
-Check the request for spelling errors to make sure that it is correctly referenced.
+Check the request for spelling errors to make sure that it's correctly referenced.
* Workaround: See the Solution section above.
This error occurs when nonexistent properties are provided for a resource such a
* Cause: The request has a set of properties that can be used with each resource. You can't include any nonexistent properties in the request. * Solution:
-Make sure that all property names are spelled correctly and that the properties are available for the subscription and resource.
+Make sure all property names are spelled correctly. Make sure the properties are available for the subscription and resource.
* Workaround: Reduce the number of properties defined in the request to eliminate the property that is causing the error.
See the solution above.
***Volume is being deleted and cannot be deleted at the moment.***
-This error occurs when you try to delete a volume when it is already being deleted.
+This error occurs when you try to delete a volume when it's already being deleted.
* Cause: A volume is already being deleted when you try to delete the volume.
This error occurs when you try to create an SMB volume, but a DNS server (specif
* Cause: You're trying to create an SMB volume, but a DNS server (specified in your Active Directory configuration) is unreachable. * Solution:
-Review your Active Directory configuration and make sure that the DNS server IP addresses are correct and reachable.
+Review your Active Directory configuration. Make sure that the DNS server IP addresses are correct and reachable.
If there are no issues with the DNS server IP addresses, verify that no firewalls are blocking access. ***Too many concurrent jobs***
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
The following example shows the rules that are available for configuration.
"maxAllowedAgeInDays": 730 }, "use-recent-module-versions": {
- "level": "warning",
+ "level": "warning"
}, "use-resource-id-functions": { "level": "warning"
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/data-types.md
param emptyArray array = []
param numberArray array = [1, 2, 3] output foo bool = empty(emptyArray) || emptyArray[0] == 'bar'
-output bar bool = length(numberArray) >= 3 || numberArray[3] == 4
+output bar bool = length(numberArray) <= 3 || numberArray[3] == 4
``` ## Booleans
azure-signalr Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-enable-geo-replication.md
Deploy the Bicep file using Azure CLI
-
+> [!NOTE]
+> * The replica count is currently limited to a maximum of 8 per primary resource.
+ ## Pricing and resource unit Each replica has its **own** `unit` and `autoscale settings`.
azure-web-pubsub Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-enable-geo-replication.md
Update **webpubsub** extension to the latest version, then run:
-
+> [!NOTE]
+> * The replica count is currently limited to a maximum of 8 per primary resource.
+ ## Pricing and resource unit Each replica has its **own** `unit` and `autoscale settings`.
cdn Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/migrate-tier.md
Title: Migrate Azure CDN from Microsoft (classic) to Azure Front Door Standard or Premium tier (preview)
+ Title: Migrate Azure CDN from Microsoft (classic) to Azure Front Door Standard or Premium tier
description: This article provides step-by-step instructions on how to migrate from an Azure CDN from Microsoft (classic) profile to an Azure Front Door Standard or Premium tier profile.
Last updated 06/25/2024
-# Migrate Azure CDN from Microsoft (classic) to Standard/Premium tier (preview)
-
-> [!IMPORTANT]
-> Azure CDN from Microsoft to Azure Front Door migration is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Migrate Azure CDN from Microsoft (classic) to Standard/Premium tier
Azure Front Door Standard and Premium tier bring the latest cloud delivery network features to Azure. With enhanced security features and an all-in-one service, your application content is secured and closer to your end users using the Microsoft global network. This article guides you through the migration process to move your Azure CDN from Microsoft (classic) profile to either a Standard or Premium tier profile.
communication-services Teams Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md
An organizer-assigned policy can disable the feature in all meetings this user o
|Setting name | Policy scope|Description |Tenant policy| property |
|--|--|--|--|--|
|[Let anonymous people join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | per-organizer | If disabled, Teams external users can't join Teams meetings. | [CsExternalAccessPolicy](/PowerShell/module/skype/set-csexternalaccesspolicy)| EnableAcsFederationAccess |
-|Blocked anonymous join client types | per-organizer | If the property "BlockedAnonymousJoinClientTypes" is set to "Teams" or "Null", the Teams external users via Azure Communication Services can join Teams meeting. | [CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy) | BlockedAnonymousJoinClientTypes |
+|Blocked anonymous join client types | per-organizer | If the property "BlockedAnonymousJoinClientTypes" is set to "Teams" or "Null", the Teams external users via Azure Communication Services can't join Teams meeting. | [CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy) | BlockedAnonymousJoinClientTypes |
|[Anonymous users can join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | per-organizer | If disabled, Teams external users can't join Teams meetings. |[CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy)| AllowAnonymousUsersToJoinMeeting|
|[Let anonymous people start a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings)| per-organizer | If enabled, Teams external users can start a Teams meeting without a Teams user. | [CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy) |AllowAnonymousUsersToStartMeeting|
|Anonymous users can dial out to phone users | per-organizer | If enabled, Teams external users can add phone participants to the meeting.| [CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy) |AllowAnonymousUsersToDialOut|
communication-services Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/closed-captions.md
Microsoft indicates to you via the Azure Communication Services API that recordi
- Get started with a [Closed Captions Quickstart](../../quickstarts/voice-video-calling/get-started-with-closed-captions.md) - Learn more about using closed captions in [Teams interop](../interop/enable-closed-captions.md) scenarios.
+- Learn more about the [UI Library](../ui-library/ui-library-overview.md).
communication-services Closed Captions Teams Interop How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/closed-captions-teams-interop-how-to.md
For more information, see the following articles:
- Learn about [Voice](./manage-calls.md) and [Video calling](./manage-video.md). - Learn about [Teams interoperability](./teams-interoperability.md). - Learn more about Microsoft Teams [live translated captions](https://support.microsoft.com//office/use-live-captions-in-a-teams-meeting-4be2d304-f675-4b57-8347-cbd000a21260).
+- Learn more about the [UI Library](../../concepts//ui-library/ui-library-overview.md).
communication-services Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/closed-captions.md
+
+ Title: Enable scenarios using closed captions and the UI Library
+
+description: Enable scenarios using closed captions and Azure Communication Services UI Library.
+++++ Last updated : 07/01/2024+
+zone_pivot_groups: acs-plat-ios-android
+
+#Customer intent: As a developer, I want to set up closed captions in a call using the UI Library.
++
+# Closed captions
+
+Closed captions play a critical role in video and voice calling apps, providing numerous benefits that enhance the accessibility, usability, and overall user experience of these platforms.
+
+In this article, you learn how to enable closed captions scenarios using the UI Library. There are two main scenarios for enabling closed captions: Azure Communication Services video and voice calls, and Teams interop calls.
+
+## Azure Communication Service based captions
+
+Supported for calls involving Azure Communication Service users only. Currently, Azure Communication Service captions **do not support language translation**.
+
+## Teams Interop closed captions
+
+Supported during calls with one or more Teams users.
+
+### Translation support
+
+Unlike Azure Communication Service closed captions, Teams Interop closed captions support translation. Users can opt to have closed captions translated into a different language through the captions settings.
+
+## How to use captions
+
+Captions are seamlessly integrated within the `CallingUILibrary`.
+
+1. **Activate captions**:
+ - During a connected call, navigate to the control bar and click the **more button**.
+ - In the menu pop-up, toggle to turn on captions.
+
+2. **Adjust spoken language**:
+ - If a different language is being used in the meeting, users can change the spoken language via the UI. This change applies to all users in the call.
+
+3. **Set caption language** (for Teams Interop Closed Captions):
+ - By default, live captions are displayed in the meeting or event spoken language. Live translated captions allow users to see captions translated into the language they're most comfortable with.
+ - Change the caption language by clicking on the **Captions language** button after captions start, if translation to a different language is desired.
++
+> [!NOTE]
+> Live translated captions in meetings are only available as part of [**Teams Premium**](https://learn.microsoft.com/MicrosoftTeams/teams-add-on-licensing/licensing-enhance-teams#meetings), an add-on license that provides additional features to make Teams meetings more personalized, intelligent, and secure. To get access to Teams Premium, contact your IT admin. For more information, see [Teams interop closed captions](../calling-sdk/closed-captions-teams-interop-how-to.md).
+
+## Supported languages
+
+Azure Communication Services supports various spoken languages for captions. The following table lists the supported language codes that you can use with the `setSpokenLanguage` method to set the desired language for captions.
+
+| Language | ACS Spoken Code | Teams Spoken Code | Teams Captions Code |
+|--|--|-|--|
+| Arabic | ar-ae, ar-sa | ar-ae, ar-sa | ar |
+| Danish | da-dk | da-dk | da |
+| German | de-de | de-de | de |
+| English | en-au, en-ca, en-gb, en-in, en-nz, en-us | en-au, en-ca, en-gb, en-in, en-nz, en-us | en |
+| Spanish | es-es, es-mx | es-es, es-mx | es |
+| Finnish | fi-fi | fi-fi | fi |
+| French | fr-ca, fr-fr | fr-ca, fr-fr | fr, fr-ca |
+| Hindi | hi-in | hi-in | hi |
+| Italian | it-it | it-it | it |
+| Japanese | ja-jp | ja-jp | ja |
+| Korean | ko-kr | ko-kr | ko |
+| Norwegian | nb-no | nb-no | nb |
+| Dutch | nl-be, nl-nl | nl-be, nl-nl | nl |
+| Polish | pl-pl | pl-pl | pl |
+| Portuguese | pt-br | pt-br, pt-pt | pt, pt-pt |
+| Russian | ru-ru | ru-ru | ru |
+| Swedish | sv-se | sv-se | sv |
+| Chinese | zh-cn, zh-hk | zh-cn, zh-hk | zh-Hans, zh-Hant |
+| Czech | N/A | cs-cz | cs |
+| Slovak | N/A | sk-sk | sk |
+| Turkish | N/A | tr-tr | tr |
+| Vietnamese | N/A | vi-vn | vi |
+| Thai | N/A | th-th | th |
+| Hebrew | N/A | he-il | he |
+| Welsh | N/A | cy-gb | cy |
+| Ukrainian | N/A | uk-ua | uk |
+| Greek | N/A | el-gr | el |
+| Hungarian | N/A | hu-hu | hu |
+| Romanian | N/A | ro-ro | ro |
+
+Ensure the spoken language selected matches the language used in the call to accurately generate captions.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A user access token to enable the call client. [Get a user access token](../../quickstarts/access-tokens.md).
+- Optional: Completion of the [quickstart for getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md).
+
+## Set up the feature
+++
+## Next steps
+
+- [Learn more about the UI Library](../../concepts/ui-library/ui-library-overview.md)
cost-management-billing Get Small Usage Datasets On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/get-small-usage-datasets-on-demand.md
description: The article explains how you can use the Cost Details API to get raw, unaggregated cost data that corresponds to your Azure bill. Previously updated : 01/24/2024 Last updated : 08/14/2024
GET https://management.azure.com/{scope}/providers/Microsoft.CostManagement/cost
```JSON {
- "id": "subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.CostManagement/operationResults/00000000-0000-0000-0000-000000000000",
- "name": "00000000-0000-0000-0000-000000000000",
+ "id": "subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/providers/Microsoft.CostManagement/operationResults/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
+ "name": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"status": "Completed", "manifest": { "manifestVersion": "2022-05-01",
GET https://management.azure.com/{scope}/providers/Microsoft.CostManagement/cost
"byteCount": 160769, "compressData": false, "requestContext": {
- "requestScope": "subscriptions/00000000-0000-0000-0000-000000000000",
+ "requestScope": "subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"requestBody": { "metric": "ActualCost", "timePeriod": {
cost-management-billing Get Usage Data Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/get-usage-data-azure-cli.md
description: This article explains how you get usage data with the Azure CLI. Previously updated : 11/17/2023 Last updated : 08/14/2024
After you sign in, use the [export](/cli/azure/costmanagement/export) commands t
1. Run the [export create](/cli/azure/costmanagement/export#az_costmanagement_export_create) command to create the export: ```azurecli
- az costmanagement export create --name DemoExport --type Usage \--scope "subscriptions/00000000-0000-0000-0000-000000000000" --storage-account-id cmdemo \--storage-container democontainer --timeframe MonthToDate --storage-directory demodirectory
+ az costmanagement export create --name DemoExport --type Usage \--scope "subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e" --storage-account-id cmdemo \--storage-container democontainer --timeframe MonthToDate --storage-directory demodirectory
``` ## Related content
cost-management-billing Migrate Ea Price Sheet Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-price-sheet-api.md
description: This article has information to help you migrate from the EA Price Sheet API. Previously updated : 04/23/2024 Last updated : 08/14/2024
The POST request returns a location to poll the report generation status as outl
Status code: 202 ```http
-Location: https://management.azure.com/providers/Microsoft.Billing/billingAccounts/0000000/providers/Microsoft.CostManagement/operationResults/00000000-0000-0000-0000-000000000000?api-version=2023-09-01
+Location: https://management.azure.com/providers/Microsoft.Billing/billingAccounts/0000000/providers/Microsoft.CostManagement/operationResults/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e?api-version=2023-09-01
Retry-After: 60 ```
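
For readers wiring up the new endpoint, here's a rough polling sketch (not part of the migration article): it issues GET requests against the `Location` URL shown above with `HttpClient`, honoring the `Retry-After` hint between attempts. The bearer-token acquisition and the assumption that any status other than 202 means the report is ready are mine, not the API contract.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;

// pollUrl is the Location header returned by the POST; accessToken is an ARM bearer token
// obtained elsewhere (for example, with Azure.Identity). Both are assumed inputs here.
static async Task<HttpResponseMessage> PollUntilCompleteAsync(
    HttpClient client, Uri pollUrl, string accessToken, CancellationToken cancellationToken)
{
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

    while (true)
    {
        HttpResponseMessage response = await client.GetAsync(pollUrl, cancellationToken);

        // 202 means the report is still being generated; anything else is treated as final here.
        if (response.StatusCode != HttpStatusCode.Accepted)
        {
            return response;
        }

        // Wait the interval the service suggested (60 seconds in the sample above).
        TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(60);
        await Task.Delay(delay, cancellationToken);
    }
}
```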
Status code: 200
### Sample request to poll report generation status ```HTTP
-GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/0000000/providers/Microsoft.CostManagement/operationResults/00000000-0000-0000-0000-000000000000?api-version=2023-09-01
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/0000000/providers/Microsoft.CostManagement/operationResults/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e?api-version=2023-09-01
``` ### Response body changes
cost-management-billing Migrate Ea Reserved Instance Charges Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reserved-instance-charges-api.md
description: This article has information to help you migrate from the EA Reserved Instance Charges API. Previously updated : 04/23/2024 Last updated : 08/14/2024
Old response:
"armSkuName": "Standard_F1s", "term": "P1Y", "region": "eastus",
- "PurchasingsubscriptionGuid": "00000000-0000-0000-0000-000000000000",
+ "PurchasingsubscriptionGuid": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"PurchasingsubscriptionName": "string", "accountName": "string", "accountOwnerEmail": "string",
Old response:
"currentEnrollment": "string", "billingFrequency": "OneTime", "eventDate": "string",
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
+ "reservationOrderId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"description": "Standard_F1s eastus 1 Year", "eventType": "Purchase", "quantity": int,
New response:
"tags": [], "properties": { "eventDate": "2019-09-09T19:19:04Z",
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
+ "reservationOrderId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"description": "Standard_DS1_v2 westus 1 Year", "eventType": "Refund", "quantity": 1,
New response:
"armSkuName": "Standard_DS1_v2", "term": "P1Y", "region": "westus",
- "purchasingSubscriptionGuid": "a838a8c3-a408-49e1-ac90-42cb95bff9b2",
+ "purchasingSubscriptionGuid": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"purchasingSubscriptionName": "Infrastructure Subscription", "accountName": "Microsoft Infrastructure", "accountOwnerEmail": "admin@microsoft.com",
cost-management-billing Migrate Ea Reserved Instance Recommendations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reserved-instance-recommendations-api.md
description: This article has information to help you migrate from the EA Reserved Instance Recommendations API. Previously updated : 04/23/2024 Last updated : 08/14/2024
Old response for Shared scope:
```json { "lookBackPeriod": "Last60Days",
- "meterId": "00000000-0000-0000-0000-000000000000",
+ "meterId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"skuName": "Standard_B1s", "term": "P3Y", "region": "eastus",
Old response for Single scope:
```json {
- "subscriptionId": "00000000-0000-0000-0000-000000000000",
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"lookBackPeriod": "Last60Days",
- "meterId": "00000000-0000-0000-0000-000000000000",
+ "meterId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"skuName": "Standard_B1s", "term": "P3Y", "region": "eastus",
New response:
{ "value": [ {
- "id": "billingAccount/123456/providers/Microsoft.Consumption/reservationRecommendations/00000000-0000-0000-0000-000000000000",
- "name": "00000000-0000-0000-0000-000000000000",
+ "id": "billingAccount/123456/providers/Microsoft.Consumption/reservationRecommendations/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
+ "name": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"type": "Microsoft.Consumption/reservationRecommendations", "location": "westus", "sku": "Standard_DS1_v2", "kind": "legacy", "properties": {
- "meterId": "00000000-0000-0000-0000-000000000000",
+ "meterId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"term": "P1Y", "costWithNoReservedInstances": 12.0785105, "recommendedQuantity": 1,
cost-management-billing Migrate Ea Reserved Instance Usage Details Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reserved-instance-usage-details-api.md
description: This article has information to help you migrate from the EA Reserved Instance Usage Details API. Previously updated : 04/23/2024 Last updated : 08/14/2024
The POST request returns a location to poll the report generation status as outl
Status code 202 ```http
-Location: https://management.azure.com/providers/Microsoft.Billing/billingAccounts/9845612/providers/Microsoft.CostManagement/reservationDetailsOperationResults/cf9f95c9-af6b-41dd-a622-e6f4fc60c3ee?api-version=2023-11-01
+Location: https://management.azure.com/providers/Microsoft.Billing/billingAccounts/9845612/providers/Microsoft.CostManagement/reservationDetailsOperationResults/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb?api-version=2023-11-01
Retry-After: 60 ```
Status code 200
{ "status": "Completed", "properties": {
- "reportUrl": "https://storage.blob.core.windows.net/details/20200911/00000000-0000-0000-0000-000000000000?sv=2016-05-31&sr=b&sig=jep8HT2aphfUkyERRZa5LRfd9RPzjXbzB%2F9TNiQ",
+ "reportUrl": "https://storage.blob.core.windows.net/details/20200911/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb?sv=2016-05-31&sr=b&sig=jep8HT2aphfUkyERRZa5LRfd9RPzjXbzB%2F9TNiQ",
"validUntil": "2020-09-12T02:56:55.5021869Z" } }
GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{bi
{ "status": "Completed", "properties": {
- "reportUrl": "https://storage.blob.core.windows.net/details/20200911/00000000-0000-0000-0000-000000000000?sv=2016-05-31&sr=b&sig=jep8HT2aphfUkyERRZa5LRfd9RPzjXbzB%2F9TNiQ",
+ "reportUrl": "https://storage.blob.core.windows.net/details/20200911/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb?sv=2016-05-31&sr=b&sig=jep8HT2aphfUkyERRZa5LRfd9RPzjXbzB%2F9TNiQ",
"validUntil": "2020-09-12T02:56:55.5021869Z" } }
Old response:
```json {
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
- "reservationId": "00000000-0000-0000-0000-000000000000",
+ "reservationOrderId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
+ "reservationId": "bbbbbbbb-1111-2222-3333-cccccccccccc",
"usageDate": "2018-02-01T00:00:00", "skuName": "Standard_F2s",
- "instanceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/resourvegroup1/providers/microsoft.compute/virtualmachines/VM1",
+ "instanceId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/resourvegroup1/providers/microsoft.compute/virtualmachines/VM1",
"totalReservedQuantity": 18.000000000000000, "reservedHours": 432.000000000000000, "usedHours": 400.000000000000000
cost-management-billing Migrate Ea Reserved Instance Usage Summary Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reserved-instance-usage-summary-api.md
description: This article has information to help you migrate from the EA Reserved Instance Usage Summary API. Previously updated : 04/23/2024 Last updated : 08/14/2024
Old response:
```json [ {
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
- "reservationId": "00000000-0000-0000-0000-000000000000",
+ "reservationOrderId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
+ "reservationId": "bbbbbbbb-1111-2222-3333-cccccccccccc",
"skuName": "Standard_F1s", "reservedHours": 24, "usageDate": "2018-05-01T00:00:00",
New response:
"type": "Microsoft.Consumption/reservationSummaries", "tags": null, "properties": {
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
- "reservationId": "00000000-0000-0000-0000-000000000000",
+ "reservationOrderId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
+ "reservationId": "bbbbbbbb-1111-2222-3333-cccccccccccc",
"skuName": "Standard_B1s", "reservedHours": 720, "usageDate": "2018-09-01T00:00:00-07:00",
cost-management-billing Ingest Azure Usage At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/ingest-azure-usage-at-scale.md
description: This article helps you regularly export large amounts of data with exports from Cost Management. Previously updated : 05/14/2024 Last updated : 08/14/2024
Request URL: `PUT https://management.azure.com/{scope}/providers/Microsoft.CostM
"format": "Csv", "deliveryInfo": { "destination": {
- "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MYDEVTESTRG/providers/Microsoft.Storage/storageAccounts/{yourStorageAccount} ",
+ "resourceId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MYDEVTESTRG/providers/Microsoft.Storage/storageAccounts/{yourStorageAccount} ",
"container": "{yourContainer}", "rootFolderPath": "{yourDirectory}" }
cost-management-billing Migrate Cost Management Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/migrate-cost-management-api.md
Title: Migrate EA to Microsoft Customer Agreement APIs - Azure
description: This article helps you understand the consequences of migrating a Microsoft Enterprise Agreement (EA) to a Microsoft Customer Agreement. Previously updated : 02/22/2024 Last updated : 08/14/2024
To view invoice information with the Price Sheet API in CSV format:
| Method | Request URI | | | |
-| POST | `https://management.azure.com/providers/Microsoft.Billing/billingAccounts/2909cffc-b0a2-5de1-bb7b-5d3383764184/billingProfiles/2dcffe0c-ee92-4265-8647-515b8fe7dc78/invoices/{invoiceId}/pricesheet/default/download?api-version=2018-11-01-preview&format=csv` |
+| POST | `https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}/invoices/{invoiceId}/pricesheet/default/download?api-version=2018-11-01-preview&format=csv` |
To view invoice information with the Price Sheet API in JSON Format: | Method | Request URI | | | |
-| POST | `https://management.azure.com/providers/Microsoft.Billing/billingAccounts/2909cffc-b0a2-5de1-bb7b-5d3383764184/billingProfiles/2dcffe0c-ee92-4265-8647-515b8fe7dc78/invoices/{invoiceId}/pricesheet/default/download?api-version=2018-11-01-preview&format=json` |
+| POST | `https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}/invoices/{invoiceId}/pricesheet/default/download?api-version=2018-11-01-preview&format=json` |
You can also see estimated prices for any Azure Consumption or Marketplace consumption service in the current open billing cycle or service period.
The updated Price Sheet by billing account API gets the Price Sheet in CSV forma
| Method | Request URI | | | |
-| GET | `/providers/Microsoft.Billing/billingAccounts/28ae4b7f-41bb-581e-9fa4-8270c857aa5f/billingProfiles/ef37facb-cd6f-437a-9261-65df15b673f9/providers/Microsoft.Consumption/pricesheets/download?api-version=2019-01-01` |
+| GET | `/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}/providers/Microsoft.Consumption/pricesheets/download?api-version=2019-01-01` |
At the EA's enrollment scope, the API response and properties are identical. The properties correspond to the same MCA properties.
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 06/04/2024 Last updated : 08/14/2024
Start by preparing your environment for the Azure CLI:
1. After you sign in, to see your current exports, use the [az costmanagement export list](/cli/azure/costmanagement/export#az-costmanagement-export-list) command: ```azurecli
- az costmanagement export list --scope "subscriptions/00000000-0000-0000-0000-000000000000"
+ az costmanagement export list --scope "subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
``` >[!NOTE]
Start by preparing your environment for the Azure CLI:
```azurecli az costmanagement export create --name DemoExport --type ActualCost \
- --scope "subscriptions/00000000-0000-0000-0000-000000000000" \
- --storage-account-id /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/TreyNetwork/providers/Microsoft.Storage/storageAccounts/cmdemo \
+ --scope "subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e" \
+ --storage-account-id /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/TreyNetwork/providers/Microsoft.Storage/storageAccounts/cmdemo \
--storage-container democontainer --timeframe MonthToDate --recurrence Daily \ --recurrence-period from="2020-06-01T00:00:00Z" to="2020-10-31T00:00:00Z" \ --schedule-status Active --storage-directory demodirectory
Start by preparing your environment for the Azure CLI:
```azurecli az costmanagement export show --name DemoExport \
- --scope "subscriptions/00000000-0000-0000-0000-000000000000"
+ --scope "subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
``` 1. Update an export by using the [az costmanagement export update](/cli/azure/costmanagement/export#az-costmanagement-export-update) command: ```azurecli az costmanagement export update --name DemoExport
- --scope "subscriptions/00000000-0000-0000-0000-000000000000" --storage-directory demodirectory02
+ --scope "subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e" --storage-directory demodirectory02
``` This example changes the output directory.
Start by preparing your environment for the Azure CLI:
You can delete an export by using the [az costmanagement export delete](/cli/azure/costmanagement/export#az-costmanagement-export-delete) command: ```azurecli
-az costmanagement export delete --name DemoExport --scope "subscriptions/00000000-0000-0000-0000-000000000000"
+az costmanagement export delete --name DemoExport --scope "subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
``` ### [Azure PowerShell](#tab/azure-powershell)
Start by preparing your environment for Azure PowerShell:
1. After you sign in, to see your current exports, use the [Get-AzCostManagementExport](/powershell/module/Az.CostManagement/get-azcostmanagementexport) cmdlet: ```azurepowershell-interactive
- Get-AzCostManagementExport -Scope 'subscriptions/00000000-0000-0000-0000-000000000000'
+ Get-AzCostManagementExport -Scope 'subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e'
``` >[!NOTE]
Start by preparing your environment for Azure PowerShell:
$Params = @{ Name = 'DemoExport' DefinitionType = 'ActualCost'
- Scope = 'subscriptions/00000000-0000-0000-0000-000000000000'
- DestinationResourceId = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/treynetwork/providers/Microsoft.Storage/storageAccounts/cmdemo'
+ Scope = 'subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e'
+ DestinationResourceId = '/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/treynetwork/providers/Microsoft.Storage/storageAccounts/cmdemo'
DestinationContainer = 'democontainer' DefinitionTimeframe = 'MonthToDate' ScheduleRecurrence = 'Daily'
Start by preparing your environment for Azure PowerShell:
1. To see the details of your export operation, use the `Get-AzCostManagementExport` cmdlet: ```azurepowershell-interactive
- Get-AzCostManagementExport -Scope 'subscriptions/00000000-0000-0000-0000-000000000000'
+ Get-AzCostManagementExport -Scope 'subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e'
``` 1. Update an export by using the [Update-AzCostManagementExport](/powershell/module/Az.CostManagement/update-azcostmanagementexport) cmdlet: ```azurepowershell-interactive
- Update-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/00000000-0000-0000-0000-000000000000' -DestinationRootFolderPath demodirectory02
+ Update-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e' -DestinationRootFolderPath demodirectory02
``` This example changes the output directory.
Start by preparing your environment for Azure PowerShell:
You can delete an export by using the [Remove-AzCostManagementExport](/powershell/module/Az.CostManagement/remove-azcostmanagementexport) cmdlet: ```azurepowershell-interactive
-Remove-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/00000000-0000-0000-0000-000000000000'
+Remove-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e'
```
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 04/22/2024 Last updated : 08/14/2024
Allowed values for `Workload` are `Production` and `DevTest`.
"name": "sampleAlias", "type": "Microsoft.Subscription/aliases", "properties": {
- "subscriptionId": "b5bab918-e8a9-4c34-a2e2-ebc1b75b9d74",
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"provisioningState": "Accepted" } }
GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sample
"name": "sampleAlias", "type": "Microsoft.Subscription/aliases", "properties": {
- "subscriptionId": "b5bab918-e8a9-4c34-a2e2-ebc1b75b9d74",
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"provisioningState": "Succeeded" } }
You get the subscriptionId as part of the response from the command.
"type": "Microsoft.Subscription/aliases", "properties": { "provisioningState": "Succeeded",
- "subscriptionId": "4921139b-ef1e-4370-a331-dd2229f4f510"
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
} } ```
You get the subscriptionId as part of the response from the command.
"name": "sampleAlias", "properties": { "provisioningState": "Succeeded",
- "subscriptionId": "4921139b-ef1e-4370-a331-dd2229f4f510"
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
}, "type": "Microsoft.Subscription/aliases" }
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement-across-tenants.md
Previously updated : 03/21/2024 Last updated : 08/14/2024
When you create an MCA subscription in the source tenant, you must specify the s
Sign in to Azure CLI and use the [az ad sp show](/cli/azure/ad/sp#az-ad-sp-show) command: ```sh
-az ad sp show --id 00000000-0000-0000-0000-000000000000 --query 'id'
+az ad sp show --id aaaaaaaa-bbbb-cccc-1111-222222222222 --query 'id'
``` #### Azure PowerShell
az ad sp show --id 00000000-0000-0000-0000-000000000000 --query 'id'
Sign in to Azure PowerShell and use the [Get-AzADServicePrincipal](/powershell/module/az.resources/get-azadserviceprincipal) cmdlet: ```sh
-Get-AzADServicePrincipal -ApplicationId 00000000-0000-0000-0000-000000000000 | Select-Object -Property Id
+Get-AzADServicePrincipal -ApplicationId aaaaaaaa-bbbb-cccc-1111-222222222222 | Select-Object -Property Id
``` Save the `Id` value returned by the command.
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
Previously updated : 03/21/2024 Last updated : 08/14/2024
PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/{{gui
"name": "sampleAlias", "type": "Microsoft.Subscription/aliases", "properties": {
- "subscriptionId": "b5bab918-e8a9-4c34-a2e2-ebc1b75b9d74",
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"provisioningState": "Accepted" } }
GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sample
"name": "sampleAlias", "type": "Microsoft.Subscription/aliases", "properties": {
- "subscriptionId": "b5bab918-e8a9-4c34-a2e2-ebc1b75b9d74",
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"provisioningState": "Succeeded" } }
You get the subscriptionId as part of the response from the command.
"name": "sampleAlias", "properties": { "provisioningState": "Succeeded",
- "subscriptionId": "4921139b-ef1e-4370-a331-dd2229f4f510"
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
}, "type": "Microsoft.Subscription/aliases" }
You get the subscriptionId as part of the response from the command.
"name": "sampleAlias", "properties": { "provisioningState": "Succeeded",
- "subscriptionId": "4921139b-ef1e-4370-a331-dd2229f4f510"
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
}, "type": "Microsoft.Subscription/aliases" }
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
Previously updated : 03/21/2024 Last updated : 08/14/2024
The API response lists the customers in the billing account with Azure plans. Yo
"billingProfileId": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "displayName": "Contoso toys", "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "name": "d49c364c-f866-4cc2-a284-d89f369b7951",
+ "name": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"resellers": null, "type": "Microsoft.Billing/billingAccounts/customers" }
PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampl
"name": "sampleAlias", "type": "Microsoft.Subscription/aliases", "properties": {
- "subscriptionId": "b5bab918-e8a9-4c34-a2e2-ebc1b75b9d74",
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"provisioningState": "Accepted" } }
You get the subscriptionId as part of the response from the command.
"name": "sampleAlias", "properties": { "provisioningState": "Succeeded",
- "subscriptionId": "4921139b-ef1e-4370-a331-dd2229f4f510"
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
}, "type": "Microsoft.Subscription/aliases" }
You get the subscriptionId as part of the response from command.
"name": "sampleAlias", "properties": { "provisioningState": "Succeeded",
- "subscriptionId": "4921139b-ef1e-4370-a331-dd2229f4f510"
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
}, "type": "Microsoft.Subscription/aliases" }
cost-management-billing Review Enterprise Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/review-enterprise-billing.md
Previously updated : 03/21/2024 Last updated : 08/14/2024 #Customer intent: As an administrator or developer, I want to use REST APIs to review billing data for all subscriptions and departments in the enterprise enrollment.
Status code 200 (OK) is returned for a successful response, which contains a lis
"usageStart": "2017-02-13T00:00:00Z", "usageEnd": "2017-02-13T23:59:59Z", "instanceName": "shared1",
- "instanceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Default-Web-eastasia/providers/Microsoft.Web/sites/shared1",
+ "instanceId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/Default-Web-eastasia/providers/Microsoft.Web/sites/shared1",
"currency": "USD", "usageQuantity": 0.00328, "billableQuantity": 0.00328,
The following example shows the output of the REST API for department `1234`.
"usageStart": "2017-02-13T00:00:00Z", "usageEnd": "2017-02-13T23:59:59Z", "instanceName": "shared1",
- "instanceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Default-Web-eastasia/providers/Microsoft.Web/sites/shared1",
+ "instanceId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/Default-Web-eastasia/providers/Microsoft.Web/sites/shared1",
"instanceLocation": "eastasia", "currency": "USD", "usageQuantity": 0.00328,
cost-management-billing Spending Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/spending-limit.md
Previously updated : 12/12/2023 Last updated : 08/14/2024
Custom spending limits aren't available.
If the spending limit banner doesn't appear, you can manually navigate to your subscription's URL. 1. Ensure that you've navigated to the correct tenant/directory in the Azure portal.
-1. Navigate to `https://portal.azure.com/#blade/Microsoft_Azure_Billing/RemoveSpendingLimitBlade/subscriptionId/11111111-1111-1111-1111-111111111111` and replace the example subscription ID with your subscription ID.
+1. Navigate to `https://portal.azure.com/#blade/Microsoft_Azure_Billing/RemoveSpendingLimitBlade/subscriptionId/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e` and replace the example subscription ID with your subscription ID.
The spending limit banner should appear.
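If you prefer not to edit the URL by hand, a minimal PowerShell sketch (assuming you're already signed in with `Connect-AzAccount` and the intended subscription is selected) can build the link from your current context:
```powershell
# Minimal sketch: open the spending limit blade for the subscription in your current Azure context.
$subscriptionId = (Get-AzContext).Subscription.Id
Start-Process "https://portal.azure.com/#blade/Microsoft_Azure_Billing/RemoveSpendingLimitBlade/subscriptionId/$subscriptionId"
```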
cost-management-billing Manage Reserved Vm Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/manage-reserved-vm-instance.md
Previously updated : 03/05/2024 Last updated : 08/14/2024 # Manage Reservations for Azure resources
We don't allow changing billing frequency after a reservation is purchased. If
2. Get the details of a reservation: ```powershell
- Get-AzReservation -ReservationOrderId a08160d4-ce6b-4295-bf52-b90a5d4c96a0 -ReservationId b8be062a-fb0a-46c1-808a-5a844714965a
+ Get-AzReservation -ReservationOrderId aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb -ReservationId bbbbbbbb-1111-2222-3333-cccccccccccc
``` 3. Split the reservation into two and distribute the instances: ```powershell # Split the reservation. The sum of the reservations, the quantity, must equal the total number of instances in the reservation that you're splitting.
- Split-AzReservation -ReservationOrderId a08160d4-ce6b-4295-bf52-b90a5d4c96a0 -ReservationId b8be062a-fb0a-46c1-808a-5a844714965a -Quantity 3,2
+ Split-AzReservation -ReservationOrderId aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb -ReservationId bbbbbbbb-1111-2222-3333-cccccccccccc -Quantity 3,2
``` 4. You can update the scope by running the following command: ```powershell
- Update-AzReservation -ReservationOrderId a08160d4-ce6b-4295-bf52-b90a5d4c96a0 -ReservationId 5257501b-d3e8-449d-a1ab-4879b1863aca -AppliedScopeType Single -AppliedScope /subscriptions/15bb3be0-76d5-491c-8078-61fe3468d414
+ Update-AzReservation -ReservationOrderId aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb -ReservationId bbbbbbbb-1111-2222-3333-cccccccccccc -AppliedScopeType Single -AppliedScope /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e
``` ## Cancel, exchange, or refund reservations
cost-management-billing Prepay Hana Large Instances Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-hana-large-instances-reserved-capacity.md
Previously updated : 04/15/2024 Last updated : 08/14/2024
armclient post /providers/Microsoft.Capacity/calculatePrice?api-version=2019-04-
'location': 'eastus', 'properties': { 'reservedResourceType': 'SapHana',
- 'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111',
+ 'billingScopeId': '/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e',
'term': 'P1Y', 'quantity': '1', 'billingplan': 'Monthly', 'displayName': 'testreservation_S224om',
- 'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111'],
+ 'appliedScopes': ['/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e'],
'appliedScopeType': 'Single', 'instanceFlexibility': 'NotSupported' }
The following example response resembles what you get returned. Note the value y
}, "location": "eastus", "properties": {
- "billingScopeId": "/subscriptions/11111111-1111-1111-111111111111",
+ "billingScopeId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"term": "P1Y", "billingPlan": "Upfront", "quantity": 1, "displayName": "testreservation_S224om", "appliedScopes": [
- "/subscriptions/11111111-1111-1111-111111111111"
+ "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
], "appliedScopeType": "Single", "reservedResourceType": "SapHana",
The following example response resembles what you get returned. Note the value y
}, "quoteId": "d0fd3a890795", "isBillingPartnerManaged": true,
- "reservationOrderId": "22222222-2222-2222-2222-222222222222",
+ "reservationOrderId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"skuTitle": "SAP HANA on Azure Large Instances - S224om - US East", "skuDescription": "SAP HANA on Azure Large Instances, S224om", "pricingCurrencyTotal": {
Make your purchase using the returned `reservationOrderId` that you got from the
Here's an example request: ```azurepowershell-interactive
-armclient put /providers/Microsoft.Capacity/reservationOrders/22222222-2222-2222-2222-222222222222?api-version=2019-04-01 "{
+armclient put /providers/Microsoft.Capacity/reservationOrders/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb?api-version=2019-04-01 "{
'sku': { 'name': 'SAP_HANA_On_Azure_S224om' }, 'location': 'eastus', 'properties': { 'reservedResourceType': 'SapHana',
- 'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111',
+ 'billingScopeId': '/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e',
'term': 'P1Y', 'quantity': '1', 'billingplan': 'Monthly', 'displayName': ' testreservation_S224om',
- 'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111/resourcegroups/123'],
+ 'appliedScopes': ['/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/123'],
'appliedScopeType': 'Single', 'instanceFlexibility': 'NotSupported', 'renew': true
Here's an example response. If the order is placed successfully, the `provisioni
``` {
- "id": "/providers/microsoft.capacity/reservationOrders/22222222-2222-2222-2222-222222222222",
+ "id": "/providers/microsoft.capacity/reservationOrders/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb2",
"type": "Microsoft.Capacity/reservationOrders",
- "name": "22222222-2222-2222-2222-222222222222",
+ "name": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"etag": 1, "properties": { "displayName": "testreservation_S224om",
Here's an example response. If the order is placed successfully, the `provisioni
"sku": { "name": "SAP_HANA_On_Azure_S224om" },
- "id": "/providers/microsoft.capacity/reservationOrders22222222-2222-2222-2222-222222222222/reservations/33333333-3333-3333-3333-3333333333333",
+ "id": "/providers/microsoft.capacity/reservationOrdersaaaaaaaa-0000-1111-2222-bbbbbbbbbbbb/reservations/bbbbbbbb-1111-2222-3333-cccccccccccc",
"type": "Microsoft.Capacity/reservationOrders/reservations",
- "name": "22222222-2222-2222-2222-222222222222/33333333-3333-3333-3333-3333333333333",
+ "name": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb/bbbbbbbb-1111-2222-3333-cccccccccccc",
"etag": 1, "location": "eastusΓÇ¥ "properties": { "appliedScopes": [
- "/subscriptions/11111111-1111-1111-111111111111/resourcegroups/123"
+ "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/123"
], "appliedScopeType": "Single", "quantity": 1,
Here's an example response. If the order is placed successfully, the `provisioni
Run the Reservation order GET request to see the status of the purchase order. `provisioningState` should be `Succeeded`. ```azurepowershell-interactive
-armclient get /providers/microsoft.capacity/reservationOrders/22222222-2222-2222-2222-222222222222?api-version=2018-06-01
+armclient get /providers/microsoft.capacity/reservationOrders/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb?api-version=2018-06-01
``` The response should resemble the following example. ``` {
- "id": "/providers/microsoft.capacity/reservationOrders/44444444-4444-4444-4444-444444444444",
+ "id": "/providers/microsoft.capacity/reservationOrders/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"type": "Microsoft.Capacity/reservationOrders",
- "name": "22222222-2222-2222-2222-222222222222 ",
+ "name": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb ",
"etag": 8, "properties": { "displayName": "testreservation_S224om",
The response should resemble the following example.
"provisioningState": "Succeeded", "reservations": [ {
- "id": "/providers/microsoft.capacity/reservationOrders/22222222-2222-2222-2222-222222222222/reservations/33333333-3333-3333-3333-3333333333333"
+ "id": "/providers/microsoft.capacity/reservationOrders/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb/reservations/bbbbbbbb-1111-2222-3333-cccccccccccc"
} ], "originalQuantity": 1,
cost-management-billing Reservation Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-apis.md
Previously updated : 11/17/2023 Last updated : 08/14/2024
Request body:
"location": "westus", "properties": { "reservedResourceType": "VirtualMachines",
- "billingScopeId": "/subscriptions/ed3a1871-612d-abcd-a849-c2542a68be83",
+ "billingScopeId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"term": "P1Y", "quantity": "1", "displayName": "TestReservationOrder",
cost-management-billing Understand Reserved Instance Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reserved-instance-usage.md
Previously updated : 05/20/2024 Last updated : 08/14/2024
For the following sections, assume that you're running a Standard_DS1_v2 Windows
| Field | Value | || :: |
-|ReservationId |8117adfb-1d94-4675-be2b-f3c1bca808b6|
+|ReservationId |aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb|
|Quantity |1| |SKU | Standard_DS1_v2| |Region | eastus |
Assume that you're running a SQL Database Gen 4 in the east US region and your r
| Field | Value | || |
-|ReservationId |446ec809-423d-467c-8c5c-bbd5d22906b1|
+|ReservationId |bbbbbbbb-1111-2222-3333-cccccccccccc|
|Quantity |2| |Product| SQL Database Gen 4 (2 Core)| |Region | eastus |
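To locate the usage rows a reservation covered, a minimal PowerShell sketch (the file name and the selected columns are assumptions; match them to the headers in your downloaded usage file) filters the usage data by the `ReservationId` values shown in the tables above:
```powershell
# Minimal sketch: filter downloaded usage details by a reservation ID (placeholder GUID from the example above).
Import-Csv .\usage-details.csv |
    Where-Object { $_.ReservationId -eq "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" } |
    Select-Object Date, InstanceName, Quantity
```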
cost-management-billing Buy Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md
Previously updated : 05/07/2024 Last updated : 08/14/2024
You can buy savings plans by using Azure role-based access control (RBAC) permis
#### Purchase by using Azure RBAC permissions - You must have the savings plan purchaser role within, or be an owner of, the subscription that you plan to use, which is specified as `billingScopeId`.-- The `billingScopeId` property in the request body must use the `/subscriptions/10000000-0000-0000-0000-000000000000` format.
+- The `billingScopeId` property in the request body must use the `/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e` format.
#### Purchase by using billing permissions
Permission needed to purchase varies by the type of account that you have:
- **Microsoft Customer Agreement**: You must be a billing profile contributor or higher. - **Microsoft Partner Agreement**: Only Azure RBAC permissions are currently supported.
-The `billingScopeId` property in the request body must use the `/providers/Microsoft.Billing/billingAccounts/{accountId}/billingSubscriptions/10000000-0000-0000-0000-000000000000` format.
+The `billingScopeId` property in the request body must use the `/providers/Microsoft.Billing/billingAccounts/{accountId}/billingSubscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e` format.
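To make the difference concrete, here's a minimal PowerShell sketch that builds the `billingScopeId` value for each permission model described above (the subscription ID and billing account ID are placeholders):
```powershell
# Minimal sketch: the two billingScopeId formats for a savings plan purchase request body.
$subscriptionId   = "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"   # placeholder
$billingAccountId = "<accountId>"                            # placeholder

# Purchase by using Azure RBAC permissions
$rbacScope    = "/subscriptions/$subscriptionId"

# Purchase by using billing permissions (Microsoft Customer Agreement)
$billingScope = "/providers/Microsoft.Billing/billingAccounts/$billingAccountId/billingSubscriptions/$subscriptionId"
```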
## View savings plan purchases and payments
cost-management-billing Download Azure Daily Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-daily-usage.md
Previously updated : 02/14/2024 Last updated : 08/14/2024 # View and download your Azure usage and charges
Then use the [az costmanagement export](/cli/azure/costmanagement/export) comman
```azurecli az costmanagement export create --name DemoExport --type Usage \
- --scope "subscriptions/00000000-0000-0000-0000-000000000000" --storage-account-id cmdemo \
+ --scope "subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e" --storage-account-id cmdemo \
--storage-container democontainer --timeframe MonthToDate --storage-directory demodirectory ```
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
Successful scenarios are shown in the following examples:
:::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/tcp-4-handshake-workflow.png" alt-text="Diagram of a TCP 4 handshake workflow.":::
-### Microsoft email notification about updating your network configuration
-
-You might receive the following email notification, which recommends that you update your network configuration to allow communication with new IP addresses for Azure Data Factory by 8 November 2020:
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/email-notification.png" alt-text="Screenshot of Microsoft email notification requesting update of network configuration.":::
- #### Determine whether this notification affects you This notification applies to the following scenarios:
data-factory Tutorial Incremental Copy Change Tracking Feature Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-powershell.md
In this step, you link your Azure Storage Account to the data factory.
### Create Azure SQL Database linked service. In this step, you link your database to the data factory.
-1. Create a JSON file named **AzureSQLDatabaseLinkedService.json** in **C:\ADFTutorials\IncCopyChangeTrackingTutorial** folder with the following content: Replace **&lt;server&gt; &lt;database name&gt;, &lt;user id&gt;, and &lt;password&gt;** with name of your server, name of your database, user ID, and password before saving the file.
+1. Create a JSON file named **AzureSQLDatabaseLinkedService.json** in the **C:\ADFTutorials\IncCopyChangeTrackingTutorial** folder with the following content. Replace &lt;your-server-name&gt; and &lt;your-database-name&gt; with the names of your server and database before you save the file. You must also configure your Azure SQL Server to [grant access to your data factory's managed identity](connector-azure-sql-database.md#user-assigned-managed-identity-authentication).
```json {
- "name": "AzureSQLDatabaseLinkedService",
- "properties": {
+ "name": "AzureSqlDatabaseLinkedService",
+ "properties": {
"type": "AzureSqlDatabase", "typeProperties": {
- "connectionString": "Server = tcp:<server>.database.windows.net,1433;Initial Catalog=<database name>; Persist Security Info=False; User ID=<user name>; Password=<password>; MultipleActiveResultSets = False; Encrypt = True; TrustServerCertificate = False; Connection Timeout = 30;"
- }
+ "connectionString": "Server=tcp:<your-server-name>.database.windows.net,1433;Database=<your-database-name>;"
+ },
+ "authenticationType": "ManagedIdentity",
+ "annotations": []
} } ```
+
2. In **Azure PowerShell**, run the **Set-AzDataFactoryV2LinkedService** cmdlet to create the linked service: **AzureSQLDatabaseLinkedService**. ```powershell
data-factory Tutorial Incremental Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-powershell.md
You create linked services in a data factory to link your data stores and comput
``` ### Create a SQL Database linked service
-1. Create a JSON file named AzureSQLDatabaseLinkedService.json in the C:\ADF folder with the following content. (Create the folder ADF if it doesn't already exist.) Replace &lt;server&gt;, &lt;database&gt;, &lt;user id&gt;, and &lt;password&gt; with the name of your server, database, user ID, and password before you save the file.
+1. Create a JSON file named AzureSQLDatabaseLinkedService.json in the C:\ADF folder with the following content. (Create the folder ADF if it doesn't already exist.) Replace &lt;your-server-name&gt; and &lt;your-database-name&gt; with the names of your server and database before you save the file. You must also configure your Azure SQL Server to [grant access to your data factory's managed identity](connector-azure-sql-database.md#user-assigned-managed-identity-authentication).
```json {
- "name": "AzureSQLDatabaseLinkedService",
- "properties": {
+ "name": "AzureSqlDatabaseLinkedService",
+ "properties": {
"type": "AzureSqlDatabase", "typeProperties": {
- "connectionString": "Server = tcp:<server>.database.windows.net,1433;Initial Catalog=<database>; Persist Security Info=False; User ID=<user> ; Password=<password>; MultipleActiveResultSets = False; Encrypt = True; TrustServerCertificate = False; Connection Timeout = 30;"
- }
+ "connectionString": "Server=tcp:<your-server-name>.database.windows.net,1433;Database=<your-database-name>;"
+ },
+ "authenticationType": "ManagedIdentity",
+ "annotations": []
} } ```
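The tutorial then applies this definition with the `Set-AzDataFactoryV2LinkedService` cmdlet noted earlier. A minimal sketch of that step, with placeholder resource names:
```powershell
# Minimal sketch: create the linked service from the JSON definition file (names are placeholders).
Set-AzDataFactoryV2LinkedService -ResourceGroupName "<your-resource-group>" `
    -DataFactoryName "<your-data-factory>" `
    -Name "AzureSqlDatabaseLinkedService" `
    -DefinitionFile ".\AzureSQLDatabaseLinkedService.json"
```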
databox-online Azure Stack Edge Gpu 2407 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2407-release-notes.md
+
+ Title: Azure Stack Edge 2407 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2407 release.
++
+
+++ Last updated : 08/15/2024+++
+# Azure Stack Edge 2407 release notes
++
+The following release notes identify critical open issues and resolved issues for the 2407 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2407** release, which maps to software version **3.2.2754.1029**.
+
+> [!WARNING]
+> In this release, you must update the packet core version to AP5GC 2308 before you update to Azure Stack Edge 2407. For detailed steps, see [Azure Private 5G Core 2308 release notes](../private-5g-core/azure-private-5g-core-release-notes-2308.md).
+> If you update to Azure Stack Edge 2407 before updating to Packet Core 2308.0.1, you will experience a total system outage. In this case, you must delete and re-create the Azure Kubernetes service cluster on your Azure Stack Edge device.
+> Each time you change the Kubernetes workload profile, you are prompted for the Kubernetes update. Go ahead and apply the update.
+
+## Supported update paths
+
+To apply the 2407 update, your device must be running version 2403 or later.
+
+ - If you aren't running the minimum required version, you see this error:
+
+ *Update package can't be installed as its dependencies aren't met.*
+
+ - You can update to 2403 from 2303 or later, and then update to 2407.
+
+You can update to the latest version using the following update paths:
+
+| Current version of Azure Stack Edge software and Kubernetes | Update to Azure Stack Edge software and Kubernetes | Desired update to 2407 |
+| --| --| --|
+|2303 |2403 |2407 |
+|2309 |2403 |2407 |
+|2312 |2403 |2407 |
+|2403 |Directly to |2407 |
+
+## What's new
+
+The 2407 release has the following new features and enhancements:
+
+- Base OS updates for Kubernetes nodes.
+- OpenSSH version update for Kubernetes nodes.
+- Azure Stack Edge Kubernetes v1.28.
+- Azure Arc for Kubernetes v1.16.10.
+- Deprecated support for Ubuntu 18.04 LTS GPU extension. The GPU extension is no longer supported on Ubuntu 18.04 GPU VMs running on Azure Stack Edge devices. If you plan to utilize the Ubuntu version 18.04 LTS distro, see steps for manual GPU driver installation at [CUDA Toolkit 12.1 Update 1 Downloads](https://developer.nvidia.com/cuda-12-1-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=18.04&target_type=deb_local).
+
+ You may need to download the CUDA signing key before the installation.
+
+ For detailed steps to install the signing key, see [Troubleshoot GPU extension issues for GPU VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md#in-versions-lower-than-2205-linux-gpu-extension-installs-old-signing-keys-signature-andor-required-key-missing).
+
+<!--!## Issues fixed in this release
+==previous==
+| No. | Feature | Issue |
+| | | |
+|**1.**| Clustering |-->
+
+## Known issues in this release
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|VM creation | The image directory still points to the old location, which causes VM creation failures on Azure Stack Edge 2403. | |
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> 4. Final command looks like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, might result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error shows as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: can't create directory 'test': Permission denied|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
+|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you might see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You must restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
+|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
+|**9.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules might require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI might take several seconds to update. |The following scenarios in the local UI might be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you might not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
+|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. |
+|**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3. If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | |
+|**26.**|Azure IoT Edge |The managed Azure IoT Edge solution on Azure Stack Edge is running on an older, obsolete IoT Edge runtime that is at end of life. For more information, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137). Although the solution doesn't stop working past end of life, there are no plans to update it. |To run the latest version of Azure IoT Edge [LTSs](../iot-edge/version-history.md#version-history) with the latest updates and features on their Azure Stack Edge, we **recommend** that you deploy a [customer self-managed IoT Edge solution](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md) that runs on a Linux VM. For more information, see [Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM](azure-stack-edge-move-to-self-service-iot-edge.md). |
+|**27.**|AKS on Azure Stack Edge |In this release, you can't modify the virtual networks once the AKS cluster is deployed on your Azure Stack Edge cluster.| To modify the virtual network, you must delete the AKS cluster, then modify virtual networks, and then recreate AKS cluster on your Azure Stack Edge. |
+|**28.**|AKS Update |The AKS Kubernetes update might fail if one of the AKS VMs isn't running. This issue might be seen in the two-node cluster. |If the AKS update has failed, [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md). Check the state of the Kubernetes VMs by running `Get-VM` cmdlet. If the VM is off, run the `Start-VM` cmdlet to restart the VM. Once the Kubernetes VM is running, reapply the update. |
+|**29.**|Wi-Fi |Wi-Fi functionality for Azure Stack Edge Mini R is deprecated. | |
+|**30.**| Azure Storage Explorer | The Blob storage endpoint certificate that's autogenerated by the Azure Stack Edge device might not work properly with Azure Storage Explorer. | Replace the Blob storage endpoint certificate. For detailed steps, see [Bring your own certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates). |
+|**31.**| Network connectivity | On a two-node Azure Stack Edge Pro 2 cluster with a teamed virtual switch for Port 1 and Port 2, if a Port 1 or Port 2 link is down, it can take up to 5 seconds to resume network connectivity on the remaining active port. If a Kubernetes cluster uses this teamed virtual switch for management traffic, pod communication may be disrupted up to 5 seconds. | |
+|**32.**| Virtual machine | After the host or Kubernetes node pool VM is shut down, there's a chance that kubelet in node pool VM fails to start due to a CPU static policy error. Node pool VM shows **Not ready** status, and pods won't be scheduled on this VM. | Enter a support session and ssh into the node pool VM, then follow steps in [Changing the CPU Manager Policy](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#changing-the-cpu-manager-policy) to remediate the kubelet service. |
+
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md).
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 04/17/2024 Last updated : 08/14/2024
-# Update your Azure Stack Edge Pro GPU
+# Update your Azure Stack Edge Pro GPU
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
Apply the software updates or hotfixes to keep your Azure Stack Edge Pro device
## About latest updates
-The current version is Update 2403. This update installs two updates, the device update followed by Kubernetes updates.
+The current version is Update 2407. This update installs two updates, the device update followed by Kubernetes updates.
The associated versions for this update are: -- Device software version: Azure Stack Edge 2403 (3.2.2642.2487).-- Device Kubernetes version: Azure Stack Kubernetes Edge 2403 (3.2.2642.2487).-- Device Kubernetes workload profile: Azure Private MEC.-- Kubernetes server version: v1.27.8.
+- Device software version: Azure Stack Edge 2407 (3.2.2754.1029).
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2407 (3.2.2754.1029).
+- Device Kubernetes workload profile: Other workloads.
+- Kubernetes server version: v1.28.5.
- IoT Edge version: 0.1.0-beta15.-- Azure Arc version: 1.14.5.-- GPU driver version: 535.104.05.
+- Azure Arc version: 1.16.10.
+- GPU driver version: 535.161.08.
- CUDA version: 12.2.
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2403-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2407-release-notes.md).
-**To apply the 2403 update, your device must be running version 2203 or later.**
+**To apply the 2407 update, your device must be running version 2403 or later.**
- If you aren't running the minimum required version, you see this error: *Update package can't be installed as its dependencies aren't met.* -- You can update to 2303 from 2207 or later, and then install 2403.
+- You can update to 2403 from 2303 or later, and then install 2407.
Supported update paths:
-| Current version of Azure Stack Edge software and Kubernetes | Upgrade to Azure Stack Edge software and Kubernetes | Desired update to 2403 |
+| Current version of Azure Stack Edge software and Kubernetes | Upgrade to Azure Stack Edge software and Kubernetes | Desired update to 2407 |
|-|-|-|
-| 2207 | 2303 | 2403 |
-| 2209 | 2303 | 2403 |
-| 2210 | 2303 | 2403 |
-| 2301 | 2303 | 2403 |
-| 2303 | Directly to | 2403 |
+| 2303 | 2403 | 2407 |
+| 2309 | 2403 | 2407 |
+| 2312 | 2403 | 2407 |
+| 2403 | Directly to | 2407 |
### Update Azure Kubernetes service on Azure Stack Edge > [!IMPORTANT] > Use the following procedure only if you are an SAP or a PMEC customer.
-If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2403.
+If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2407.
-Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2403:
+Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2407:
-1. Update your device version to 2303.
+1. Update your device version to 2403.
1. Update your Kubernetes version to 2210.
-1. Update your Kubernetes version to 2303.
-1. Update both device software and Kubernetes to 2403.
+1. Update your Kubernetes version to 2403.
+1. Update both device software and Kubernetes to 2407.
-If you're running 2210 or 2301, you can update both your device version and Kubernetes version directly to 2303 and then to 2403.
+If you're running 2210 or 2301, you can update both your device version and Kubernetes version directly to 2403 and then to 2407.
-If you're running 2303, you can update both your device version and Kubernetes version directly to 2403.
+If you're running 2403, you can update both your device version and Kubernetes version directly to 2407.
-In Azure portal, the process requires two clicks, the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2403.
+In Azure portal, the process requires two clicks, the first update gets your device version to 2403 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2407.
-From the local UI, you'll have to run each update separately: update the device version to 2303, update Kubernetes version to 2210, update Kubernetes version to 2303, and then the third update gets both the device version and Kubernetes version to 2403.
+From the local UI, you'll have to run each update separately: update the device version to 2403, update Kubernetes version to 2210, update Kubernetes version to 2403, and then the third update gets both the device version and Kubernetes version to 2407.
Each time you change the Kubernetes profile, you're prompted for the Kubernetes update. Go ahead and apply the update.
Depending on the software version that you're running, install process might dif
[!INCLUDE [azure-stack-edge-install-2110-updates](../../includes/azure-stack-edge-install-2110-updates.md)]
-![Screenshot of updated software version in local UI.](./media/azure-stack-edge-gpu-install-update/portal-update-17.png)
+ ![Screenshot of updated software version in local UI.](./media/azure-stack-edge-gpu-install-update/portal-update-17.png)
### [version 2105 and earlier](#tab/version-2105-and-earlier)
defender-for-iot Dell Edge 3200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-3200.md
The following image shows a view of the Dell Edge Gateway 3200 back panel:
|Dimensions| Height: 60 mm <br>Width: 162 mm<br>Depth: 108 mm | |Processor| Intel Atom x6425RE | |Memory|16 GB |
-|Storage| 500 GB Hard Drive |
+|Storage| 512 GB Hard Drive |
|Network controller| Ports: 2* 1 GbE RJ45 | |Management|iDRAC Group Manager, Disabled | |Rack support| Wall mount/ DIN rail support |
defender-for-iot Dell Poweredge R660 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r660.md
The Dell PowerEdge R660 is also available for the on-premises management console
|Appliance characteristic | Description| |||
-|**Hardware profile** | R660 |
+|**Hardware profile** | C5600 |
|**Performance** | Max bandwidth: 3 Gbps<br>Max devices: 12,000 |
-|**Physical Specifications** | Mounting: 1U with rail kit<br>Ports: 6x RJ45 1 GbE|
+|**Physical Specifications** | Mounting: 1U with rail kit<br>Ports: 6x RJ45 1 GbE |
|**Status** | Supported, available as a preconfigured appliance| The following image shows a view of the Dell PowerEdge R660 front panel:
The following image shows a view of the Dell PowerEdge R660 back panel:
|Management|iDRAC Group Manager, Disabled| |Rack support| ReadyRails Sliding Rails With Cable Management Arm|
-## Dell PowerEdge R660 - Bill of Materials
+## Dell PowerEdge R660 - Bill of materials
### Components
The following image shows a view of the Dell PowerEdge R660 back panel:
|1| 338-CHQT | Processor thermal configuration | Heatsink for 2 CPU configuration (CPU less than or equal to 150 W)| |1| 370-AAIP | Memory configuration type | Performance Optimized | |1| 370-AHCL | Memory DIMM type and speed | 4800-MT/s RDIMMs |
-|4| 370-AGZP | Memory capacity | 32 GB RDIMM, 4,800 MT/s dual rank |
+|4| 370-AGZP | Memory capacity | 8 * 16 GB RDIMM, 4,800 MT/s single rank |
|1| 780-BCDS | RAID configuration | unconfigured RAID | |1| 405-AAZB | RAID controller | PERC H755 SAS Front | |1| 750-ACFR | RAID controller | Front PERC Mechanical Parts, front load |
defender-for-iot Dell Xe4 Sff https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-xe4-sff.md
This article describes the **DELL XE4 SFF** appliance deployment and installatio
| Appliance characteristic |Details | ||| |**Hardware profile** | L500 |
-|**Performance** | Max bandwidth: 25 Mbps <br> 1x RJ45 monitoring port |
-|**Physical specifications** | Mounting: Small Form Factor <br> Ports: 1x1Gbps (builtin) and optional expansion PCIe cards for copper and SFP connectors|
+|**Performance** | Max bandwidth: 25 Mbps <br> Max devices: 1,000 |
+|**Physical specifications** | Mounting: Small Form Factor <br> Ports: 1x 1 Gbps (built-in) and optional expansion PCIe cards for copper and SFP connectors <br> 1x RJ45 monitoring port|
|**Status** | Supported, available preconfigured | The following image shows a sample of the DELL XE4 SFF front panel:
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
This article describes the **HPE ProLiant DL360** appliance for OT sensors, cust
||| |**Hardware profile** | C5600 | |**Performance** | Max bandwidth: 3 Gbps <br> Max devices: 12,000 |
-|**Physical specifications** | Mounting: 1U<br>Ports: 15x RJ45 or 8x SFP (OPT)|
+|**Physical specifications** | Mounting: 1U<br>Ports: 15x RJ45 or 8x SFP (optional)|
|**Status** | Supported, available preconfigured| The following image describes the hardware elements on the HPE ProLiant DL360 back panel that are used by Defender for IoT:
The following image describes the hardware elements on the HPE ProLiant DL360 ba
|**Power** |Two HPE 500-W flex slot platinum hot plug low halogen power supply kit |**Rack support** | HPE 1U Gen10 SFF easy install rail kit | - ## HPE DL360 - Bill of materials |PN |Description |Quantity|
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsof
|Hardware profile |Appliance |SPAN/TAP throughput |Physical specifications | |||||
-|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: Up to 3 Gbps <br>**Max devices**: 12K <br> 16C[32T] CPU/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
-|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 4C[8T] CPU/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 8C[8T] CPU/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
-|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: Up to 200 Mbps<br>**Max devices**: 1,000 <br> 4C[8T] CPU/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
-|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: Up to 10 Mbps <br>**Max devices**: 100 <br> 4C[4T] CPU/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
+|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) <br><br><br><br> [Dell PowerEdge R660](appliance-catalog/dell-poweredge-r660.md) | **Max bandwidth**: 3 Gbps <br>**Max devices**: 12K <br> 16 core CPU/ 32G RAM/ 6 TB storage <br><br> **Max bandwidth**: 3 Gbps <br>**Max devices**: 12K <br> 20 core CPU/ 128G RAM/ 7.2 TB storage | **Mounting**: 1U with rail kit<br>**Ports**: 16x RJ45 or 8x SFP (optional) <br><br><br>**Mounting**: 1U with rail kit<br>**Ports**: 6x RJ45 1 GbE or SFP28 (optional) |
+|**E1800** | [HPE ProLiant DL20 Gen11](appliance-catalog/hpe-proliant-dl20-gen-11.md) <br>(4SFF)<br><br><br> [Dell PowerEdge R360](appliance-catalog/dell-poweredge-r360-e1800.md) | **Max bandwidth**: 1 Gbps<br>**Max devices**: 10K <br> 8 core CPU/ 32G RAM/ 2.4TB storage <br><br>**Max bandwidth**: 1 Gbps<br>**Max devices**: 10K <br> 8 core CPU/ 32G RAM/ 2.4TB storage | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT)<br><br><br> **Mounting**: 1U with rail kit<br>**Ports**: 8x RJ45 1 GbE |
+|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbps<br>**Max devices**: 10K <br> 8 core CPU/ 32G RAM/ 500GB storage | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 1Gbe |
+|**L500** | [HPE ProLiant DL20 Gen11 Plus](appliance-catalog/hpe-proliant-dl20-gen-11-nhp-2lff.md)<br> (NHP 2LFF) <br><br> [DELL XE4 SFF](appliance-catalog/dell-xe4-sff.md)| **Max bandwidth**: 200 Mbps<br>**Max devices**: 1,000 <br> 4 core CPU/ 16G RAM/ 1 TB storage <br> <br> **Max bandwidth**: 200 Mbps<br>**Max devices**: 1k <br> 6 core CPU/ 8G RAM/ 512GB storage | **Mounting**: 1U<br>**Ports**: 4x RJ45 <br><br><br>**Mounting**: Small form factor<br>**Ports**: 1x RJ45 |
+|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) <br><br><br> [Dell Edge Gateway 3200](appliance-catalog/dell-edge-3200.md) | **Max bandwidth**: 10 Mbps <br>**Max devices**: 100 <br> 4 core CPU/ 8G RAM/ 128GB storage <br><br> **Max bandwidth**: 10 Mbps <br>**Max devices**: 100 <br> 4 core CPU/ 16G RAM/ 512GB storage | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 <br><br><br> **Mounting**: Wall mount/DIN rail<br>**Ports**: 2x RJ45 1 GbE |
> [!NOTE] > The performance, capacity, and activity of an OT/IoT network may vary depending on its size, capacity, protocols distribution, and overall activity. For deployments, it is important to factor in raw network speed, the size of the network to monitor, and application configuration. The selection of processors, memory, and network cards is heavily influenced by these deployment configurations. The amount of space needed on your disk will differ depending on how long you store data, and the amount and type of data you store. <br><br>
devtest-labs Deliver Proof Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deliver-proof-concept.md
The solution has the following requirements:
### Prerequisites - A subscription to use for the project-- A Microsoft Entra tenant, and a Microsoft Entra Global Administrator who can provide Microsoft Entra ID help and guidance
+- A Microsoft Entra tenant, and a platform engineer who can provide Microsoft Entra ID help and guidance
+ - Ways for project members to collaborate, such as: - Azure Repos for source code and scripts - Microsoft Teams or SharePoint for documents
education-hub Create Assignment Allocate Credit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/create-assignment-allocate-credit.md
Title: Create an assignment and allocate credit in the Azure Education Hub
+ Title: Create a lab and allocate credit in the Azure Education Hub
description: Learn how to create an assignment, allocate credit, and invite students to a course in the Azure Education Hub. Previously updated : 06/30/2020 Last updated : 08/14/2024
-# Create an assignment and allocate credit in the Azure Education Hub
+# Create a lab and allocate credit in the Azure Education Hub
-After you set up a course in the Azure Education Hub, you can create an assignment where you allocate credit and invite selected students to your course.
+After you set up a lab in the Azure Education Hub, you can add students and allocate credit that they can use to deploy resources.
## Prerequisites - An academic grant with an approved credit amount-- A course created in the Education Hub - A work or school account and a subscription within the course that will access your Azure credit ### Accounts
The Education Hub accepts any email address in the standard format. It does *not
When you add a work or school account (for example, *student*@*school*.edu) by using role-based access control (RBAC) in the Educator Sponsor Portal or the [Azure portal](https://portal.azure.com), Azure automatically sends email to the recipient. This email requires the user to accept the new account and Azure role before receiving access to the subscription.
-If you're a teaching assistant or a professor for a course, be sure to inform students of this requirement so that their subscription appears in the Azure portal as expected. The email should look similar to this example:
+If you're a teaching assistant or a professor for a course, be sure to inform students of this requirement so that their subscription appears in the Azure portal as expected.
+## Create a lab and invite students
-## Create an assignment and invite students to the course
+1. Create a lab and fill in the required information, such as the lab name and the method for inviting students.
-1. Choose the amount of funds to credit to the student's subscription. If not all students will receive the same amount, you can select **Change** and apply a custom amount to each student or project group.
-
-1. Select **Create assignment**.
-1. Optionally, you can remove existing students by selecting **Remove** next to each student's name.
-1. When you finish, select the **Close** button. The additional permissions appear on the **Sponsor Credit Management** page.
+2. After you create the lab, you can begin inviting students to it.
+3. Optionally, you can remove existing students by selecting **Remove** next to each student's name.
+4. When you finish, the students you added receive an invitation to join the lab.
## Related content
event-grid Custom Event To Hybrid Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-hybrid-connection.md
You need an application that can retrieve events from the hybrid connection. The
1. Compile and run the application from Visual Studio.
+> [!IMPORTANT]
+> To keep the tutorial simple, we use a connection string to authenticate to the Azure Relay namespace. We recommend that you use Microsoft Entra ID authentication in production environments. When you use an application, you can enable a managed identity for the application and assign that identity an appropriate role (Azure Relay Owner, Azure Relay Listener, or Azure Relay Sender) on the Relay namespace. For more information, see [Authenticate a managed identity with Microsoft Entra ID to access Azure Relay resources](../azure-relay/authenticate-managed-identity.md).
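For example, a minimal PowerShell sketch of that role assignment (the principal ID and the Relay namespace resource ID are placeholders):
```powershell
# Minimal sketch: grant a managed identity the listener role on an Azure Relay namespace (IDs are placeholders).
New-AzRoleAssignment -ObjectId "<managed-identity-principal-id>" `
    -RoleDefinitionName "Azure Relay Listener" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Relay/namespaces/<relay-namespace>"
```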
+ ## Send an event to your topic Let's trigger an event to see how Event Grid distributes the message to your endpoint. This article shows how to use Azure CLI to trigger the event. Alternatively, you can use [Event Grid publisher application](https://github.com/Azure-Samples/event-grid-dotnet-publish-consume-events/tree/master/EventGridPublisher).
event-grid Event Hubs Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-hubs-integration.md
You've finished setting up your event hub, dedicate SQL pool (formerly SQL Data
:::image type="content" source="media/event-hubs-functions-synapse-analytics/query-results.png" alt-text="Screenshot showing the query results."::: ++
+> [!IMPORTANT]
+> To keep the tutorial simple, we use a connection string to authenticate to the Azure Event Hubs namespace. We recommend that you use Microsoft Entra ID authentication in production environments. When you use an application, you can enable a managed identity for the application and assign that identity an appropriate role (Azure Event Hubs Data Owner, Azure Event Hubs Data Sender, or Azure Event Hubs Data Receiver) on the Event Hubs namespace. For more information, see [Authorize access to Event Hubs using Microsoft Entra ID](../event-hubs/authorize-access-azure-active-directory.md).
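For example, a minimal PowerShell sketch that grants a managed identity read access to the namespace (the principal ID and the Event Hubs namespace resource ID are placeholders):
```powershell
# Minimal sketch: grant a managed identity the receiver role on an Event Hubs namespace (IDs are placeholders).
New-AzRoleAssignment -ObjectId "<managed-identity-principal-id>" `
    -RoleDefinitionName "Azure Event Hubs Data Receiver" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<eventhubs-namespace>"
```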
++ ## Monitor the solution This section helps you with monitoring or troubleshooting the solution.
event-grid Event Schema Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-data-manager-for-agriculture.md
The following example show schema for **Microsoft.AgFoodPlatform.PartyChanged**:
"data": { "actionType": "Deleted", "modifiedDateTime": "2022-10-17T18:43:37Z",
- "eTag": "f700fdd7-0000-0700-0000-634da2550000",
+ "eTag": "0000000000-0000-0000-0000-0000000000000",
"properties": { "key1": "value1", "key2": 123.45
The following example show schema for **Microsoft.AgFoodPlatform.PartyChanged**:
"id": "<YOUR-PARTY-ID>", "createdDateTime": "2022-10-17T18:43:30Z" },
- "id": "23fad010-ec87-40d9-881b-1f2d3ba9600b",
+ "id": "000000000-0000-0000-0000-0000000000000",
"source": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", "subject": "/parties/<YOUR-PARTY-ID>", "type": "Microsoft.AgFoodPlatform.PartyChanged",
The following example show schema for **Microsoft.AgFoodPlatform.PartyChanged**:
"data": { "actionType": "Deleted", "modifiedDateTime": "2022-10-17T18:43:37Z",
- "eTag": "f700fdd7-0000-0700-0000-634da2550000",
+ "eTag": "0000000-0000-0000-0000-000000000000",
"properties": { "key1": "value1", "key2": 123.45
The following example show schema for **Microsoft.AgFoodPlatform.PartyChanged**:
"id": "<YOUR-PARTY-ID>", "createdDateTime": "2022-10-17T18:43:30Z" },
- "id": "23fad010-ec87-40d9-881b-1f2d3ba9600b",
+ "id": "0000000-0000-0000-0000-00000000000",
"topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", "subject": "/parties/<YOUR-PARTY-ID>", "eventType": "Microsoft.AgFoodPlatform.PartyChanged",
firewall Quick Create Ipgroup Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-template.md
Deploy the ARM template to Azure:
In the Azure portal, review the deployed resources, especially the firewall rules that use IP Groups. :::image type="content" source="media/quick-create-ipgroup-template/network-rule.png" alt-text="Network rules.":::
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/manage.md
description: Learn how to view, maintain, update, and delete your management gro
Last updated 07/18/2024 -- + # Manage your Azure subscriptions at scale with management groups If your organization has many subscriptions, you might need a way to efficiently manage access,
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance
description: Learn about management groups, how their permissions work, and how to use them. Last updated 07/18/2024 -- + # What are Azure management groups? If your organization has many Azure subscriptions, you might need a way to efficiently manage access,
subscriptions.
- A single directory can support 10,000 management groups. - A management group tree can support up to six levels of depth.
-
+ This limit doesn't include the root level or the subscription level. - Each management group and subscription can support only one parent. - Each management group can have many children.
healthcare-apis Manage Access Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/manage-access-rbac.md
+
+ Title: Manage access to the de-identification service (preview) with Azure role-based access control (RBAC) in Azure Health Data Services
+description: Learn how to manage access to the de-identification service (preview) using Azure role-based access control.
+++++ Last updated : 07/16/2024++
+# Use Azure role-based access control with the de-identification service (preview)
+
+Microsoft Entra ID authorizes access rights to secured resources through Azure role-based access control (RBAC). The de-identification service (preview) defines a set of built-in roles that encompass common sets of permissions used to access de-identification functionality.
+
+Microsoft Entra ID uses the concept of a security principal, which can be a user, a group, an application service principal, or a [managed identity for Azure resources](/entra/identity/managed-identities-azure-resources/overview).
+
+When an Azure role is assigned to a Microsoft Entra ID security principal over a specific scope, Azure grants access to that scope for that security principal. For more information about scopes, see [Understand scope for Azure RBAC](/azure/role-based-access-control/scope-overview).
+
+## Prerequisites
+
+- A de-identification service (preview) in your Azure subscription. If you don't have a de-identification service, follow the steps in [Quickstart: Deploy the de-identification service](quickstart.md).
+
+## Available built-in roles
+
+The de-identification service (preview) has the following built-in roles available:
+
+|Role |Description |
+|--|--|
+|DeID Data Owner |Full access to de-identification functionality. |
+|DeID Real-time Data User |Execute requests against de-identification API endpoints. |
+|DeID Batch Owner |Create and manage de-identification batch jobs. |
+|DeID Batch Reader |Read-only access to de-identification batch jobs. |
+
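+To see exactly which actions and data actions a built-in role grants, you can list its role definition with the Azure CLI. This is an optional, illustrative sketch that uses a role name from the preceding table:
+
+```azurecli
+# Show the full definition of the DeID Data Owner built-in role.
+az role definition list --name "DeID Data Owner" --output json
+```
+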
+## Assign a built-in role
+
+Keep in mind the following points about Azure role assignments with the de-identification service (preview):
+
+- When you create a de-identification service, you aren't automatically assigned permissions to access data via Microsoft Entra ID. You need to explicitly assign yourself an applicable Azure role. You can assign it at the level of your subscription, resource group, or de-identification service.
+- When roles are assigned, it can take up to 10 minutes for changes to take effect.
+- When the de-identification service is locked with an [Azure Resource Manager read-only lock](/azure/azure-resource-manager/management/lock-resources), the lock prevents the assignment of Azure roles that are scoped to the de-identification service.
+- When Azure deny assignments have been applied, your access might be blocked even if you have a role assignment. For more information, see [Understand Azure deny assignments](/azure/role-based-access-control/deny-assignments).
+
+You can use different tools to assign built-in roles.
+
+# [Azure portal](#tab/azure-portal)
+
+To use the de-identification service (preview) with Microsoft Entra ID credentials, a security principal must be assigned one of the built-in roles. To learn how to assign these roles to a security principal, follow the steps in [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+To assign an Azure role to a security principal with PowerShell, call the [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) command. In order to run the command, you must have a role that includes **Microsoft.Authorization/roleAssignments/write** permissions assigned to you at the corresponding scope or higher.
+
+The format of the command can differ based on the scope of the assignment, but `ObjectId` and `RoleDefinitionName` are required parameters. Although the `Scope` parameter is optional, you should set it to follow the principle of least privilege. By limiting roles and scopes, you limit the resources that are at risk if the security principal is ever compromised.
+
+The scope for a de-identification service (preview) is in the form `/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>`.
+
+The following example assigns the **DeID Data Owner** built-in role to a user, scoped to a specific de-identification service. Make sure to replace the placeholder values in angle brackets `<>` with your own values:
+
+```azurepowershell
+New-AzRoleAssignment `
+ -SignInName <Email> `
+ -RoleDefinitionName "DeID Data Owner" `
+ -Scope "/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>"
+```
+
+A successful response should look like:
+
+```console
+RoleAssignmentId : /subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>/providers/Microsoft.Authorization/roleAssignments/<Role Assignment ID>
+Scope : /subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>
+DisplayName : Mark Patrick
+SignInName : markpdaniels@contoso.com
+RoleDefinitionName : DeID Data Owner
+RoleDefinitionId : <Role Definition ID>
+ObjectId : <Object ID>
+ObjectType : User
+CanDelegate : False
+
+```
+
+For more information, see [Assign Azure roles using Azure PowerShell](/azure/role-based-access-control/role-assignments-powershell).
+
+# [Azure CLI](#tab/azure-cli)
+
+To assign an Azure role to a security principal with Azure CLI, use the [az role assignment create](/cli/azure/role/assignment) command. In order to run the command, you must have a role that includes **Microsoft.Authorization/roleAssignments/write** permissions assigned to you at the corresponding scope or higher.
+
+The format of the command can differ based on the type of security principal, but `role` and `scope` are required parameters.
+
+The scope for a de-identification service (preview) is in the form `/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>`.
+
+The following example assigns the **DeID Data Owner** built-in role to a user, scoped to a specific de-identification service. Make sure to replace the placeholder values in angle brackets `<>` with your own values:
+
+```azurecli
+az role assignment create \
+ --assignee <Email> \
+ --role "DeID Data Owner" \
+ --scope "/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>"
+```
+
+For more information, see [Assign Azure roles using Azure CLI](/azure/role-based-access-control/role-assignments-cli).
+
+# [ARM template](#tab/azure-resource-manager)
+
+To learn how to use an Azure Resource Manager template to assign an Azure role, see [Assign Azure roles using Azure Resource Manager templates](/azure/role-based-access-control/role-assignments-template).
+++
+## Related content
+
+- [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
+- [Best practices for Azure RBAC](/azure/role-based-access-control/best-practices)
healthcare-apis Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/managed-identities.md
+
+ Title: Use managed identities with the de-identification service (preview) in Azure Health Data Services
+description: Learn how to use managed identities with the Azure Health Data Services de-identification service (preview) using the Azure portal and ARM template.
+++++ Last updated : 07/17/2024++
+# Use managed identities with the de-identification service (preview)
+
+Managed identities provide Azure services with a secure, automatically managed identity in Microsoft Entra ID. Because Azure manages the identity for you, using managed identities eliminates the need for developers to manage credentials. There are two types of managed identities: system-assigned and user-assigned. The de-identification service supports both.
+
+Managed identities can be used to grant the de-identification service (preview) access to your storage account for batch processing. In this article, you learn how to assign a managed identity to your de-identification service.
+
+## Prerequisites
+
+- Understand the differences between **system-assigned** and **user-assigned** described in [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
+- A de-identification service (preview) in your Azure subscription. If you don't have a de-identification service, follow the steps in [Quickstart: Deploy the de-identification service](quickstart.md).
+
+## Create an instance of the de-identification service (preview) in Azure Health Data Services with a system-assigned managed identity
+
+# [Azure portal](#tab/portal)
+
+1. Access the de-identification service (preview) settings in the Azure portal under the **Security** group in the left navigation pane.
+1. Select **Identity**.
+1. Within the **System assigned** tab, switch **Status** to **On** and choose **Save**.
+
+# [ARM template](#tab/azure-resource-manager)
+
+Any resource of type ``Microsoft.HealthDataAIServices/deidServices`` can be created with a system-assigned identity by including the following block in
+the resource definition:
+
+```json
+"identity": {
+ "type": "SystemAssigned"
+}
+```
+++
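+
+If you create the service with the Azure CLI instead of the portal or an ARM template, you can request a system-assigned identity at creation time. The following sketch mirrors the `az resource create` command used in the .NET quickstart; the names in angle brackets are placeholders:
+
+```azurecli
+az resource create \
+  --resource-group <resource-group> \
+  --name <deid-service-name> \
+  --resource-type microsoft.healthdataaiservices/deidservices \
+  --is-full-object \
+  --properties '{"identity":{"type":"SystemAssigned"},"properties":{},"location":"<region>"}'
+```
+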
+## Assign a user-assigned managed identity to a service instance
+
+# [Azure portal](#tab/portal)
+
+1. Create a user-assigned managed identity resource according to [these instructions](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).
+1. In the navigation pane of your de-identification service (preview), scroll to the **Security** group.
+1. Select **Identity**.
+1. Select the **User assigned** tab, and then choose **Add**.
+1. Search for the identity you created, select it, and then choose **Add**.
+
+# [ARM template](#tab/azure-resource-manager)
+
+Any resource of type ``Microsoft.HealthDataAIServices/deidServices`` can be created with a user-assigned identity by including the following block in
+the resource definition, replacing **resource-id** with the Azure Resource Manager (ARM) resource ID of the desired identity:
+
+```json
+"identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<resource-id>": {}
+ }
+}
+```
+++
+## Supported scenarios using managed identities
+
+Managed identities assigned to the de-identification service (preview) can be used to allow access to Azure Blob Storage for batch de-identification jobs. The service acquires a token as
+the managed identity to access Blob Storage and de-identify blobs that match a specified pattern. For more information, including how to grant access to your managed identity,
+see [Quickstart: Azure Health De-identification client library for .NET](quickstart-sdk-net.md).
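+
+For example, granting the service's system-assigned identity the **Storage Blob Data Contributor** role on a storage account can be done with the Azure CLI, as the .NET quickstart does. This is a sketch; the service name, storage account name, and resource group are placeholders:
+
+```azurecli
+STORAGE_ACCOUNT_ID=$(az storage account show --name <storage-account-name> --resource-group <resource-group> --query id --output tsv)
+DEID_PRINCIPAL_ID=$(az resource show --name <deid-service-name> --resource-group <resource-group> --resource-type microsoft.healthdataaiservices/deidservices --query identity.principalId --output tsv)
+az role assignment create --assignee $DEID_PRINCIPAL_ID --role "Storage Blob Data Contributor" --scope $STORAGE_ACCOUNT_ID
+```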
+
+## Clean-up steps
+
+When you remove a system-assigned identity, you delete it from Microsoft Entra ID. System-assigned identities are also automatically removed from Microsoft Entra ID
+when you delete the de-identification service (preview).
+
+# [Azure portal](#tab/portal)
+
+1. In the navigation pane of your de-identification service (preview), scroll down to the **Security** group.
+1. Select **Identity**, then follow the steps based on the identity type:
+ - **System-assigned identity**: Within the **System assigned** tab, switch **Status** to **Off**, and then choose **Save**.
+ - **User-assigned identity**: Select the **User assigned** tab, select the checkbox for the identity, and select **Remove**. Select **Yes** to confirm.
+
+# [ARM template](#tab/azure-resource-manager)
+
+Any resource of type ``Microsoft.HealthDataAIServices/deidServices`` can have system-assigned identities deleted and user-assigned identities unassigned by
+including this block in the resource definition:
+
+```json
+"identity": {
+ "type": "None"
+}
+```
+++
+## Related content
+
+- [What are managed identities for Azure resources?](/azure/active-directory/managed-identities-azure-resources/overview)
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/overview.md
+
+ Title: Overview of the de-identification service (preview) in Azure Health Data Services
+description: Learn how the de-identification service (preview) in Azure Health Data Services anonymizes clinical data, ensuring HIPAA compliance while retaining data relevance for research and analytics.
++++ Last updated : 7/17/2024+++
+# What is the de-identification service (preview)?
+
+The de-identification service (preview) in Azure Health Data Services enables healthcare organizations to anonymize clinical data so that the resulting data retains its clinical relevance and distribution while also adhering to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule. The service uses state-of-the-art machine learning models to automatically extract, redact, or surrogate 28 entities, including the 18 HIPAA Protected Health Information (PHI) identifiers, from unstructured text such as clinical notes, transcripts, messages, or clinical trial studies.
+
+## Use de-identified data in research, analytics, and machine learning
+
+The de-identification service (preview) unlocks data that was previously difficult to de-identify so organizations can conduct research and derive insights from analytics. The de-identification service supports three operations: **tag**, **redact**, or **surrogate PHI**. The de-identification service offers many benefits, including:
+
+- **Surrogation**: Surrogation, or replacement, is a best practice for PHI protection. The service can replace PHI elements with plausible replacement values, resulting in data that is most representative of the source data. Surrogation strengthens privacy protections as any false-negative PHI values are hidden within a document.
+
+- **Consistent replacement**: Consistent surrogation results enable organizations to retain relationships occurring in the underlying dataset, which is critical for research, analytics, and machine learning. By submitting data in the same batch, our service allows for consistent replacement across entities and preserves the relative temporal relationships between events.
+
+- **Expanded PHI coverage**: The service expands beyond the 18 HIPAA Identifiers to provide stronger privacy protections and more fine-grained distinctions between entity types, such as distinguishing between Doctor and Patient.
+
+## De-identify clinical data securely and efficiently
+
+The de-identification service (preview) offers many benefits, including:
+
+- **PHI compliance**: The de-identification service is designed for protected health information (PHI). The service uses machine learning to identify PHI entities, including HIPAA's 18 identifiers, using the **tag** operation. The redaction and surrogation operations replace these identified PHI values with a tag of the entity type or with a surrogate (pseudonym). The service also meets all regional compliance requirements, including HIPAA, GDPR, and the California Consumer Privacy Act (CCPA).
+
+- **Security**: The de-identification service is a stateless service. Customer data stays within the customer's tenant.
+
+- **Role-based Access Control (RBAC)**: Azure role-based access control (RBAC) enables you to manage how your organization's data is processed, stored, and accessed. You determine who has access to de-identify datasets based on roles you define for your environment.
+
+## Synchronous or asynchronous endpoints
+
+The de-identification service (preview) offers two ways to interact with it through the REST API or client library (Azure SDK):
+
+- Directly submit raw unstructured text for analysis. The API output is returned in your application.
+- Submit a job to the asynchronous endpoint to process files in bulk from Azure Blob Storage, using the tag, redact, or surrogate operations with consistency within a job.
+
+## Input requirements and service limits
+
+The de-identification service (preview) is designed to receive unstructured text. To de-identify data stored in the FHIR&reg; service, see [Export deidentified data](/azure/healthcare-apis/fhir/deidentified-export).
+
+The following service limits are applicable during preview:
+- Requests can't exceed 50 KB.
+- Jobs can process no more than 1,000 documents.
+- Each document processed by a job can't exceed 2 MB.
+
+## Pricing
+As with other Azure Health Data Services, you pay only for what you use. You have a monthly allotment that enables you to try the product for free.
+
+| Transformation Operation (per MB) | Up to 50 MB | Over 50 MB |
+| - | - | - |
+| Unstructured text de-identification | $0 | $0.05 |
+
+When you choose to store documents in Azure Blob Storage, you are charged based on Azure Storage pricing.
+
+## Responsible use of AI
+
+An AI system includes the technology, the people who use it, the people affected by it, and the environment where you deploy it. Read the transparency note for the de-identification service (preview) to learn about responsible AI use and deployment in your systems.
+
+## Related content
+
+[De-identification quickstart](quickstart.md)
+
+[Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=%2Fazure%2Fai-services%2Flanguage-service%2Fcontext%2Fcontext)
+
+[Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=%2Fazure%2Fai-services%2Flanguage-service%2Fcontext%2Fcontext)
healthcare-apis Quickstart Sdk Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/quickstart-sdk-net.md
+
+ Title: "Quickstart: Azure Health De-identification client library for .NET"
+description: A quickstart guide to de-identify health data with the .NET client library
+++++ Last updated : 08/05/2024+++
+# Quickstart: Azure Health De-identification client library for .NET
+
+Get started with the Azure Health De-identification client library for .NET to de-identify your health data. Follow these steps to install the package and try out example code for basic tasks.
+
+[API reference documentation](/dotnet/api/azure.health.deidentification) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/healthdataaiservices) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Health.Deidentification) | [More Samples on GitHub](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/healthdataaiservices/Azure.Health.Deidentification/samples/README.md)
++
+## Prerequisites
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An Azure Storage Account (only for job workflow).
+
+## Setting up
+
+### Create a de-identification service (preview)
+
+A de-identification service (preview) provides you with an endpoint URL. You can call this endpoint URL directly as a REST API or use it with an SDK.
+
+1. Install [Azure CLI](/cli/azure/install-azure-cli)
+2. Create a de-identification service resource
+
+ ```bash
+ REGION="<Region>"
+ RESOURCE_GROUP_NAME="<ResourceGroupName>"
+ DEID_SERVICE_NAME="<NewDeidServiceName>"
+ az resource create -g $RESOURCE_GROUP_NAME -n $DEID_SERVICE_NAME --resource-type microsoft.healthdataaiservices/deidservices --is-full-object -p "{\"identity\":{\"type\":\"SystemAssigned\"},\"properties\":{},\"location\":\"$REGION\"}"
+ ```
+
+### Create an Azure Storage Account
+
+1. Install [Azure CLI](/cli/azure/install-azure-cli)
+1. Create an Azure Storage Account
+
+ ```bash
+ STORAGE_ACCOUNT_NAME="<NewStorageAccountName>"
+ az storage account create --name $STORAGE_ACCOUNT_NAME --resource-group $RESOURCE_GROUP_NAME --location $REGION
+ ```
+
+### Authorize de-identification service (preview) on storage account
+
+- Give the de-identification service (preview) access to your storage account
+
+ ```bash
+ STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --resource-group $RESOURCE_GROUP_NAME --query id --output tsv)
+ DEID_SERVICE_PRINCIPAL_ID=$(az resource show -n $DEID_SERVICE_NAME -g $RESOURCE_GROUP_NAME --resource-type microsoft.healthdataaiservices/deidservices --query identity.principalId --output tsv)
+ az role assignment create --assignee $DEID_SERVICE_PRINCIPAL_ID --role "Storage Blob Data Contributor" --scope $STORAGE_ACCOUNT_ID
+ ```
+
+### Install the package
+The client library is available through NuGet, as the `Azure.Health.Deidentification` package.
+
+1. Install package
+
+ ```bash
+ dotnet add package Azure.Health.Deidentification
+ ```
+
+1. Also, install the Azure Identity package if not already installed.
+
+ ```bash
+ dotnet add package Azure.Identity
+ ```
++
+## Object model
+
+- [DeidentificationClient](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/healthdataaiservices/Azure.Health.Deidentification/src/Generated/DeidentificationClient.cs) is responsible for the communication between the SDK and the de-identification service endpoint.
+- [DeidentificationContent](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/healthdataaiservices/Azure.Health.Deidentification/src/Generated/DeidentificationContent.cs) is used for string de-identification.
+- [DeidentificationJob](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/healthdataaiservices/Azure.Health.Deidentification/src/Generated/DeidentificationJob.cs) is used to create jobs to de-identify documents in an Azure Storage Account.
+- [PhiEntity](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/healthdataaiservices/Azure.Health.Deidentification/src/Generated/PhiEntity.cs) is the span and category of a single PHI entity detected via a Tag OperationType.
++
+## Code examples
+- [Create a Deidentification Client](#create-a-deidentification-client)
+- [De-identify a string](#de-identify-a-string)
+- [Tag a string](#tag-a-string)
+- [Create a Deidentification Job](#create-a-deidentification-job)
+- [Get the status of a Deidentification Job](#get-the-status-of-a-deidentification-job)
+
+### Create a Deidentification Client
+
+Before you can create the client, you need to find your **deidentification service (preview) endpoint URL**.
+
+You can find the endpoint URL with the Azure CLI:
+
+```bash
+az resource show -n $DEID_SERVICE_NAME -g $RESOURCE_GROUP_NAME --resource-type microsoft.healthdataaiservices/deidservices --query properties.serviceUrl --output tsv
+```
+Then you can create the client using that value.
+
+```csharp
+using Azure.Identity;
+using Azure.Health.Deidentification;
+
+string serviceEndpoint = "https://example123.api.deid.azure.com";
+
+DeidentificationClient client = new(
+ new Uri(serviceEndpoint),
+ new DefaultAzureCredential()
+);
+```
+
+### De-identify a string
+
+This function allows you to de-identify any string you have in memory.
+
+```csharp
+DeidentificationContent content = new("SSN: 123-04-5678");
+DeidentificationResult result = await client.DeidentifyAsync(content);
+```
+
+### Tag a string
+
+Tagging can be done the same way as de-identifying, by changing the `OperationType`.
+
+```csharp
+DeidentificationContent content = new("SSN: 123-04-5678");
+content.Operation = OperationType.Tag;
+
+DeidentificationResult result = await client.DeidentifyAsync(content);
+```
+
+### Create a Deidentification Job
+
+This function allows you to de-identify all files, filtered via prefix, within an Azure Blob Storage Account.
+
+To create the job, we need the URL to the blob endpoint of the Azure Storage Account.
+
+```bash
+az resource show -n $STORAGE_ACCOUNT_NAME -g $RESOURCE_GROUP_NAME --resource-type Microsoft.Storage/storageAccounts --query properties.primaryEndpoints.blob --output tsv
+```
+
+Now we can create the job. This example uses `folder1/` as the prefix. The job will de-identify any document that matches this prefix and write the de-identified version with the `output_files/` prefix.
+
+```csharp
+using Azure;
+
+// Use the blob endpoint URL returned by the Azure CLI command above.
+Uri storageAccountUri = new("<your-blob-endpoint-url>");
+
+DeidentificationJob job = new(
+    new SourceStorageLocation(storageAccountUri, "folder1/"),
+    new TargetStorageLocation(storageAccountUri, "output_files/")
+);
+
+job = client.CreateJob(WaitUntil.Started, "my-job-1", job).Value;
+```
+
+### Get the status of a Deidentification Job
+
+Once a job is created, you can view the status and other details of the job.
+
+```csharp
+DeidentificationJob job = client.GetJob("my-job-1").Value;
+```
++
+## Run the code
+
+Once your code is updated in your project, you can run it using:
+
+```bash
+dotnet run
+```
+
+## Clean up resources
+
+### Delete Deidentification Service
+
+```bash
+az resource delete -n $DEID_SERVICE_NAME -g $RESOURCE_GROUP_NAME --resource-type microsoft.healthdataaiservices/deidservices
+```
+
+### Delete Azure Storage Account
+
+```bash
+az resource delete -n $STORAGE_ACCOUNT_NAME -g $RESOURCE_GROUP_NAME --resource-type Microsoft.Storage/storageAccounts
+```
+
+### Delete Role Assignment
+
+```bash
+az role assignment delete --assignee $DEID_SERVICE_PRINCIPAL_ID --role "Storage Blob Data Contributor" --scope $STORAGE_ACCOUNT_ID
+```
++
+## Troubleshooting
+
+### Unable to access source or target storage
+
+Ensure the permissions are given and the Managed Identity for the de-identification service (preview) is set up properly.
+
+See [Authorize Deidentification Service on Storage Account](#authorize-de-identification-service-preview-on-storage-account)
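+
+One way to verify the setup is to list the role assignments for the service's principal on the storage account. This sketch reuses the variables defined earlier in this quickstart:
+
+```azurecli
+az role assignment list --assignee $DEID_SERVICE_PRINCIPAL_ID --scope $STORAGE_ACCOUNT_ID --output table
+```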
+
+### Job failed with status PartialFailed
+
+You can use the `GetJobDocuments` function on the `DeidentificationClient` to view per-file error messages.
+
+See [Sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/healthdataaiservices/Azure.Health.Deidentification/tests/samples/Sample4_ListCompletedFiles.cs)
++
+## Next steps
+
+In this quickstart, you learned:
+- How to create a de-identification service (preview) and assign a role on a storage account.
+- How to create a Deidentification Client
+- How to de-identify strings and create jobs on documents within a storage account.
+
+> [!div class="nextstepaction"]
+> [View source code and .NET Client Library README](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/healthdataaiservices/Azure.Health.Deidentification)
healthcare-apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/quickstart.md
+
+ Title: Quickstart - Deploy the de-identification service (preview) in Azure Health Data Services
+description: Get up and running quickly with the de-identification service (preview) in Azure Health Data Services.
++++ Last updated : 7/16/2024+++
+# Quickstart: Deploy the de-identification service (preview)
+
+In this quickstart, you deploy an instance of the de-identification service (preview) in your Azure subscription.
+
+## Prerequisites
+
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Register the `Microsoft.HealthDataAIServices` resource provider, as shown in the example after this list.
+
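+You can register the resource provider with the Azure CLI. This is one option, shown as a sketch; the second command checks the registration state:
+
+```azurecli
+az provider register --namespace Microsoft.HealthDataAIServices
+az provider show --namespace Microsoft.HealthDataAIServices --query registrationState --output tsv
+```
+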
+## Deploy the de-identification service (preview)
+
+To deploy an instance of the de-identification service (preview), start at the Azure portal home page.
+
+1. Search for **de-identification** in the top search bar.
+1. Select **De-identification Services** in the search results.
+1. Select the **Create** button.
+
+## Complete the Basics tab
+
+In the **Basics** tab, you provide basic information for your de-identification service (preview).
+
+1. Fill in the **Project Details** section:
+
+ | Setting | Action |
+ |-|-|
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Create new** and enter **my-deid**. |
+
+1. Fill in the **Instance details** section:
+
+ | Setting | Action |
+ |-|-|
+ | Name | Name your de-identification service. |
+ | Location | Select a supported Azure region. |
+
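+If you prefer to script the resource creation instead of using the portal wizard, a minimal Azure CLI sketch (with placeholder values, mirroring the command used in the .NET quickstart) creates the same resource:
+
+```azurecli
+az group create --name my-deid --location <region>
+az resource create \
+  --resource-group my-deid \
+  --name <deid-service-name> \
+  --resource-type microsoft.healthdataaiservices/deidservices \
+  --is-full-object \
+  --properties '{"properties":{},"location":"<region>"}'
+```
+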
+## Complete the Tags tab (optional)
+
+Tags are name-value pairs. You can assign the same tag to multiple resources and resource groups to categorize resources and consolidate billing. In this quickstart, you don't need to add any tags.
+For more information, see [Use tags to organize your Azure resources](/azure/azure-resource-manager/management/tag-resources) and [Logging](../logging.md).
+
+## Complete the Managed Identity tab (optional)
+
+In the **Managed Identity** tab, you can assign a managed identity to your de-identification service (preview). For more information, see [managed identities](managed-identities.md).
+
+1. To create a system-assigned managed identity, select **On** under **Status**.
+1. To add a user-assigned managed identity, select **Add**, and then use the selection pane to choose an existing identity to assign.
+
+## Review and create
+
+After you complete the configuration, you can deploy the de-identification service (preview).
+
+1. Select **Next: Review + create** to review your choices.
+1. Select **Create** to start the deployment of your de-identification service. After the deployment is complete, select **Go to resource** to view your service.
+
+## Clean up resources
+
+If you no longer need them, delete the resource group and de-identification service (preview). To do so, select the resource group and select **Delete**.
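+
+If you prefer the Azure CLI, deleting the resource group removes the de-identification service along with everything else in the group. A sketch that uses the **my-deid** resource group created earlier:
+
+```azurecli
+az group delete --name my-deid --yes --no-wait
+```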
+
+## Related content
+
+[De-identification service overview](overview.md)
iot-hub Device Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-cli.md
- Title: Get started with Azure IoT Hub device twins (Azure CLI)-
-description: How to use Azure IoT Hub device twins and the Azure CLI to create and simulate devices, add tags to device twins, and execute IoT Hub queries.
----- Previously updated : 02/17/2023---
-# Get started with device twins (Azure CLI)
--
-This article shows you how to:
-
-* Use a simulated device to report its connectivity channel as a *reported property* on the device twin.
-
-* Query devices using filters on the tags and properties previously created.
-
-For more information about using device twin reported properties, see [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance.md).
-
-This article shows you how to create two Azure CLI sessions:
-
-* A session that creates a simulated device. The simulated device reports its connectivity channel as a reported property on the device's corresponding device twin when initialized.
-
-* A session that updates the tags of the device twin for the simulated device, then queries devices from your IoT hub. The queries use filters based on the tags and properties previously updated in both sessions.
-
-## Prerequisites
-
-* Azure CLI. You can also run the commands in this article using the [Azure Cloud Shell](../cloud-shell/overview.md), an interactive CLI shell that runs in your browser or in an app such as Windows Terminal. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this article requires Azure CLI version 2.36 or later. Run `az --version` to find the version. To locally install or upgrade Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
-
-* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
-
-## Prepare the Cloud Shell
-
-If you want to use the Azure Cloud Shell, you must first launch and configure it. If you use the CLI locally, skip to the [Prepare two CLI sessions](#prepare-two-cli-sessions) section.
-
-1. Select the **Cloud Shell** icon from the page header in the Azure portal.
-
- :::image type="content" source="./media/device-twins-cli/cloud-shell-button.png" alt-text="Screenshot of the global controls from the page header of the Azure portal, highlighting the Cloud Shell icon.":::
-
- > [!NOTE]
- > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
-
-2. Use the environment selector in the Cloud Shell toolbar to select your preferred CLI environment. This article uses the **Bash** environment. You can also use the **PowerShell** environment.
-
- > [!NOTE]
- > Some commands require different syntax or formatting in the **Bash** and **PowerShell** environments. For more information, see [Tips for using the Azure CLI successfully](/cli/azure/use-cli-effectively?tabs=bash%2Cbash2).
-
- :::image type="content" source="./media/device-twins-cli/cloud-shell-environment.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the environment selector in the toolbar.":::
-
-## Prepare two CLI sessions
-
-Next, you must prepare two Azure CLI sessions. If you're using the Cloud Shell, you run these sessions in separate Cloud Shell tabs. If using a local CLI client, you run separate CLI instances. Use the separate CLI sessions for the following tasks:
-
-- The first session simulates an IoT device that communicates with your IoT hub.
-- The second session updates your simulated device and queries your IoT hub.
-
-1. If you're using the Cloud Shell, skip to the next step. Otherwise, run the [az login](/cli/azure/reference-index#az-login) command in the first CLI session to sign in to your Azure account.
-
- If you're using the Cloud Shell, you're automatically signed into your Azure account. All communication between your Azure CLI session and your IoT hub is authenticated and encrypted. As a result, this article doesn't need extra authentication that you'd use with a real device, such as a connection string. For more information about signing in with Azure CLI, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
-
- ```azurecli
- az login
- ```
-
-1. In the first CLI session, run the [az extension add](/cli/azure/extension#az-extension-add) command. The command adds the Microsoft Azure IoT Extension for Azure CLI to your CLI shell. The extension adds IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS) specific commands to Azure CLI. After you install the extension, you don't need to install it again in any Cloud Shell session.
-
- ```azurecli
- az extension add --name azure-iot
- ```
-
- [!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)]
-
-1. Open the second CLI session. If you're using the Cloud Shell in a browser, select the **Open new session** icon on the toolbar of your first CLI session. If using the CLI locally, open a second CLI instance.
-
- :::image type="content" source="media/device-twins-cli/cloud-shell-new-session.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the Open New Session icon in the toolbar.":::
-
-## Create and simulate a device
-
-In this section, you create a device identity for your IoT hub in the first CLI session, and then simulate a device using that device identity. The simulated device responds to the jobs that you schedule in the second CLI session.
-
-To create and start a simulated device:
-
-1. In the first CLI session, run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create) command, replacing the following placeholders with their corresponding values. This command creates the device identity for your simulated device.
-
- *{DeviceName}*. The name of your simulated device.
-
- *{HubName}*. The name of your IoT hub.
-
- ```azurecli
- az iot hub device-identity create --device-id {DeviceName} --hub-name {HubName}
- ```
-
-1. In the first CLI session, run the [az iot device simulate](/cli/azure/iot/device#az-iot-device-simulate) command, replacing the following placeholders with their corresponding values. This command simulates the device you created in the previous step. The command also configures the simulated device to report its connectivity channel as a reported property on the device's corresponding device twin when initialized.
-
- *{DeviceName}*. The name of your simulated device.
-
- *{HubName}*. The name of your IoT hub.
-
- ```azurecli
- az iot device simulate --device-id {DeviceName} --hub-name {HubName} \
- --init-reported-properties '{"connectivity":{"type": "cellular"}}'
- ```
-
- > [!TIP]
- > By default, the [az iot device simulate](/cli/azure/iot/device#az-iot-device-simulate) command sends 100 device-to-cloud messages with an interval of 3 seconds between messages. The simulation ends after all messages have been sent. If you want the simulation to run longer, you can use the `--msg-count` parameter to specify more messages or the `--msg-interval` parameter to specify a longer interval between messages. You can also run the command again to restart the simulated device.
-
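-For a longer-running simulation, a sketch such as the following (using the same placeholders) sends 1,000 messages at 10-second intervals while still reporting the connectivity property:
-
-```azurecli
-az iot device simulate --device-id {DeviceName} --hub-name {HubName} \
-    --msg-count 1000 --msg-interval 10 \
-    --init-reported-properties '{"connectivity":{"type": "cellular"}}'
-```
-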
-## Update the device twin
-
-Once a device identity is created, a device twin is implicitly created in IoT Hub. In this section, you use the second CLI session to update a set of tags on the device twin associated with the device identity you created in the previous section. You can use device twin tags to organize and manage devices in your IoT solutions. For more information about managing devices using tags, see [How to manage devices using device twin tags in Azure IoT Hub](iot-hubs-manage-device-twin-tags.md).
-
-1. Confirm that the simulated device in the first CLI session is running. If not, restart it by running the [az iot device simulate](/cli/azure/iot/device#az-iot-device-simulate) command again from [Create and simulate a device](#create-and-simulate-a-device).
-
-1. In the second CLI session, run the [az iot hub device-twin update](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-update) command, replacing the following placeholders with their corresponding values. In this example, we're updating multiple tags on the device twin for the device identity we created in the previous section.
-
- *{DeviceName}*. The name of your device.
-
- *{HubName}*. The name of your IoT hub.
-
- ```azurecli
- az iot hub device-twin update --device-id {DeviceName} --hub-name {HubName} \
- --tags '{"location":{"region":"US","plant":"Redmond43"}}'
- ```
-
-1. In the second CLI session, confirm that the JSON response shows the results of the update operation. In the following JSON response example, we used `SampleDevice` for the `{DeviceName}` placeholder in the `az iot hub device-twin update` CLI command.
-
- ```json
- {
- "authenticationType": "sas",
- "capabilities": {
- "iotEdge": false
- },
- "cloudToDeviceMessageCount": 0,
- "connectionState": "Connected",
- "deviceEtag": "MTA2NTU1MDM2Mw==",
- "deviceId": "SampleDevice",
- "deviceScope": null,
- "etag": "AAAAAAAAAAI=",
- "lastActivityTime": "0001-01-01T00:00:00+00:00",
- "modelId": "",
- "moduleId": null,
- "parentScopes": null,
- "properties": {
- "desired": {
- "$metadata": {
- "$lastUpdated": "2023-02-21T10:40:10.5062402Z"
- },
- "$version": 1
- },
- "reported": {
- "$metadata": {
- "$lastUpdated": "2023-02-21T10:40:43.8539917Z",
- "connectivity": {
- "$lastUpdated": "2023-02-21T10:40:43.8539917Z",
- "type": {
- "$lastUpdated": "2023-02-21T10:40:43.8539917Z"
- }
- }
- },
- "$version": 2,
- "connectivity": {
- "type": "cellular"
- }
- }
- },
- "status": "enabled",
- "statusReason": null,
- "statusUpdateTime": "0001-01-01T00:00:00+00:00",
- "tags": {
- "location": {
- "plant": "Redmond43",
- "region": "US"
- }
- },
- "version": 4,
- "x509Thumbprint": {
- "primaryThumbprint": null,
- "secondaryThumbprint": null
- }
- }
- ```
-
-## Query your IoT hub for device twins
-
-IoT Hub exposes the device twins for your IoT hub as a document collection called **devices**. In this section, you use the second CLI session to execute two queries on the set of device twins for your IoT hub: the first query selects only the device twins of devices located in the **Redmond43** plant, and the second refines the query to select only the devices that are also connected through a cellular network. Both queries return only the first 100 devices in the result set. For more information about device twin queries, see [Queries for IoT Hub device and module twins](query-twins.md).
-
-1. Confirm that the simulated device in the first CLI session is running. If not, restart it by running the [az iot device simulate](/cli/azure/iot/device#az-iot-device-simulate) command again from [Create and simulate a device](#create-and-simulate-a-device).
-
-1. In the second CLI session, run the [az iot hub query](/cli/azure/iot/hub#az-iot-hub-query) command, replacing the following placeholders with their corresponding values. In this example, we're filtering the query to return only the device twins of devices located in the **Redmond43** plant.
-
- *{HubName}*. The name of your IoT hub.
-
- ```azurecli
- az iot hub query --hub-name {HubName} \
- --query-command "SELECT * FROM devices WHERE tags.location.plant = 'Redmond43'" \
- --top 100
- ```
-
-1. In the second CLI session, confirm that the JSON response shows the results of the query.
-
- ```json
- {
- "authenticationType": "sas",
- "capabilities": {
- "iotEdge": false
- },
- "cloudToDeviceMessageCount": 0,
- "connectionState": "Connected",
- "deviceEtag": "MTA2NTU1MDM2Mw==",
- "deviceId": "SampleDevice",
- "deviceScope": null,
- "etag": "AAAAAAAAAAI=",
- "lastActivityTime": "0001-01-01T00:00:00+00:00",
- "modelId": "",
- "moduleId": null,
- "parentScopes": null,
- "properties": {
- "desired": {
- "$metadata": {
- "$lastUpdated": "2023-02-21T10:40:10.5062402Z"
- },
- "$version": 1
- },
- "reported": {
- "$metadata": {
- "$lastUpdated": "2023-02-21T10:40:43.8539917Z",
- "connectivity": {
- "$lastUpdated": "2023-02-21T10:40:43.8539917Z",
- "type": {
- "$lastUpdated": "2023-02-21T10:40:43.8539917Z"
- }
- }
- },
- "$version": 2,
- "connectivity": {
- "type": "cellular"
- }
- }
- },
- "status": "enabled",
- "statusReason": null,
- "statusUpdateTime": "0001-01-01T00:00:00+00:00",
- "tags": {
- "location": {
- "plant": "Redmond43",
- "region": "US"
- }
- },
- "version": 4,
- "x509Thumbprint": {
- "primaryThumbprint": null,
- "secondaryThumbprint": null
- }
- }
- ```
-
-1. In the second CLI session, run the [az iot hub query](/cli/azure/iot/hub#az-iot-hub-query) command, replacing the following placeholders with their corresponding values. In this example, we're filtering the query to return only the device twins of devices located in the **Redmond43** plant that are also connected through a cellular network.
-
- *{HubName}*. The name of your IoT hub.
-
- ```azurecli
- az iot hub query --hub-name {HubName} \
- --query-command "SELECT * FROM devices WHERE tags.location.plant = 'Redmond43' \
- AND properties.reported.connectivity.type = 'cellular'" \
- --top 100
- ```
-
-1. In the second CLI session, confirm that the JSON response shows the results of the query. The results of this query should match the results of the previous query in this section.
-
-In this article, you:
-
-* Added device metadata as tags from an Azure CLI session
-* Simulated a device that reported device connectivity information in the device twin
-* Queried the device twin information, using SQL-like IoT Hub query language in an Azure CLI session
-
-## Next steps
-
-To learn how to:
-
-* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json).
-
-* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
-
-* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](quickstart-control-device.md).
iot-hub File Upload Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-dotnet.md
- Title: Upload files from devices to Azure IoT Hub (.NET)-
-description: How to upload files from a device to the cloud using Azure IoT device SDK for .NET. Uploaded files are stored in an Azure storage blob container.
----- Previously updated : 08/24/2021---
-# Upload files from your device to the cloud with Azure IoT Hub (.NET)
--
-This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml), using the Azure IoT .NET device and service SDKs.
-
-The [Send telemetry from a device to an IoT hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-csharp) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-dotnet.md) article show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) article shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
-
-* Videos
-* Large files that contain images
-* Vibration data sampled at high frequency
-* Some form of preprocessed data
-
-These files are typically batch processed in the cloud, using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
-
-At the end of this article, you run two .NET console apps:
-
-* **FileUploadSample**. This device app uploads a file to storage using a SAS URI provided by your IoT hub. This sample is from the Azure IoT C# SDK repository that you download in the prerequisites.
-
-* **ReadFileUploadNotification**. This service app receives file upload notifications from your IoT hub. You create this app.
-
-> [!NOTE]
-> IoT Hub supports many device platforms and languages (including C, Java, Python, and JavaScript) through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) to learn how to connect your device to Azure IoT Hub.
--
-## Prerequisites
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
-
-* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
-
-* The sample applications you run in this article are written using C# with .NET Core.
-
- Download the .NET Core SDK for multiple platforms from [.NET](https://dotnet.microsoft.com/download).
-
- Verify the current version of the .NET Core SDK on your development machine using the following command:
-
- ```cmd/sh
- dotnet --version
- ```
-
-* Download the Azure IoT C# SDK from [Download sample](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip) and extract the ZIP archive.
-
-* Port 8883 should be open in your firewall. The sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
--
-## Upload file from a device app
-
-In this article, you use a sample from the Azure IoT C# SDK repository you downloaded earlier as the device app. You can open the files below using Visual Studio, Visual Studio Code, or a text editor of your choice.
-
-The sample is located at **azure-iot-sdk-csharp/iothub/device/samples/getting started/FileUploadSample** in the folder where you extracted the Azure IoT C# SDK.
-
-Examine the code in **FileUpLoadSample.cs**. This file contains the main sample logic. After creating an IoT Hub device client, it follows the standard three-part procedure for uploading files from a device:
-
-1. The code calls the **GetFileUploadSasUriAsync** method on the device client to get a SAS URI from the IoT hub:
-
- ```csharp
- var fileUploadSasUriRequest = new FileUploadSasUriRequest
- {
- BlobName = fileName
- };
-
- // Lines removed for clarity
-
- FileUploadSasUriResponse sasUri = await _deviceClient.GetFileUploadSasUriAsync(fileUploadSasUriRequest);
- Uri uploadUri = sasUri.GetBlobUri();
- ```
-
-1. The code uses the SAS URI to upload the file to Azure storage. In this sample, it uses the SAS URI to create an Azure storage block blob client and uploads the file:
-
- ```csharp
- var blockBlobClient = new BlockBlobClient(uploadUri);
- await blockBlobClient.UploadAsync(fileStreamSource, new BlobUploadOptions());
- ```
-
-1. The code notifies the IoT hub that it has completed the upload. This tells the IoT hub that it can release resources associated with the upload (the SAS URI). If file upload notifications are enabled, the IoT hub sends a notification message to backend services.
-
- ```csharp
- var successfulFileUploadCompletionNotification = new FileUploadCompletionNotification
- {
- // Mandatory. Must be the same value as the correlation id returned in the sas uri response
- CorrelationId = sasUri.CorrelationId,
-
- // Mandatory. Will be present when service client receives this file upload notification
- IsSuccess = true,
-
- // Optional, user defined status code. Will be present when service client receives this file upload notification
- StatusCode = 200,
-
- // Optional, user-defined status description. Will be present when service client receives this file upload notification
- StatusDescription = "Success"
- };
-
- await _deviceClient.CompleteFileUploadAsync(successfulFileUploadCompletionNotification);
- ```
-
-If you examine the **parameter.cs** file, you see that:
-
-- The sample requires you to pass a parameter, *p*, which takes a device connection string.
-
-- By default, the device sample uses the MQTT protocol to communicate with IoT Hub. You can use the parameter *t* to change this transport protocol. Regardless of this selection, the Azure blob client always uses HTTPS as the protocol to upload the file to Azure storage.
-
-## Get the IoT hub connection string
-
-In this article, you create a backend service to receive file upload notification messages from your IoT hub. To receive file upload notification messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
--
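-
-One way to retrieve the connection string for the **service** shared access policy is with the Azure CLI (a sketch; it requires the azure-iot extension):
-
-```azurecli
-az iot hub connection-string show --hub-name {HubName} --policy-name service
-```
-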
-## Receive a file upload notification
-
-In this section, you create a C# console app that receives file upload notification messages from your IoT hub.
-
-1. Open a command window and go to the folder where you want to create the project. Create a folder named **ReadFileUploadNotifications** and change directories to that folder.
-
- ```cmd/sh
- mkdir ReadFileUploadNotification
- cd ReadFileUploadNotification
- ```
-
-1. Run the following command to create a C# console project. After running the command, the folder will contain a **Program.cs** file and a **ReadFileUploadNotification.csproj** file.
-
- ```cmd/sh
- dotnet new console --language c#
- ```
-
-1. Run the following command to add the **Microsoft.Azure.Devices** package to the project file. This package is the Azure IoT .NET service SDK.
-
- ```cmd/sh
- dotnet add package Microsoft.Azure.Devices
- ```
-
-1. Open the **Program.cs** file and add the following statement at the top of the file:
-
- ```csharp
- using Microsoft.Azure.Devices;
- ```
-1. Add the following fields to the **Program** class. Replace the `{iot hub connection string}` placeholder value with the IoT hub connection string that you copied previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string):
-
- ```csharp
- static ServiceClient serviceClient;
- static string connectionString = "{iot hub connection string}";
- ```
-
-1. Add the following method to the **Program** class:
-
- ```csharp
- private async static void ReceiveFileUploadNotificationAsync()
- {
- var notificationReceiver = serviceClient.GetFileNotificationReceiver();
- Console.WriteLine("\nReceiving file upload notification from service");
- while (true)
- {
- var fileUploadNotification = await notificationReceiver.ReceiveAsync();
- if (fileUploadNotification == null) continue;
- Console.ForegroundColor = ConsoleColor.Yellow;
- Console.WriteLine("Received file upload notification: {0}",
- string.Join(", ", fileUploadNotification.BlobName));
- Console.ResetColor();
- await notificationReceiver.CompleteAsync(fileUploadNotification);
- }
- }
- ```
-
- Note this receive pattern is the same one used to receive cloud-to-device messages from the device app.
-
-1. Finally, replace the lines in the **Main** method with the following:
-
- ```csharp
- Console.WriteLine("Receive file upload notifications\n");
- serviceClient = ServiceClient.CreateFromConnectionString(connectionString);
- ReceiveFileUploadNotificationAsync();
- Console.WriteLine("Press Enter to exit\n");
- Console.ReadLine();
- ```
-
-## Run the applications
-
-Now you're ready to run the applications.
-
-1. First, run the service app to receive file upload notifications from the IoT hub. At your command prompt in the **ReadFileUploadNotification** folder, run the following commands:
-
- ```cmd/sh
- dotnet restore
- dotnet run
- ```
-
- The app starts and waits for a file upload notification from your IoT hub:
-
- ```cmd/sh
- Receive file upload notifications
--
- Receiving file upload notification from service
- Press Enter to exit
- ```
---
-1. Next, run the device app to upload the file to Azure storage. Open a new command prompt and change folders to the **azure-iot-sdk-csharp\iothub\device\samples\getting started\FileUploadSample** under the folder where you expanded the Azure IoT C# SDK. Run the following commands. Replace the `{Your device connection string}` placeholder value in the second command with the device connection string you saw when you registered a device in the IoT hub.
-
- ```cmd/sh
- dotnet restore
- dotnet run --p "{Your device connection string}"
- ```
-
- The following output is from the device app after the upload has completed:
-
- ```cmd/sh
- Uploading file TestPayload.txt
- Getting SAS URI from IoT Hub to use when uploading the file...
- Successfully got SAS URI (https://contosostorage.blob.core.windows.net/contosocontainer/MyDevice%2FTestPayload.txt?sv=2018-03-28&sr=b&sig=x0G1Baf%2BAjR%2BTg3nW34zDNKs07p6dLzkxvZ3ZSmjIhw%3D&se=2021-05-04T16%3A40%3A52Z&sp=rw) from IoT Hub
- Uploading file TestPayload.txt using the Azure Storage SDK and the retrieved SAS URI for authentication
- Successfully uploaded the file to Azure Storage
- Notified IoT Hub that the file upload succeeded and that the SAS URI can be freed.
- Time to upload file: 00:00:01.5077954.
- Done.
- ```
-
-1. Notice that the service app shows that it has received the file upload notification:
-
- ```cmd/sh
- Receive file upload notifications
-
-
- Receiving file upload notification from service
- Press Enter to exit
-
- Received file upload notification: myDeviceId/TestPayload.txt
- ```
-
-## Verify the file upload
-
-You can use the portal to view the uploaded file in the storage container you configured:
-
-1. Navigate to your storage account in Azure portal.
-1. On the left pane of your storage account, select **Containers**.
-1. Select the container you uploaded the file to.
-1. Select the folder named after your device.
-1. Select the blob that you uploaded your file to. In this article, it's the blob named **TestPayload.txt**.
-
- :::image type="content" source="./media/iot-hub-csharp-csharp-file-upload/view-uploaded-file.png" alt-text="Screenshot of selecting the uploaded file in the Azure portal." lightbox="./media/iot-hub-csharp-csharp-file-upload/view-uploaded-file.png":::
-
-1. View the blob properties on the page that opens. You can select **Download** to download the file and view its contents locally.
-
-## Next steps
-
-In this article, you learned how to use the file upload feature of IoT Hub to simplify file uploads from devices. You can continue to explore this feature with the following articles:
-
-* [Overview of file uploads with IoT Hub](iot-hub-devguide-file-upload.md)
-
-* [Configure IoT Hub file uploads](iot-hub-configure-file-upload.md)
-
-* [Azure blob storage documentation](../storage/blobs/storage-blobs-introduction.md)
-
-* [Azure blob storage API reference](../storage/blobs/reference.md)
-
-* [Azure IoT SDKs](iot-hub-devguide-sdks.md)
iot-hub File Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-java.md
- Title: Upload files from devices to Azure IoT Hub (Java)-
-description: How to upload files from a device to the cloud using Azure IoT device SDK for Java. Uploaded files are stored in an Azure storage blob container.
----- Previously updated : 07/18/2021---
-# Upload files from your device to the cloud with Azure IoT Hub (Java)
--
-This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml), using Java.
-
-The [Send telemetry from a device to an IoT hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-java) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-java.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure message routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
-
-* Videos
-* Large files that contain images
-* Vibration data sampled at high frequency
-* Some form of preprocessed data.
-
-These files are typically batch processed in the cloud, using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how. View two samples from [azure-iot-sdk-java](https://github.com/Azure/azure-iot-sdk-java/tree/main/iothub/device/iot-device-samples/file-upload-sample/src/main/java/samples/com/microsoft/azure/sdk/iot) in GitHub.
-
-> [!NOTE]
-> IoT Hub supports many device platforms and languages (including C, .NET, and JavaScript) through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) to learn how to connect your device to Azure IoT Hub.
--
-## Prerequisites
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
-
-* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
-
-* [Java SE Development Kit 8](/java/azure/jdk/). Make sure you select **Java 8** under **Long-term support** to get to downloads for JDK 8.
-
-* [Maven 3](https://maven.apache.org/download.cgi)
-
-* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
--
-## Create a project using Maven
-
-Create a directory for your project, and start a shell in that directory. On the command line, execute the following command:
-
-```cmd/sh
-mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.4 -DinteractiveMode=false
-```
-
-This generates a directory with the same name as the *artifactId* and a standard project structure:
-
-```
- my-app
- |-- pom.xml
- -- src
- -- main
- -- java
- -- com
- -- mycompany
- -- app
- --App.Java
-```
-
-Using a text editor, replace the pom.xml file with the following:
-
-```xml
-
-<?xml version="1.0" encoding="UTF-8"?>
-
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
-
- <groupId>com.mycompany.app</groupId>
- <artifactId>my-app</artifactId>
- <version>1.0-SNAPSHOT</version>
-
- <name>my-app</name>
- <!-- FIXME change it to the project's website -->
- <url>http://www.example.com</url>
-
- <properties>
- <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
- <maven.compiler.source>1.7</maven.compiler.source>
- <maven.compiler.target>1.7</maven.compiler.target>
- </properties>
-
- <dependencies>
- <dependency>
- <groupId>com.microsoft.azure.sdk.iot</groupId>
- <artifactId>iot-device-client</artifactId>
- <version>1.30.1</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-log4j12</artifactId>
- <version>1.7.29</version>
- </dependency>
- <dependency>
- <groupId>junit</groupId>
- <artifactId>junit</artifactId>
- <version>4.11</version>
- <scope>test</scope>
- </dependency>
- </dependencies>
-
- <build>
- <pluginManagement><!-- lock down plugins versions to avoid using Maven defaults (may be moved to parent pom) -->
- <plugins>
- <plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-compiler-plugin</artifactId>
- <version>3.3</version>
- <configuration>
- <source>1.7</source>
- <target>1.7</target>
- </configuration>
- </plugin>
- <plugin>
- <artifactId>maven-shade-plugin</artifactId>
- <version>2.4</version>
- <executions>
- <execution>
- <phase>package</phase>
- <goals>
- <goal>shade</goal>
- </goals>
- <configuration>
- <filters>
- <filter>
- <artifact>*:*</artifact>
- <excludes>
- <exclude>META-INF/*.SF</exclude>
- <exclude>META-INF/*.RSA</exclude>
- </excludes>
- </filter>
- </filters>
- <shadedArtifactAttached>true</shadedArtifactAttached>
- <shadedClassifierName>with-deps</shadedClassifierName>
- </configuration>
- </execution>
- </executions>
- </plugin>
- </plugins>
- </pluginManagement>
- </build>
-</project>
-
-```
-
-## Upload a file from a device app
-
-Copy the file that you want to upload to the `my-app` folder in your project tree. Using a text editor, replace App.java with the following code. Supply your device connection string and file name where noted. You copied the device connection string when you registered the device.
-
-```java
-package com.mycompany.app;
-
-import com.azure.storage.blob.BlobClient;
-import com.azure.storage.blob.BlobClientBuilder;
-import com.microsoft.azure.sdk.iot.deps.serializer.FileUploadCompletionNotification;
-import com.microsoft.azure.sdk.iot.deps.serializer.FileUploadSasUriRequest;
-import com.microsoft.azure.sdk.iot.deps.serializer.FileUploadSasUriResponse;
-import com.microsoft.azure.sdk.iot.device.DeviceClient;
-import com.microsoft.azure.sdk.iot.device.IotHubClientProtocol;
-
-import java.io.BufferedInputStream;
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.IOException;
-import java.net.URISyntaxException;
-import java.util.Scanner;
-
-public class App
-{
- /**
- * Upload a single file to blobs using IoT Hub.
- *
- */
- public static void main(String[] args)throws IOException, URISyntaxException
- {
- String connString = "Your device connection string here";
- String fullFileName = "Path of the file to upload";
-
- System.out.println("Starting...");
- System.out.println("Beginning setup.");
-
- // File upload will always use HTTPS, DeviceClient will use this protocol only
- // for the other services like Telemetry, Device Method and Device Twin.
- IotHubClientProtocol protocol = IotHubClientProtocol.MQTT;
-
- System.out.println("Successfully read input parameters.");
-
- DeviceClient client = new DeviceClient(connString, protocol);
-
- System.out.println("Successfully created an IoT Hub client.");
-
- try
- {
- File file = new File(fullFileName);
- if (file.isDirectory())
- {
- throw new IllegalArgumentException(fullFileName + " is a directory, please provide a single file name, or use the FileUploadSample to upload directories.");
- }
-
- System.out.println("Retrieving SAS URI from IoT Hub...");
- FileUploadSasUriResponse sasUriResponse = client.getFileUploadSasUri(new FileUploadSasUriRequest(file.getName()));
-
- System.out.println("Successfully got SAS URI from IoT Hub");
- System.out.println("Correlation Id: " + sasUriResponse.getCorrelationId());
- System.out.println("Container name: " + sasUriResponse.getContainerName());
- System.out.println("Blob name: " + sasUriResponse.getBlobName());
- System.out.println("Blob Uri: " + sasUriResponse.getBlobUri());
-
- System.out.println("Using the Azure Storage SDK to upload file to Azure Storage...");
-
- try
- {
- BlobClient blobClient =
- new BlobClientBuilder()
- .endpoint(sasUriResponse.getBlobUri().toString())
- .buildClient();
-
- blobClient.uploadFromFile(fullFileName);
- }
- catch (Exception e)
- {
- System.out.println("Exception encountered while uploading file to blob: " + e.getMessage());
-
- System.out.println("Failed to upload file to Azure Storage.");
-
- System.out.println("Notifying IoT Hub that the SAS URI can be freed and that the file upload failed.");
-
- // Note that this is done even when the file upload fails. IoT Hub has a fixed number of SAS URIs allowed active
- // at any given time. Once you are done with the file upload, you should free your SAS URI so that other
- // SAS URIs can be generated. If a SAS URI is not freed through this API, then it will free itself eventually
- // based on how long SAS URIs are configured to live on your IoT Hub.
- FileUploadCompletionNotification completionNotification = new FileUploadCompletionNotification(sasUriResponse.getCorrelationId(), false);
- client.completeFileUpload(completionNotification);
-
- System.out.println("Notified IoT Hub that the SAS URI can be freed and that the file upload was a failure.");
-
- client.closeNow();
- return;
- }
-
- System.out.println("Successfully uploaded file to Azure Storage.");
-
- System.out.println("Notifying IoT Hub that the SAS URI can be freed and that the file upload was a success.");
- FileUploadCompletionNotification completionNotification = new FileUploadCompletionNotification(sasUriResponse.getCorrelationId(), true);
- client.completeFileUpload(completionNotification);
- System.out.println("Successfully notified IoT Hub that the SAS URI can be freed, and that the file upload was a success");
- }
- catch (Exception e)
- {
- System.out.println("On exception, shutting down \n" + " Cause: " + e.getCause() + " \nERROR: " + e.getMessage());
- System.out.println("Shutting down...");
- client.closeNow();
- }
-
- System.out.println("Press any key to exit...");
-
- Scanner scanner = new Scanner(System.in);
- scanner.nextLine();
- System.out.println("Shutting down...");
- client.closeNow();
- }
-}
-```
-
-## Build and run the application
-
-At a command prompt in the `my-app` folder, run the following command:
-
-```cmd/sh
-mvn clean package -DskipTests
-```
-
-When the build is complete, run the following command to run the application:
-
-```cmd/sh
-mvn exec:java -Dexec.mainClass="com.mycompany.app.App"
-```
-
-You can use the portal to view the uploaded file in the storage container you configured.
--
-## Receive a file upload notification
-
-In this section, you create a Java console app that receives file upload notification messages from IoT Hub.
-
-1. Create a directory for your project, and start a shell in that directory. On the command line, execute the following command:
-
- ```cmd/sh
- mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.4 -DinteractiveMode=false
- ```
-
-2. At your command prompt, navigate to the new `my-app` folder.
-
-3. Using a text editor, replace the `pom.xml` file in the `my-app` folder with the following. Adding the service client dependency enables you to use the **iothub-java-service-client** package in your application to communicate with your IoT hub service:
-
- ```xml
- <?xml version="1.0" encoding="UTF-8"?>
-
- <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
-
- <groupId>com.mycompany.app</groupId>
- <artifactId>my-app</artifactId>
- <version>1.0-SNAPSHOT</version>
-
- <name>my-app</name>
- <!-- FIXME change it to the project's website -->
- <url>http://www.example.com</url>
-
- <properties>
- <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
- <maven.compiler.source>1.7</maven.compiler.source>
- <maven.compiler.target>1.7</maven.compiler.target>
- </properties>
-
- <dependencies>
- <dependency>
- <groupId>com.microsoft.azure.sdk.iot</groupId>
- <artifactId>iot-device-client</artifactId>
- <version>1.30.1</version>
- </dependency>
- <dependency>
- <groupId>com.microsoft.azure.sdk.iot</groupId>
- <artifactId>iot-service-client</artifactId>
- <version>1.7.23</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-log4j12</artifactId>
- <version>1.7.29</version>
- </dependency>
- <dependency>
- <groupId>junit</groupId>
- <artifactId>junit</artifactId>
- <version>4.11</version>
- <scope>test</scope>
- </dependency>
- </dependencies>
-
- <build>
- <pluginManagement><!-- lock down plugins versions to avoid using Maven defaults (may be moved to parent pom) -->
- <plugins>
- <plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-compiler-plugin</artifactId>
- <version>3.3</version>
- <configuration>
- <source>1.7</source>
- <target>1.7</target>
- </configuration>
- </plugin>
- <plugin>
- <artifactId>maven-shade-plugin</artifactId>
- <version>2.4</version>
- <executions>
- <execution>
- <phase>package</phase>
- <goals>
- <goal>shade</goal>
- </goals>
- <configuration>
- <filters>
- <filter>
- <artifact>*:*</artifact>
- <excludes>
- <exclude>META-INF/*.SF</exclude>
- <exclude>META-INF/*.RSA</exclude>
- </excludes>
- </filter>
- </filters>
- <shadedArtifactAttached>true</shadedArtifactAttached>
- <shadedClassifierName>with-deps</shadedClassifierName>
- </configuration>
- </execution>
- </executions>
- </plugin>
- </plugins>
- </pluginManagement>
- </build>
- </project>
- ```
-
- > [!NOTE]
- > You can check for the latest version of **iot-service-client** using [Maven search](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22iot-service-client%22%20g%3A%22com.microsoft.azure.sdk.iot%22).
-
-4. Save and close the `pom.xml` file.
-
-5. Get the IoT Hub service connection string.
- [!INCLUDE [iot-hub-include-find-service-connection-string](../../includes/iot-hub-include-find-service-connection-string.md)]
-
-6. Using a text editor, open the `my-app\src\main\java\com\mycompany\app\App.java` file and replace the code with the following.
-
- ```java
- package com.mycompany.app;
-
- import com.microsoft.azure.sdk.iot.service.*;
- import java.io.IOException;
- import java.net.URISyntaxException;
- import java.util.concurrent.ExecutorService;
- import java.util.concurrent.Executors;
--
- public class App
- {
- private static final String connectionString = "{Your service connection string here}";
- private static final IotHubServiceClientProtocol protocol = IotHubServiceClientProtocol.AMQPS;
-
- public static void main(String[] args) throws Exception
- {
- ServiceClient sc = ServiceClient.createFromConnectionString(connectionString, protocol);
-
- FileUploadNotificationReceiver receiver = sc.getFileUploadNotificationReceiver();
- receiver.open();
- FileUploadNotification fileUploadNotification = receiver.receive(2000);
-
- if (fileUploadNotification != null)
- {
- System.out.println("File Upload notification received");
- System.out.println("Device Id : " + fileUploadNotification.getDeviceId());
- System.out.println("Blob Uri: " + fileUploadNotification.getBlobUri());
- System.out.println("Blob Name: " + fileUploadNotification.getBlobName());
- System.out.println("Last Updated : " + fileUploadNotification.getLastUpdatedTimeDate());
- System.out.println("Blob Size (Bytes): " + fileUploadNotification.getBlobSizeInBytes());
- System.out.println("Enqueued Time: " + fileUploadNotification.getEnqueuedTimeUtcDate());
- }
- else
- {
- System.out.println("No file upload notification");
- }
-
- receiver.close();
- }
-
- }
- ```
--
-7. Save and close the `my-app\src\main\java\com\mycompany\app\App.java` file.
-
-8. Use the following command to build the app and check for errors:
- ```cmd/sh
- mvn clean package -DskipTests
- ```
-## Run the application
-
-Now you're ready to run the application.
-
-At a command prompt in the `my-app` folder, run the following command:
-
-```cmd/sh
-mvn exec:java -Dexec.mainClass="com.mycompany.app.App"
-```
-The following screenshot shows the output from the **read-file-upload-notification** app:
-
-![Output from read-file-upload-notification app](media/iot-hub-java-java-upload/read-file-upload-notification.png)
-
-## Next steps
-
-In this article, you learned how to use the file upload feature of IoT Hub to simplify file uploads from devices. You can continue to explore this feature with the following articles:
-
-* [Create an IoT hub programmatically](iot-hub-rm-template-powershell.md)
-
-* [Azure IoT SDKs](iot-hub-devguide-sdks.md)
-
-To further explore the capabilities of IoT Hub, see:
-
-* [Simulating a device with IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub File Upload Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-node.md
- Title: Upload files from devices to Azure IoT Hub (Node)-
-description: How to upload files from a device to the cloud using Azure IoT device SDK for Node.js. Uploaded files are stored in an Azure storage blob container.
----- Previously updated : 07/27/2021---
-# Upload files from your device to the cloud with Azure IoT Hub (Node.js)
-
-This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml), using Node.js.
-
-The [Send telemetry from a device to an IoT hub](../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-node.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
-
-* Videos
-* Large files that contain images
-* Vibration data sampled at high frequency
-* Some form of pre-processed data.
-
-These files are typically batch processed in the cloud, using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
-
-At the end of this article, you run two Node.js console apps:
-
-* **FileUpload.js**, which uploads a file to storage using a SAS URI provided by your IoT hub.
-
-* **FileUploadNotification.js**, which receives file upload notifications from your IoT hub.
-
-> [!NOTE]
-> IoT Hub supports many device platforms and languages (including C, Java, Python, and JavaScript) through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) to learn how to connect your device to Azure IoT Hub.
--
-## Prerequisites
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
-
-* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
-
-* Node.js version 10.0.x or later. The LTS version is recommended. You can download Node.js from [nodejs.org](https://nodejs.org).
-
-* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
--
-## Upload a file from a device app
-
-In this section, you create a device app to upload a file to IoT hub. The code is based on code available in the [upload_to_blob_advanced.js](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/upload_to_blob_advanced.js) sample in the [Azure IoT Node.js SDK](https://github.com/Azure/azure-iot-sdk-node) device samples.
-
-1. Create an empty folder called `fileupload`. In the `fileupload` folder, create a package.json file using the following command at your command prompt. Accept all the defaults:
-
- ```cmd/sh
- npm init
- ```
-
-1. At your command prompt in the `fileupload` folder, run the following command to install the **azure-iot-device** Device SDK, the **azure-iot-device-mqtt**, and the **@azure/storage-blob** packages:
-
- ```cmd/sh
- npm install azure-iot-device azure-iot-device-mqtt @azure/storage-blob --save
- ```
-
-1. Using a text editor, create a **FileUpload.js** file in the `fileupload` folder, and copy the following code into it.
-
- ```javascript
- 'use strict';
-
- const Client = require('azure-iot-device').Client;
- const Protocol = require('azure-iot-device-mqtt').Mqtt;
- const errors = require('azure-iot-common').errors;
- const path = require('path');
-
- const {
- AnonymousCredential,
- BlockBlobClient,
- newPipeline
- } = require('@azure/storage-blob');
-
- // make sure you set these environment variables prior to running the sample.
- const deviceConnectionString = process.env.DEVICE_CONNECTION_STRING;
- const localFilePath = process.env.PATH_TO_FILE;
- const storageBlobName = path.basename(localFilePath);
-
- async function uploadToBlob(localFilePath, client) {
- const blobInfo = await client.getBlobSharedAccessSignature(storageBlobName);
- if (!blobInfo) {
- throw new errors.ArgumentError('Invalid upload parameters');
- }
-
- const pipeline = newPipeline(new AnonymousCredential(), {
- retryOptions: { maxTries: 4 },
- telemetry: { value: 'HighLevelSample V1.0.0' }, // Customized telemetry string
- keepAliveOptions: { enable: false }
- });
-
- // Construct the blob URL to construct the blob client for file uploads
- const { hostName, containerName, blobName, sasToken } = blobInfo;
- const blobUrl = `https://${hostName}/${containerName}/${blobName}${sasToken}`;
-
- // Create the BlockBlobClient for file upload to the Blob Storage Blob
- const blobClient = new BlockBlobClient(blobUrl, pipeline);
-
- // Setup blank status notification arguments to be filled in on success/failure
- let isSuccess;
- let statusCode;
- let statusDescription;
-
- try {
- const uploadStatus = await blobClient.uploadFile(localFilePath);
- console.log('uploadStreamToBlockBlob success');
-
- // Save successful status notification arguments
- isSuccess = true;
- statusCode = uploadStatus._response.status;
- statusDescription = uploadStatus._response.bodyAsText;
-
- // Notify IoT Hub of upload to blob status (success)
- console.log('notifyBlobUploadStatus success');
- }
- catch (err) {
- isSuccess = false;
- statusCode = err.code;
- statusDescription = err.message;
-
- console.log('notifyBlobUploadStatus failed');
- console.log(err);
- }
-
- await client.notifyBlobUploadStatus(blobInfo.correlationId, isSuccess, statusCode, statusDescription);
- }
-
- // Create a client device from the connection string and upload the local file to blob storage.
- const deviceClient = Client.fromConnectionString(deviceConnectionString, Protocol);
- uploadToBlob(localFilePath, deviceClient)
- .catch((err) => {
- console.log(err);
- })
- .finally(() => {
- process.exit();
- });
- ```
-
-1. Save and close the **FileUpload.js** file.
-
-1. Copy an image file to the `fileupload` folder and give it a name such as `myimage.png`.
-
-1. Add environment variables for your device connection string and the path to the file that you want to upload. You got the device connection string when you registered a device in the IoT hub.
-
- - For Windows:
-
- ```cmd
- set DEVICE_CONNECTION_STRING={your device connection string}
- set PATH_TO_FILE={your image filepath}
- ```
-
- - For Linux/Bash:
-
- ```bash
- export DEVICE_CONNECTION_STRING="{your device connection string}"
- export PATH_TO_FILE="{your image filepath}"
- ```
-
-## Get the IoT hub connection string
-
-In this article, you create a backend service to receive file upload notification messages from the IoT hub you created. To receive file upload notification messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
--
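-One quick way to get this connection string is with the Azure CLI. The following is a minimal sketch that assumes the Azure CLI and its `azure-iot` extension are installed; `{YourIoTHubName}` is a placeholder:
-
-```azurecli
-# Show the connection string for the built-in "service" shared access policy.
-az iot hub connection-string show --hub-name {YourIoTHubName} --policy-name service
-```
-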
-## Receive a file upload notification
-
-In this section, you create a Node.js console app that receives file upload notification messages from IoT Hub.
-
-1. Create an empty folder called `fileuploadnotification`. In the `fileuploadnotification` folder, create a package.json file using the following command at your command prompt. Accept all the defaults:
-
- ```cmd/sh
- npm init
- ```
-
-1. At your command prompt in the `fileuploadnotification` folder, run the following command to install the **azure-iothub** SDK package:
-
- ```cmd/sh
- npm install azure-iothub --save
- ```
-
-1. Using a text editor, create a **FileUploadNotification.js** file in the `fileuploadnotification` folder.
-
-1. Add the following `require` statements at the start of the **FileUploadNotification.js** file:
-
- ```javascript
- 'use strict';
-
- const Client = require('azure-iothub').Client;
- ```
-
-1. Read the connection string for your IoT hub from the environment:
-
- ```javascript
- const connectionString = process.env.IOT_HUB_CONNECTION_STRING;
- ```
-
-1. Add the following code to create a service client from the connection string:
-
- ```javascript
- const serviceClient = Client.fromConnectionString(connectionString);
- ```
-
-1. Open the client and use the **getFileNotificationReceiver** function to receive status updates.
-
- ```javascript
- serviceClient.open(function (err) {
- if (err) {
- console.error('Could not connect: ' + err.message);
- } else {
- console.log('Service client connected');
- serviceClient.getFileNotificationReceiver(function receiveFileUploadNotification(err, receiver){
- if (err) {
- console.error('error getting the file notification receiver: ' + err.toString());
- } else {
- receiver.on('message', function (msg) {
- console.log('File upload from device:')
- console.log(msg.getData().toString('utf-8'));
- receiver.complete(msg, function (err) {
- if (err) {
- console.error('Could not finish the upload: ' + err.message);
- } else {
- console.log('Upload complete');
- }
- });
- });
- }
- });
- }
- });
- ```
- > [!NOTE]
- > If you want to receive disconnect notifications while you are listening to file upload notifications, you need to register for the `'error'` event by using `receiver.on`. To continue to receive file upload notifications, you need to reconnect to IoT Hub by using the `serviceClient.open` method.
-
-1. Save and close the **FileUploadNotification.js** file.
-
-1. Add an environment variable for your IoT Hub connection string. You copied this string previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string).
-
- - For Windows:
-
- ```cmd
- set IOT_HUB_CONNECTION_STRING={your iot hub connection string}
- ```
-
- - For Linux/Bash:
-
- ```bash
- export IOT_HUB_CONNECTION_STRING="{your iot hub connection string}"
- ```
-
-## Run the applications
-
-Now you're ready to run the applications.
-
-At a command prompt in the `fileuploadnotification` folder, run the following command:
-
-```cmd/sh
-node FileUploadNotification.js
-```
-
-At a command prompt in the `fileupload` folder, run the following command:
-
-```cmd/sh
-node FileUpload.js
-```
-
-The following output is from the **FileUpload** app after the upload has completed:
-
-```output
-uploadStreamToBlockBlob success
-notifyBlobUploadStatus success
-```
-
-The following sample output is from the **FileUploadNotification** app after the upload has completed:
-
-```output
-Service client connected
-File upload from device:
-{"deviceId":"myDeviceId","blobUri":"https://{your storage account name}.blob.core.windows.net/device-upload-container/myDeviceId/image.png","blobName":"myDeviceId/image.png","lastUpdatedTime":"2021-07-23T23:27:06+00:00","blobSizeInBytes":26214,"enqueuedTimeUtc":"2021-07-23T23:27:07.2580791Z"}
-```
-
-## Verify the file upload
-
-You can use the portal to view the uploaded file in the storage container you configured:
-
-1. Navigate to your storage account in Azure portal.
-1. On the left pane of your storage account, select **Containers**.
-1. Select the container you uploaded the file to.
-1. Select the folder named after your device.
-1. Select the blob that you uploaded your file to. In this article, it's the blob with the same name as your file.
-
- :::image type="content" source="./media/iot-hub-node-node-file-upload/view-uploaded-file.png" alt-text="Screenshot of viewing the uploaded file in the Azure portal." lightbox="./media/iot-hub-node-node-file-upload/view-uploaded-file.png":::
-
-1. View the blob properties on the page that opens. You can select **Download** to download the file and view its contents locally.
-
-## Next steps
-
-In this article, you learned how to use the file upload feature of IoT Hub to simplify file uploads from devices. You can continue to explore this feature with the following articles:
-
-* [Create an IoT hub programmatically](iot-hub-rm-template-powershell.md)
-
-* [Azure IoT SDKs](iot-hub-devguide-sdks.md)
iot-hub File Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-python.md
- Title: Upload files from devices to Azure IoT Hub (Python)-
-description: How to upload files from a device to the cloud using Azure IoT device SDK for Python. Uploaded files are stored in an Azure storage blob container.
----- Previously updated : 12/28/2022---
-# Upload files from your device to the cloud with Azure IoT Hub (Python)
--
-This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml), using Python.
-
-The [Send telemetry from a device to an IoT hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-python) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-python.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
-
-* Videos
-* Large files that contain images
-* Vibration data sampled at high frequency
-* Some form of pre-processed data.
-
-These files are typically batch processed in the cloud, using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
-
-At the end of this article, you run the Python console app **FileUpload.py**, which uploads a file to storage using the Python Device SDK.
-
-> [!NOTE]
-> IoT Hub supports many device platforms and languages (including C, Java, Python, and JavaScript) through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) to learn how to connect your device to Azure IoT Hub.
--
-## Prerequisites
-
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
-
-* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
-
-* [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable.
-
-* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
--
-## Upload a file from a device app
-
-In this section, you create the device app to upload a file to IoT hub.
-
-1. At your command prompt, run the following command to install the **azure-iot-device** package. You use this package to coordinate the file upload with your IoT hub.
-
- ```cmd/sh
- pip install azure-iot-device
- ```
-
-1. At your command prompt, run the following command to install the [**azure.storage.blob**](https://pypi.org/project/azure-storage-blob/) package. You use this package to perform the file upload.
-
- ```cmd/sh
- pip install azure.storage.blob
- ```
-
-1. Create a test file that you'll upload to blob storage.
-
-1. Using a text editor, create a **FileUpload.py** file in your working folder.
-
-1. Add the following `import` statements and variables at the start of the **FileUpload.py** file.
-
- ```python
- import os
- from azure.iot.device import IoTHubDeviceClient
- from azure.core.exceptions import AzureError
- from azure.storage.blob import BlobClient
-
- CONNECTION_STRING = "[Device Connection String]"
- PATH_TO_FILE = r"[Full path to local file]"
- ```
-
-1. In your file, replace `[Device Connection String]` with the connection string of your IoT hub device. Replace `[Full path to local file]` with the path to the test file that you created or any file on your device that you want to upload.
-
-1. Create a function to upload the file to blob storage:
-
- ```python
- def store_blob(blob_info, file_name):
- try:
- sas_url = "https://{}/{}/{}{}".format(
- blob_info["hostName"],
- blob_info["containerName"],
- blob_info["blobName"],
- blob_info["sasToken"]
- )
-
- print("\nUploading file: {} to Azure Storage as blob: {} in container {}\n".format(file_name, blob_info["blobName"], blob_info["containerName"]))
-
- # Upload the specified file
- with BlobClient.from_blob_url(sas_url) as blob_client:
- with open(file_name, "rb") as f:
- result = blob_client.upload_blob(f, overwrite=True)
- return (True, result)
-
- except FileNotFoundError as ex:
- # catch file not found and add an HTTP status code to return in notification to IoT Hub
- ex.status_code = 404
- return (False, ex)
-
- except AzureError as ex:
- # catch Azure errors that might result from the upload operation
- return (False, ex)
- ```
-
- This function parses the *blob_info* structure passed into it to create a URL that it uses to initialize an [azure.storage.blob.BlobClient](/python/api/azure-storage-blob/azure.storage.blob.blobclient). Then it uploads your file to Azure blob storage using this client.
-
-1. Add the following code to connect the client and upload the file:
-
- ```python
- def run_sample(device_client):
- # Connect the client
- device_client.connect()
-
- # Get the storage info for the blob
- blob_name = os.path.basename(PATH_TO_FILE)
- storage_info = device_client.get_storage_info_for_blob(blob_name)
-
- # Upload to blob
- success, result = store_blob(storage_info, PATH_TO_FILE)
-
- if success == True:
- print("Upload succeeded. Result is: \n")
- print(result)
- print()
-
- device_client.notify_blob_upload_status(
- storage_info["correlationId"], True, 200, "OK: {}".format(PATH_TO_FILE)
- )
-
- else :
- # If the upload was not successful, the result is the exception object
- print("Upload failed. Exception is: \n")
- print(result)
- print()
-
- device_client.notify_blob_upload_status(
- storage_info["correlationId"], False, result.status_code, str(result)
- )
-
- def main():
- device_client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
-
- try:
- print ("IoT Hub file upload sample, press Ctrl-C to exit")
- run_sample(device_client)
- except KeyboardInterrupt:
- print ("IoTHubDeviceClient sample stopped")
- finally:
- # Graceful exit
- device_client.shutdown()
--
- if __name__ == "__main__":
- main()
- ```
-
- This code creates an **IoTHubDeviceClient** and uses the following APIs to manage the file upload with your IoT hub:
-
- * **get_storage_info_for_blob** gets information from your IoT hub about the linked Storage Account you created previously. This information includes the hostname, container name, blob name, and a SAS token. The storage info is passed to the **store_blob** function (created in the previous step), so the **BlobClient** in that function can authenticate with Azure storage. The **get_storage_info_for_blob** method also returns a correlation_id, which is used in the **notify_blob_upload_status** method. The correlation_id is IoT Hub's way of marking which blob you're working on.
-
- * **notify_blob_upload_status** notifies IoT Hub of the status of your blob storage operation. You pass it the correlation_id obtained by the **get_storage_info_for_blob** method. It's used by IoT Hub to notify any service that might be listening for a notification on the status of the file upload task.
-
-1. Save and close the **FileUpload.py** file.
-
-## Run the application
-
-Now you're ready to run the application.
-
-1. At a command prompt in your working folder, run the following command:
-
- ```cmd/sh
- python FileUpload.py
- ```
-
-2. The following screenshot shows the output from the **FileUpload** app:
-
- :::image type="content" source="./media/iot-hub-python-python-file-upload/run-device-app.png" alt-text="Screenshot showing output from running the FileUpload app." border="true" lightbox="./media/iot-hub-python-python-file-upload/run-device-app.png":::
-
-3. You can use the portal to view the uploaded file in the storage container you configured:
-
- :::image type="content" source="./media/iot-hub-python-python-file-upload/view-blob.png" alt-text="Screenshot of the container in the Azure portal that shows the uploaded file." border="true" lightbox="./media/iot-hub-python-python-file-upload/view-blob.png":::
-
-## Next steps
-
-In this article, you learned how to use the file upload feature of IoT Hub to simplify file uploads from devices. You can continue to explore this feature with the following articles:
-
-* [Create an IoT hub programmatically](iot-hub-rm-template-powershell.md)
-
-* [Azure IoT SDKs](iot-hub-devguide-sdks.md)
-
-Learn more about Azure Blob Storage with the following links:
-
-* [Azure Blob Storage documentation](../storage/blobs/index.yml)
-
-* [Azure Blob Storage for Python API documentation](/python/api/overview/azure/storage-blob-readme)
iot-hub How To File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-file-upload.md
+
+ Title: Upload files from your device to the cloud with Azure IoT Hub
+
+description: How to upload files from a device to the cloud using the Azure IoT SDKs for C#, Python, Java, and Node.js.
+++++ Last updated : 07/01/2024
+zone_pivot_groups: iot-hub-howto-c2d-1
+++
+# Upload files from a device to the cloud with Azure IoT Hub
+
+This article demonstrates how to:
+
+* Use the file upload capabilities of IoT Hub to upload a file to Azure Blob Storage by using the Azure IoT device and service SDKs.
+* Notify IoT Hub that the file was successfully uploaded and create a backend service to receive file upload notifications from IoT Hub, using the Azure IoT service SDKs.
+
+In some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. The file upload capabilities in IoT Hub enable you to move large or complex data to the cloud. For example:
+
+* Videos
+* Large files that contain images
+* Vibration data sampled at high frequency
+* Some form of preprocessed data
+
+These files are typically batch processed in the cloud, using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
+
+This article is meant to complement runnable SDK samples that are referenced from within this article.
+
+For more information, see:
+
+* [Overview of file uploads with IoT Hub](iot-hub-devguide-file-upload.md)
+* [Introduction to Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md)
+* [Azure IoT SDKs](iot-hub-devguide-sdks.md)
++
+## Prerequisites
+
+* **An IoT hub**. Some SDK calls require the IoT Hub primary connection string, so make a note of the connection string.
+
+* **A registered device**. Some SDK calls require the device primary connection string, so make a note of the connection string.
+
+* IoT Hub **Service Connect** permission - To receive file upload notification messages, your backend service needs the **Service Connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission. For more information, see [Connect to an IoT hub](/azure/iot-hub/create-hub?tabs=portal#connect-to-an-iot-hub).
+
+* Configure file upload in your IoT hub by linking an **Azure Storage account** and **Azure Blob Storage container**. You can configure these using the [Azure portal](/azure/iot-hub/iot-hub-configure-file-upload), [Azure CLI](/azure/iot-hub/iot-hub-configure-file-upload-cli), or [Azure PowerShell](/azure/iot-hub/iot-hub-configure-file-upload-powershell).
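+
+For reference, the following Azure CLI sketch shows one way to gather these prerequisites. It assumes Bash syntax, the Azure CLI with the `azure-iot` extension, and placeholder resource names; adjust it for your environment.
+
+```azurecli
+# Placeholders: {YourIoTHubName}, {YourDeviceId}, {YourStorageAccountName}, {YourContainerName}.
+
+# IoT Hub primary connection string (used by some service SDK calls).
+az iot hub connection-string show --hub-name {YourIoTHubName}
+
+# Device primary connection string (used by the device SDK samples).
+az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id {YourDeviceId}
+
+# Link a storage account and blob container to the hub and enable file upload notifications.
+STORAGE_CS=$(az storage account show-connection-string --name {YourStorageAccountName} --query connectionString --output tsv)
+az iot hub update --name {YourIoTHubName} \
+    --fileupload-storage-connectionstring "$STORAGE_CS" \
+    --fileupload-storage-container-name {YourContainerName} \
+    --fileupload-notifications true
+```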
++++++++++++
iot-hub Iot Hubs Manage Device Twin Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hubs-manage-device-twin-tags.md
- Title: How to manage devices using device twin tags in Azure IoT Hub | Microsoft Docs
-description: How to use device twin tags to manage devices in your Azure IoT hub.
---- Previously updated : 11/01/2022----
-# How to manage devices using device twin tags in Azure IoT Hub
-This article demonstrates how to manage IoT devices by using [device twin tags](iot-hub-devguide-device-twins.md#tags-and-properties-format).
-
-Device twin tags can be used as a powerful tool to help you organize your devices. This is especially important when you have multiple kinds of devices within your IoT solution; you can use tags to set types, locations, and so on. For example:
-
-```json
-{
- "deviceId": "mydevice1",
- "status": "enabled",
- "connectionState": "Connected",
- "cloudToDeviceMessageCount": 0,
- "authenticationType": "sas",
- "tags": {
- "deploymentLocation": {
- "building": "43",
- "floor": "1"
- },
- "deviceType":"HDCamera"
- },
- "properties": {
- ...
- }
-}
-```
--
-## Prerequisites
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
-
-* At least two registered devices. If you don't have devices in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
--
-## Add and view device twin tags using the Azure portal
-
-This section describes how to add and view device twin tags by using the [Azure portal](https://portal.azure.com).
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT Hub.
-
-2. Select the **Devices** tab in the left navigation.
-
-3. Select the desired devices, and then select **Assign Tags**.
-
- :::image type="content" source="./media/iot-hubs-manage-device-twin-tags/iot-hub-device-select-device-to-assign-tags.png" alt-text="Screenshot of selecting devices to assign tags.":::
-
-4. In the opened view, you can see the tags the devices already have. To add a new basic tag, provide a **name** and **value** for the tag. The format for the name and value pair is found in [Tags and properties format](iot-hub-devguide-device-twins.md#tags-and-properties-format). Select **Save** to save the tag.
-
- :::image type="content" source="./media/iot-hubs-manage-device-twin-tags/iot-hub-device-add-basic-tag.png" alt-text="Screenshot of assigning tags to devices screen.":::
-
-5. After saving, you can view the tags that were added by selecting **Assign Tags** again.
-
- :::image type="content" source="./media/iot-hubs-manage-device-twin-tags/iot-hub-device-view-basic-tag.png" alt-text="Screenshot of viewing tags added to devices.":::
-
-## Add and view nested tags
-1. Following the example above, you can add a nested tag by selecting the **Advanced** tab in **Assign Tags** and adding a nested JSON object with two values:
- ```json
- {
- "deploymentLocation": {
- "building": "43",
- "floor": "1"
- }
- }
- ```
-2. Select **Save**.
- :::image type="content" source="./media/iot-hubs-manage-device-twin-tags/iot-hub-device-twin-tag-add-nested-tag.png" alt-text="Screenshot of adding nested tags to devices.":::
-3. Select the devices again, and then select **Assign Tags** to view the newly added tags.
- :::image type="content" source="./media/iot-hubs-manage-device-twin-tags/iot-hub-device-twin-tag-view-nested-tag.png" alt-text="Screenshot of viewing nested tags to devices.":::
-
-## Filtering devices with device twin tags
-Device twin tags are a great way to group devices by type, location, and so on. You can manage your devices by filtering on device twin tags.
-1. Select **+ Add filter**, and then select **Device Tag** as the filter type.
-2. Enter the desired tag name and value, and then select **Apply** to retrieve the list of devices that match the criteria.
- :::image type="content" source="./media/iot-hubs-manage-device-twin-tags/iot-hub-device-twin-tag-filter.png" alt-text="Screenshot of filtering devices with tags.":::
-
-## Update and delete device twin tags from multiple devices using the Azure portal
-1. Select two or more devices, and then select **Assign Tags**.
-2. In the opened panel, you can update existing tags by typing the target tag name in the **Name** field, and the new string in the **Value** field.
-3. To delete a tag from multiple devices, type the target tag name in the **Name** field, and then select the **Delete Tags** button.
- :::image type="content" source="./media/iot-hubs-manage-device-twin-tags/iot-hub-device-twin-tag-bulk-delete.png" alt-text="Screenshot of marking tag for deletion.":::
-4. Select **Save** to delete the tag from the devices that contain the matching tag name.
-
-## Managing device twin tags using the Azure CLI
-The following section walks through several examples of tagging by using the Azure CLI. For a full reference, see the [device twin CLI documentation](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-update).
-
-1. At the command prompt, run the [login command](/cli/azure/get-started-with-azure-cli):
-
- ```azurecli
- az login
- ```
-
- Follow the instructions to authenticate using the code and sign in to your Azure account through a web browser.
-
-2. If you have multiple Azure subscriptions, signing in to Azure grants you access to all the Azure accounts associated with your credentials. Use the [az account list](/cli/azure/account) to view the full list of accounts:
- ```azurecli
- az account list
- ```
-
- Use the following command to select the subscription that you want to use to run the commands to create your IoT hub. You can use either the subscription name or ID from the output of the previous command:
-
- ```azurecli
- az account set --subscription {your subscription name or id}
- ```
-
-3. The following command adds a tag to the device twin of the specified device:
-
- ```azurecli
- az iot hub device-twin update -n {iothub_name} \
- -d {device_id} --tags '{"country": "USA"}'
- ```
-
-4. You can add complex nested tags by importing a JSON file or adding JSON directly to the input:
-
- ```azurecli
- az iot hub device-twin update --name {your iot hub name} \
- -d {device_id} --tags /path/to/file
- ```
- ```azurecli
- az iot hub device-twin update --name {your iot hub name} \
- -d {device_id} --tags '{"country":{"county":"king"}}'
- ```
-5. Use the same command on an existing tag to update its value:
- ```azurecli
- az iot hub device-twin update --name {your iot hub name} \
- -d {device_id} --tags '{"country": "Germany"}'
- ```
-6. The following command removes the previously added tag by setting its value to **null**:
- ```azurecli
- az iot hub device-twin update --name {your iot hub name} \
- -d {device_id} --tags '{"country": null}'
- ```
-
- > [!NOTE]
- > If you are using PowerShell or Cloud Shell in PowerShell mode, you need to add a backslash `\` to escape the double quotes. For example: `--tags '{\"country\":\"US\"}'`
-
-## Create jobs to set tags using Azure CLI
-For a full reference, see the [IoT Hub jobs CLI documentation](/cli/azure/iot/hub/job#az-iot-hub-job-create-examples).
-
-## Next steps
-
-Now that you have learned about device twins, you may be interested in the following IoT Hub developer guide topics:
-
-* [Understand and use module twins in IoT Hub](iot-hub-devguide-module-twins.md)
-* [Invoke a direct method on a device](iot-hub-devguide-direct-methods.md)
-* [Schedule jobs on multiple devices](iot-hub-devguide-jobs.md)
-
-To try out some of the concepts described in this article, see the following IoT Hub tutorials:
-
-* [How to use the device twin](device-twins-node.md)
-* [How to use device twin properties](tutorial-device-twins.md)
-
iot-hub Manage Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/manage-device-twins.md
+
+ Title: How to manage devices and modules using twins
+
+description: Use the Azure portal and Azure CLI to query and update device twins and module twins in your Azure IoT hub.
++++ Last updated : 08/14/2024++++
+# How to view and update devices based on device twin properties
+
+Use the Azure portal and Azure CLI to manage devices through device twins and module twins. This article focuses on device twins for simplicity, but all of the concepts and processes work in a similar way for module twins.
+
+This article describes device twin management tasks available in the Azure portal or Azure CLI to manage device twins remotely. For information about developing device applications to handle device twin changes, see [Get started with device twins](./device-twins-dotnet.md).
+
+In IoT Hub, a *device twin* is a JSON document that stores state information. Every *device identity* is automatically associated with a device twin when it's created. A backend app or user can update two elements of a device twin:
+
+* *Desired properties*: Desired properties are half of a linked set of state information. A backend app or user can update the desired properties on a twin to communicate a desired state change, while a device can update the *reported properties* to communicate its current state.
+* *Tags*: You can use device twin tags to organize and manage devices in your IoT solutions. You can set tags for any meaningful category, like device type, location, or function.
+
+For more information, see [Understand and use device twins in IoT Hub](./iot-hub-devguide-device-twins.md) or [Understand and use module twins in IoT Hub](./iot-hub-devguide-module-twins.md).
++
+## Prerequisites
+
+Prepare the following prerequisites before you begin.
+
+### [Azure portal](#tab/portal)
+
+* An IoT hub in your Azure subscription. If you don't have a hub yet, follow the steps in [Create an IoT hub](create-hub.md).
+
+* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
+
+### [Azure CLI](#tab/cli)
+
+* The Azure CLI, version 2.36 or later. To find the version, run `az --version`. To install or upgrade the Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+ You can also run the commands in this article using the [Azure Cloud Shell](../cloud-shell/overview.md), an interactive CLI shell that runs in your browser or in an app such as Windows Terminal. If you use the Cloud Shell, you don't need to install anything.
+
+* An IoT hub in your Azure subscription. If you don't have a hub yet, follow the steps in [Create an IoT hub](create-hub.md).
+
+* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
+++
+## Understand tags for device organization
+
+Device twin tags can be used as a powerful tool to help you organize your devices. When you have multiple kinds of devices within your IoT solutions, you can use tags to set types, locations, etc. For example:
+
+```json
+{
+ "deviceId": "mydevice1",
+ "status": "enabled",
+ "connectionState": "Connected",
+ "cloudToDeviceMessageCount": 0,
+ "authenticationType": "sas",
+ "tags": {
+ "deploymentLocation": {
+ "building": "43",
+ "floor": "1"
+ },
+ "deviceType":"HDCamera"
+ },
+ "properties": {
+ ...
+ }
+}
+```
+
+## View and update device twins
+
+Once a device identity is created, a device twin is implicitly created in IoT Hub. You can use the Azure portal or Azure CLI to retrieve the device twin of a given device. You can also add, edit, or remove tags and desired properties.
+
+### [Azure portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
+
+1. In your IoT hub, select **Devices** from the **Device management** section of the navigation menu.
+
+ On the **Devices** page, you see a list of all devices registered in your IoT hub. If any of the devices already have tags in their device twins, those tags are shown in the **Tags** column.
+
+1. Select the name of the device that you want to manage.
+
+ >[!TIP]
+ >If you're updating tags, you can select multiple devices then select **Assign tags** to manage them as a group.
+ >
+ >:::image type="content" source="./media/manage-device-twins/multi-select-assign-tags.png" alt-text="A screenshot that shows selecting multiple devices in the Azure portal to assign tags as a group.":::
+
+1. The device details page displays any current tags for the selected device. Select **edit** next to the **Tags** parameter to add, update, or remove tags.
+
+ :::image type="content" source="./media/manage-device-twins/edit-tags.png" alt-text="A screenshot that shows opening the tags editing option in the Azure portal.":::
+
+ >[!TIP]
+ >To add or update nested tags, select the **Advanced** tab and provide the JSON.
+ >
+ >:::image type="content" source="./media/manage-device-twins/edit-tags-advanced.png" alt-text="A screenshot that shows using the advanced tags editor to provide JSON text.":::
+
+1. Select **Device twin** to view and update the device twin JSON.
+
+ You can type directly in the text box to update tags or desired properties. To remove a tag or desired property, set the value of the item to `null`.
+
+1. Select **Save** to save your changes.
+
+1. Back on the device details page, select **Refresh** to update the page to reflect your changes.
+
+If your device has any module identities associated with it, those modules are displayed on the device details page as well. Select a module name, then select **Module identity twin** to view and update the module twin JSON.
+
+### [Azure CLI](#tab/cli)
+
+Use the [az iot hub device-twin](/cli/azure/iot/hub/device-twin) or [az iot hub module-twin](/cli/azure/iot/hub/module-twin) sets of commands to view and update twins.
+
+The [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command returns the device twin JSON. For example:
+
+```azurecli-interactive
+az iot hub device-twin show --device-id <DEVICE_ID> --hub-name <IOTHUB_NAME>
+```
+
+The [az iot hub device-twin update](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-update) command patches tags or desired properties in a device twin. For example:
+
+```azurecli-interactive
+az iot hub device-twin update --device-id <DEVICE_ID> --hub-name <IOTHUB_NAME> --tags <INLINE_JSON_OR_PATH_TO_JSON_FILE>
+```
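+
+For instance, to apply the tags shown earlier to the device `mydevice1`, you might run a command like the following example. The hub name `MyIoTHub` is a placeholder for your own hub name:
+
+```azurecli-interactive
+# MyIoTHub is a placeholder; mydevice1 is the device from the earlier tags example
+az iot hub device-twin update --device-id mydevice1 --hub-name MyIoTHub --tags '{"deploymentLocation": {"building": "43", "floor": "1"}, "deviceType": "HDCamera"}'
+```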
+
+The [az iot hub device-twin replace](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-replace) command replaces an entire device twin. For example:
+
+```azurecli-interactive
+az iot hub device-twin replace --device-id <DEVICE_ID> --hub-name <IOTHUB_NAME> --json <INLINE_JSON_OR_PATH_TO_JSON_FILE>
+```
+
+>[!TIP]
+>If you're using PowerShell, add a backslash `\` to escape any double quotes. For example: `--tags '{\"country\":\"US\"}'`.
+++
+## Query for device twins
+
+IoT Hub exposes the device twins for your IoT hub as a document collection called **devices**. You can query devices based on their device twin values.
+
+This section describes how to run twin queries in the Azure portal and Azure CLI. To learn how to write twin queries, see [Queries for IoT Hub device and module twins](./query-twins.md).
+
+### [Azure portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
+
+1. In your IoT hub, select **Devices** from the **Device management** section of the navigation menu.
+
+1. You can either use a filter or a query to find devices based on their device twin details:
+
+ * **Find devices using a filter**:
+
+ 1. Finding devices using a filter is the default view in the Azure portal. If you don't see these fields, select **Find devices using a filter**.
+
+ 1. Select **Add filter**, and then select **Device tag** as the filter type from the drop-down menu.
+
+      1. Enter the desired tag name and value, and then select **Apply** to retrieve the list of devices that match the criteria.
+
+ :::image type="content" source="./media/manage-device-twins/filter-device-twin-tags.png" alt-text="Screenshot of filtering devices with tags.":::
+
+ * **Find devices using a query**:
+
+ 1. Select **Find devices using a query**.
+
+ 1. Enter your query into the text box, then select **Run query**.
+
+ :::image type="content" source="./media/manage-device-twins/run-query.png" alt-text="Screenshot that shows using the device query filter in the Azure portal.":::
+
+### [Azure CLI](#tab/cli)
+
+Use the [az iot hub query](/cli/azure/iot/hub#az-iot-hub-query) command to return device information based on device twin or module twin queries.
+
+```azurecli
+az iot hub query --hub-name <IOTHUB_NAME> --query-command "SELECT * FROM devices WHERE <QUERY_TEXT>"
+```
+
+The same command can also query module twins by adjusting the query text to target the `devices.modules` collection:
+
+```azurecli
+az iot hub query --hub-name <IOTHUB_NAME> --query-command "SELECT * FROM devices.modules WHERE <QUERY_TEXT>"
+```
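+
+For example, a query that uses the device twin tags shown earlier to return all registered cameras might look like the following command. The hub name `MyIoTHub` is a placeholder:
+
+```azurecli
+# MyIoTHub is a placeholder hub name; tags.deviceType matches the earlier tags example
+az iot hub query --hub-name MyIoTHub --query-command "SELECT * FROM devices WHERE tags.deviceType = 'HDCamera'"
+```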
+++
+## Update device twins using jobs
+
+The *jobs* capability can execute device twin updates against a set of devices at a scheduled time. For more information, see [Schedule jobs on multiple devices](./iot-hub-devguide-jobs.md).
+
+### [Azure portal](#tab/portal)
+
+Jobs aren't supported in the Azure portal. Instead, use the Azure CLI.
+
+### [Azure CLI](#tab/cli)
+
+Use the [az iot hub job](/cli/azure/iot/hub/job) set of commands to create, view, or cancel jobs.
+
+For example, the following command updates desired twin properties on a set of devices at a specific time:
+
+```azurecli
+az iot hub job create --job-id <JOB_NAME> --job-type scheduleUpdateTwin -n <IOTHUB_NAME> --twin-patch <INLINE_JSON_OR_PATH_TO_JSON_FILE> --start-time "<ISO_8601_DATETIME>" --query-condition "<QUERY_TEXT>"
+```
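+
+For instance, a job that applies the tags from the earlier example to a single device might look like the following sketch. The job ID, hub name, start time, and query condition are placeholders that you replace with your own values:
+
+```azurecli
+# myTagUpdateJob, MyIoTHub, the start time, and the query condition are placeholders
+az iot hub job create --job-id myTagUpdateJob --job-type scheduleUpdateTwin -n MyIoTHub --twin-patch '{"tags": {"deploymentLocation": {"building": "43", "floor": "1"}}}' --start-time "2025-01-01T00:00:00Z" --query-condition "deviceId IN ['mydevice1']"
+```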
+
+>[!TIP]
+>If you're using PowerShell, add a backslash `\` to escape any double quotes. For example: `--tags '{\"country\":\"US\"}'`.
iot-hub Module Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-cli.md
- Title: Get started with module identity and module twins (CLI)-
-description: Learn how to create Azure IoT Hub module identities and update module twin properties using the Azure CLI.
---- Previously updated : 02/17/2023----
-# Get started with IoT Hub module identities and module twins using Azure CLI
--
-[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identities and device twins, but provide finer granularity. Just as Azure IoT Hub device identities and device twins enable a back-end application to configure a device and provide visibility on the device's conditions, module identities and module twins provide these capabilities for the individual components of a device. On capable devices with multiple components, such as operating system devices or firmware devices, module identities and module twins allow for isolated configuration and conditions for each component.
--
-This article shows you how to create an Azure CLI session in which you:
-
-* Create a device identity, then create a module identity for that device.
-
-* Update a set of desired properties for the module twin associated with the module identity.
-
-## Prerequisites
-
-* Azure CLI. You can also run the commands in this article using the [Azure Cloud Shell](../cloud-shell/overview.md), an interactive CLI shell that runs in your browser or in an app such as Windows Terminal. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this article requires Azure CLI version 2.36 or later. Run `az --version` to find the version. To locally install or upgrade Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
-
-* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
-
-## Module authentication
-
-You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example:
-
-```bash
-openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01"
-```
-
-## Prepare the Cloud Shell
-
-If you want to use the Azure Cloud Shell, you must first launch and configure it. If you use the CLI locally, skip to the [Prepare a CLI session](#prepare-a-cli-session) section.
-
-1. Select the **Cloud Shell** icon from the page header in the Azure portal.
-
- :::image type="content" source="./media/module-twins-cli/cloud-shell-button.png" alt-text="Screenshot of the global controls from the page header of the Azure portal, highlighting the Cloud Shell icon.":::
-
- > [!NOTE]
- > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
-
-2. Use the environment selector in the Cloud Shell toolbar to select your preferred CLI environment. This article uses the **Bash** environment. You can also use the **PowerShell** environment.
-
- > [!NOTE]
- > Some commands require different syntax or formatting in the **Bash** and **PowerShell** environments. For more information, see [Tips for using the Azure CLI successfully](/cli/azure/use-cli-effectively?tabs=bash%2Cbash2).
-
- :::image type="content" source="./media/module-twins-cli/cloud-shell-environment.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the environment selector in the toolbar.":::
-
-## Prepare a CLI session
-
-Next, you must prepare an Azure CLI session. If you're using the Cloud Shell, you run the session in a Cloud Shell tab. If using a local CLI client, you run the session in a CLI instance.
-
-1. If you're using the Cloud Shell, skip to the next step. Otherwise, run the [az login](/cli/azure/reference-index#az-login) command in the CLI session to sign in to your Azure account.
-
- If you're using the Cloud Shell, you're automatically signed into your Azure account. All communication between your Azure CLI session and your IoT hub is authenticated and encrypted. As a result, this article doesn't need extra authentication that you'd use with a real device, such as a connection string. For more information about signing in with Azure CLI, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
-
- ```azurecli
- az login
- ```
-
-1. In the CLI session, run the [az extension add](/cli/azure/extension#az-extension-add) command. The command adds the Microsoft Azure IoT Extension for Azure CLI to your CLI shell. The extension adds IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS) specific commands to Azure CLI. After you install the extension, you don't need to install it again in any Cloud Shell session.
-
- ```azurecli
- az extension add --name azure-iot
- ```
-
- [!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)]
-
-## Create a device identity and module identity
-
-In this section, you create a device identity for your IoT hub in the CLI session, and then create a module identity using that device identity. You can create up to 50 module identities under each device identity.
-
-To create a device identity and module identity:
-
-1. In the CLI session, run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create) command, replacing the following placeholders with their corresponding values. This command creates the device identity for your module.
-
- *{DeviceName}*. The name of your device.
-
- *{HubName}*. The name of your IoT hub.
-
- ```azurecli
- az iot hub device-identity create --device-id {DeviceName} --hub-name {HubName}
- ```
-
-1. In the CLI session, run the [az iot hub module-identity create](/cli/azure/iot/hub/module-identity#az-iot-hub-module-identity-create) command, replacing the following placeholders with their corresponding values. This command creates the module identity for your module, under the device identity you created in the previous step.
-
- *{DeviceName}*. The name of your device.
-
- *{HubName}*. The name of your IoT hub.
-
- *{ModuleName}*. The name of your device's module.
-
- ```azurecli
- az iot hub module-identity create --device-id {DeviceName} --hub-name {HubName} \
- --module-id {ModuleName}
- ```
-
-## Update the module twin
-
-Once a module identity is created, a module twin is implicitly created in IoT Hub. In this section, you use the CLI session to update a set of desired properties on the module twin associated with the module identity you created in the previous section.
-
-1. In the CLI session, run the [az iot hub module-twin update](/cli/azure/iot/hub/module-twin#az-iot-hub-module-twin-update) command, replacing the following placeholders with their corresponding values. In this example, we're updating multiple desired properties on the module twin for the module identity we created in the previous section.
-
- *{DeviceName}*. The name of your device.
-
- *{HubName}*. The name of your IoT hub.
-
- *{ModuleName}*. The name of your device's module.
-
- ```azurecli
- az iot hub module-twin update --device-id {DeviceName} --hub-name {HubName} \
- --module-id {ModuleName} \
- --desired '{"conditions":{"temperature":{"warning":75, "critical":100}}}'
- ```
-
-1. In the CLI session, confirm that the JSON response shows the results of the update operation. In the following JSON response example, we used `SampleDevice` and `SampleModule` for the `{DeviceName}` and `{ModuleName}` placeholders, respectively, in the `az iot hub module-twin update` CLI command.
-
- ```json
- {
- "authenticationType": "sas",
- "capabilities": null,
- "cloudToDeviceMessageCount": 0,
- "connectionState": "Disconnected",
- "deviceEtag": "Mzg0OEN1NzW=",
- "deviceId": "SampleDevice",
- "deviceScope": null,
- "etag": "AAAAAAAAAAI=",
- "lastActivityTime": "0001-01-01T00:00:00+00:00",
- "modelId": "",
- "moduleId": "SampleModule",
- "parentScopes": null,
- "properties": {
- "desired": {
- "$metadata": {
- "$lastUpdated": "2023-02-17T21:26:10.5835633Z",
- "$lastUpdatedVersion": 2,
- "conditions": {
- "$lastUpdated": "2023-02-17T21:26:10.5835633Z",
- "$lastUpdatedVersion": 2,
- "temperature": {
- "$lastUpdated": "2023-02-17T21:26:10.5835633Z",
- "$lastUpdatedVersion": 2,
- "critical": {
- "$lastUpdated": "2023-02-17T21:26:10.5835633Z",
- "$lastUpdatedVersion": 2
- },
- "warning": {
- "$lastUpdated": "2023-02-17T21:26:10.5835633Z",
- "$lastUpdatedVersion": 2
- }
- }
- }
- },
- "$version": 2,
- "conditions": {
- "temperature": {
- "critical": 100,
- "warning": 75
- }
- }
- },
- "reported": {
- "$metadata": {
- "$lastUpdated": "0001-01-01T00:00:00Z"
- },
- "$version": 1
- }
- },
- "status": "enabled",
- "statusReason": null,
- "statusUpdateTime": "0001-01-01T00:00:00+00:00",
- "tags": null,
- "version": 3,
- "x509Thumbprint": {
- "primaryThumbprint": null,
- "secondaryThumbprint": null
- }
- }
- ```
-
-## Next steps
-
-To learn how to use Azure CLI to extend your IoT solution and schedule updates on devices, see [Schedule and broadcast jobs](schedule-jobs-cli.md).
-
-To continue getting started with IoT Hub and device management patterns, such as end-to-end image-based update, see [Device Update for Azure IoT Hub article using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
iot-operations Concept Dataflow Conversions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-dataflow-conversions.md
Title: Convert data using dataflow conversions
+ Title: Convert data by using dataflow conversions
description: Learn about dataflow conversions for transforming data in Azure IoT Operations.
Last updated 08/03/2024
#CustomerIntent: As an operator, I want to understand how to use dataflow conversions to transform data.
-# Convert data using dataflow conversions
+# Convert data by using dataflow conversions
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)] You can use dataflow conversions to transform data in Azure IoT Operations. The *conversion* element in a dataflow is used to compute values for output fields. You can use input fields, available operations, data types, and type conversions in dataflow conversions.
-The dataflow *conversion* element is used to compute values for output fields:
+The dataflow conversion element is used to compute values for output fields:
```yaml - inputs:
The dataflow *conversion* element is used to compute values for output fields:
There are several aspects to understand about conversions:
-* Reference to input fields: How to reference values from input fields in the conversion formula.
-* Available operations: Operations that can be utilized in conversions. For example, addition, subtraction, multiplication, and division.
-* Data types: Types of data that a formula can process and manipulate. For example, integer, floating-point, string.
-* Type conversions: How data types are converted between the input field values, the formula evaluation, and the output fields.
+* **Reference to input fields:** How to reference values from input fields in the conversion formula.
+* **Available operations:** Operations that can be utilized in conversions. For example, addition, subtraction, multiplication, and division.
+* **Data types:** Types of data that a formula can process and manipulate. For example, integer, floating point, and string.
+* **Type conversions:** How data types are converted between the input field values, the formula evaluation, and the output fields.
## Input fields
In conversions, formulas can operate on static values like a number such as *25*
expression: ($1, $2, $3, $4) ```
-In this example, the conversion results in an array containing the values of `[Max, Min, Mid.Avg, Mid.Mean]`. The comments in the YAML file (`# - $1`, `# - $2`) are optional but help clarify the connection between each field property and its role in the conversion formula.
+In this example, the conversion results in an array containing the values of `[Max, Min, Mid.Avg, Mid.Mean]`. The comments in the YAML file (`# - $1`, `# - $2`) are optional, but they help to clarify the connection between each field property and its role in the conversion formula.
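+For reference, a complete version of such a mapping might look like the following sketch. The output field name `Measurements` is illustrative:
+```yaml
+- inputs:
+    - Max        # - $1
+    - Min        # - $2
+    - Mid.Avg    # - $3
+    - Mid.Mean   # - $4
+  output: Measurements  # illustrative output field name
+  expression: ($1, $2, $3, $4)
+```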
## Data types
-Different serialization formats support various data types. For instance, JSON offers a few primitive types: string, number, boolean, and null. Also included are arrays of these primitive types. In contrast, other serialization formats like Avro have a more complex type system, including integers with multiple bit field lengths and timestamps with different resolutions. For example, milliseconds and microseconds.
+Different serialization formats support various data types. For instance, JSON offers a few primitive types: string, number, Boolean, and null. Also included are arrays of these primitive types. In contrast, other serialization formats like Avro have a more complex type system, including integers with multiple bit field lengths and timestamps with different resolutions. Examples are milliseconds and microseconds.
When the mapper reads an input property, it converts it into an internal type. This conversion is necessary for holding the data in memory until it's written out into an output field. The conversion to an internal type happens regardless of whether the input and output serialization formats are the same. The internal representation utilizes the following data types:
-| Type | Description |
-|-|-|
-| bool | Logical true/false |
-| integer | Stored as 128-bit signed integer |
-| float | Stored as 64-bit floating point number |
-| string | A UTF-8 string |
-| bytes | Binary data, a string of 8-bit unsigned values |
-| date time | UTC or local time with nanosecond resolution |
-| time | Time of day with nanosecond resolution |
-| duration | A duration with nanosecond resolution |
-| array | An array of any types listed previously |
-| map | A vector of (key, value) pairs of any types listed previously |
+| Type | Description |
+||-|
+| `bool` | Logical true/false. |
+| `integer` | Stored as 128-bit signed integer. |
+| `float` | Stored as 64-bit floating point number. |
+| `string` | A UTF-8 string. |
+| `bytes` | Binary data, a string of 8-bit unsigned values. |
+| `datetime` | UTC or local time with nanosecond resolution. |
+| `time` | Time of day with nanosecond resolution. |
+| `duration` | A duration with nanosecond resolution. |
+| `array` | An array of any types listed previously. |
+| `map` | A vector of (key, value) pairs of any types listed previously. |
### Input record fields
When an input record field is read, its underlying type is converted into one of these internal type variants. The internal representation is versatile enough to handle most input types with minimal or no conversion. However, some input types require conversion or are unsupported. Some examples:
-* *Avro's UUID type* is converted to a *string*, as there's no specific *UUID* type in the internal representation.
-* *Avro's Decimal type* isn't supported by the mapper, thus fields of this type can't be included in mappings.
-* *Avro's Duration type* conversion can vary. If the *months* field is set, it's unsupported. If only *days* and *milliseconds* are set, it's converted to the internal *duration* representation.
+* **Avro** `UUID` **type**: It's converted to a `string` because there's no specific `UUID` type in the internal representation.
+* **Avro** `decimal` **type**: It isn't supported by the mapper, so fields of this type can't be included in mappings.
+* **Avro** `duration` **type**: Conversion can vary. If the `months` field is set, it's unsupported. If only `days` and `milliseconds` are set, it's converted to the internal `duration` representation.
-For some formats, surrogate types are used. For example, JSON doesn't have a *datetime* type and instead stores *datetime* values as strings formatted according to ISO8601. When the mapper reads such a field, the internal representation remains a string.
+For some formats, surrogate types are used. For example, JSON doesn't have a `datetime` type and instead stores `datetime` values as strings formatted according to ISO8601. When the mapper reads such a field, the internal representation remains a string.
### Output record fields
-The mapper is designed to be flexible by converting internal types into output types to accommodate scenarios where data comes from a serialization format with a limited type system. The following are some examples of how conversions are handled:
+The mapper is designed to be flexible by converting internal types into output types to accommodate scenarios where data comes from a serialization format with a limited type system. The following examples show how conversions are handled:
-* *Numeric types*: These can be converted to other representations, even if it means losing precision. For example, a 64-bit floating-point number (*f64*) can be converted into a 32-bit integer (*i32*).
-* *Strings to numbers*: If the incoming record contains a string like "123" and the output field is a 32-bit integer, the mapper converts and writes the value as a number.
-* *Strings to other types*:
- * If the output field is a *datetime*, the mapper attempts to parse the string as an ISO8601 formatted *datetime*.
- * If the output field is *binary/bytes*, the mapper tries to deserialize the string from a base64 encoded string.
-* *Boolean values*:
- * Converted to 0/1 if the output field is numerical.
- * Converted to "true"/"false" if the output field is string.
+* **Numeric types:** These types can be converted to other representations, even if it means losing precision. For example, a 64-bit floating-point number (`f64`) can be converted into a 32-bit integer (`i32`).
+* **Strings to numbers:** If the incoming record contains a string like `123` and the output field is a 32-bit integer, the mapper converts and writes the value as a number.
+* **Strings to other types:**
+ * If the output field is `datetime`, the mapper attempts to parse the string as an ISO8601 formatted `datetime`.
+ * If the output field is `binary/bytes`, the mapper tries to deserialize the string from a base64-encoded string.
+* **Boolean values:**
+ * Converted to `0`/`1` if the output field is numerical.
+ * Converted to `true`/`false` if the output field is string.
### Explicit type conversions
-While the automatic conversions operate as one might expect based on common implementation practices, there are instances where the right conversion can't be determined automatically and results in an *unsupported* error. To address these situations, several conversion functions are available to explicitly define how data should be transformed. These functions provide more control over how data is converted and ensure that data integrity is maintained even when automatic methods fall short.
+Although the automatic conversions operate as you might expect based on common implementation practices, there are instances where the right conversion can't be determined automatically and results in an *unsupported* error. To address these situations, several conversion functions are available to explicitly define how data should be transformed. These functions provide more control over how data is converted and help maintain data integrity even when automatic methods fall short.
-### Using conversion formula with types
+### Use a conversion formula with types
-In mappings, an optional formula can specify how data from the input is processed before being written to the output field. If no formula is specified, the mapper copies the input field to the output using the internal type and conversion rules.
+In mappings, an optional formula can specify how data from the input is processed before being written to the output field. If no formula is specified, the mapper copies the input field to the output by using the internal type and conversion rules.
If a formula is specified, the data types available for use in formulas are limited to:
If a formula is specified, the data types available for use in formulas are limi
* Floating-point numbers * Strings * Booleans
-* Arrays of the above types
+* Arrays of the preceding types
* Missing value
-*Map* and *Byte* can't participate in formulas.
+`map` and `bytes` can't participate in formulas.
-Types related to time (*date time*, *time*, and *duration*) are converted into integer values representing time in seconds. After formula evaluation, results are stored in the internal representation and not converted back. For example, a *datetime* converted to seconds remains an integer. If the value is to be used in date-time fields, an explicit conversion method must be applied. For example, converting the value into an ISO8601 string that is automatically converted to the date-time type of the output serialization format.
+Types related to time (`datetime`, `time`, and `duration`) are converted into integer values that represent time in seconds. After formula evaluation, results are stored in the internal representation and not converted back. For example, `datetime` converted to seconds remains an integer. If the value will be used in `datetime` fields, an explicit conversion method must be applied. An example is converting the value into an ISO8601 string that's automatically converted to the `datetime` type of the output serialization format.
-### Using irregular types
+### Use irregular types
-Special considerations apply to types like arrays and *missing value*:
+Special considerations apply to types like arrays and *missing value*.
### Arrays
-Arrays can be processed using aggregation functions to compute a single value from multiple elements. For example, using the input record:
+Arrays can be processed by using aggregation functions to compute a single value from multiple elements. For example, by using the input record:
```json {
With the mapping:
expression: min($1) ```
-This configuration selects the smallest value from the *Measurements* array for the output field.
+This configuration selects the smallest value from the `Measurements` array for the output field.
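+Spelled out in full, such a mapping might look like the following sketch. The output field name is illustrative:
+```yaml
+- inputs:
+    - Measurements          # - $1
+  output: Measurements.Min  # illustrative output field name
+  expression: min($1)
+```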
-It's also possible to use functions that result a new array:
+It's also possible to use functions that result in a new array:
```yaml - inputs:
Arrays can also be created from multiple single values:
expression: ($1, $2, $3, $4) ```
-This mapping creates an array containing the minimum, maximum, average, and mean.
+This mapping creates an array that contains the minimum, maximum, average, and mean.
### Missing value
-*Missing value* is a special type used in scenarios such as:
+Missing value is a special type used in scenarios such as:
* Handling missing fields in the input by providing an alternative value. * Conditionally removing a field based on its presence.
-Example mapping using *missing value*:
+Example mapping that uses a missing value:
```json {
Example mapping using *missing value*:
} ```
-The input record contains `BaseSalary` field, but possibly that is optional. Let's say that if the field is missing, a value must be added from a contextualization dataset:
+The input record contains the `BaseSalary` field, but that field might be optional. Let's say that if the field is missing, a value must be added from a contextualization dataset:
```json {
The input record contains `BaseSalary` field, but possibly that is optional. Let
} ```
-A mapping can check if the field is present in the input record. If found, the output receives that existing value. Otherwise, the output receives the value from the context dataset. For example:
+A mapping can check if the field is present in the input record. If the field is found, the output receives that existing value. Otherwise, the output receives the value from the context dataset. For example:
```yaml - inputs:
A mapping can check if the field is present in the input record. If found, the o
The `conversion` uses the `if` function that has three parameters:
-* The first parameter is a condition. In the example, it checks if the `BaseSalary` field of the input field (aliased as `$1`) is the *missing value*.
+* The first parameter is a condition. In the example, it checks if the `BaseSalary` field of the input field (aliased as `$1`) is the missing value.
* The second parameter is the result of the function if the condition in the first parameter is true. In this example, it's the `BaseSalary` field of the contextualization dataset (aliased as `$2`).
* The third parameter is the value for the condition if the first parameter is false.
## Available functions
-Functions can be used in the conversion formula to perform various operations.
+Functions can be used in the conversion formula to perform various operations:
* `min` to select a single item from an array * `if` to select between values
-* string manipulation (for example, `uppercase()`)
-* explicit conversion (for example, `ISO8601_datetime`)
-* aggregation (for example, `avg()`)
+* String manipulation (for example, `uppercase()`)
+* Explicit conversion (for example, `ISO8601_datetime`)
+* Aggregation (for example, `avg()`)
## Available operations
-Dataflows offer a wide range of out-of-the-box (OOTB) conversion functions that allow users to easily perform unit conversions without the need for complex calculations. These predefined functions cover common conversions such as temperature, pressure, length, weight, and volume. The following is a list of the available conversion functions, along with their corresponding formulas and function names:
+Dataflows offer a wide range of out-of-the-box conversion functions that allow users to easily perform unit conversions without the need for complex calculations. These predefined functions cover common conversions such as temperature, pressure, length, weight, and volume. The following list shows the available conversion functions, along with their corresponding formulas and function names:
-| Conversion | Formula | Function Name |
+| Conversion | Formula | Function name |
| | | |
-| Celsius to Fahrenheit | F = (C * 9/5) + 32 | cToF |
-| PSI to Bar | Bar = PSI * 0.0689476 | psiToBar |
-| Inch to CM | CM = Inch * 2.54 | inToCm |
-| Foot to Meter | Meter = Foot * 0.3048 | ftToM |
-| Lbs to KG | KG = Lbs * 0.453592 | lbToKg |
-| Gallons to Liters | Liters = Gallons * 3.78541 | galToL |
+| Celsius to Fahrenheit | F = (C * 9/5) + 32 | `cToF` |
+| PSI to bar | Bar = PSI * 0.0689476 | `psiToBar` |
+| Inch to cm | Cm = inch * 2.54 | `inToCm` |
+| Foot to meter | Meter = foot * 0.3048 | `ftToM` |
+| Lbs to kg | Kg = lbs * 0.453592 | `lbToKg` |
+| Gallons to liters | Liters = gallons * 3.78541 | `galToL` |
In addition to these unidirectional conversions, we also support the reverse calculations:
-| Conversion | Formula | Function Name |
+| Conversion | Formula | Function name |
| | | |
-| Fahrenheit to Celsius | C = (F - 32) * 5/9 | fToC |
-| Bar to PSI | PSI = Bar / 0.0689476 | barToPsi |
-| CM to Inch | Inch = CM / 2.54 | cmToIn |
-| Meter to Foot | Foot = Meter / 0.3048 | mToFt |
-| KG to Lbs | Lbs = KG / 0.453592 | kgToLb |
-| Liters to Gallons | Gallons = Liters / 3.78541 | lToGal |
+| Fahrenheit to Celsius | C = (F - 32) * 5/9 | `fToC` |
+| Bar to PSI | PSI = bar / 0.0689476 | `barToPsi` |
+| Cm to inch | Inch = cm / 2.54 | `cmToIn` |
+| Meter to foot | Foot = meter / 0.3048 | `mToFt` |
+| Kg to lbs | Lbs = kg / 0.453592 | `kgToLb` |
+| Liters to gallons | Gallons = liters / 3.78541 | `lToGal` |
-These functions are designed to simplify the conversion process, allowing users to input values in one unit and receive the corresponding value in another unit effortlessly.
+These functions are designed to simplify the conversion process. They allow users to input values in one unit and receive the corresponding value in another unit effortlessly.
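+As a sketch, one of these functions can be applied in a mapping expression as follows. The temperature field names are illustrative:
+```yaml
+- inputs:
+    - Temperature.Celsius     # - $1 (illustrative field name)
+  output: Temperature.Fahrenheit
+  expression: cToF($1)
+```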
-Additionally, we provide a scaling function to scale the range of value to the user-defined range. Example-`scale($1,0,10,0,100)`the input value is scaled from the range 0 to 10 to the range 0 to 100.
-
-Moreover, users have the flexibility to define their own conversion functions using simple mathematical formulas. Our system supports basic operators such as addition (`+`), subtraction (`-`), multiplication (`*`), and division (`/`). These operators follow standard rules of precedence (for example, multiplication and division are performed before addition and subtraction), which can be adjusted using parentheses to ensure the correct order of operations. This capability empowers users to customize their unit conversions to meet specific needs or preferences, enhancing the overall utility and versatility of the system.
+We also provide a scaling function to scale a value from one range to a user-defined range. For example, with `scale($1,0,10,0,100)`, the input value is scaled from the range 0 to 10 to the range 0 to 100.
+Moreover, users have the flexibility to define their own conversion functions by using simple mathematical formulas. Our system supports basic operators such as addition (`+`), subtraction (`-`), multiplication (`*`), and division (`/`). These operators follow standard rules of precedence. For example, multiplication and division are performed before addition and subtraction. Precedence can be adjusted by using parentheses to ensure the correct order of operations. This capability empowers users to customize their unit conversions to meet specific needs or preferences, enhancing the overall utility and versatility of the system.
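+For example, a user-defined conversion that relies only on the basic operators and parentheses might look like the following sketch. The field names are illustrative:
+```yaml
+- inputs:
+    - Temperature.Fahrenheit  # - $1 (illustrative field name)
+  output: Temperature.Celsius
+  expression: ($1 - 32) * 5 / 9
+```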
For more complex calculations, functions like `sqrt` (which finds the square root of a number) are also available.
-### Available arithmetic, comparison, and boolean operators grouped by precedence
+### Available arithmetic, comparison, and Boolean operators grouped by precedence
| Operator | Description |
|-|-|
| ^ | Exponentiation: $1 ^ 3 |
-Since `Exponentiation` has the highest precedence, it's executed first unless parentheses override this order:
+Because `Exponentiation` has the highest precedence, it's executed first unless parentheses override this order:
* `$1 * 2 ^ 3` is interpreted as `$1 * 8` because the `2 ^ 3` part is executed first, before multiplication. * `($1 * 2) ^ 3` processes the multiplication before exponentiation.
Since `Exponentiation` has the highest precedence, it's executed first unless pa
`Negation` and `Logical not` have high precedence, so they always stick to their immediate neighbor, except when exponentiation is involved:
-* `-$1 * 2` negates $1 first, then multiplies.
-* `-($1 * 2)` multiplies, then negates the result
+* `-$1 * 2` negates `$1` first, and then multiplies.
+* `-($1 * 2)` multiplies, and then negates the result.
| Operator | Description |
|-|-|
Since `Exponentiation` has the highest precedence, it's executed first unless pa
| + | Addition for numeric values, concatenation for strings |
| - | Subtraction |
-`Addition` and `Subtraction` are considered weaker operations compared to those in the previous group:
+`Addition` and `Subtraction` are considered weaker operations compared to the operations in the previous group:
-* `$1 + 2 * 3` results in `$1 + 6`, as `2 * 3` is executed first due to the higher precedence of `Multiplication`.
-* `($1 + 2) * 3` prioritizes the `addition` before `multiplication`.
+* `$1 + 2 * 3` results in `$1 + 6`. The `2 * 3` part is executed first because of the higher precedence of `Multiplication`.
+* `($1 + 2) * 3` prioritizes `Addition` before `Multiplication`.
| Operator | Description |
|-|-|
Since `Exponentiation` has the highest precedence, it's executed first unless pa
| == | Equal to |
| != | Not equal to |
-`Comparisons` operate on numeric, boolean, and string values. Since they have lower precedence than arithmetic operators, no parentheses are needed to compare results effectively:
+`Comparisons` operate on numeric, Boolean, and string values. Because they have lower precedence than arithmetic operators, no parentheses are needed to compare results effectively:
* `$1 * 2 <= $2` is equivalent to `($1 * 2) <= $2`.
iot-operations Concept Dataflow Enrich https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-dataflow-enrich.md
Title: Enrich data using dataflows
+ Title: Enrich data by using dataflows
description: Use contextualization datasets to enrich data in Azure IoT Operations dataflows.
Last updated 08/13/2024
#CustomerIntent: As an operator, I want to understand how to create a dataflow to enrich data sent to endpoints.
-# Enrich data using dataflows
+# Enrich data by using dataflows
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-You can enrich data using the *contextualization datasets* function. When processing incoming records, these datasets can be queried based on conditions that relate to the fields of the incoming record. This capability allows for dynamic interactions where data from these datasets can be used to supplement information in the output fields and participate in complex calculations during the mapping process.
+You can enrich data by using the *contextualization datasets* function. When incoming records are processed, you can query these datasets based on conditions that relate to the fields of the incoming record. This capability allows for dynamic interactions. Data from these datasets can be used to supplement information in the output fields and participate in complex calculations during the mapping process.
For example, consider the following dataset with a few records, represented as JSON records:
For example, consider the following dataset with a few records, represented as J
} ``` --
-The mapper accesses the reference dataset stored in Azure IoT Operations's [distributed state store (DSS)](../create-edge-apps/concept-about-state-store-protocol.md) using a key value based on a *condition* specified in the mapping configuration. Key names in the distributed state store correspond to a dataset in the dataflow configuration.
+The mapper accesses the reference dataset stored in the Azure IoT Operations [distributed state store (DSS)](../create-edge-apps/concept-about-state-store-protocol.md) by using a key value based on a *condition* specified in the mapping configuration. Key names in the DSS correspond to a dataset in the dataflow configuration.
```yaml datasets:
datasets:
When a new record is being processed, the mapper performs the following steps:
-* *Data request*: The mapper sends a request to the DSS to retrieve the dataset stored under the key *Position*.
-* *Record matching*: The mapper then queries this dataset to find the first record where the *Position* field in the dataset matches the *Position* field of the incoming record.
+* **Data request:** The mapper sends a request to the DSS to retrieve the dataset stored under the key `Position`.
+* **Record matching:** The mapper then queries this dataset to find the first record where the `Position` field in the dataset matches the `Position` field of the incoming record.
```yaml - inputs:
When a new record is being processed, the mapper performs the following steps:
expression: if($1 == (), $2, $1) ```
-In this example, the *WorkingHours* field is added to the output record, while the *BaseSalary* is used conditionally only when the incoming record doesn't contain the *BaseSalary* field (or the value is *null* if nullable field). The request for the contextualization data doesn't happen with every incoming record. The mapper requests the dataset and then it receives notifications from DSS about the changes, while it uses a cached version of the dataset.
+In this example, the `WorkingHours` field is added to the output record, while the `BaseSalary` is used conditionally only when the incoming record doesn't contain the `BaseSalary` field (or the value is `null` if it's a nullable field). The request for the contextualization data doesn't happen with every incoming record. The mapper requests the dataset and then it receives notifications from DSS about the changes, while it uses a cached version of the dataset.
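+Put together, a mapping for this scenario might look like the following sketch, assuming the dataset is stored under the key `position`. The output field names are illustrative:
+```yaml
+- inputs:
+    - $context(position).WorkingHours  #  $1
+  output: WorkingHours
+- inputs:
+    - BaseSalary                       #  $1
+    - $context(position).BaseSalary    #  $2
+  output: BaseSalary
+  expression: if($1 == (), $2, $1)
+```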
It's possible to use multiple datasets:
Then use the references mixed:
- $context(permission).NightShift # ```
-The input references use the key of the dataset like *position* or *permission*. If the key in DSS is inconvenient to use, an alias can be defined:
+The input references use the key of the dataset like `position` or `permission`. If the key in DSS is inconvenient to use, you can define an alias:
```yaml datasets:
datasets:
expression: $1 == $2 ```
-Which configuration renames the dataset with key *datasets.parag10.rule42* to *position*.
+The configuration renames the dataset with the key `datasets.parag10.rule42` to `position`.
iot-operations Concept Dataflow Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-dataflow-mapping.md
Title: Map data using dataflows
+ Title: Map data by using dataflows
description: Learn about the dataflow mapping language for transforming data in Azure IoT Operations.
Last updated 08/03/2024
#CustomerIntent: As an operator, I want to understand how to use the dataflow mapping language to transform data.
-# Map data using dataflows
+# Map data by using dataflows
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
Compare it with the output record:
In the output record, the following changes are made to the input record data:
-* Fields renamed: **Birth Date** is now **Date of Birth**.
-* Fields restructured: Both **Name** and **Date of Birth** are grouped under the new **Employee** category.
-* Field deleted: **Place of birth** is removed, as it isn't present in the output.
-* Field added: **Base Salary** is a new field in the **Employment** category.
-* Field values changed or merged: The **Position** field in the output combines the **Position** and **Office** fields from the input.
+* **Fields renamed**: The `Birth Date` field is now `Date of Birth`.
+* **Fields restructured**: Both `Name` and `Date of Birth` are grouped under the new `Employee` category.
+* **Field deleted**: The `Place of birth` field is removed because it isn't present in the output.
+* **Field added**: The `Base Salary` field is a new field in the `Employment` category.
+* **Field values changed or merged**: The `Position` field in the output combines the `Position` and `Office` fields from the input.
-The transformations are achieved through *mapping* that typically involves:
+The transformations are achieved through *mapping*, which typically involves:
* **Input definition**: Identifying the fields in the input records that are used. * **Output definition**: Specifying where and how the input fields are organized in the output records.
-* **Conversion (optional)**: Modifying the input fields to fit into the output fields. This is required when multiple input fields are combined into a single output field.
+* **Conversion (optional)**: Modifying the input fields to fit into the output fields. Conversion is required when multiple input fields are combined into a single output field.
-The following is an example mapping:
+The following mapping is an example:
```yaml - inputs:
The following is an example mapping:
The example maps:
-* **One-to-one mapping**: The `BirthDate` is directly mapped to `Employee.DateOfBirth` without conversion.
+* **One-to-one mapping**: `BirthDate` is directly mapped to `Employee.DateOfBirth` without conversion.
* **Many-to-one mapping**: Combines `Position` and `Office` into a single `Employment.Position` field. The conversion formula (`$1 + ", " + $2`) merges these fields into a formatted string.
-* **Using contextual data**: The `BaseSalary` is added from a contextual dataset named `position`.
+* **Contextual data**: `BaseSalary` is added from a contextual dataset named `position`.
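+Put together, the example mapping might look like the following sketch. The output path `Employment.BaseSalary` is an assumption based on the output record described earlier:
+```yaml
+- inputs:
+    - BirthDate
+  output: Employee.DateOfBirth
+- inputs:
+    - Position   # - $1
+    - Office     # - $2
+  output: Employment.Position
+  expression: $1 + ", " + $2
+- inputs:
+    - $context(position).BaseSalary
+  output: Employment.BaseSalary   # assumed output path
+```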
## Field references
-Field references show how to specify paths in the input and output, using dot notation like `Employee.DateOfBirth` or accessing data from a contextual dataset via `$context(position)`.
+Field references show how to specify paths in the input and output by using dot notation like `Employee.DateOfBirth` or accessing data from a contextual dataset via `$context(position)`.
## Contextualization dataset selectors
-These selectors allow mappings to integrate extra data from external databases, referred to as *contextualization datasets*.
+These selectors allow mappings to integrate extra data from external databases, which are referred to as *contextualization datasets*.
## Record filtering
Record filtering involves setting conditions to select which records should be processed or dropped.
-## Dot-notation
+## Dot notation
-Dot-notation is widely used in computer science to reference fields, even recursively. In programming, field names typically consist of letters and numbers. A standard dot-notation might look like this:
+Dot notation is widely used in computer science to reference fields, even recursively. In programming, field names typically consist of letters and numbers. A standard dot notation path might look like this example:
```yaml - inputs: - Person.Address.Street.Number ```
-However, in a dataflow, a path described by dot-notation might include strings and some special characters without needing escaping:
+In a dataflow, a path described by dot notation might include strings and some special characters without needing escaping:
```yaml - inputs: - Person.Date of Birth ```
-However, in other cases, escaping is necessary:
+In other cases, escaping is necessary:
```yaml - inputs: - nsu=http://opcfoundation.org/UA/Plc/Applications;s=RandomSignedInt32 ```
-The previous example, among other special characters, contains dots within the field name, which, without escaping, would serve as a separator in the dot-notation itself.
+The previous example, among other special characters, contains dots within the field name. Without escaping, the field name would serve as a separator in the dot notation itself.
While a dataflow parses a path, it treats only two characters as special:
-* Dots ('.') act as field separators.
-* Quotes, when placed at the beginning or the end of a segment, start an escaped section where dots aren't treated as field separators.
+* Dots (`.`) act as field separators.
+* Single quotation marks, when placed at the beginning or the end of a segment, start an escaped section where dots aren't treated as field separators.
Any other characters are treated as part of the field name. This flexibility is useful in formats like JSON, where field names can be arbitrary strings.
-The path definition must also adhere to the rules of YAML. Once a character with special meaning is included in the path, proper quoting is required in the configuration. Consult the YAML documentation for precise rules. Here are some examples that demonstrate the need for careful formatting:
+The path definition must also adhere to the rules of YAML. When a character with special meaning is included in the path, proper quoting is required in the configuration. Consult the YAML documentation for precise rules. Here are some examples that demonstrate the need for careful formatting:
```yaml - inputs:
- - ':Person:.:name:' # ':' cannot be used as the first character without quotes
- - '100 celsius.hot' # numbers followed by text would not be interpreted as a string without quotes
+ - ':Person:.:name:' # ':' cannot be used as the first character without single quotation marks
+ - '100 celsius.hot' # numbers followed by text would not be interpreted as a string without single quotation marks
```
## Escaping
The primary function of escaping in a dot-notated path is to accommodate the use
- 'Payload."Tag.10".Value' ```
-In the previous example, the path consists of three segments: `Payload`, `Tag.10`, and `Value`. The outer single quotes (`'`) are necessary because of YAML syntax rules, allowing the inclusion of double quotes within the string.
+In the previous example, the path consists of three segments: `Payload`, `Tag.10`, and `Value`. The outer single quotation marks (`'`) are necessary because of YAML syntax rules, which allow the inclusion of double quotation marks within the string.
### Escaping rules in dot notation
-* **Escape Each Segment Separately**: If multiple segments contain dots, those segments must be enclosed in quotes. Other segments can also be quoted, but it doesn't affect the path interpretation:
+* **Escape each segment separately:** If multiple segments contain dots, those segments must be enclosed in double quotation marks. Other segments can also be quoted, but it doesn't affect the path interpretation:
-```yaml
-- inputs:
- - 'Payload."Tag.10".Measurements."Vibration.$12".Value'
-```
-
-* **Proper Use of Double Quotes**: A double quote must open and close an escaped segment; any quotes in the middle of the segment are considered part of the field name:
+ ```yaml
+ - inputs:
+ - 'Payload."Tag.10".Measurements."Vibration.$12".Value'
+ ```
+
+* **Proper use of double quotation marks:** Double quotation marks must open and close an escaped segment. Any quotation marks in the middle of the segment are considered part of the field name:
-```yaml
-- inputs:
- - 'Payload.He said: "Hello", and waved'
-```
-
-This example defines two fields in the dataDestination: `Payload` and `He said: "Hello", and waved`. When a dot appears under these circumstances, it continues to serve as a separator, as follows:
-
-```yaml
-- inputs:
- - 'Payload.He said: "No. It's done"'
-```
-
-In this case, the path is split into the segments `Payload`, `He said: "No`, and `It's done"` (starting with a space).
-
+ ```yaml
+ - inputs:
+ - 'Payload.He said: "Hello", and waved'
+ ```
+
+ This example defines two fields in `dataDestination`: `Payload` and `He said: "Hello", and waved`. When a dot appears under these circumstances, it continues to serve as a separator:
+
+ ```yaml
+ - inputs:
+ - 'Payload.He said: "No. It's done"'
+ ```
+
+ In this case, the path is split into the segments `Payload`, `He said: "No`, and `It's done"` (starting with a space).
+
### Segmentation algorithm
-* If the first character of a segment is a quote, the parser searches for the next quote. The string enclosed between these quotes is considered a single segment.
-* If the segment doesn't start with a quote, the parser identifies segments by searching for the next dot or the end of the path.
+* If the first character of a segment is a quotation mark, the parser searches for the next quotation mark. The string enclosed between these quotation marks is considered a single segment.
+* If the segment doesn't start with a quotation mark, the parser identifies segments by searching for the next dot or the end of the path.
## Wildcard
-In many scenarios, the output record closely resembles the input record, with only minor modifications required. When dealing with records that contain numerous fields, manually specifying mappings for each field can become tedious. Wildcards simplify this process by allowing for generalized mappings that can automatically apply to multiple fields.
+In many scenarios, the output record closely resembles the input record, with only minor modifications required. When you deal with records that contain numerous fields, manually specifying mappings for each field can become tedious. Wildcards simplify this process by allowing for generalized mappings that can automatically apply to multiple fields.
Let's consider a basic scenario to understand the use of asterisks in mappings:
Let's consider a basic scenario to understand the use of asterisks in mappings:
Here's how the asterisk (`*`) operates in this context:
-* **Pattern Matching**: The asterisk can match a single or multiple segments of a path. It serves as a placeholder for any segments in the path.
-* **Field Matching**: During the mapping process, the algorithm evaluates each field in the input record against the pattern specified in the `inputs`. The asterisk in the previous example matches all possible paths, effectively fitting every individual field in the input.
-* **Captured Segment**: The portion of the path that the asterisk matches is referred to as the `captured segment`.
-* **Output Mapping**: In the output configuration, the `captured segment` is placed where the asterisk appears. This means that the structure of the input is preserved in the output, with the `captured segment` filling the placeholder provided by the asterisk.
+* **Pattern matching**: The asterisk can match a single segment or multiple segments of a path. It serves as a placeholder for any segments in the path.
+* **Field matching**: During the mapping process, the algorithm evaluates each field in the input record against the pattern specified in the `inputs`. The asterisk in the previous example matches all possible paths, effectively fitting every individual field in the input.
+* **Captured segment**: The portion of the path that the asterisk matches is referred to as the `captured segment`.
+* **Output mapping**: In the output configuration, the `captured segment` is placed where the asterisk appears. This means that the structure of the input is preserved in the output, with the `captured segment` filling the placeholder provided by the asterisk.
This configuration demonstrates the most generic form of mapping, where every field in the input is directly mapped to a corresponding field in the output without modification.
Original JSON:
} ```
-Mapping Configuration Using Wildcards:
+Mapping configuration that uses wildcards:
```yaml - inputs:
Resulting JSON:
### Wildcard placement
-When placing a wildcard, the following rules must be followed:
+When you place a wildcard, you must follow these rules:
-* **Single Asterisk per dataDestination:** Only one asterisk (`*`) is allowed within a single path.
-* **Full Segment Matching:** The asterisk must always match an entire segment of the path. It can't be used to match only a part of a segment, such as `path1.partial*.path3`.
-* **Positioning:** The asterisk can be positioned in various parts of the dataDestination:
- * **At the Beginning:** `*.path2.path3` - Here, the asterisk matches any segment that leads up to `path2.path3`.
- * **In the Middle:** `path1.*.path3` - In this configuration, the asterisk matches any segment between `path1` and `path3`.
- * **At the End:** `path1.path2.*` - The asterisk at the end matches any segment that follows after `path1.path2`.
+* **Single asterisk per dataDestination:** Only one asterisk (`*`) is allowed within a single path.
+* **Full segment matching:** The asterisk must always match an entire segment of the path. It can't be used to match only a part of a segment, such as `path1.partial*.path3`.
+* **Positioning:** The asterisk can be positioned in various parts of `dataDestination`:
+ * **At the beginning:** `*.path2.path3` - Here, the asterisk matches any segment that leads up to `path2.path3`.
+ * **In the middle:** `path1.*.path3` - In this configuration, the asterisk matches any segment between `path1` and `path3`.
+ * **At the end:** `path1.path2.*` - The asterisk at the end matches any segment that follows after `path1.path2`.
### Multi-input wildcards
-*Original JSON:*
+Original JSON:
```json {
When placing a wildcard, the following rules must be followed:
} ```
-Mapping Configuration Using wildcards:
+Mapping configuration that uses wildcards:
```yaml - inputs:
Resulting JSON:
} ```
-If multi-input wildcards, the asterisk (`*`) must consistently represent the same `Captured Segment` across every input. For example, when `*` captures `Saturation` in the pattern `*.Max`, the mapping algorithm expects the corresponding `Saturation.Min` to match with the pattern `*.Min`. Here, `*` is substituted by the `Captured Segment` from the first input, guiding the matching for subsequent inputs.
+If you use multi-input wildcards, the asterisk (`*`) must consistently represent the same `Captured Segment` across every input. For example, when `*` captures `Saturation` in the pattern `*.Max`, the mapping algorithm expects the corresponding `Saturation.Min` to match with the pattern `*.Min`. Here, `*` is substituted by the `Captured Segment` from the first input, guiding the matching for subsequent inputs.
Consider this detailed example:
Original JSON:
} ```
-Initial mapping configuration using wildcards:
+Initial mapping configuration that uses wildcards:
```yaml - inputs:
Initial mapping configuration using wildcards:
expression: ($1, $2, $3, $4) ```
-This initial mapping tries to build an array (For example, for `Opacity`: `[0.88, 0.91, 0.89, 0.89]`). However, this configuration fails because:
+This initial mapping tries to build an array (for example, for `Opacity`: `[0.88, 0.91, 0.89, 0.89]`). This configuration fails because:
* The first input `*.Max` captures a segment like `Saturation`. * The mapping expects the subsequent inputs to be present at the same level:
This initial mapping tries to build an array (For example, for `Opacity`: `[0.88
* `Saturation.Avg` * `Saturation.Mean`
-Since `Avg` and `Mean` are nested within `Mid`, the asterisk in the initial mapping doesn't correctly capture these paths.
+Because `Avg` and `Mean` are nested within `Mid`, the asterisk in the initial mapping doesn't correctly capture these paths.
Corrected mapping configuration:
expression: ($1, $2, $3, $4) ```
-This revised mapping accurately captures the necessary fields by correctly specifying the paths to include the nested `Mid` object, ensuring that the asterisks work effectively across different levels of the JSON structure.
+This revised mapping accurately captures the necessary fields. It correctly specifies the paths to include the nested `Mid` object, which ensures that the asterisks work effectively across different levels of the JSON structure.
-### Second rule versus specialization
+### Second rule vs. specialization
-Using the previous example from multi-input wildcards, consider the following mappings that generate two derived values for each property:
+When you use the previous example from multi-input wildcards, consider the following mappings that generate two derived values for each property:
```yaml - inputs:
Using the previous example from multi-input wildcards, consider the following ma
expression: abs($1 - $2) ```
-This mapping is intended to create two separate calculations (`Avg` and `Diff`) for each property under `ColorProperties`. The result is as follows:
+This mapping is intended to create two separate calculations (`Avg` and `Diff`) for each property under `ColorProperties`. This example shows the result:
```json {
Now, consider a scenario where a specific field needs a different calculation:
In this case, the `Opacity` field has a unique calculation. Two options to handle this overlapping scenario are: -- Include both mappings for `Opacity`. Since the output fields are different in this example, they wouldn't override each other.
+- Include both mappings for `Opacity`. Because the output fields are different in this example, they wouldn't override each other.
- Use the more specific rule for `Opacity` and remove the more generic one.
-Consider a special case for the same fields to help deciding the right action:
+Consider a special case for the same fields to help decide the right action:
```yaml - inputs:
An empty `output` field in the second definition implies not writing the fields
Resolution of overlapping mappings by dataflows: * The evaluation progresses from the top rule in the mapping definition.
-* If a new mapping resolves to the same fields as a previous rule, the following applies:
- * A `Rank` is calculated for each resolved input based on the number of segments the wildcard captures. For instance, if the `Captured Segments` are `Properties.Opacity`, the `Rank` is 2. If only `Opacity`, the `Rank` is 1. A mapping without wildcards has a `Rank` of 0.
+* If a new mapping resolves to the same fields as a previous rule, the following conditions apply:
+ * A `Rank` is calculated for each resolved input based on the number of segments the wildcard captures. For instance, if the `Captured Segments` are `Properties.Opacity`, the `Rank` is 2. If it's only `Opacity`, the `Rank` is 1. A mapping without wildcards has a `Rank` of 0.
* If the `Rank` of the latter rule is equal to or higher than the previous rule, a dataflow treats it as a `Second Rule`.
- * Otherwise, it treats the configuration as a `Specialization`.
+ * Otherwise, the dataflow treats the configuration as a `Specialization`.
-For example, the mapping that directs `Opacity.Max` and `Opacity.Min` to an empty output has a `Rank` of zero. Since the second rule has a lower `Rank` than the previous, it's considered a specialization and overrides the previous rule, which would calculate a value for `Opacity`
+For example, the mapping that directs `Opacity.Max` and `Opacity.Min` to an empty output has a `Rank` of 0. Because the second rule has a lower `Rank` than the previous one, it's considered a specialization and overrides the previous rule, which would calculate a value for `Opacity`.
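To put these rules together, the following sketch shows a generic rule followed by a specialization. The field names follow the earlier example; the output path and expression are assumptions for illustration:

```yaml
# Generic rule: '*' captures one segment (for example, 'Opacity'), so its Rank is 1
- inputs:
    - '*.Max'
    - '*.Min'
  output: 'ColorProperties.*'
  expression: ($1 + $2) / 2

# Specialization: no wildcard, so its Rank is 0; the empty output means 'Opacity' isn't written
- inputs:
    - 'Opacity.Max'
    - 'Opacity.Min'
  output:
```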
### Wildcards in contextualization datasets
-While a detailed explanation of contextualization datasets is explained later, let's see now how they can be used with wildcards through an example. Consider a dataset named `position` that contains the following record:
+Now, let's see how contextualization datasets can be used with wildcards through an example. Consider a dataset named `position` that contains the following record:
```json {
In an earlier example, we used a specific field from this dataset:
output: Employment.BaseSalary ```
-This mapping copies the `BaseSalary` from the context dataset directly into the `Employment` section of the output record. However, if you want to automate the process and include all fields from the `position` dataset into the `Employment` section, you can utilize wildcards:
+This mapping copies `BaseSalary` from the context dataset directly into the `Employment` section of the output record. If you want to automate the process and include all fields from the `position` dataset into the `Employment` section, you can use wildcards:
```yaml - inputs:
machine-learning Concept Automl Forecasting Deep Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-deep-learning.md
Previously updated : 08/01/2023 Last updated : 08/09/2024 show_latex: true
show_latex: true
This article focuses on the deep learning methods for time series forecasting in AutoML. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
-Deep learning has made a major impact in fields ranging from [language modeling](../ai-services/openai/concepts/models.md) to [protein folding](https://www.deepmind.com/research/highlighted-research/alphafold), among many others. Time series forecasting has likewise benefitted from recent advances in deep learning technology. For example, deep neural network (DNN) models feature prominently in the top performing models from the [fourth](https://www.uber.com/blog/m4-forecasting-competition/) and [fifth](https://www.sciencedirect.com/science/article/pii/S0169207021001874) iterations of the high-profile Makridakis forecasting competition.
+Deep learning has numerous use cases in fields ranging from [language modeling](../ai-services/openai/concepts/models.md) to [protein folding](https://www.deepmind.com/research/highlighted-research/alphafold), among many others. Time series forecasting also benefits from recent advances in deep learning technology. For example, deep neural network (DNN) models feature prominently in the top performing models from the [fourth](https://www.uber.com/blog/m4-forecasting-competition/) and [fifth](https://www.sciencedirect.com/science/article/pii/S0169207021001874) iterations of the high-profile Makridakis forecasting competition.
-In this article, we'll describe the structure and operation of the TCNForecaster model in AutoML to help you best apply the model to your scenario.
+In this article, we describe the structure and operation of the TCNForecaster model in AutoML to help you best apply the model to your scenario.
## Introduction to TCNForecaster
-TCNForecaster is a [temporal convolutional network](https://arxiv.org/abs/1803.01271), or TCN, which has a DNN architecture specifically designed for time series data. The model uses historical data for a target quantity, along with related features, to make probabilistic forecasts of the target up to a specified forecast horizon. The following image shows the major components of the TCNForecaster architecture:
+TCNForecaster is a [temporal convolutional network](https://arxiv.org/abs/1803.01271), or TCN, which has a DNN architecture designed for time series data. The model uses historical data for a target quantity, along with related features, to make probabilistic forecasts of the target up to a specified forecast horizon. The following image shows the major components of the TCNForecaster architecture:
:::image type="content" source="media/how-to-auto-train-forecast/tcn-basic.png" alt-text="Diagram showing major components of AutoML's TCNForecaster."::: TCNForecaster has the following main components:
-* A **pre-mix** layer that mixes the input time series and feature data into an array of signal **channels** that the convolutional stack will process.
+* A **pre-mix** layer that mixes the input time series and feature data into an array of signal **channels** that the convolutional stack processes.
* A stack of **dilated convolution** layers that processes the channel array sequentially; each layer in the stack processes the output of the previous layer to produce a new channel array. Each channel in this output contains a mixture of convolution-filtered signals from the input channels. * A collection of **forecast head** units that coalesce the output signals from the convolution layers and generate forecasts of the target quantity from this latent representation. Each head unit produces forecasts up to the horizon for a quantile of the prediction distribution.
Stacking dilated convolutions gives the TCN the ability to model correlations ov
:::image type="content" source="media/concept-automl-forecasting-deep-learning/tcn-dilated-conv.png" alt-text="Diagram showing stacked, dilated convolution layers.":::
-The dashed lines show paths through the network that end on the output at a time $t$. These paths cover the last eight points in the input, illustrating that each output point is a function of the eight most relatively recent points in the input. The length of history, or "look back," that a convolutional network uses to make predictions is called the **receptive field** and it is determined completely by the TCN architecture.
+The dashed lines show paths through the network that end on the output at a time $t$. These paths cover the last eight points in the input, illustrating that each output point is a function of the eight most relatively recent points in the input. The length of history, or "look back," that a convolutional network uses to make predictions is called the **receptive field** and it's determined completely by the TCN architecture.
### TCNForecaster architecture
We can give a more precise definition of the TCNForecaster architecture in terms
:::image type="content" source="media/concept-automl-forecasting-deep-learning/tcn-equations.png" alt-text="Equations describing TCNForecaster operations.":::
-where $W_{e}$ is an [embedding](https://huggingface.co/blog/getting-started-with-embeddings) matrix for the categorical features, $n_{l} = n_{b}n_{c}$ is the total number of residual cells, the $H_{k}$ denote hidden layer outputs, and the $f_{q}$ are forecast outputs for given quantiles of the prediction distribution. To aid understanding, the dimensions of these variables are in the following table:
+Where $W_{e}$ is an [embedding](https://huggingface.co/blog/getting-started-with-embeddings) matrix for the categorical features, $n_{l} = n_{b}n_{c}$ is the total number of residual cells, the $H_{k}$ denote hidden layer outputs, and the $f_{q}$ are forecast outputs for given quantiles of the prediction distribution. To aid understanding, the dimensions of these variables are in the following table:
|Variable|Description|Dimensions| |--|--|--|
In the table, $n_{\text{input}} = n_{\text{features}} + 1$, the number of predic
TCNForecaster is an optional model in AutoML. To learn how to use it, see [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning).
-In this section, we'll describe how AutoML builds TCNForecaster models with your data, including explanations of data preprocessing, training, and model search.
+In this section, we describe how AutoML builds TCNForecaster models with your data, including explanations of data preprocessing, training, and model search.
### Data preprocessing steps
Fill missing data|[Impute missing values and observation gaps](./concept-automl-
|Target transform|Optionally apply the natural logarithm function to the target depending on the results of certain statistical tests.| |Normalization|[Z-score normalize](https://en.wikipedia.org/wiki/Standard_score) all numeric data; normalization is performed per feature and per time series group, as defined by the [time series ID columns](./how-to-auto-train-forecast.md#forecasting-job-settings).
-These steps are included in AutoML's transform pipelines, so they are automatically applied when needed at inference time. In some cases, the inverse operation to a step is included in the inference pipeline. For example, if AutoML applied a $\log$ transform to the target during training, the raw forecasts are exponentiated in the inference pipeline.
+These steps are included in AutoML's transform pipelines, so they're automatically applied when needed at inference time. In some cases, the inverse operation to a step is included in the inference pipeline. For example, if AutoML applied a $\log$ transform to the target during training, the raw forecasts are exponentiated in the inference pipeline.
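For reference, the z-score normalization mentioned in the table is the standard transform, applied independently for each feature $j$ and each time series group $g$:

$$\tilde{x}_{j,g} = \frac{x_{j,g} - \mu_{j,g}}{\sigma_{j,g}}$$

where $\mu_{j,g}$ and $\sigma_{j,g}$ are the mean and standard deviation of the feature within the group.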
### Training
The model search has two phases:
1. AutoML performs a search over 12 "landmark" models. The landmark models are static and chosen to reasonably span the hyper-parameter space. 2. AutoML continues searching through the hyper-parameter space using a random search.
-The search terminates when stopping criteria are met. The stopping criteria depend on the [forecast training job configuration](./how-to-auto-train-forecast.md#configure-experiment), but some examples include time limits, limits on number of search trials to perform, and early stopping logic when the validation metric is not improving.
+The search terminates when stopping criteria are met. The stopping criteria depend on the [forecast training job configuration](./how-to-auto-train-forecast.md#configure-experiment), but some examples include time limits, limits on number of search trials to perform, and early stopping logic when the validation metric isn't improving.
## Next steps
machine-learning Llm Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/llm-tool.md
Last updated 11/02/2023
# LLM tool
-The large language model (LLM) tool in prompt flow enables you to take advantage of widely used large language models like [OpenAI](https://platform.openai.com/) or [Azure OpenAI Service](../../../cognitive-services/openai/overview.md) for natural language processing.
+The large language model (LLM) tool in prompt flow enables you to take advantage of widely used large language models like [OpenAI](https://platform.openai.com/), [Azure OpenAI Service](../../../cognitive-services/openai/overview.md), or any language model supported by the [Azure AI model inference API](https://aka.ms/azureai/modelinference) for natural language processing.
Prompt flow provides a few different large language model APIs: - [Completion](https://platform.openai.com/docs/api-reference/completions): OpenAI's completion models generate text based on provided prompts.-- [Chat](https://platform.openai.com/docs/api-reference/chat): OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.
+- [Chat](https://platform.openai.com/docs/api-reference/chat): OpenAI's chat models and the [Azure AI](https://aka.ms/azureai/modelinference) chat models facilitate interactive conversations with text-based inputs and responses.
> [!NOTE] > We removed the `embedding` option from the LLM tool API setting. You can use an embedding API with the [embedding tool](embedding-tool.md).
Create OpenAI resources:
- Create Azure OpenAI resources with [these instructions](../../../ai-services/openai/how-to/create-resource.md).
+- **Models deployed to Serverless API endpoints**
+
+ - Select the model from the catalog you are interested in [and deploy it with a serverless API endpoint](../../how-to-deploy-models-serverless.md).
+ - To use models deployed to serverless API endpoints supported by the [Azure AI model inference API](https://aka.ms/azureai/modelinference), like Mistral, Cohere, Meta Llama, or the Microsoft family of models (among others), you need to [create a connection in your project to your endpoint](../../how-to-connect-models-serverless.md?#create-a-serverless-api-endpoint-connection).
+ ## Connections Set up connections to provisioned resources in prompt flow.
Set up connections to provisioned resources in prompt flow.
| OpenAI | Required | Required | - | - | | Azure OpenAI - API key| Required | Required | Required | Required | | Azure OpenAI - Microsoft Entra ID| Required | - | - | Required |
+| Serverless model | Required | Required | - | - |
> [!TIP] > - To use the Microsoft Entra ID auth type for an Azure OpenAI connection, you need to assign either the `Cognitive Services OpenAI User` or `Cognitive Services OpenAI Contributor` role to the user or user-assigned managed identity.
The following sections show various inputs.
| Name | Type | Description | Required | ||-||-| | prompt | string | Text prompt that the language model uses for a response. | Yes |
-| model, deployment_name | string | Language model to use. | Yes |
+| model, deployment_name | string | Language model to use. This parameter is not required if the model is deployed to a serverless API endpoint. | Yes* |
| max\_tokens | integer | Maximum number of tokens to generate in the response. Default is inf. | No | | temperature | float | Randomness of the generated text. Default is 1. | No | | stop | list | Stopping sequence for the generated text. Default is null. | No |
The following sections show various inputs.
## Use the LLM tool
-1. Set up and select the connections to OpenAI resources.
+1. Set up and select the connections to OpenAI resources or to a serverless API endpoint.
1. Configure the large language model API and its parameters. 1. Prepare the prompt with [guidance](prompt-tool.md#write-a-prompt).
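As a rough, hypothetical sketch, an LLM node in a flow definition (`flow.dag.yaml`) might look like the following; the node name, connection name, deployment name, and template path are placeholders:

```yaml
nodes:
- name: answer_question
  type: llm
  source:
    type: code
    path: chat.jinja2               # the prompt template prepared earlier
  inputs:
    deployment_name: gpt-35-turbo   # can be omitted for serverless API endpoints
    temperature: 0.7
    max_tokens: 256
    question: ${inputs.question}
  connection: my_llm_connection     # OpenAI, Azure OpenAI, or serverless connection
  api: chat
```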
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
Previously updated : 08/08/2023 Last updated : 08/09/2024 #Customer intent: As a non-coding data scientist, I want to use automated machine learning techniques so that I can build a classification model.
You complete the following experiment set-up and run steps via the Azure Machine
![Get started page](./media/tutorial-first-experiment-automated-ml/get-started.png)
-1. Select **+New automated ML job**.
+1. Select **+New automated ML job**.
+
+1. Select **Train automatically**.
+
+1. Select **Start configuring job**.
+
+1. In the **Experiment name** section, select the option **Create new** and enter this experiment name: `my-1st-automl-experiment`
## Create and load a dataset as a data asset + Before you configure your experiment, upload your data file to your workspace in the form of an Azure Machine Learning data asset. In the case of this tutorial, you can think of a data asset as your dataset for the AutoML job. Doing so, allows you to ensure that your data is formatted appropriately for your experiment.
-1. Create a new data asset by selecting **From local files** from the **+Create data asset** drop-down.
+1. Select **Classification** as your task type.
+
+1. Create a new data asset by selecting **Create**.
1. On the **Basic info** form, give your data asset a name and provide an optional description. The automated ML interface currently only supports TabularDatasets, so the dataset type should default to *Tabular*. 1. Select **Next** on the bottom left 1. On the **Datastore and file selection** form, select the default datastore that was automatically set up during your workspace creation, **workspaceblobstore (Azure Blob Storage)**. This is where you'll upload your data file to make it available to your workspace.- 1. Select **Upload files** from the **Upload** drop-down. 1. Choose the **bankmarketing_train.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv).
Before you configure your experiment, upload your data file to your workspace in
After you load and configure your data, you can set up your experiment. This setup includes experiment design tasks such as, selecting the size of your compute environment and specifying what column you want to predict.
-1. Select the **Create new** radio button.
- 1. Populate the **Configure Job** form as follows:
- 1. Enter this experiment name: `my-1st-automl-experiment`
1. Select **y** as the target column, what you want to predict. This column indicates whether the client subscribed to a term deposit or not.
+ 1. Select **View additional configuration settings** and populate the fields as follows. These settings are to better control the training job. Otherwise, defaults are applied based on experiment selection and data.
+
+ Additional&nbsp;configurations|Description|Value&nbsp;for&nbsp;tutorial
+ --|--|--
+ Primary metric| Evaluation metric that the machine learning algorithm will be measured by.|AUC_weighted
+ Explain best model| Automatically shows explainability on the best model created by automated ML.| Enable
+ Blocked algorithms | Algorithms you want to exclude from the training job| None
+ Additional&nbsp;classification settings | These settings help improve the accuracy of your model |Positive class label: None
+ Exit criterion| If a criterion is met, the training job is stopped. |Training&nbsp;job&nbsp;time (hours): 1 <br> Metric&nbsp;score&nbsp;threshold: None
+ Concurrency| The maximum number of parallel iterations executed per iteration| Max&nbsp;concurrent&nbsp;iterations: 5
+
+ 1. Select **Save**.
- 1. Select **compute cluster** as your compute type.
- 1. A compute target is a local or cloud-based resource environment used to run your training script or host your service deployment. For this experiment, you can either try a cloud-based serverless compute (preview) or create your own cloud-based compute.
- 1. To use serverless compute, [enable the preview feature](./how-to-use-serverless-compute.md#how-to-use-serverless-compute), select **Serverless**, and skip the rest of this step.
- 1. To create your own compute target, select **+New** to configure your compute target.
+1. On the **[Optional] Validate and test** form:
+ 1. Select k-fold cross-validation as your **Validation type**.
+ 1. Select 2 as your **Number of cross validations**.
+1. Select **Next**
+1. Select **compute cluster** as your compute type.
+1. A compute target is a local or cloud-based resource environment used to run your training script or host your service deployment. For this experiment, you can either try a cloud-based serverless compute (preview) or create your own cloud-based compute.
+ 1. To use serverless compute, [enable the preview feature](./how-to-use-serverless-compute.md#how-to-use-serverless-compute), select **Serverless**, and skip the rest of this step.
+ 1. To create your own compute target, select **+New** to configure your compute target.
1. Populate the **Select virtual machine** form to set up your compute. Field | Description | Value for tutorial
After you load and configure your data, you can set up your experiment. This set
1. After creation, select your new compute target from the drop-down list.
- 1. Select **Next**.
+1. Select **Next**.
-1. On the **Select task and settings** form, complete the setup for your automated ML experiment by specifying the machine learning task type and configuration settings.
-
- 1. Select **Classification** as the machine learning task type.
- 1. Select **View additional configuration settings** and populate the fields as follows. These settings are to better control the training job. Otherwise, defaults are applied based on experiment selection and data.
- Additional&nbsp;configurations|Description|Value&nbsp;for&nbsp;tutorial
- ||
- Primary metric| Evaluation metric that the machine learning algorithm will be measured by.|AUC_weighted
- Explain best model| Automatically shows explainability on the best model created by automated ML.| Enable
- Blocked algorithms | Algorithms you want to exclude from the training job| None
- Additional&nbsp;classification settings | These settings help improve the accuracy of your model |Positive class label: None
- Exit criterion| If a criteria is met, the training job is stopped. |Training&nbsp;job&nbsp;time (hours): 1 <br> Metric&nbsp;score&nbsp;threshold: None
- Concurrency| The maximum number of parallel iterations executed per iteration| Max&nbsp;concurrent&nbsp;iterations: 5
-
- Select **Save**.
- 1. Select **Next**.
-
-1. On the **[Optional] Validate and test** form,
- 1. Select k-fold cross-validation as your **Validation type**.
- 1. Select 2 as your **Number of cross validations**.
-1. Select **Finish** to run the experiment. The **Job Detail** screen opens with the **Job status** at the top as the experiment preparation begins. This status updates as the experiment progresses. Notifications also appear in the top right corner of the studio to inform you of the status of your experiment.
+1. Select **Submit training job** to run the experiment. The **Job overview** screen opens with the **Job status** at the top as the experiment preparation begins. This status updates as the experiment progresses. Notifications also appear in the top right corner of the studio to inform you of the status of your experiment.
>[!IMPORTANT] > Preparation takes **10-15 minutes** to prepare the experiment run.
openshift Azure Redhat Openshift Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md
Previously updated : 07/23/2024 Last updated : 08/08/2024
Azure Red Hat OpenShift receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the latest releases.
+## Updates - August 2024
+
+You can now create up to 20 IP addresses per Azure Red Hat OpenShift cluster load balancer. This feature was previously in preview but is now generally available. See [Configure multiple IP addresses per cluster load balancer](howto-multiple-ips.md) for details. Azure Red Hat OpenShift 4.x has a 250 pod-per-node limit and a 250 compute node limit.
+
+There's a change in the order of actions performed by Site Reliability Engineers of Azure Red Hat OpenShift. To maintain cluster health, timely action is necessary if control plane resources are over-utilized. The control plane is now resized proactively to maintain cluster health. After the control plane is resized, you receive a notification with the details of the changes made to the control plane. Make sure you have quota available in your subscription so that Site Reliability Engineers can perform this action.
+ ## Version 4.14 - May 2024 We're pleased to announce the launch of OpenShift 4.14 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.14](https://docs.openshift.com/container-platform/4.14/welcome/https://docsupdatetracker.net/index.html). You can check the end of support date on the [support lifecycle page](/azure/openshift/support-lifecycle) for previous versions.
openshift Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/delete-cluster.md
In previous articles for [creating](create-cluster.md) and [connecting](connect-
```bash RESOURCEGROUP=yourresourcegroup
+CLUSTER=clustername
```
-Using this value, delete your cluster:
+Using these values, delete your cluster:
```azurecli
-az group delete --name $RESOURCEGROUP
+az aro delete --resource-group $RESOURCEGROUP --name $CLUSTER
```
+You're then prompted to confirm that you want to perform this operation. After you confirm with `y`, it takes several minutes to delete the cluster. When the command finishes, the cluster and all its managed objects are deleted.
-You'll then be prompted to confirm if you are sure you want to perform this operation. After you confirm with `y`, it will take several minutes to delete the cluster. When the command finishes, the entire resource group and all resources inside it, including the cluster and the virtual network, will be deleted.
+> [!NOTE]
+> User-created objects such as the virtual network and subnets must be deleted manually.
## Next steps
-Learn more about using OpenShift with the official [Red Hat OpenShift documentation](https://docs.openshift.com/container-platform/4.6/welcome/https://docsupdatetracker.net/index.html).
+Learn more about using OpenShift with the official [Red Hat OpenShift documentation](https://docs.openshift.com/container-platform/4.14/welcome/https://docsupdatetracker.net/index.html).
openshift Howto Multiple Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-multiple-ips.md
Title: Configure multiple IP addresses for ARO cluster load balancers (Preview)
+ Title: Configure multiple IP addresses for ARO cluster load balancers
description: Discover how to configure multiple IP addresses for ARO cluster load balancers.
Last updated 03/05/2024 #Customer intent: As an ARO SRE, I need to configure multiple outbound IP addresses per ARO cluster load balancers
-# Configure multiple IP addresses per ARO cluster load balancer (Preview)
+# Configure multiple IP addresses per ARO cluster load balancer
-ARO public clusters are created with a public load balancer that's used for outbound connectivity from inside the cluster. By default, one public IP address is configured on that public load balancer, and that limits the maximum node count of your cluster to 62. To be able to scale your cluster to the maximum supported number of nodes, you need to assign multiple additional public IP addresses to the load balancer.
+ARO public clusters are created with a public load balancer that's used for outbound connectivity from inside the cluster. By default, one public IP address is configured on that public load balancer, and that limits the maximum node count of your cluster to 65. To be able to scale your cluster to the maximum supported number of 250 nodes, you need to assign multiple additional public IP addresses to the load balancer.
You can configure up to 20 IP addresses per cluster. The outbound rules and frontend IP configurations are adjusted to accommodate the number of IP addresses.
+> [!CAUTION]
+> Before deleting a large cluster, scale the cluster down to 120 nodes or fewer.
+>
+
+> [!NOTE]
+> The [API](/rest/api/openshift/open-shift-clusters/update?view=rest-openshift-2023-11-22&tabs=HTTP) method for using this feature is generally available. General availability for using the CLI for this feature is coming soon. The [preview version](#download-aro-extension-wheel-file-preview-only) of this feature can still be used through the CLI.
+>
+ ## Requirements The multiple public IPs feature is only available on the current network architecture used by ARO; older clusters don't support this feature. If your cluster was created before OpenShift Container Platform (OCP) version 4.5, this feature isn't available even if you upgraded your OCP version since then.
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* The cluster must have a minimum of three worker nodes and three master nodes. * Don't scale the cluster workers to zero, or attempt a cluster shutdown. Deallocating or powering down any virtual machine in the cluster resource group isn't supported.
+* Don't create more than 250 worker nodes on a cluster. 250 is the maximum number of nodes that can be created on a cluster. See [Configure multiple IP addresses per ARO cluster load balancer](howto-multiple-ips.md) for more information.
* If you're making use of infrastructure nodes, don't run any undesignated workloads on them as this can affect the Service Level Agreement and cluster stability. Also, it's recommended to have three infrastructure nodes; one in each availability zone. See [Deploy infrastructure nodes in an Azure Red Hat OpenShift (ARO) cluster](howto-infrastructure-nodes.md) for more information. * Non-RHCOS compute nodes aren't supported. For example, you can't use an RHEL compute node. * Don't attempt to remove, replace, add, or modify a master node. That's a high risk operation that can cause issues with etcd, permanent network loss, and loss of access and manageability by ARO SRE. If you feel that a master node should be replaced or removed, contact support before making any changes.
operator-nexus Howto Kubernetes Cluster Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-features.md
+
+ Title: Understanding Kubernetes cluster features in Azure Operator Nexus Kubernetes service
+description: Working with Kubernetes cluster features in Azure Operator Nexus Kubernetes clusters
++++ Last updated : 08/14/2024 +++
+# Work with Kubernetes cluster features in Nexus Kubernetes clusters
+
+In this article, you learn how to work with Nexus Kubernetes cluster features. Kubernetes cluster features are a capability of the Nexus platform that lets you enhance your Nexus Kubernetes clusters by adding extra packages or features.
+
+## Prerequisites
+
+Before proceeding with this how-to guide, it's recommended that you:
+
+* Refer to the Nexus Kubernetes cluster [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-cli.md) for a comprehensive overview and steps involved.
+* Ensure that you meet the outlined prerequisites for a smooth implementation of the guide.
+* Minimum required `networkcloud` az-cli extension version: `3.0.0b1`
+
+## Limitations
+
+* You can only create, delete, or update Kubernetes cluster features that have the `Required` field set to `False`.
+* When installing a Kubernetes cluster feature for the first time, the feature's name should be one of the feature names listed in the table. For subsequent actions such as updates or deletions, the feature's name should be obtained using the `az networkcloud kubernetescluster feature list` command.
+* The `metrics-server` feature can't be deleted if a Horizontal Pod Autoscaler (HPA) is in use within the cluster.
+* Storage-related Kubernetes cluster features, such as `csi-nfs` and `csi-volume`, can't be deleted if the respective StorageClass is in use within the cluster.
+
+## Default configuration
+
+When a Nexus Kubernetes cluster is deployed, the required Kubernetes cluster features are installed automatically. After deployment, you can manage optional Kubernetes cluster features by either installing them or uninstalling them (deleting them from the cluster).
+
+You can't control the installation of Kubernetes cluster features marked as "Required." However, you can perform create, update, and delete operations on features that have the "Required" field set to "False." You also have the option to update any Kubernetes cluster features via the update command.
+
+The following Kubernetes cluster features are available to each Nexus Kubernetes cluster. Features with "Required" set to "True" are always installed by default and can't be deleted.
+
+| Name | Description | Required | Installed by default |
+|--|-|-|-|
+| azure-arc-k8sagents | Arc connects Nexus Kubernetes Cluster | True | True |
+| calico | Provides Container Network Interface (CNI) support | True | True |
+| cloud-provider-kubevirt | Supports the Cluster API (CAPI) KubeVirt provider for managing virtual machine-based workloads in Kubernetes | True | True |
+| ipam-cni-plugin | Allocates IP addresses for Layer 3 networks connected to workload containers when `ipamEnabled` is set to True | True | True |
+| metallb | Provides External IPs to LoadBalancer services for load balancing traffic within Kubernetes | True | True |
+| multus | Supports multiple network interfaces to be attached to Kubernetes pods | True | True |
+| node-local-dns | Deploys NodeLocal DNSCache to improve DNS performance and reliability within the Kubernetes cluster | True | True |
+| sriov-dp | Deploys an optional CNI plugin for Single Root I/O Virtualization (SR-IOV) to enhance network performance | True | True |
+| azure-arc-servers | Deploys Azure Arc-enabled servers on each control plane and agent pool node, allowing management of non-Azure resources alongside Azure resources | False | True |
+| csi-nfs | Provides a Container Storage Interface (CSI) driver for NFS (Network File System) to support NFS-based storage in Kubernetes | False| True |
+| csi-volume | Supports the csi-nexus-volume storage class for persistent volume claims within Kubernetes | False | True |
+| metrics-server | Deploys the Metrics Server, which provides resource usage metrics for Kubernetes clusters, such as CPU and memory usage | False| True |
+
+> [!NOTE]
+> * For each cluster, you can create only one feature of each Kubernetes cluster feature type.
+> * If you delete a Kubernetes cluster feature with the "Required" attribute set to "False," the related charts will be removed from the cluster.
+
+## How to manage Kubernetes cluster features
+
+The following interactions allow for the creation and management of the Kubernetes cluster feature configuration.
+
+### Install a Kubernetes cluster feature
+
+To install a Kubernetes cluster feature in the cluster, use the `az networkcloud kubernetescluster feature create` command. If you have multiple Azure subscriptions, you must specify the subscription ID either by using the `--subscription` flag in the CLI command or by selecting the appropriate subscription ID with the [az account set](/cli/azure/account#az-account-set) command.
+
+```azurecli
+az networkcloud kubernetescluster feature create \
+ --name "<FEATURE_NAME>" \
+ --kubernetes-cluster-name "<KUBERNETES_CLUSTER_NAME>" \
+ --resource-group "<RESOURCE_GROUP>" \
+ --location "<LOCATION>" \
+ --tags "<KEY1>=<VALUE1>" "<KEY2>=<VALUE2>"
+```
+
+* Replace the placeholders (`<FEATURE_NAME>`, `<KUBERNETES_CLUSTER_NAME>`, `<RESOURCE_GROUP>`, `<LOCATION>`, `<KEY1>=<VALUE1>`, and `<KEY2>=<VALUE2>`) with your specific information.
+
+To see all available parameters and their descriptions, run the command:
+
+```azurecli
+az networkcloud kubernetescluster feature create --help
+```
+
+#### Kubernetes cluster feature configuration parameters
+
+| Parameter name | Description |
+| --| -- |
+| FEATURE_NAME | Name of Kubernetes cluster `feature` |
+| KUBERNETES_CLUSTER_NAME | Name of Cluster |
+| LOCATION | The Azure Region where the Cluster is deployed |
+| RESOURCE_GROUP | The Cluster resource group name |
+| KEY1 | Optional tag1 to pass to Kubernetes cluster feature create |
+| VALUE1 | Optional tag1 value to pass to Kubernetes cluster feature create |
+| KEY2 | Optional tag2 to pass to Kubernetes cluster feature create |
+| VALUE2 | Optional tag2 value to pass to Kubernetes cluster feature create |
+
+Specifying the `--no-wait --debug` options in the az command causes it to run asynchronously. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md).
+
+### List the Kubernetes cluster feature
+
+You can check the Kubernetes cluster feature resources for a specific cluster by using the `az networkcloud kubernetescluster feature list` command. This command displays a list of all features associated with the specified Kubernetes cluster:
+
+```azurecli
+az networkcloud kubernetescluster feature list \
+ --kubernetes-cluster-name "<KUBERNETES_CLUSTER_NAME>" \
+ --resource-group "<RESOURCE_GROUP>"
+
+```
+
+### Retrieve a Kubernetes cluster feature
+
+After a Kubernetes cluster is created, you can check the details of a specific Kubernetes cluster feature by using the `az networkcloud kubernetescluster feature show` command. This command provides detailed information about the feature:
+
+```azurecli
+az networkcloud kubernetescluster feature show \
+ --name "<FEATURE_NAME>" \
+ --kubernetes-cluster-name "<KUBERNETES_CLUSTER_NAME>" \
+ --resource-group "<RESOURCE_GROUP>"
+```
+
+This command returns a JSON representation of the Kubernetes cluster feature configuration.
+
+### Update a Kubernetes cluster feature
+
+Much like the creation of a Kubernetes cluster feature, you can perform an update action to modify the tags assigned to the Kubernetes cluster feature. Use the following command to update the tags:
+
+> [!IMPORTANT]
+> * The `name` parameter should match the "Name" obtained from the output of the `az networkcloud kubernetescluster feature list` command. While the feature name provided during installation can be used initially, once the feature is installed, it is assigned a unique name. Therefore, always use the `list` command to get the actual resource name for update and delete operations, rather than relying on the initial feature name shown in the table.
+
+```azurecli
+az networkcloud kubernetescluster feature update \
+ --name "<FEATURE_NAME>" \
+ --kubernetes-cluster-name "<KUBERNETES_CLUSTER_NAME>" \
+ --resource-group "<RESOURCE_GROUP>" \
+ --tags <KEY1>="<VALUE1>" \
+ <KEY2>="<VALUE2>"
+```
+
+Specifying the `--no-wait --debug` options in the az command causes it to run asynchronously. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md).
+
+### Delete Kubernetes cluster feature
+
+Deleting a Kubernetes cluster feature removes the resource from the cluster. To delete a Kubernetes cluster feature, use the following command:
+
+> [!IMPORTANT]
+> * The `name` parameter should match the "Name" obtained from the output of the `az networkcloud kubernetescluster feature list` command. While the feature name provided during installation can be used initially, once the feature is installed, it is assigned a unique name. Therefore, always use the `list` command to get the actual resource name for update and delete operations, rather than relying on the initial feature name shown in the table.
+
+```azurecli
+az networkcloud kubernetescluster feature delete \
+ --name "<FEATURE_NAME>" \
+ --kubernetes-cluster-name "<KUBERNETES_CLUSTER_NAME>" \
+ --resource-group "<RESOURCE_GROUP>"
+```
+
+Specifying the `--no-wait --debug` options in the az command causes it to run asynchronously. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md).
+
+> [!NOTE]
+> * If you attempt to delete a Kubernetes cluster feature that has `Required=True`, the command will fail and produce an error message stating, "delete not allowed for ... feature as it is a required feature."
+> * In such cases, a subsequent show/list command will display the `provisioningState` as `Failed`. This is a known issue.
+> * To correct the `provisioningState`, you can run a no-op command, such as updating the tags on the affected Kubernetes cluster feature. Use the `--tags` parameter of the update command to do this. This action will reset the `provisioningState` to `Succeeded`.
operator-nexus Howto Kubernetes Cluster Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-upgrade.md
This article provides instructions on how to upgrade an Operator Nexus Kubernete
* An Azure Operator Nexus Kubernetes cluster deployed in a resource group in your Azure subscription. * If you're using Azure CLI, this article requires that you're running the latest Azure CLI version. If you need to install or upgrade, see [Install Azure CLI](./howto-install-cli-extensions.md)
+* Minimum required `networkcloud` az-cli extension version: `3.0.0b1`
* Understand the version bundles concept. For more information, see [Nexus Kubernetes version bundles](./reference-nexus-kubernetes-cluster-supported-versions.md#version-bundles). ## Check for available upgrades
reliability Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-api-mgt.md
-# Migrate Azure API Management to availability zones
+# Migrate Azure API Management to availability zone support
The Azure API Management service supports [zone redundancy](../reliability/availability-zones-overview.md), which provides resiliency and high availability to a service instance in a specific Azure region. With zone redundancy, the gateway and the control plane of your API Management instance (management API, developer portal, Git configuration) are replicated across datacenters in physically separated zones, so they're resilient to a zone failure.
reliability Reliability Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-fabric.md
+ - subject-reliability - references_regions - build-2023 - ignite-2023 Previously updated : 12/13/2023 Last updated : 08/15/2024 # Reliability in Microsoft Fabric
Fabric makes commercially reasonable efforts to support zone-redundant availabil
### Prerequisites - Fabric currently provides partial availability-zone support in a [limited number of regions](#supported-regions). This partial availability-zone support covers experiences (and/or certain functionalities within an experience).-- Experiences such as Data Engineering, Data Science, and Event Streams don't support availability zones. -- Zone availability may or may not be available for Fabric experiences or features/functionalities that are in preview.
+- Experiences such as Event Streams don't support availability zones.
+- Data engineering supports availability zones if you use OneLake. If you use other data sources such as ADLS Gen2, then you need to ensure that Zone-redundant storage (ZRS) is enabled.
+- Zone availability may or may not be available for Fabric experiences and/or features/functionalities that are in preview.
- On-premises gateways and large semantic models in Power BI don't support availability zones. - Data Factory (pipelines) supports availability zones in West Europe, but new or in-progress pipeline runs _may_ fail in the case of a zone outage.
Fabric makes commercially reasonable efforts to support zone-redundant availabil
Fabric makes commercially reasonable efforts to provide availability zone support in various regions as follows:
-| **Americas** | **Power BI** | **Datamarts** | **Data Warehouses** | **Real-Time Analytics** | **Data Factory (pipelines)** |
-|:|::|::|::|::|::|
-| Brazil South | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| Canada Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| Central US | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | |
-| East US | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| East US 2 | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false":::| :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| South Central US | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| West US 2 | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| West US 3 | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| **Europe** | **Power BI** | **Datamarts** | **Data Warehouses** | **Real-Time Analytics** |
-| France Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| Germany West Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | ||
-| North Europe | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| UK South | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| West Europe | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | |
-| Norway East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| **Middle East** | **Power BI** | **Datamarts** | **Data Warehouses** | **Real-Time Analytics** |
-| Qatar Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | |
-| **Africa** | **Power BI** | **Datamarts** | **Data Warehouses** | **Real-Time Analytics** |
-| South Africa North | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| **Asia Pacific** | **Power BI** | **Datamarts** | **Data Warehouses** | **Real-Time Analytics** |
-| Australia East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| Japan East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| Southeast Asia | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | |:::image type="icon" source="media/yes-icon.svg" border="false"::: |
+| **Americas** | **Power BI** | **Datamarts** | **Data Warehouses** | **Real-Time Analytics** | **Data Factory (pipelines)** | **Data Engineering** |
+|:|::|::|::|::|::|::|
+| Brazil South | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| Canada Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| Central US | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
+| East US | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
+| East US 2 | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
+| South Central US | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| West US 2 | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| West US 3 | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| **Europe** | | | | | | |
+| France Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| Germany West Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | | |
+| North Europe | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
+| UK South | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
+| West Europe | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | |
+| Norway East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| **Middle East** | | | | | | |
+| Qatar Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | | |
+| **Africa** | | | | | | |
+| South Africa North | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| **Asia Pacific** | | | | | | |
+| Australia East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| Japan East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
+| Southeast Asia | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |
### Zone down experience
-During a zone-wide outage, no action is required during zone recovery. Fabric capabilities in regions listed in [supported regions](#supported-regions) self-heal and rebalance automatically to take advantage of the healthy zone.
+During a zone-wide outage, no action is required during zone recovery. Fabric capabilities in regions listed in [supported regions](#supported-regions) self-heal and rebalance automatically to take advantage of the healthy zone. Running Spark jobs might fail if the master node is in the failed zone. In such a case, the jobs need to be resubmitted.
++ >[!IMPORTANT]
search Retrieval Augmented Generation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/retrieval-augmented-generation-overview.md
- ignite-2023 Previously updated : 07/29/2024 Last updated : 08/15/2024 # Retrieval Augmented Generation (RAG) in Azure AI Search
Since you probably know what kind of content you want to search over, consider t
| Content type | Indexed as | Features |
|--------------|------------|----------|
| text | tokens, unaltered text | [Indexers](search-indexer-overview.md) can pull plain text from other Azure resources like Azure Storage and Cosmos DB. You can also [push any JSON content](search-what-is-data-import.md) to an index. To modify text in flight, use [analyzers](search-analyzers.md) and [normalizers](search-normalizers.md) to add lexical processing during indexing. [Synonym maps](search-synonyms.md) are useful if source documents are missing terminology that might be used in a query. |
-| text | vectors <sup>1</sup> | Text can be chunked and vectorized externally and then [indexed as vector fields](vector-search-how-to-create-index.md) in your index. |
+| text | vectors <sup>1</sup> | Text can be chunked and vectorized in an indexer pipeline, or handled externally and then [indexed as vector fields](vector-search-how-to-create-index.md) in your index. |
| image | tokens, unaltered text <sup>2</sup> | [Skills](cognitive-search-working-with-skillsets.md) for OCR and Image Analysis can process images for text recognition or image characteristics. Image information is converted to searchable text and added to the index. Skills have an indexer requirement. |
-| image | vectors <sup>1</sup> | Images can be vectorized externally for a mathematical representation of image content and then [indexed as vector fields](vector-search-how-to-create-index.md) in your index. You can use an open source model like [OpenAI CLIP](https://github.com/openai/CLIP/blob/main/README.md) to vectorize text and images in the same embedding space.|
+| image | vectors <sup>1</sup> | Images can be vectorized in an indexer pipeline, or handled externally for a mathematical representation of image content and then [indexed as vector fields](vector-search-how-to-create-index.md) in your index. You can use [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) or an open source model like [OpenAI CLIP](https://github.com/openai/CLIP/blob/main/README.md) to vectorize text and images in the same embedding space.|
<!-- | audio | vectors <sup>1</sup> | Vectorized audio content can be [indexed as vector fields](vector-search-how-to-create-index.md) in your index. Vectorization of audio content often requires intermediate processing that converts audio to text, and then text to vectors. [Azure AI Speech](/azure/ai-services/speech-service/overview) and [OpenAI Whisper](https://platform.openai.com/docs/guides/speech-to-text) are two examples for this scenario. | | video | vectors <sup>1</sup> | Vectorized video content can be [indexed as vector fields](vector-search-how-to-create-index.md) in your index. Similar to audio, vectorization of video content also requires extra processing, such as breaking up the video into frames or smaller chunks for vectorization. | -->
- <sup>1</sup> The generally available functionality of [vector support](vector-search-overview.md) requires that you call other libraries or models for data chunking and vectorization. However, [integrated vectorization](vector-search-integrated-vectorization.md) embeds these steps. For code samples showing both approaches, see [azure-search-vectors repo](https://github.com/Azure/azure-search-vector-samples).
+ <sup>1</sup> Azure AI Search provides [integrated data chunking and vectorization](vector-search-integrated-vectorization.md), but you must take a dependency on indexers and skillsets. If you can't use an indexer, Microsoft's [Semantic Kernel](/semantic-kernel/overview/) or other community offerings can help you build a full-stack solution. For code samples showing both approaches, see [azure-search-vectors repo](https://github.com/Azure/azure-search-vector-samples).
-<sup>2</sup> [Skills](cognitive-search-working-with-skillsets.md) are built-in support for [AI enrichment](cognitive-search-concept-intro.md). For OCR and Image Analysis, the indexing pipeline makes an internal call to the Azure AI Vision APIs. These skills pass an extracted image to Azure AI for processing, and receive the output as text that's indexed by Azure AI Search.
+<sup>2</sup> [Skills](cognitive-search-working-with-skillsets.md) provide built-in support for [applied AI](cognitive-search-concept-intro.md). For OCR and Image Analysis, the indexing pipeline makes an internal call to the Azure AI Vision APIs. These skills pass an extracted image to Azure AI for processing, and receive the output as text that's indexed by Azure AI Search. Skills are also used for integrated data chunking (Text Split skill) and integrated embedding (skills that call Azure AI Vision multimodal, Azure OpenAI, and models in the Azure AI Studio model catalog).
Vectors provide the best accommodation for dissimilar content (multiple file formats and languages) because content is expressed universally in mathematic representations. Vectors also support similarity search: matching on the coordinates that are most similar to the vector query. Compared to keyword search (or term search) that matches on tokenized terms, similarity search is more nuanced. It's a better choice if there's ambiguity or interpretation requirements in the content or in queries.
Fields appear in search results when the attribute is "retrievable". A field def
Rows are matches to the query, ranked by relevance, similarity, or both. By default, results are capped at the top 50 matches for full text search or k-nearest-neighbor matches for vector search. You can change the defaults to increase or decrease the limit up to the maximum of 1,000 documents. You can also use top and skip paging parameters to retrieve results as a series of paged results.
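
For example, here's a minimal sketch of paging through results with the `top` and `skip` parameters, using the `azure-search-documents` Python library. The endpoint is a placeholder, the hotels-sample-index is the index used in an example later in this article, and `HotelName` is one of its fields.

```python
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient

# Placeholder endpoint and index name; substitute your own.
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="hotels-sample-index",
    credential=DefaultAzureCredential()
)

page_size = 10
for page in range(3):
    # top caps each page at page_size results; skip offsets into the ranked results.
    results = search_client.search(
        search_text="beach access",
        top=page_size,
        skip=page * page_size
    )
    for doc in results:
        print(doc["@search.score"], doc.get("HotelName"))
```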
-### Rank by relevance
+### Maximize relevance and recall
When you're working with complex processes, a large amount of data, and expectations for millisecond responses, it's critical that each step adds value and improves the quality of the end result. On the information retrieval side, *relevance tuning* is an activity that improves the quality of the results sent to the LLM. Only the most relevant or the most similar matching documents should be included in results.
-Relevance applies to keyword (nonvector) search and to hybrid queries (over the nonvector fields). In Azure AI Search, there's no relevance tuning for similarity search and vector queries. [BM25 ranking](index-similarity-and-scoring.md) is the ranking algorithm for full text search.
+Here are some tips for maximizing relevance and recall:
-Relevance tuning is supported through features that enhance BM25 ranking. These approaches include:
++ [Hybrid queries](hybrid-search-how-to-query.md) that combine keyword (nonvector) search and vector search give you maximum recall when the inputs are the same. In a hybrid query, a text string and its vector equivalent generate parallel queries for keywords and similarity search, returning the most relevant matches from each query type in a unified result set.
-+ [Scoring profiles](index-add-scoring-profiles.md) that boost the search score if matches are found in a specific search field or on other criteria.
-+ [Semantic ranking](semantic-ranking.md) that re-ranks a BM25 results set, using semantic models from Bing to reorder results for a better semantic fit to the original query.
++ Hybrid queries can also be expansive. You can run similarity search over verbose chunked content, and keyword search over names, all in the same request.
-In comparison and benchmark testing, hybrid queries with text and vector fields, supplemented with semantic ranking over the BM25-ranked results, produce the most relevant results.
++ Relevance tuning is supported through:
+ + [Scoring profiles](index-add-scoring-profiles.md) that boost the search score if matches are found in a specific search field or on other criteria.
+
+ + [Semantic ranking](semantic-ranking.md) that re-ranks an initial results set, using semantic models from Bing to reorder results for a better semantic fit to the original query.
+
+ + Query parameters for fine-tuning. You can [bump up the importance of vector queries](vector-search-how-to-query.md#vector-weighting) or [adjust the amount of BM25-ranked results](vector-search-how-to-query.md#maxtextsizerecall-for-hybrid-search-preview) in a hybrid query. You can also [set minimum thresholds to exclude low scoring results](vector-search-how-to-query.md#set-thresholds-to-exclude-low-scoring-results-preview) from a vector query.
+
+In comparison and benchmark testing, hybrid queries with text and vector fields, supplemented with semantic ranking, produce the most relevant results.
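+
+As an illustration only, here's a minimal sketch of such a hybrid query with semantic ranking, using the `azure-search-documents` Python library. It assumes an existing `search_client`, a precomputed `query_vector`, a vector field named `embedding`, and a semantic configuration named `default`; adjust these names to match your index.
+
+```python
+from azure.search.documents.models import VectorizedQuery
+
+# Keyword search and vector search run in parallel; the semantic ranker rescores the merged results.
+results = search_client.search(
+    search_text=query,  # keyword (BM25) side of the hybrid query
+    vector_queries=[
+        VectorizedQuery(vector=query_vector, k_nearest_neighbors=50, fields="embedding")
+    ],
+    query_type="semantic",  # enable semantic ranking over the fused results
+    semantic_configuration_name="default",
+    top=5
+)
+
+for doc in results:
+    print(doc["@search.score"], doc.get("@search.reranker_score"))
+```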
### Example code of an Azure AI Search query for RAG scenarios
-The following code is copied from the [retrievethenread.py](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/app/backend/approaches/retrievethenread.py) file from a demo site. It produces `sources_content` for the LLM from hybrid query search results. You can write a simpler query, but this example is inclusive of vector search and keyword search with semantic reranking and spell check. In the demo, this query is used to get initial content.
+The following Python code demonstrates the essential components of a RAG workflow in Azure AI Search. You need to set up the clients, define a system prompt, and provide a query. The prompt tells the LLM to use only the results from the query and specifies how to return the results. For more steps based on this example, see this [RAG quickstart](search-get-started-rag.md).
```python
-# Use semantic ranker if requested and if retrieval mode is text or hybrid (vectors + text)
-if overrides.get("semantic_ranker") and has_text:
- r = await self.search_client.search(query_text,
- filter=filter,
- query_type=QueryType.SEMANTIC,
- query_language="en-us",
- query_speller="lexicon",
- semantic_configuration_name="default",
- top=top,
- query_caption="extractive|highlight-false" if use_semantic_captions else None,
- vector=query_vector,
- top_k=50 if query_vector else None,
- vector_fields="embedding" if query_vector else None)
-else:
- r = await self.search_client.search(query_text,
- filter=filter,
- top=top,
- vector=query_vector,
- top_k=50 if query_vector else None,
- vector_fields="embedding" if query_vector else None)
-if use_semantic_captions:
- results = [doc[self.sourcepage_field] + ": " + nonewlines(" . ".join([c.text for c in doc['@search.captions']])) async for doc in r]
-else:
- results = [doc[self.sourcepage_field] + ": " + nonewlines(doc[self.content_field]) async for doc in r]
-content = "\n".join(results)
+# Set up the query for generating responses
+from azure.identity import DefaultAzureCredential
+from azure.identity import get_bearer_token_provider
+from azure.search.documents import SearchClient
+from openai import AzureOpenAI
+
+# These endpoint values are placeholders; replace them with your own resource URLs before running.
+AZURE_OPENAI_ACCOUNT = "https://<your-openai-resource>.openai.azure.com"
+AZURE_SEARCH_SERVICE = "https://<your-search-service>.search.windows.net"
+
+credential = DefaultAzureCredential()
+token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")
+openai_client = AzureOpenAI(
+ api_version="2024-06-01",
+ azure_endpoint=AZURE_OPENAI_ACCOUNT,
+ azure_ad_token_provider=token_provider
+)
+
+search_client = SearchClient(
+ endpoint=AZURE_SEARCH_SERVICE,
+ index_name="hotels-sample-index",
+ credential=credential
+)
+
+# This prompt provides instructions to the model
+GROUNDED_PROMPT="""
+You are a friendly assistant that recommends hotels based on activities and amenities.
+Answer the query using only the sources provided below in a friendly and concise bulleted manner.
+Answer ONLY with the facts listed in the list of sources below.
+If there isn't enough information below, say you don't know.
+Do not generate answers that don't use the sources below.
+Query: {query}
+Sources:\n{sources}
+"""
+
+# Query is the question being asked
+query="Can you recommend a few hotels near the ocean with beach access and good views"
+
+# Retrieve the selected fields from the search index related to the question
+search_results = search_client.search(
+ search_text=query,
+ top=5,
+ select="Description,HotelName,Tags"
+)
+sources_formatted = "\n".join([f'{document["HotelName"]}:{document["Description"]}:{document["Tags"]}' for document in search_results])
+
+response = openai_client.chat.completions.create(
+ messages=[
+ {
+ "role": "user",
+ "content": GROUNDED_PROMPT.format(query=query, sources=sources_formatted)
+ }
+ ],
+ model="gpt-4o"
+)
+
+print(response.choices[0].message.content)
``` ## Integration code and LLMs
-A RAG solution that includes Azure AI Search requires other components and code to create a complete solution. Whereas the previous sections covered information retrieval through Azure AI Search and which features are used to create and query searchable content, this section introduces LLM integration and interaction.
+A RAG solution that includes Azure AI Search can use [built-in data chunking and vectorization capabilities](vector-search-integrated-vectorization.md), or you can build your own chunking and vectorization pipeline using platforms like Semantic Kernel, LangChain, or LlamaIndex.
-Notebooks in the demo repositories are a great starting point because they show patterns for passing search results to an LLM. Most of the code in a RAG solution consists of calls to the LLM so you need to develop an understanding of how those APIs work, which is outside the scope of this article.
-
-The following cell block in the [chat-read-retrieve-read.ipynb](https://github.com/Azure-Samples/openai/blob/main/End_to_end_Solutions/AOAISearchDemo/notebooks/chat-read-retrieve-read.ipynb) notebook shows search calls in the context of a chat session:
-
-```python
-# Execute this cell multiple times updating user_input to accumulate chat history
-user_input = "Does my plan cover annual eye exams?"
-
-# Exclude category, to simulate scenarios where there's a set of docs you can't see
-exclude_category = None
-
-if len(history) > 0:
- completion = openai.Completion.create(
- engine=AZURE_OPENAI_GPT_DEPLOYMENT,
- prompt=summary_prompt_template.format(summary="\n".join(history), question=user_input),
- temperature=0.7,
- max_tokens=32,
- stop=["\n"])
- search = completion.choices[0].text
-else:
- search = user_input
-
-# Alternatively simply use search_client.search(q, top=3) if not using semantic search
-print("Searching:", search)
-print("-")
-filter = "category ne '{}'".format(exclude_category.replace("'", "''")) if exclude_category else None
-r = search_client.search(search,
- filter=filter,
- query_type=QueryType.SEMANTIC,
- query_language="en-us",
- query_speller="lexicon",
- semantic_configuration_name="default",
- top=3)
-results = [doc[KB_FIELDS_SOURCEPAGE] + ": " + doc[KB_FIELDS_CONTENT].replace("\n", "").replace("\r", "") for doc in r]
-content = "\n".join(results)
-
-prompt = prompt_prefix.format(sources=content) + prompt_history + user_input + turn_suffix
-
-completion = openai.Completion.create(
- engine=AZURE_OPENAI_CHATGPT_DEPLOYMENT,
- prompt=prompt,
- temperature=0.7,
- max_tokens=1024,
- stop=["<|im_end|>", "<|im_start|>"])
-
-prompt_history += user_input + turn_suffix + completion.choices[0].text + "\n<|im_end|>" + turn_prefix
-history.append("user: " + user_input)
-history.append("assistant: " + completion.choices[0].text)
-
-print("\n-\n".join(history))
-print("\n-\nPrompt:\n" + prompt)
-```
+[Notebooks in the demo repository](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/community-integration) are a great starting point because they show patterns for LLM integration. Much of the code in a RAG solution consists of calls to the LLM, so you need to develop an understanding of how those APIs work, which is outside the scope of this article.
## How to get started
print("\n-\nPrompt:\n" + prompt)
+ [Use Azure OpenAI Studio and "bring your own data"](/azure/ai-services/openai/concepts/use-your-data) to experiment with prompts on an existing search index in a playground. This step helps you decide what model to use, and shows you how well your existing index works in a RAG scenario.
-+ [Try this quickstart](search-get-started-rag.md) for a demonstration of query integration with chat models over a search index.
++ [Try this RAG quickstart](search-get-started-rag.md) for a demonstration of query integration with chat models over a search index.
+ Start with solution accelerators:
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
A second service isn't required for high availability. High availability for que
Azure AI Search restricts the [number of search services](search-limits-quotas-capacity.md#subscription-limits) you can initially create in a subscription. If you exhaust your maximum limit, you can request more quota. You must have Owner or Contributor permissions on the subscription to request quota.
+Depending on region and datacenter capacity, you can automatically request more quota to add services to your subscription. If the request fails, you should either decrease the number or file a support ticket. For a large increase in quota, such as more than 30 extra services, you should expect a one-month turnaround.
-Maximum quota for a given tier and region combination is an extra 100 search services over the baseline quota (which means 106, 108, or 116 [depending on the tier](search-limits-quotas-capacity.md#subscription-limits)). For more than 100, file a support ticket. You can't increase quota for the Free tier.
1. Sign in to the Azure portal, search for "quotas" in your dashboard, and then select the **Quotas** service.
sentinel Api Dcr Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/api-dcr-reference.md
https://management.azure.com/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/
} ```
+## Custom logs from text files
+
+The following examples are for DCRs using the AMA to collect custom logs from text files.
+
+### Custom text logs DCR
+
+These examples show the API request body and response for creating a DCR.
+
+#### Custom text logs DCR creation request body
+
+The following is an example of a DCR creation request for a custom log text file. Replace *`{PLACEHOLDER_VALUES}`* with actual values.
+
+The `outputStream` parameter is required only if the transform changes the schema of the stream.
+
+```json
+{
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": "{DCR_NAME}",
+ "location": "{WORKSPACE_LOCATION}",
+ "apiVersion": "2022-06-01",
+ "properties": {
+ "streamDeclarations": {
+ "Custom-Text-{TABLE_NAME}": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "dataSources": {
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-Text-{TABLE_NAME}"
+ ],
+ "filePatterns": [
+ "{LOCAL_PATH_FILE_1}","{LOCAL_PATH_FILE_2}"
+ ],
+ "format": "text",
+ "name": "Custom-Text-{TABLE_NAME}"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "{WORKSPACE_RESOURCE_PATH}",
+ "workspaceId": "{WORKSPACE_ID}",
+ "name": "DataCollectionEvent"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-Text-{TABLE_NAME}"
+ ],
+ "destinations": [
+ "DataCollectionEvent"
+ ],
+ "transformKql": "source",
+ "outputStream": "Custom-{TABLE_NAME}"
+ }
+ ]
+ }
+}
+```
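+
+One way to submit this request is a `PUT` call to the Azure Resource Manager endpoint for data collection rules. The following sketch assumes the definition above is saved as `dcr_body.json` and uses the `azure-identity` and `requests` Python packages; the subscription, resource group, and DCR name are placeholders.
+
+```python
+import json
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholder values; replace with your own.
+subscription_id = "<subscription-id>"
+resource_group = "<resource-group>"
+dcr_name = "<DCR_NAME>"
+
+url = (
+    f"https://management.azure.com/subscriptions/{subscription_id}"
+    f"/resourceGroups/{resource_group}/providers/Microsoft.Insights"
+    f"/dataCollectionRules/{dcr_name}?api-version=2022-06-01"
+)
+
+# The PUT body needs the location and properties from the saved definition.
+with open("dcr_body.json") as f:
+    dcr = json.load(f)
+body = {"location": dcr["location"], "properties": dcr["properties"]}
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
+response.raise_for_status()
+print(response.json()["properties"]["provisioningState"])
+```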
+
+#### Custom text logs DCR creation response
+
+```json
+{
+ "properties": {
+ "immutableId": "dcr-00112233445566778899aabbccddeeff",
+ "dataCollectionEndpointId": "/subscriptions/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb/resourceGroups/Contoso-RG-1/providers/Microsoft.Insights/dataCollectionEndpoints/Microsoft-Sentinel-aaaabbbbccccddddeeeefff",
+ "streamDeclarations": {
+ "Custom-Text-ApacheHTTPServer_CL": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "dataSources": {
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-Text-ApacheHTTPServer_CL"
+ ],
+ "filePatterns": [
+ "C:\\Server\\bin\\log\\Apache24\\logs\\*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "Custom-Text-ApacheHTTPServer_CL"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb/resourceGroups/contoso-rg-1/providers/Microsoft.OperationalInsights/workspaces/CyberSOC",
+ "workspaceId": "cccccccc-3333-4444-5555-dddddddddddd",
+ "name": "DataCollectionEvent"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-Text-ApacheHTTPServer_CL"
+ ],
+ "destinations": [
+ "DataCollectionEvent"
+ ],
+ "transformKql": "source",
+ "outputStream": "Custom-ApacheHTTPServer_CL"
+ }
+ ],
+ "provisioningState": "Succeeded"
+ },
+ "location": "centralus",
+ "tags": {
+ "createdBy": "Sentinel"
+ },
+ "id": "/subscriptions/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb/resourceGroups/Contoso-RG-1/providers/Microsoft.Insights/dataCollectionRules/DCR-CustomLogs-01",
+ "name": "DCR-CustomLogs-01",
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "etag": "\"00000000-1111-2222-3333-444444444444\"",
+ "systemData": {
+ "createdBy": "gbarnes@contoso.com",
+ "createdByType": "User",
+ "createdAt": "2024-08-12T09:29:15.1083961Z",
+ "lastModifiedBy": "gbarnes@contoso.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2024-08-12T09:29:15.1083961Z"
+ }
+}
+```
sentinel Connect Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-active-directory.md
You can use Microsoft Sentinel's built-in connector to collect data from [Micros
- Your user must be assigned the [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) role on the workspace. -- Your user must be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) or [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) roles on the tenant you want to stream the logs from.
+- Your user must have the [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) role on the tenant you want to stream the logs from, or the equivalent permissions.
- Your user must have read and write permissions to the Microsoft Entra diagnostic settings in order to be able to see the connection status. - Install the solution for **Microsoft Entra ID** from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
sentinel Connect Cef Syslog Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-syslog-ama.md
Title: Ingest syslog CEF messages to Microsoft Sentinel - AMA
-description: Ingest syslog messages from linux machines, devices, and appliances to Microsoft Sentinel using data connectors based on the Azure Monitor Agent (AMA).
+ Title: Ingest syslog and CEF messages to Microsoft Sentinel - AMA
+description: Ingest syslog messages from Linux machines and from network and security devices and appliances to Microsoft Sentinel, using data connectors based on the Azure Monitor Agent (AMA).
This article describes how to use the **Syslog via AMA** and **Common Event Form
## Prerequisites
-Before you begin, you must have the resources configured and the appropriate permissions described in this section.
+Before you begin, you must have the resources configured and the appropriate permissions assigned, as described in this section.
### Microsoft Sentinel prerequisites Install the appropriate Microsoft Sentinel solution and make sure you have the permissions to complete the steps in this article. - Install the appropriate solution from the **Content hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).+ - Identify which data connector the Microsoft Sentinel solution requires &mdash; **Syslog via AMA** or **Common Event Format (CEF) via AMA** and whether you need to install the **Syslog** or **Common Event Format** solution. To fulfill this prerequisite, + - In the **Content hub**, select **Manage** on the installed solution and review the data connector listed. + - If either **Syslog via AMA** or **Common Event Format (CEF) via AMA** isn't installed with the solution, identify whether you need to install the **Syslog** or **Common Event Format** solution by finding your appliance or device from one of the following articles: - [CEF via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-cef-device.md) - [Syslog via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-syslog-device.md) Then install either the **Syslog** or **Common Event Format** solution from the content hub to get the related AMA data connector.+ - Have an Azure account with the following Azure role-based access control (Azure RBAC) roles: | Built-in role | Scope | Reason |
If your devices are sending syslog and CEF logs over TLS because, for example, y
The setup process for the Syslog via AMA or Common Event Format (CEF) via AMA data connectors includes the following steps: 1. Install the Azure Monitor Agent and create a Data Collection Rule (DCR) by using either of the following methods:
- - [Azure or Defender portal](?tabs=syslog%2Cportal#create-data-collection-rule)
+ - [Azure or Defender portal](?tabs=syslog%2Cportal#create-data-collection-rule-dcr)
- [Azure Monitor Logs Ingestion API](?tabs=syslog%2Capi#install-the-azure-monitor-agent) 1. If you're collecting logs from other machines using a log forwarder, [**run the "installation" script**](#run-the-installation-script) on the log forwarder to configure the syslog daemon to listen for messages from other machines, and to open the necessary local ports.
Select the appropriate tab for instructions.
# [Azure or Defender portal](#tab/portal)
-### Create data collection rule
+### Create data collection rule (DCR)
-To get started, open either the **Syslog via AMA** or **Common Event Format (CEF) via AMA** data connector in Microsoft Sentinel and create a data connector rule.
+To get started, open either the **Syslog via AMA** or **Common Event Format (CEF) via AMA** data connector in Microsoft Sentinel and create a data collection rule (DCR).
1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Data connectors**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Data connectors**.
sentinel Connect Custom Logs Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-custom-logs-ama.md
+
+ Title: Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel - AMA
+description: Collect text file-based logs from network or security applications installed on Windows- or Linux-based machines, using the Custom Logs via AMA data connector based on the Azure Monitor Agent (AMA).
++++ Last updated : 08/06/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#Customer intent: As a security operator, I want to ingest and filter text file-based logs from network or security applications installed on Windows- or Linux-based machines to my Microsoft Sentinel workspace, so that security analysts can monitor activity on these systems and detect security threats.
++
+# Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel
+
+This article describes how to use the **Custom Logs via AMA** connector to quickly filter and ingest logs in text-file format from network or security applications installed on Windows or Linux machines.
+
+Many applications log data to text files instead of standard logging services like Windows Event log or Syslog. You can use the Azure Monitor Agent (AMA) to collect data in text files of nonstandard formats from both Windows and Linux computers. The AMA can also apply transformations to the data at the time of collection to parse it into different fields.
+
+For more information about the applications for which Microsoft Sentinel has solutions to support log collection, see [Custom Logs via AMA data connector - Configure data ingestion to Microsoft Sentinel from specific applications](unified-connector-custom-device.md).
+
+For more general information about ingesting custom logs from text files, see [Collect logs from a text file with Azure Monitor Agent](../azure-monitor/agents/data-collection-log-text.md).
+
+> [!IMPORTANT]
+> - The **Custom Logs via AMA** data connector is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> - [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
+
+## Prerequisites
+
+Before you begin, you must have the resources configured and the appropriate permissions assigned, as described in this section.
+
+### Microsoft Sentinel prerequisites
+
+- Install the Microsoft Sentinel solution that matches your application and make sure you have the permissions to complete the steps in this article. You can find these solutions in the **Content hub** in Microsoft Sentinel, and they all include the **Custom Logs via AMA** connector.
+
+ For the list of applications that have solutions in the content hub, see [Specific instructions per application](unified-connector-custom-device.md#specific-instructions-per-application-type). If there isn't a solution available for your application, install the **Custom Logs via AMA** solution.
+
+ For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
+
+- Have an Azure account with the following Azure role-based access control (Azure RBAC) roles:
+
+ | Built-in role | Scope | Reason |
+ | - | -- | |
+ | - [Virtual Machine Contributor](../role-based-access-control/built-in-roles/compute.md#virtual-machine-contributor)<br>- [Azure Connected Machine<br>&nbsp;&nbsp;&nbsp;Resource Administrator](../role-based-access-control/built-in-roles/management-and-governance.md#azure-connected-machine-resource-administrator) | <li>Virtual machines (VM)<li>Virtual Machine Scale Sets<li>Azure Arc-enabled servers | To deploy the agent |
+ | Any role that includes the action<br>*Microsoft.Resources/deployments/\** | <li>Subscription<li>Resource group<li>Existing data collection rule | To deploy Azure Resource Manager templates |
+ | [Monitoring Contributor](../role-based-access-control/built-in-roles/monitor.md#monitoring-contributor) | <li>Subscription<li>Resource group<li>Existing data collection rule | To create or edit data collection rules |
+
+### Log forwarder prerequisites
+
+Some custom applications are hosted on closed appliances that require their logs to be sent to an external log collector or forwarder. In that scenario, the following prerequisites apply to the log forwarder:
+
+- You must have a designated Linux VM as a log forwarder to collect logs.
+ - [Create a Linux VM in the Azure portal](../virtual-machines/linux/quick-create-portal.md).
+ - [Supported Linux operating systems for Azure Monitor Agent](../azure-monitor/agents/agents-overview.md#linux).
+
+- If your log forwarder *isn't* an Azure virtual machine, it must have the Azure Arc [Connected Machine agent](../azure-arc/servers/overview.md) installed on it.
+
+- The Linux log forwarder VM must have Python 2.7 or 3 installed. Use the ``python --version`` or ``python3 --version`` command to check. If you're using Python 3, make sure it's set as the default command on the machine, or run scripts with the `python3` command instead of `python`.
+
+- The log forwarder must have either the `syslog-ng` or `rsyslog` daemon enabled.
+
+- For space requirements for your log forwarder, refer to the [Azure Monitor Agent Performance Benchmark](../azure-monitor/agents/azure-monitor-agent-performance.md). You can also review [this blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/designs-for-accomplishing-microsoft-sentinel-scalable-ingestion/ba-p/3741516), which includes designs for scalable ingestion.
+
+- Your log sources, security devices, and appliances must be configured to send their log messages to the log forwarder's syslog daemon instead of to their local syslog daemon.
+
+#### Machine security prerequisites
+
+Configure the log forwarder machine's security according to your organization's security policy. For example, configure your network to align with your corporate network security policy and change the ports and protocols in the daemon to align with your requirements. To improve your machine security configuration, [secure your VM in Azure](../virtual-machines/security-policy.md), or review these [best practices for network security](../security/fundamentals/network-best-practices.md).
+
+If your devices are sending logs over TLS because, for example, your log forwarder is in the cloud, you need to configure the syslog daemon (`rsyslog` or `syslog-ng`) to communicate in TLS. For more information, see:
+
+- [Encrypt Syslog traffic with TLS – rsyslog](https://www.rsyslog.com/doc/v8-stable/tutorials/tls_cert_summary.html)
+- [Encrypt log messages with TLS – syslog-ng](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.22/administration-guide/60#TOPIC-1209298)
+
+## Configure the data connector
+
+The setup process for the Custom Logs via AMA data connector includes the following steps:
+
+1. Create the destination table in Log Analytics (or Advanced Hunting if you're in the Defender portal).
+
+ The table's name must end with `_CL` and it must consist of only the following two fields:
+ - **TimeGenerated** (of type *DateTime*): the timestamp of the creation of the log message.
+ - **RawData** (of type *String*): the log message in its entirety.
+ (If you're collecting logs from a log forwarder and not directly from the device hosting the application, name this field **Message** instead of **RawData**.) For one way to create this table programmatically, see the sketch after these steps.
+
+1. Install the Azure Monitor Agent and create a Data Collection Rule (DCR) by using either of the following methods:
+ - [Azure or Defender portal](?tabs=portal#create-data-collection-rule-dcr)
+ - [Azure Resource Manager template](?tabs=arm#install-the-azure-monitor-agent)
+
+1. If you're collecting logs using a log forwarder, configure the syslog daemon on that machine to listen for messages from other sources, and open the required local ports. For details, see [Configure the log forwarder to accept logs](#configure-the-log-forwarder-to-accept-logs).
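+
+For step 1, here's a minimal sketch of creating the destination table programmatically with the Log Analytics Tables REST API. It assumes the `azure-identity` and `requests` Python packages and a hypothetical table named `MyApplication_CL`; substitute your own subscription, resource group, and workspace.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholder values; replace with your own.
+subscription_id = "<subscription-id>"
+resource_group = "<resource-group>"
+workspace_name = "<workspace-name>"
+table_name = "MyApplication_CL"  # hypothetical name; must end with _CL
+
+url = (
+    f"https://management.azure.com/subscriptions/{subscription_id}"
+    f"/resourceGroups/{resource_group}/providers/Microsoft.OperationalInsights"
+    f"/workspaces/{workspace_name}/tables/{table_name}?api-version=2022-10-01"
+)
+
+# Only TimeGenerated and RawData; use Message instead of RawData when collecting through a log forwarder.
+body = {
+    "properties": {
+        "schema": {
+            "name": table_name,
+            "columns": [
+                {"name": "TimeGenerated", "type": "datetime"},
+                {"name": "RawData", "type": "string"}
+            ]
+        }
+    }
+}
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
+response.raise_for_status()
+```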
+
+Select the appropriate tab for instructions.
+
+# [Azure or Defender portal](#tab/portal)
+
+### Create data collection rule (DCR)
+
+To get started, open the **Custom Logs via AMA** data connector in Microsoft Sentinel and create a data collection rule (DCR).
+
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Data connectors**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Data connectors**.
+
+1. Type *custom* in the **Search** box. From the results, select the **Custom Logs via AMA** connector.
+
+1. Select **Open connector page** on the details pane.
+
+ :::image type="content" source="media/connect-custom-logs-ama/custom-logs-connector-open.png" alt-text="Screenshot of custom logs AMA connector in gallery." lightbox="media/connect-custom-logs-ama/custom-logs-connector-open.png":::
+
+1. In the **Configuration** area, select **+Create data collection rule**.
+
+ :::image type="content" source="media/connect-custom-logs-ama/custom-logs-connector-page-create-dcr.png" alt-text="Screenshot showing the Custom Logs via AMA connector page." lightbox="media/connect-custom-logs-ama/custom-logs-connector-page-create-dcr.png":::
+
+1. In the **Basic** tab:
+ - Type a DCR name.
+ - Select your subscription.
+ - Select the resource group where you want to locate your DCR.
+
+ :::image type="content" source="media/connect-cef-ama/dcr-basics-tab.png" alt-text="Screenshot showing the DCR details in the Basic tab." lightbox="media/connect-cef-ama/dcr-basics-tab.png":::
+
+1. Select **Next: Resources >**.
+
+### Define VM resources
+
+In the **Resources** tab, select the machines from which you want to collect the logs. These are either the machines on which your application is installed, or your log forwarder machines. If the machine you're looking for doesn't appear in the list, it might not be an Azure VM or an Azure Arc-enabled server with the Azure Connected Machine agent installed.
+
+1. Use the available filters or search box to find the machine you're looking for. Expand a subscription in the list to see its resource groups, and a resource group to see its VMs.
+
+1. Select the machine that you want to collect logs from. The check box appears next to the VM name when you hover over it.
+
+ :::image type="content" source="media/connect-cef-ama/dcr-select-resources.png" alt-text="Screenshot showing how to select resources when setting up the DCR." lightbox="media/connect-cef-ama/dcr-select-resources.png":::
+
+ If the machines you selected don't already have the Azure Monitor Agent installed on them, the agent is installed when the DCR is created and deployed.
+
+1. Review your changes and select **Next: Collect >**.
+
+### Configure the DCR for your application
+
+1. In the **Collect** tab, select your application or device type from the **Select device type (optional)** drop-down box, or leave it as **Custom new table** if your application or device isn't listed.
+
+1. If you chose one of the listed applications or devices, the **Table name** field is automatically populated with the right table name. If you chose **Custom new table**, enter a table name under **Table name**. The name must end with the `_CL` suffix.
+
+1. In the **File pattern** field, enter the path and file name of the text log files to be collected. To find the default file names and paths for each application or device type, see [Specific instructions per application type](unified-connector-custom-device.md#specific-instructions-per-application-type). You don't have to use the default file names or paths, and you can use wildcards in the file name.
+
+1. In the **Transform** field, if you chose a custom new table in step 1, enter a Kusto query that applies a transformation of your choice to the data.
+
+ If you chose one of the listed applications or devices in step 1, this field is automatically populated with the proper transformation. DO NOT edit the transformation that appears there. Depending on the chosen type, this value should be one of the following:
+ - `source` (the default&mdash;no transformation)
+ - `source | project-rename Message=RawData` (for devices that send logs to a forwarder)
+
+1. Review your selections and select **Next: Review + create**.
+
+### Review and create the rule
+
+After you complete all the tabs, review what you entered and create the data collection rule.
+
+1. In the **Review and create** tab, select **Create**.
+
+ :::image type="content" source="media/connect-cef-ama/dcr-review-create.png" alt-text="Screenshot showing how to review the configuration of the DCR and create it.":::
+
+ The connector installs the Azure Monitor Agent on the machines you selected when creating your DCR.
+
+1. Check the notifications in the Azure portal or Microsoft Defender portal to see when the DCR is created and the agent is installed.
+
+1. Select **Refresh** on the connector page to see the DCR displayed in the list.
+
+# [Resource Manager template](#tab/arm)
+
+### Install the Azure Monitor Agent
+
+Follow the appropriate instructions from the Azure Monitor documentation to install the Azure Monitor Agent on the machine hosting your application, or on your log forwarder. Use the instructions for Windows or for Linux, as appropriate.
+- [Install the AMA using PowerShell](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=azure-powershell)
+- [Install the AMA using the Azure CLI](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=azure-cli)
+- [Install the AMA using an Azure Resource Manager template](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=azure-resource-manager)
+
+Create Data Collection Rules (DCRs) using the [Azure Monitor Logs Ingestion API](/rest/api/monitor/data-collection-rules). For more information, see [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md).
+
+### Create the data collection rule
+
+Use the following ARM template to create or modify a DCR for collecting text log files:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": "{DCR_NAME}",
+ "location": "{DCR_LOCATION}",
+ "apiVersion": "2022-06-01",
+ "properties": {
+ "streamDeclarations": {
+ "Custom-Text-{TABLE_NAME}": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "dataSources": {
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-Text-{TABLE_NAME}"
+ ],
+ "filePatterns": [
+ "{LOCAL_PATH_FILE_1}","{LOCAL_PATH_FILE_2}"
+ ],
+ "format": "text",
+ "name": "Custom-Text-{TABLE_NAME}"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "{WORKSPACE_RESOURCE_PATH}",
+ "workspaceId": "{WORKSPACE_ID}",
+ "name": "workspace"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-Text-{TABLE_NAME}"
+ ],
+ "destinations": [
+ "DataCollectionEvent"
+ ],
+ "transformKql": "source",
+ "outputStream": "Custom-{TABLE_NAME}"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+Replace the placeholder values with the following values:
+
+| Placeholder | Value |
+| -- | -- |
+| {DCR_NAME} | The name you choose for your Data Collection Rule. It must be unique within your workspace. |
+| {DCR_LOCATION} | The region where the resource group containing the DCR is located. |
+| {TABLE_NAME} | The name of the destination table in Log Analytics. Must end with `_CL`. |
+| {LOCAL_PATH_FILE_1}&nbsp;*(required)*,<br>{LOCAL_PATH_FILE_2} *(optional)* | Paths and file names of the text files containing the logs you want to collect. These must be on the machine where the Azure Monitor Agent is installed. |
+| {WORKSPACE_RESOURCE_PATH} | The Azure resource path of your Microsoft Sentinel workspace. |
+| {WORKSPACE_ID} | The GUID of your Microsoft Sentinel workspace. |
+
+
+### Associate the DCR with the Azure Monitor Agent
+
+If you create the DCR using an ARM template, you still must associate the DCR with the agents that will use it. You can edit the DCR in the Azure portal and select the agents as described in [Define VM resources](#define-vm-resources).
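+
+Alternatively, here's a minimal sketch of creating the association with the Data Collection Rule Associations REST API, assuming the `azure-identity` and `requests` Python packages; the resource IDs and the association name are placeholders.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholder resource IDs; replace with your VM (or Arc-enabled server) and DCR.
+vm_resource_id = (
+    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
+)
+dcr_resource_id = (
+    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+    "/providers/Microsoft.Insights/dataCollectionRules/<DCR_NAME>"
+)
+association_name = "custom-text-logs-dcra"  # hypothetical name
+
+url = (
+    f"https://management.azure.com{vm_resource_id}"
+    f"/providers/Microsoft.Insights/dataCollectionRuleAssociations/{association_name}"
+    "?api-version=2022-06-01"
+)
+body = {"properties": {"dataCollectionRuleId": dcr_resource_id}}
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
+response.raise_for_status()
+```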
+++
+## Configure the log forwarder to accept logs
+
+If you're collecting logs from an appliance using a log forwarder, configure the syslog daemon on the log forwarder to listen for messages from other machines, and open the necessary local ports.
+
+1. Copy the following command line:
+
+ ```bash
+ sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py && sudo python Forwarder_AMA_installer.py
+ ```
+
+1. Sign in to the log forwarder machine where you just installed the AMA.
+
+1. Paste the command you copied in the last step to launch the installation script.
+ The script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon. The script opens port 514 to listen to incoming messages in both UDP and TCP protocols. To change this setting, refer to the syslog daemon configuration file according to the daemon type running on the machine:
+ - Rsyslog: `/etc/rsyslog.conf`
+ - Syslog-ng: `/etc/syslog-ng/syslog-ng.conf`
+
+ If you're using Python 3, and it's not set as the default command on the machine, substitute `python3` for `python` in the pasted command. See [Log forwarder prerequisites](#log-forwarder-prerequisites).
+
+ > [!NOTE]
+ > To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed AMA.
+ > For more information, see [RSyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [Syslog-ng](https://syslog-ng.github.io/).
+
+## Configure the security device or appliance
+
+For specific instructions to configure your security application or appliance, see [Custom Logs via AMA data connector - Configure data ingestion to Microsoft Sentinel from specific applications](unified-connector-custom-device.md).
+
+Contact the solution provider for more information, or when information is unavailable for your appliance or device.
+
+## Related content
+
+- [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md)
+- [Collect logs from a text file with Azure Monitor Agent](../azure-monitor/agents/data-collection-log-text.md)
sentinel Connect Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-custom-logs.md
# Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent
-This article describes how to collect data from devices that use custom log formats to Microsoft Sentinel using the **Log Analytics agent**. To learn how to ingest custom logs **using the Azure Monitor Agent (AMA)**, see [Collect logs from a text file with Azure Monitor Agent](../azure-monitor/agents/data-collection-log-text.md) in the Azure Monitor documentation.
+This article describes how to collect data from devices that use custom log formats to Microsoft Sentinel using the **Log Analytics agent**. To learn how to ingest custom logs **using the Azure Monitor Agent (AMA)**, see [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md).
Many applications log data to text files instead of standard logging services like Windows Event log or Syslog. You can use the Log Analytics agent to collect data in text files of nonstandard formats from both Windows and Linux computers. Once collected, you can either parse the data into individual fields in your queries or extract the data during collection to individual fields.
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-365-defender.md
The Defender XDR connector, especially its incident integration feature, is the
Before you begin, you must have the appropriate licensing, access, and configured resources described in this section. - You must have a valid license for Microsoft Defender XDR, as described in [Microsoft Defender XDR prerequisites](/microsoft-365/security/mtp/prerequisites).-- Your user account must be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) or [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) roles on the tenant you want to stream the logs from.
+- Your user must have the [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) role on the tenant you want to stream the logs from, or the equivalent permissions.
- You must have read and write permissions on your Microsoft Sentinel workspace. - To make any changes to the connector settings, your account must be a member of the same Microsoft Entra tenant with which your Microsoft Sentinel workspace is associated. - Install the solution for **Microsoft Defender XDR** from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
sentinel Connect Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-purview.md
Before you begin, verify that you have:
- A defined Microsoft Sentinel workspace. - A valid license to M365 E3, M365 A3, Microsoft Business Basic or any other Audit eligible license. Read more about [auditing solutions in Microsoft Purview](/microsoft-365/compliance/audit-solutions-overview). - [Enabled Sensitivity labels for Office](/microsoft-365/compliance/sensitivity-labels-sharepoint-onedrive-files?view=o365-worldwide#use-the-microsoft-purview-compliance-portal-to-enable-support-for-sensitivity-labels&preserve-view=true) and [enabled auditing](/microsoft-365/compliance/turn-audit-log-search-on-or-off?view=o365-worldwide#use-the-compliance-center-to-turn-on-auditing&preserve-view=true).-- The Global Administrator or Security Administrator role on the workspace.
+- The Security Administrator role on the tenant, or the equivalent permissions.
## Set up the connector
sentinel Connect Services Api Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-services-api-based.md
This article presents information that is common to the group of API-based data
## Prerequisites - You must have read and write permissions on the Log Analytics workspace.-- You must have the Global administrator or Security administrator role on your Microsoft Sentinel workspace's tenant.
+- You must have a Security administrator role on your Microsoft Sentinel workspace's tenant, or the equivalent permissions.
- Data connector specific requirements: |Data connector |Licensing, costs, and other prerequisites |
sentinel Connect Threat Intelligence Tip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-tip.md
Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Mic
## Prerequisites - In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.-- You must have either the **Global administrator** or **Security administrator** Microsoft Entra roles in order to grant permissions to your TIP product or to any other custom application that uses direct integration with the Microsoft Graph Security tiIndicators API.
+- To grant permissions to your TIP product or any other custom application that uses direct integration with the Microsoft Graph TI Indicators API, you must have the **Security administrator** Microsoft Entra role, or the equivalent permissions.
- You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators. ## Instructions
You can get this information from your Microsoft Entra ID through a process call
#### Get consent from your organization to grant these permissions
-1. To get consent, you need a Microsoft Entra Global Administrator to select the **Grant admin consent for your tenant** button on your app's **API permissions** page. If you do not have the Global Administrator role on your account, this button will not be available, and you will need to ask a Global Administrator from your organization to perform this step.
+1. To grant consent, a privileged role is required. For more information, see [Grant tenant-wide admin consent to an application](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal).
:::image type="content" source="media/connect-threat-intelligence-tip/threat-intel-api-permissions-2.png" alt-text="Grant consent":::
sentinel Enable Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-entity-behavior-analytics.md
As Microsoft Sentinel collects logs and alerts from all of its connected data so
To enable or disable this feature (these prerequisites are not required to use the feature): -- Your user must be assigned the Microsoft Entra ID **Global Administrator** or **Security Administrator** roles in your tenant.
+- Your user must be assigned the Microsoft Entra ID **Security Administrator** role in your tenant, or have the equivalent permissions.
- Your user must be assigned at least one of the following **Azure roles** ([Learn more about Azure RBAC](roles.md)): - **Microsoft Sentinel Contributor** at the workspace or resource group levels.
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
After understanding how roles and permissions work in Microsoft Sentinel, you ca
| | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run and modify playbooks. | | **Service Principal** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) | Microsoft Sentinel's resource group | Automated configuration for management tasks |
-More roles might be required depending on the data you ingest or monitor. For example, Microsoft Entra roles might be required, such as the Global Administrator or Security Administrator roles, to set up data connectors for services in other Microsoft portals.
+More roles might be required depending on the data you ingest or monitor. For example, Microsoft Entra roles might be required, such as the Security Administrator role, to set up data connectors for services in other Microsoft portals.
## Resource-based access control
sentinel Unified Connector Cef Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/unified-connector-cef-device.md
Last updated 06/27/2024
# CEF via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion
-Log collection from many security appliances and devices are supported by the **Common Event Format (CEF) via AMA** data connector in Microsoft Sentinel. This article lists provider supplied installation instructions for specific security appliances and devices that use this data connector. Contact the provider for updates, more information, or where information is unavailable for your security appliance or device.
+Log collection from many security appliances and devices is supported by the **Common Event Format (CEF) via AMA** data connector in Microsoft Sentinel. This article lists provider-supplied installation instructions for specific security appliances and devices that use this data connector. Contact the provider for updates, more information, or where information is unavailable for your security appliance or device.
-To forward data to your Log Analytics workspace for Microsoft Sentinel, complete the steps in [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md). As you complete those steps, install the **Common Event Format (CEF) via AMA** data connector in Microsoft Sentinel. Then, use the appropriate provider's instructions in this article to complete the setup.
+To ingest data to your Log Analytics workspace for Microsoft Sentinel, complete the steps in [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md). Those steps include the installation of the **Common Event Format (CEF) via AMA** data connector in Microsoft Sentinel. After the connector is installed, use the instructions appropriate to your device, shown later in this article, to complete the setup.
For more information about the related Microsoft Sentinel solution for each of these appliances or devices, search the [Azure Marketplace](https://azuremarketplace.microsoft.com/) for the **Product Type** > **Solution Templates** or review the solution from the **Content hub** in Microsoft Sentinel. + ## AI Analyst Darktrace Configure Darktrace to forward syslog messages in CEF format to your Azure workspace via the syslog agent.
sentinel Unified Connector Custom Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/unified-connector-custom-device.md
+
+ Title: Custom logs via AMA connector - Configure data ingestion to Microsoft Sentinel from specific applications
+description: Learn how to configure data ingestion into Microsoft Sentinel from specific or custom applications that produce logs as text files, using the Custom Logs via AMA data connector or manual configuration.
++++ Last updated : 07/31/2024++
+# Custom Logs via AMA data connector - Configure data ingestion to Microsoft Sentinel from specific applications
+
+Microsoft Sentinel's **Custom Logs via AMA** data connector supports the collection of logs from text files from several different network and security applications and devices.
+
+This article provides the configuration information, unique to each security application, that you need to supply when configuring this data connector. This information is provided by the application providers. Contact the provider for updates, for more information, or when information is unavailable for your security application. For the full instructions to install and configure the connector, see [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md), but refer back to this article for the unique information to supply for each application.
+
+This article also shows you how to ingest data from these applications to your Microsoft Sentinel workspace without using the connector. These steps include installing the Azure Monitor Agent. After the agent is installed, use the instructions appropriate to your application, shown later in this article, to complete the setup.
+
+The devices from which you collect custom text logs fall into two categories:
+
+- Applications installed on Windows or Linux machines
+
+ The application stores its log files on the machine where it's installed. To collect these logs, the Azure Monitor Agent is installed on this same machine.
+
+- Appliances that are self-contained on closed (usually Linux-based) devices
+
+ These appliances store their logs on an external syslog server. To collect these logs, the Azure Monitor Agent is installed on this external syslog server, often called a log forwarder.
+
+For more information about the related Microsoft Sentinel solution for each of these applications, search the [Azure Marketplace](https://azuremarketplace.microsoft.com/) for the **Product Type** > **Solution Templates** or review the solution from the **Content hub** in Microsoft Sentinel.
+
+> [!IMPORTANT]
+> - The **Custom Logs via AMA** data connector is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> - [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
+
+## General instructions
+
+The steps for collecting logs from machines hosting applications and appliances follow a general pattern:
+
+1. Create the destination table in Log Analytics (or Advanced Hunting if you're in the Defender portal).
+
+1. Create the data collection rule (DCR) for your application or appliance.
+
+1. Deploy the Azure Monitor Agent to the machine hosting the application, or to the external server (log forwarder) that collects logs from appliances if it's not already deployed.
+
+1. Configure logging on your application. If an appliance, configure it to send its logs to the external server (log forwarder) where the Azure Monitor Agent is installed.
+
+These general steps (except for the last one) are automated when you use the **Custom Logs via AMA** data connector, and are described in detail in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md).
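+
+If you prefer to deploy the agent yourself rather than letting the connector deploy it for you, the following is a minimal Azure CLI sketch for an Azure Linux VM acting as the application host or log forwarder. The resource group and VM names are placeholders, and Azure Arc-enabled servers use a different extension mechanism.
+
+```bash
+# Install the Azure Monitor Agent extension on an existing Azure Linux VM (placeholder names)
+az vm extension set \
+  --resource-group MyResourceGroup \
+  --vm-name my-log-forwarder \
+  --name AzureMonitorLinuxAgent \
+  --publisher Microsoft.Azure.Monitor \
+  --enable-auto-upgrade true
+```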
+
+## Specific instructions per application type
+
+The per-application information you need to complete these steps is presented in the rest of this article. Some of these applications are on self-contained appliances and require a different type of configuration, starting with the use of a log forwarder.
+
+Each application section contains the following information:
+
+- Unique parameters to supply to the configuration of the **Custom Logs via AMA** data connector, if you're using it.
+- The outline of the procedure required to ingest data manually, without using the connector. For the details of this procedure, see [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md).
+- Specific instructions for configuring the originating applications or devices themselves, or links to the instructions on the providers' websites. These steps must be taken whether or not you use the connector.
+
+**The following devices' instructions are provided here:**
+
+- [Apache HTTP Server](#apache-http-server)
+- [Apache Tomcat](#apache-tomcat)
+- [Cisco Meraki](#cisco-meraki) (appliance)
+- [JBoss Enterprise Application Platform](#jboss-enterprise-application-platform)
+- [JuniperIDP](#juniperidp) (appliance)
+- [MarkLogic Audit](#marklogic-audit)
+- [MongoDB Audit](#mongodb-audit)
+- [NGINX HTTP Server](#nginx-http-server)
+- [Oracle WebLogic Server](#oracle-weblogic-server)
+- [PostgreSQL Events](#postgresql-events)
+- [SecurityBridge Threat Detection for SAP](#securitybridge-threat-detection-for-sap)
+- [SquidProxy](#squidproxy)
+- [Ubiquiti UniFi](#ubiquiti-unifi) (appliance)
+- [VMware vCenter](#vmware-vcenter) (appliance)
+- [Zscaler Private Access (ZPA)](#zscaler-private-access-zpa) (appliance)
+
+### Apache HTTP Server
+
+Follow these steps to ingest log messages from Apache HTTP Server:
+
+1. Table name: `ApacheHTTPServer_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns"):
+ - Windows: `"C:\Server\bin\log\Apache24\logs\*.log"`
+ - Linux: `"/var/log/httpd/*.log"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
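+If you create the DCR from the completed template with the Azure CLI instead of the portal, a minimal sketch looks like the following. The JSON file name, resource names, and resource IDs are placeholders; the file is the DCR template from the linked article with its placeholders already replaced.
+
+```bash
+# Create the DCR from the filled-in JSON template (hypothetical file name)
+az monitor data-collection rule create \
+  --resource-group MyResourceGroup \
+  --name dcr-apache-http-server \
+  --location eastus \
+  --rule-file ./apache-http-server-dcr.json
+
+# Associate the DCR with the machine where the Azure Monitor Agent runs
+az monitor data-collection rule association create \
+  --name apache-dcr-association \
+  --rule-id "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/MyResourceGroup/providers/Microsoft.Insights/dataCollectionRules/dcr-apache-http-server" \
+  --resource "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/my-apache-host"
+```
+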
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### Apache Tomcat
+
+Follow these steps to ingest log messages from Apache Tomcat:
+
+1. Table name: `Tomcat_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns"):
+ - Linux: `"/var/log/tomcat/*.log"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### Cisco Meraki
+
+Follow these steps to ingest log messages from Cisco Meraki:
+
+1. Table name: `meraki_CL`
+
+1. Log storage location: Create a log file on your external syslog server. Grant the syslog daemon write permissions to the file. Install the AMA on the external syslog server if it's not already installed. Enter this filename and path in the **File pattern** field in the connector, or in place of the `{LOCAL_PATH_FILE}` placeholder in the DCR. (A sketch of creating this file appears at the end of this section.)
+
+1. Configure the syslog daemon to export its Meraki log messages to a temporary text file so the AMA can collect them.
+
+ # [rsyslog](#tab/rsyslog)
+
+ 1. Create a custom configuration file for the rsyslog daemon and save it to `/etc/rsyslog.d/10-meraki.conf`. Add the following filtering conditions to this configuration file:
+
+ ```bash
+ if $rawmsg contains "flows" then {
+ action(type="omfile" file="<LOG_FILE_Name>")
+ stop
+ }
+ if $rawmsg contains "urls" then {
+ action(type="omfile" file="<LOG_FILE_Name>")
+ stop
+ }
+ if $rawmsg contains "ids-alerts" then {
+ action(type="omfile" file="<LOG_FILE_Name>")
+ stop
+ }
+ if $rawmsg contains "events" then {
+ action(type="omfile" file="<LOG_FILE_Name>")
+ stop
+ }
+ if $rawmsg contains "ip_flow_start" then {
+ action(type="omfile" file="<LOG_FILE_Name>")
+ stop
+ }
+ if $rawmsg contains "ip_flow_end" then {
+ action(type="omfile" file="<LOG_FILE_Name>")
+ stop
+ }
+ ```
+ (Replace `<LOG_FILE_Name>` with the name of the log file you created.)
+
+ To learn more about filtering conditions for rsyslog, see [rsyslog: Filter conditions](https://rsyslog.readthedocs.io/en/latest/configuration/filters.html). We recommend testing and modifying the configuration based on your specific installation.
+
+ 1. Restart rsyslog. The typical command syntax is `systemctl restart rsyslog`.
+
+ # [syslog-ng](#tab/syslog-ng)
+
+ 1. Edit the config file `/etc/syslog-ng/conf.d`, adding the following conditions:
+
+ ```bash
+ filter f_meraki {
+ message("flows") or message("urls") or message("ids-alerts") or message("events") or message("ip_flow_start") or message("ip_flow_end");
+ };
+
+ destination d_meraki {
+ file("<LOG_FILE_NAME>");
+ };
+
+ log {
+ source(s_src);
+ filter(f_meraki);
+ destination(d_meraki);
+ flags(final); #Ensures that once a message matches the filter and is written to the specified destination, it will not be processed by subsequent log statements
+ };
+ ```
+ (Replace `<LOG_FILE_NAME>` with the name of the log file you created.)
+
+ 1. Restart syslog-ng. The typical command syntax is `systemctl restart syslog-ng`.
+
+
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ - Replace the column name `"RawData"` with the column name `"Message"`.
+
+ - Replace the transformKql value `"source"` with the value `"source | project-rename Message=RawData"`.
+
+ - Replace the `{TABLE_NAME}` and `{LOCAL_PATH_FILE}` placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+1. Configure the machine where the Azure Monitor Agent is installed to open the syslog ports, and configure the syslog daemon there to accept messages from external sources. For detailed instructions and a script to automate this configuration, see [Configure the log forwarder to accept logs](connect-custom-logs-ama.md#configure-the-log-forwarder-to-accept-logs).
+
+1. Configure and connect the Cisco Meraki device(s): follow the [instructions provided by Cisco](https://documentation.meraki.com/General_Administration/Monitoring_and_Reporting/Meraki_Device_Reporting_-_Syslog%2C_SNMP%2C_and_API) for sending syslog messages. Use the IP address or hostname of the virtual machine where the Azure Monitor Agent is installed.
+
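+As a minimal sketch of step 2 and of validating the rsyslog configuration, the following commands create the destination file on the log forwarder and check the syntax before restarting the daemon. The file path is a placeholder, and the `syslog:adm` owner assumes a Debian- or Ubuntu-style rsyslog installation; adjust for your distribution.
+
+```bash
+# Create the destination file and give the rsyslog daemon write access (placeholder path)
+sudo touch /var/log/meraki.log
+sudo chown syslog:adm /var/log/meraki.log
+sudo chmod 640 /var/log/meraki.log
+
+# Validate the rsyslog configuration syntax before restarting the daemon
+sudo rsyslogd -N1
+```
+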
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### JBoss Enterprise Application Platform
+
+Follow these steps to ingest log messages from JBoss Enterprise Application Platform:
+
+1. Table name: `JBossLogs_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns") - Linux only:
+ - Standalone server: `"{EAP_HOME}/standalone/log/server.log"`
+ - Managed domain: `"{EAP_HOME}/domain/servers/{SERVER_NAME}/log/server.log"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### JuniperIDP
+
+Follow these steps to ingest log messages from JuniperIDP:
+
+1. Table name: `JuniperIDP_CL`
+
+1. Log storage location: Create a log file on your external syslog server. Grant the syslog daemon write permissions to the file. Install the AMA on the external syslog server if it's not already installed. Enter this filename and path in the **File pattern** field in the connector, or in place of the `{LOCAL_PATH_FILE}` placeholder in the DCR.
+
+1. Configure the syslog daemon to export its JuniperIDP log messages to a temporary text file so the AMA can collect them.
+
+ # [rsyslog](#tab/rsyslog)
+
+ 1. Create a custom configuration file for the rsyslog daemon, in the `/etc/rsyslog.d/` folder, with the following filtering conditions:
+
+ ```bash
+ # Define a new ruleset
+ ruleset(name="<RULESET_NAME>") {
+ action(type="omfile" file="<LOG_FILE_NAME>")
+ }
+
+ # Set the input on port and bind it to the new ruleset
+ input(type="imudp" port="<PORT>" ruleset="<RULESET_NAME>")
+ ```
+ (Replace `<parameters>` with the actual names of the objects represented. `<LOG_FILE_NAME>` is the file you created in step 2.)
+
+ 1. Restart rsyslog. The typical command syntax is `systemctl restart rsyslog`.
+
+ # [syslog-ng](#tab/syslog-ng)
+
+ 1. Edit the config file `/etc/syslog-ng/conf.d`, adding the following conditions:
+
+ ```bash
+ source s_network {
+ network (
+ ip("0.0.0.0")
+ port(<PORT>)
+ );
+ };
+ destination d_file {
+ file("<LOG_FILE_NAME>");
+ };
+ log {
+ source(s_network);
+ destination(d_file);
+ };
+ ```
+ (Replace `<LOG_FILE_NAME>` with the name of the log file you created.)
+
+ 1. Restart syslog-ng. The typical command syntax is `systemctl restart syslog-ng`.
+
+
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ - Replace the column name `"RawData"` with the column name `"Message"`.
+
+ - Replace the `{TABLE_NAME}` and `{LOCAL_PATH_FILE}` placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+ - Replace the transformKql value `"source"` with the following Kusto query (enclosed in double quotes):
+
+ ```kusto
+ source | parse RawData with tmp_time " " host_s " " ident_s " " tmp_pid " " msgid_s " " extradata | extend dvc_os_s = extract("\\[(junos\\S+)", 1, extradata) | extend event_end_time_s = extract(".*epoch-time=\"(\\S+)\"", 1, extradata) | extend message_type_s = extract(".*message-type=\"(\\S+)\"", 1, extradata) | extend source_address_s = extract(".*source-address=\"(\\S+)\"", 1, extradata) | extend destination_address_s = extract(".*destination-address=\"(\\S+)\"", 1, extradata) | extend destination_port_s = extract(".*destination-port=\"(\\S+)\"", 1, extradata) | extend protocol_name_s = extract(".*protocol-name=\"(\\S+)\"", 1, extradata) | extend service_name_s = extract(".*service-name=\"(\\S+)\"", 1, extradata) | extend application_name_s = extract(".*application-name=\"(\\S+)\"", 1, extradata) | extend rule_name_s = extract(".*rule-name=\"(\\S+)\"", 1, extradata) | extend rulebase_name_s = extract(".*rulebase-name=\"(\\S+)\"", 1, extradata) | extend policy_name_s = extract(".*policy-name=\"(\\S+)\"", 1, extradata) | extend export_id_s = extract(".*export-id=\"(\\S+)\"", 1, extradata) | extend repeat_count_s = extract(".*repeat-count=\"(\\S+)\"", 1, extradata) | extend action_s = extract(".*action=\"(\\S+)\"", 1, extradata) | extend threat_severity_s = extract(".*threat-severity=\"(\\S+)\"", 1, extradata) | extend attack_name_s = extract(".*attack-name=\"(\\S+)\"", 1, extradata) | extend nat_source_address_s = extract(".*nat-source-address=\"(\\S+)\"", 1, extradata) | extend nat_source_port_s = extract(".*nat-source-port=\"(\\S+)\"", 1, extradata) | extend nat_destination_address_s = extract(".*nat-destination-address=\"(\\S+)\"", 1, extradata) | extend nat_destination_port_s = extract(".*nat-destination-port=\"(\\S+)\"", 1, extradata) | extend elapsed_time_s = extract(".*elapsed-time=\"(\\S+)\"", 1, extradata) | extend inbound_bytes_s = extract(".*inbound-bytes=\"(\\S+)\"", 1, extradata) | extend outbound_bytes_s = extract(".*outbound-bytes=\"(\\S+)\"", 1, extradata) | extend inbound_packets_s = extract(".*inbound-packets=\"(\\S+)\"", 1, extradata) | extend outbound_packets_s = extract(".*outbound-packets=\"(\\S+)\"", 1, extradata) | extend source_zone_name_s = extract(".*source-zone-name=\"(\\S+)\"", 1, extradata) | extend source_interface_name_s = extract(".*source-interface-name=\"(\\S+)\"", 1, extradata) | extend destination_zone_name_s = extract(".*destination-zone-name=\"(\\S+)\"", 1, extradata) | extend destination_interface_name_s = extract(".*destination-interface-name=\"(\\S+)\"", 1, extradata) | extend packet_log_id_s = extract(".*packet-log-id=\"(\\S+)\"", 1, extradata) | extend alert_s = extract(".*alert=\"(\\S+)\"", 1, extradata) | extend username_s = extract(".*username=\"(\\S+)\"", 1, extradata) | extend roles_s = extract(".*roles=\"(\\S+)\"", 1, extradata) | extend msg_s = extract(".*message=\"(\\S+)\"", 1, extradata) | project-away RawData
+ ```
+
+1. Configure the machine where the Azure Monitor Agent is installed to open the syslog ports, and configure the syslog daemon there to accept messages from external sources. For detailed instructions and a script to automate this configuration, see [Configure the log forwarder to accept logs](connect-custom-logs-ama.md#configure-the-log-forwarder-to-accept-logs). (A minimal firewall sketch also appears at the end of this section.)
+
+1. For the instructions to configure the Juniper IDP appliance to send syslog messages to an external server, see [SRX Getting Started - Configure System Logging](https://supportportal.juniper.net/s/article/SRX-Getting-Started-Configure-System-Logging).
+
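+The UDP port you bind in the rsyslog or syslog-ng configuration must also be reachable on the log forwarder. The following is a sketch only, assuming `ufw` is the active firewall and `514/udp` is the port you chose; use your distribution's firewall tooling.
+
+```bash
+# Allow inbound syslog traffic on the port used by the imudp input (placeholder port)
+sudo ufw allow 514/udp
+
+# Rough equivalent on firewalld-based distributions:
+# sudo firewall-cmd --permanent --add-port=514/udp
+# sudo firewall-cmd --reload
+```
+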
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### MarkLogic Audit
+
+Follow these steps to ingest log messages from MarkLogic Audit:
+
+1. Table name: `MarkLogicAudit_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns"):
+ - Windows: `"C:\Program Files\MarkLogic\Data\Logs\AuditLog.txt"`
+ - Linux: `"/var/opt/MarkLogic/Logs/AuditLog.txt"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+1. Configure MarkLogic Audit to write logs (adapted from the MarkLogic documentation):
+ 1. Using your browser, navigate to the MarkLogic Admin interface.
+ 1. Open the Audit Configuration screen under Groups > group_name > Auditing.
+ 1. Select the Audit Enabled radio button to make sure auditing is enabled.
+ 1. Configure the audit events and restrictions you want.
+ 1. Select OK to apply the configuration.
+ 1. Refer to MarkLogic documentation for [more details and configuration options](https://docs.marklogic.com/guide/admin/auditing).
+
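+Before relying on the DCR, you can confirm locally that audit events are being written to the path from step 2 (Linux path shown; use the Windows path on Windows hosts):
+
+```bash
+# Watch the audit log and confirm new entries appear after you trigger an audited event
+tail -f /var/opt/MarkLogic/Logs/AuditLog.txt
+```
+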
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### MongoDB Audit
+
+Follow these steps to ingest log messages from MongoDB Audit:
+
+1. Table name: `MongoDBAudit_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns"):
+ - Windows: `"C:\data\db\auditlog.json"`
+ - Linux: `"/data/db/auditlog.json"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+1. Configure MongoDB to write logs:
+ 1. For Windows, edit the configuration file `mongod.cfg`. For Linux, `mongod.conf`.
+ 1. Set the `dbpath` parameter to `data/db`.
+ 1. Set the `path` parameter to `/data/db/auditlog.json`.
+ 1. Refer to MongoDB documentation for [more parameters and details](https://www.mongodb.com/docs/manual/tutorial/configure-auditing/).
+
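+For reference, a minimal sketch of the corresponding settings in `mongod.conf` on Linux follows. Audit logging to a file is a MongoDB Enterprise feature; confirm the option names against the MongoDB documentation linked above.
+
+```bash
+# mongod.conf (YAML) - audit logging sketch
+storage:
+  dbPath: /data/db
+auditLog:
+  destination: file
+  format: JSON
+  path: /data/db/auditlog.json
+```
+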
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### NGINX HTTP Server
+
+Follow these steps to ingest log messages from NGINX HTTP Server:
+
+1. Table name: `NGINX_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns"):
+ - Linux: `"/var/log/nginx.log"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### Oracle WebLogic Server
+
+Follow these steps to ingest log messages from Oracle WebLogic Server:
+
+1. Table name: `OracleWebLogicServer_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns"):
+ - Windows: `"{DOMAIN_NAME}\Servers\{SERVER_NAME}\logs*.log"`
+ - Linux: `"{DOMAIN_HOME}/servers/{SERVER_NAME}/logs/*.log"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### PostgreSQL Events
+
+Follow these steps to ingest log messages from PostgreSQL Events:
+
+1. Table name: `PostgreSQL_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns"):
+ - Windows: `"C:\*.log"`
+ - Linux: `"/var/log/*.log"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+1. Edit the PostgreSQL configuration file `postgresql.conf` to output logs to files.
+ 1. Set `log_destination='stderr'`
+ 1. Set `logging_collector=on`
+ 1. Refer to PostgreSQL documentation for [more parameters and details](https://www.postgresql.org/docs/current/runtime-config-logging.html).
+
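+A minimal sketch of the corresponding lines in `postgresql.conf` follows. The `log_directory` and `log_filename` values shown are common defaults, not requirements; whichever location you use must match the file pattern you set in the DCR.
+
+```bash
+# postgresql.conf - logging sketch
+log_destination = 'stderr'
+logging_collector = on
+log_directory = 'log'                             # relative to the data directory
+log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
+```
+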
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### SecurityBridge Threat Detection for SAP
+
+Follow these steps to ingest log messages from SecurityBridge Threat Detection for SAP:
+
+1. Table name: `SecurityBridgeLogs_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns"):
+ - Linux: `"/usr/sap/tmp/sb_events/*.cef"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### SquidProxy
+
+Follow these steps to ingest log messages from SquidProxy:
+
+1. Table name: `SquidProxy_CL`
+
+1. Log storage location: Logs are stored as text files on the application's host machine. Install the AMA on the same machine to collect the files.
+
+ Default file locations ("filePatterns"):
+ - Windows: `"C:\Squid\var\log\squid\*.log"`
+ - Linux: `"/var/log/squid/*.log"`
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### Ubiquiti UniFi
+
+Follow these steps to ingest log messages from Ubiquiti UniFi:
+
+1. Table name: `Ubiquiti_CL`
+
+1. Log storage location: Create a log file on your external syslog server. Grant the syslog daemon write permissions to the file. Install the AMA on the external syslog server if it's not already installed. Enter this filename and path in the **File pattern** field in the connector, or in place of the `{LOCAL_PATH_FILE}` placeholder in the DCR.
+
+1. Configure the syslog daemon to export its Ubiquiti log messages to a temporary text file so the AMA can collect them.
+
+ # [rsyslog](#tab/rsyslog)
+
+ 1. Create a custom configuration file for the rsyslog daemon, in the `/etc/rsyslog.d/` folder, with the following filtering conditions:
+
+ ```bash
+ # Define a new ruleset
+ ruleset(name="<RULESET_NAME>") {
+ action(type="omfile" file="<LOG_FILE_NAME>")
+ }
+
+ # Set the input on port and bind it to the new ruleset
+ input(type="imudp" port="<PORT>" ruleset="<RULESET_NAME>")
+ ```
+ (Replace `<parameters>` with the actual names of the objects represented. `<LOG_FILE_NAME>` is the file you created in step 2.)
+
+ 1. Restart rsyslog. The typical command syntax is `systemctl restart rsyslog`.
+
+ # [syslog-ng](#tab/syslog-ng)
+
+ 1. Edit the config file `/etc/syslog-ng/conf.d`, adding the following conditions:
+
+ ```bash
+ source s_network {
+ network (
+ ip("0.0.0.0")
+ port(<PORT>)
+ );
+ };
+ destination d_file {
+ file("<LOG_FILE_NAME>");
+ };
+ log {
+ source(s_network);
+ destination(d_file);
+ };
+ ```
+ (Replace `<LOG_FILE_NAME>` with the name of the log file you created.)
+
+ 1. Restart syslog-ng. The typical command syntax is `systemctl restart syslog-ng`.
+
+
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ - Replace the column name `"RawData"` with the column name `"Message"`.
+
+ - Replace the transformKql value `"source"` with the value `"source | project-rename Message=RawData"`.
+
+ - Replace the `{TABLE_NAME}` and `{LOCAL_PATH_FILE}` placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+1. Configure the machine where the Azure Monitor Agent is installed to open the syslog ports, and configure the syslog daemon there to accept messages from external sources. For detailed instructions and a script to automate this configuration, see [Configure the log forwarder to accept logs](connect-custom-logs-ama.md#configure-the-log-forwarder-to-accept-logs).
+
+1. Configure and connect the Ubiquiti controller.
+ 1. Follow the [instructions provided by Ubiquiti](https://help.ui.com/hc/en-us/categories/6583256751383) to enable syslog and optionally debugging logs.
+ 1. Select Settings > System Settings > Controller Configuration > Remote Logging and enable syslog.
+
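+To confirm that the controller is reaching the log forwarder before you check the table in Log Analytics, you can watch the syslog port on the forwarder. This assumes UDP 514; adjust to the port you configured.
+
+```bash
+# Show incoming syslog traffic from the UniFi controller (press Ctrl+C to stop)
+sudo tcpdump -An -i any udp port 514
+```
+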
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### VMware vCenter
+
+Follow these steps to ingest log messages from VMware vCenter:
+
+1. Table name: `vcenter_CL`
+
+1. Log storage location: Create a log file on your external syslog server. Grant the syslog daemon write permissions to the file. Install the AMA on the external syslog server if it's not already installed. Enter this filename and path in the **File pattern** field in the connector, or in place of the `{LOCAL_PATH_FILE}` placeholder in the DCR.
+
+1. Configure the syslog daemon to export its vCenter log messages to a temporary text file so the AMA can collect them.
+
+ # [rsyslog](#tab/rsyslog)
+
+ 1. Edit the configuration file `/etc/rsyslog.conf` to add the following template line before the *directive* section:
+
+ `$template vcenter,"%timestamp% %hostname% %msg%\n"`
+
+ 1. Create a custom configuration file for the rsyslog daemon, saved as `/etc/rsyslog.d/10-vcenter.conf`, with the following filtering conditions:
+
+ ```bash
+ if $rawmsg contains "vpxd" then {
+ action(type="omfile" file="/<LOG_FILE_NAME>")
+ stop
+ }
+ if $rawmsg contains "vcenter-server" then {
+ action(type="omfile" file="/<LOG_FILE_NAME>")
+ stop
+ }
+ ```
+ (Replace `<LOG_FILE_NAME>` with the name of the log file you created.)
+
+ 1. Restart rsyslog. The typical command syntax is `sudo systemctl restart rsyslog`.
+
+ # [syslog-ng](#tab/syslog-ng)
+
+ 1. Edit the config file `/etc/syslog-ng/conf.d`, adding the following filtering conditions:
+
+ ```bash
+ filter f_vcenter {
+ message("vpxd") or message("vcenter-server");
+ };
+
+ destination d_vcenter {
+ file("<LOG_FILE_NAME>");
+ };
+
+ log {
+ source(s_src);
+ filter(f_vcenter);
+ destination(d_vcenter);
+ flags(final); #Ensures that once a message matches the filter and is written to the specified destination, it will not be processed by subsequent log statements
+ };
+ ```
+ (Replace `<LOG_FILE_NAME>` with the name of the log file you created.)
+
+ 1. Restart syslog-ng. The typical command syntax is `systemctl restart syslog-ng`.
+
+
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ - Replace the column name `"RawData"` with the column name `"Message"`.
+
+ - Replace the transformKql value `"source"` with the value `"source | project-rename Message=RawData"`.
+
+ - Replace the `{TABLE_NAME}` and `{LOCAL_PATH_FILE}` placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+ - The `dataCollectionEndpointId` property should be populated with your data collection endpoint (DCE). If you don't have one, define a new one. See [Create a data collection endpoint](../azure-monitor/essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint) for the instructions. (A CLI sketch also appears at the end of this section.)
+
+1. Configure the machine where the Azure Monitor Agent is installed to open the syslog ports, and configure the syslog daemon there to accept messages from external sources. For detailed instructions and a script to automate this configuration, see [Configure the log forwarder to accept logs](connect-custom-logs-ama.md#configure-the-log-forwarder-to-accept-logs).
+
+1. Configure and connect the vCenter devices.
+ 1. Follow the [instructions provided by VMware](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.monitoring.doc/GUID-9633A961-A5C3-4658-B099-B81E0512DC21.html) for sending syslog messages.
+ 1. Use the IP address or hostname of the machine where the Azure Monitor Agent is installed.
+
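+If you need to create a data collection endpoint for the `dataCollectionEndpointId` value, a minimal Azure CLI sketch (names and region are placeholders) looks like this:
+
+```bash
+az monitor data-collection endpoint create \
+  --resource-group MyResourceGroup \
+  --name my-vcenter-dce \
+  --location eastus \
+  --public-network-access "Enabled"
+```
+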
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+### Zscaler Private Access (ZPA)
+
+Follow these steps to ingest log messages from Zscaler Private Access (ZPA):
+
+1. Table name: `ZPA_CL`
+
+1. Log storage location: Create a log file on your external syslog server. Grant the syslog daemon write permissions to the file. Install the AMA on the external syslog server if it's not already installed. Enter this filename and path in the **File pattern** field in the connector, or in place of the `{LOCAL_PATH_FILE}` placeholder in the DCR.
+
+1. Configure the syslog daemon to export its ZPA log messages to a temporary text file so the AMA can collect them.
+
+ # [rsyslog](#tab/rsyslog)
+
+ 1. Create a custom configuration file for the rsyslog daemon, in the `/etc/rsyslog.d/` folder, with the following filtering conditions:
+
+ ```bash
+ # Define a new ruleset
+ ruleset(name="<RULESET_NAME>") {
+ action(type="omfile" file="<LOG_FILE_NAME>")
+ }
+
+ # Set the input on port and bind it to the new ruleset
+ input(type="imudp" port="<PORT>" ruleset="<RULESET_NAME>")
+ ```
+ (Replace `<parameters>` with the actual names of the objects represented.)
+
+ 1. Restart rsyslog. The typical command syntax is `systemctl restart rsyslog`.
+
+ # [syslog-ng](#tab/syslog-ng)
+
+ 1. Edit the config file `/etc/syslog-ng/conf.d`, adding the following conditions:
+
+ ```bash
+ source s_network {
+ network (
+ ip("0.0.0.0")
+ port(<PORT>)
+ );
+ };
+ destination d_file {
+ file("<LOG_FILE_NAME>");
+ };
+ log {
+ source(s_network);
+ destination(d_file);
+ };
+ ```
+ (Replace `<LOG_FILE_NAME>` with the name of the log file you created.)
+
+ 1. Restart syslog-ng. The typical command syntax is `systemctl restart syslog-ng`.
+
+
+
+1. Create the DCR according to the directions in [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md#configure-the-data-connector).
+
+ - Replace the column name `"RawData"` with the column name `"Message"`.
+
+ - Replace the transformKql value `"source"` with the value `"source | project-rename Message=RawData"`.
+
+ - Replace the `{TABLE_NAME}` and `{LOCAL_PATH_FILE}` placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
+
+1. Configure the machine where the Azure Monitor Agent is installed to open the syslog ports, and configure the syslog daemon there to accept messages from external sources. For detailed instructions and a script to automate this configuration, see [Configure the log forwarder to accept logs](connect-custom-logs-ama.md#configure-the-log-forwarder-to-accept-logs).
+
+1. Configure and connect the ZPA receiver.
+ 1. Follow the [instructions provided by ZPA](https://help.zscaler.com/zpa/configuring-log-receiver). Select JSON as the log template.
+ 1. Select Settings > System Settings > Controller Configuration > Remote Logging and enable syslog.
+
+[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+
+## Related content
+
+- [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md)
+- [Syslog via AMA and Common Event Format (CEF) via AMA connectors for Microsoft Sentinel](cef-syslog-ama-overview.md)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## August 2024
+- [Unified AMA-based connectors for syslog ingestion](#unified-ama-based-connectors-for-syslog-ingestion)
+- [Better visibility for Windows security events](#better-visibility-for-windows-security-events)
- [New Auxiliary logs retention plan (Preview)](#new-auxiliary-logs-retention-plan-preview) - [Create summary rules for large sets of data (Preview)](#create-summary-rules-in-microsoft-sentinel-for-large-sets-of-data-preview)
+### Unified AMA-based connectors for syslog ingestion
+
+With the impending retirement of the Log Analytics Agent, Microsoft Sentinel has consolidated the collection and ingestion of syslog, CEF, and custom-format log messages into three multi-purpose data connectors based on the Azure Monitor Agent (AMA):
+- **Syslog via AMA**, for any device whose logs are ingested into the *Syslog* table in Log Analytics.
+- **Common Event Format (CEF) via AMA**, for any device whose logs are ingested into the *CommonSecurityLog* table in Log Analytics.
+- **New! Custom Logs via AMA (Preview)**, for any of 15 device types, or any unlisted device, whose logs are ingested into custom tables with names ending in *_CL* in Log Analytics.
+
+These connectors replace nearly all the existing connectors for individual device and appliance types, which were based on either the legacy Log Analytics agent (also known as MMA or OMS) or the current Azure Monitor Agent. The solutions provided in the content hub for these devices and appliances now include whichever of the three connectors is appropriate to the solution.* The replaced connectors are now marked as "Deprecated" in the data connector gallery.
+
+The data ingestion graphs that were previously found in each device's connector page can now be found in device-specific workbooks packaged with each device's solution.
+
+\* When installing the solution for any of these applications, devices, or appliances, you must select **Install with dependencies** on the solution page and then mark the data connector on the following page, to ensure that the accompanying data connector is installed.
+
+For the updated procedures for installing these solutions, see the following articles:
+- [CEF via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-cef-device.md)
+- [Syslog via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-syslog-device.md)
+- [Custom Logs via AMA data connector - Configure data ingestion to Microsoft Sentinel from specific applications](unified-connector-custom-device.md)
+
+### Better visibility for Windows security events
+
+We've enhanced the schema of the *SecurityEvent* table that hosts Windows Security events, and have added new columns to ensure compatibility with the Azure Monitor Agent (AMA) for Windows (version 1.28.2). These enhancements are designed to increase the visibility and transparency of collected Windows events. If you're not interested in receiving data in these fields, you can apply an ingestion-time transformation ("project-away" for example) to drop them.
+ ### New Auxiliary logs retention plan (Preview) The new **Auxiliary logs** retention plan for Log Analytics tables allows you to ingest large quantities of high-volume logs with supplemental value for security at a much lower cost. Auxiliary logs are available with interactive retention for 30 days, in which you can run simple, single-table queries on them, such as to summarize and aggregate the data. Following that 30-day period, auxiliary log data goes to long-term retention, which you can define for up to 12 years, at ultra-low cost. This plan also allows you to run search jobs on the data in long-term retention, extracting only the records you want to a new table that you can treat like a regular Log Analytics table, with full query capabilities.
service-fabric Service Fabric Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started.md
The runtime can be installed independently. However, the SDK requires the runtim
| Package |Version| | | |
-|[Install Service Fabric Runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.10.1.2175.9590.exe) | 10.1.2175.9590 |
-|[Install Service Fabric SDK](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.7.1.2175.msi) | 7.1.2175 |
+|[Install Service Fabric Runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.10.1.2338.9590.exe) | 10.1.2338.9590 |
+|[Install Service Fabric SDK](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.7.1.2338.msi) | 7.1.2338 |
You can find direct links to the installers for previous releases on [Service Fabric Releases.](https://github.com/microsoft/service-fabric/tree/master/release_notes)
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
For currently supported versions, all releases are listed. For currently unsuppo
| Service Fabric runtime | Can upgrade directly from | Can downgrade to <sup>1</sup> | Compatible SDK or NuGet package version <sup>2</sup> | Supported .NET runtimes <sup>3</sup> | OS Version | End of support | Link to release notes | | - | - | - | - | - | - | - | - |
+| 10.1 CU4<br>10.1.2338.9590 | 9.1 CU6<br>9.1.1851.9590 | 9.0 | Version 7.1 or earlier | .NET 8 **(.NET 8 runtime support is available starting with Cumulative Update 3.0 (CU3) of version 10.1)**, .NET 7, .NET 6 <br> .NET Framework >= 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_101CU3.md) |
| 10.1 CU3<br>10.1.2175.9590 | 9.1 CU6<br>9.1.1851.9590 | 9.0 | Version 7.1 or earlier | .NET 8 **(.NET 8 runtime support is available starting with Cumulative Update 3.0 (CU3) of version 10.1)**, .NET 7, .NET 6 <br> .NET Framework >= 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_101CU3.md) | | 10.1 CU2<br>10.1.1951.9590 | 9.1 CU6<br>9.1.1851.9590 | 9.0 | Version 7.1 or earlier | .NET 7, .NET 6 <br> .NET Framework >= 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_101CU2.md) | | 10.1 RTO<br>10.1.1541.9590 | 9.1 CU6<br>9.1.1851.9590 | 9.0 | Version 7.1 or earlier | .NET 7, .NET 6 <br> .NET Framework >= 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_101RTO.md) |
For currently supported versions, all releases are listed. For currently unsuppo
| Service Fabric runtime | Can upgrade directly from | Can downgrade to <sup>1</sup> | Compatible SDK or NuGet package version <sup>2</sup> | Supported .NET runtimes <sup>3</sup> | OS version | End of support | Link to release notes | | - | - | - | - | - | - | - | - |
+| 10.1 CU4<br>10.1.2306.1 | 9.1 CU6<br>9.1.1642.1 | 9.0 | Version 7.1 or earlier | .NET 8 **(.NET 8 runtime support is available starting with Cumulative Update 3.0 (CU3) of version 10.1)**, .NET 7, .NET 6 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_101CU3.md) |
| 10.1 CU3<br>10.1.2108.1 | 9.1 CU6<br>9.1.1642.1 | 9.0 | Version 7.1 or earlier | .NET 8 **(.NET 8 runtime support is available starting with Cumulative Update 3.0 (CU3) of version 10.1)**, .NET 7, .NET 6 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_101CU3.md) | | 10.1 CU2<br>10.1.1885.1 | 9.1 CU6<br>9.1.1642.1 | 9.0 | Version 7.1 or earlier | .NET 7, .NET 6 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_101CU2.md) | | 10.1 RTO<br>10.1.1507.1 | 9.1 CU6<br>9.1.1642.1 | 9.0 | Version 7.1 or earlier | .NET 7, .NET 6 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_101RTO.md) |
The following table lists the version names of Service Fabric and their correspo
| Version name | Windows version number | Linux version number | | - | - | - |
+| 10.1 CU4 | 10.1.2338.9590 | 10.1.2306.1 |
| 10.1 CU3 | 10.1.2175.9590 | 10.1.2108.1 | | 10.1 CU2 | 10.1.1951.9590 | 10.1.1885.1 | | 10.1 RTO | 10.1.1541.9590 | 10.1.1507.1 |
+| 10.0 CU5 | 10.0.2604.9590 | 10.0.2497.1 |
| 10.0 CU4 | 10.0.2382.9590 | 10.0.2261.1 | | 10.0 CU3 | 10.0.2226.9590 | 10.0.2105.1 | | 10.0 CU1 | 10.0.1949.9590 | 10.0.1829.1 | | 10.0 RTO | 10.0.1816.9590 | 10.0.1728.1 |
+| 9.1 CU11 | 9.1.2718.9590 | 9.1.2498.1 |
| 9.1 CU10 | 9.1.2488.9590 | 9.1.2248.1 | | 9.1 CU9 | 9.1.2277.9590 | 9.1.2038.1 | | 9.1 CU7 | 9.1.1993.9590 | 9.1.1740.1 |
storage Blob Upload Function Trigger Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger-javascript.md
Last updated 07/06/2023 ms.devlang: javascript
+#Customer intent: As a JavaScript developer, I want to know how to upload files to blob storage within an application, so that I can adopt this functionality into my own solution.
# JavaScript Tutorial: Upload and analyze a file with Azure Functions and Blob Storage
In this tutorial, you'll learn how to upload an image to Azure Blob Storage and
Azure Blob Storage is Microsoft's massively scalable object storage solution for the cloud. Blob Storage is designed for storing images and documents, streaming media files, managing backup and archive data, and much more. You can read more about Blob Storage on the [overview page](./storage-blobs-introduction.md). > [!WARNING]
-> This tutorial uses publicly accessible storage to simplify the process to finish this tutorial. Anonymous public access presents a security risk. [Learn how to remediate this risk.](/azure/storage/blobs/anonymous-read-access-overview)
+> This tutorial is meant for quick adoption and as such it doesn't follow secure-by-default requirements. To understand more about this scenario with a secure-by-default goal, go to [Security considerations](#security-considerations).
Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Azure Functions is a serverless compute solution that allows you to write and run small blocks of code as highly scalable, serverless, event-driven functions. You can read more about Azure Functions on the [overview page](../../azure-functions/functions-overview.md). - In this tutorial, learn how to: > [!div class="checklist"]
If you're not going to continue to use this application, you can delete the reso
1. Find and right-click the `msdocs-storage-function` resource group from the list. 1. Select **Delete**. The process to delete the resource group may take a few minutes to complete.
+## Security considerations
+
+This solution, as a beginner tutorial, doesn't demonstrate secure-by-default practices. This is intentional so that you can successfully deploy the solution. The next step after a successful deployment is to secure the resources. This solution uses three Azure services, each of which has its own security features and considerations for secure-by-default configuration:
+
+* Azure Functions - [Securing Azure Functions](/azure/azure-functions/security-concepts)
+* Azure Storage - [Security recommendations for Blob storage](security-recommendations.md)
+* Azure AI services - [Azure AI services security features](/azure/ai-services/security-features)
## Sample code * [Azure Functions sample code](https://github.com/Azure-Samples/msdocs-storage-bind-function-service/blob/main/javascript-v4)
-## Next steps
+## Related content
* [Create a function app that connects to Azure services using identities instead of secrets](/azure/azure-functions/functions-identity-based-connections-tutorial) * [Remediating anonymous public read access for blob data](/azure/storage/blobs/anonymous-read-access-overview)
stream-analytics Sql Database Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-database-output.md
Partitioning needs to enabled and is based on the PARTITION BY clause in the que
You can configure the max message size by using **Max batch count**. The default maximum is 10,000 and the default minimum is 100 rows per single bulk insert. For more information, see [Azure SQL limits](/azure/azure-sql/database/resource-limits-logical-server). Every batch is initially bulk inserted with maximum batch count. Batch is split in half (until minimum batch count) based on retryable errors from SQL.
+## Output data type mappings
+
+The schema of the target table in your SQL database must exactly match the fields and their types in your job's output. For detailed type mappings between Azure Stream Analytics and SQL, see [Data Types (Azure Stream Analytics)](/stream-analytics-query/data-types-azure-stream-analytics).
+ ## Limitation Self-signed Secured Sockets Layer (SSL) certificate isn't supported when trying to connect Azure Stream Analytics jobs to SQL on VM.
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1. > [!CAUTION]
-> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.1
-> * Effective August 29, 2024, **disablement** of jobs running on Azure Synapse Runtime for Apache Spark 3.1 will be executed. **Immediately** migrate to higher runtime versions otherwise your jobs will stop executing.
-> * **All Spark jobs running on Azure Synapse Runtime for Apache Spark 3.1 will be disabled as of August 29, 2024.**
- * End of Support for Azure Synapse Runtime for Apache Spark 3.1 announced January 26, 2023.
+> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.1.
+> * **On August 29, 2024,** partial disablement of pools and jobs will begin, followed by **full disablement by September 30, 2024.** **Immediately** migrate to higher runtime versions; otherwise, your jobs will stop executing.
+> * **All Spark jobs running on Azure Synapse Runtime for Apache Spark 3.1 will be fully disabled as of September 30, 2024.**
+* End of Support for Azure Synapse Runtime for Apache Spark 3.1 announced January 26, 2023.
* Effective January 26, 2024, the Azure Synapse has stopped official support for Spark 3.1 Runtimes. * Post January 26, 2024, we will not be addressing any support tickets related to Spark 3.1. There will be no release pipeline in place for bug or security fixes for Spark 3.1. Utilizing Spark 3.1 post the support cutoff date is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns. * Recognizing that certain customers may need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 3.1, but we will not provide any official support for it.
virtual-machine-scale-sets Virtual Machine Scale Sets Instance Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-mix.md
+
+ Title: Use multiple Virtual Machine sizes with Instance Mix
+description: Use multiple Virtual Machine sizes in a scale set using Instance Mix. Optimize deployments using allocation strategies.
++++ Last updated : 06/26/2024+++
+# Use multiple Virtual Machine sizes with Instance Mix (Preview)
+> [!IMPORTANT]
+> Instance Mix for Virtual Machine Scale Sets with Flexible Orchestration Mode is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of this feature may change prior to general availability (GA).
+
+Instance Mix enables you to specify multiple Virtual Machine (VM) sizes in your Virtual Machine Scale Set with Flexible Orchestration Mode, along with an allocation strategy to further optimize your deployments.
+
+Instance Mix is best suited for workloads that have flexible compute requirements and can run on VMs of various sizes. With Instance Mix, you can:
+- Deploy a heterogeneous mix of VM sizes in a single scale set. You can view max scale set instance counts in the [documentation](./virtual-machine-scale-sets-orchestration-modes.md#what-has-changed-with-flexible-orchestration-mode).
+- Optimize your deployments for cost or capacity through allocation strategies.
+- Continue to make use of scale set features, like [Spot Priority Mix](./spot-priority-mix.md), [Autoscale](./virtual-machine-scale-sets-autoscale-overview.md), or [Upgrade Policies](./virtual-machine-scale-sets-set-upgrade-policy.md).
+- Spread a heterogeneous mix of VMs across Availability Zones and Fault Domains for high availability and reliability.
+
+## Enroll in the Preview
+Register for the `FlexVMScaleSetSkuProfileEnabled` feature flag using the [az feature register](/cli/azure/feature#az-feature-register) command:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.Compute" --name "FlexVMScaleSetSkuProfileEnabled"
+```
+
+It takes a few moments for the feature to register. Verify the registration status by using the [az feature show](/cli/azure/feature#az-feature-show) command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.Compute" --name "FlexVMScaleSetSkuProfileEnabled"
+```
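If the feature shows as `Registered` but deployments still report that `skuProfile` is invalid, refreshing the resource provider registration sometimes helps. This is a minimal sketch using the standard provider registration command; it isn't called out in this article, so treat it as an optional, assumed step:

```azurecli-interactive
# Re-register the Microsoft.Compute resource provider so the newly
# registered feature flag propagates to your subscription.
az provider register --namespace "Microsoft.Compute"
```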
+
+## Changes to existing scale set properties
+### sku.tier
+The `sku.tier` property is currently an optional scale set property and should be set to `null` for Instance Mix scenarios.
+
+### sku.capacity
+The `sku.capacity` property continues to represent the overall size of the scale set in terms of the total number of VMs.
+
+### scaleInPolicy
+The optional scale-in policy property isn't needed for scale set deployments using Instance Mix. During scale-in events, the scale set uses the allocation strategy to decide which VMs to remove. For example, when you use `lowestPrice`, the scale set scales in by removing the more expensive VMs first.
+
+## New scale set properties
+### skuProfile
+The `skuProfile` property represents the umbrella property for all properties related to Instance Mix, including VM sizes and allocation strategy.
+
+### vmSizes
+The `vmSizes` property is where you specify the specific VM sizes that you're using as part of your scale set deployment with Instance Mix.
+
+### allocationStrategy
+Instance Mix introduces the ability to set allocation strategies for your scale set. The `allocationStrategy` property is where you specify which allocation strategy you'd like to use for your scale set deployments with Instance Mix. There are two options for allocation strategies: `lowestPrice` and `capacityOptimized`. Allocation strategies apply to both Spot and Standard VMs.
+
+#### lowestPrice (default)
+This allocation strategy is focused on workloads where cost and cost-optimization are most important. When evaluating what VM split to use, Azure looks at the lowest priced VMs of the VM sizes specified. Azure also considers capacity as part of this allocation strategy. When using `lowestPrice` allocation strategy, the scale set deploys as many of the lowest priced VMs as it can, depending on available capacity, before moving on to the next lowest priced VM size specified.
+
+#### capacityOptimized
+This allocation strategy is focused on workloads where attaining capacity is the primary concern. When evaluating what VM size split to deploy in the scale set, Azure looks only at the underlying capacity available. It doesn't take price into account when determining what VMs to deploy. Using `capacityOptimized` can result in the scale set deploying the most expensive, but most readily available VMs.
+
+## Cost
+Following the scale set cost model, usage of Instance Mix is free. You continue to only pay for the underlying resources, like the VM, disk, and networking.
+
+## Limitations
+- Instance Mix is currently available in the following regions: West US, West US2, East US, and East US2.
+- Instance Mix is only available for scale sets using Flexible Orchestration Mode.
+- Instance Mix is currently only available through ARM template.
+- You must have quota for the VM sizes you're requesting with Instance Mix. (A quota check sketch follows this list.)
+- You can specify **up to** five VM sizes with Instance Mix at this time.
+- Existing scale sets can't be updated to use Instance Mix.
+- VM sizes can't be changed once the scale set is deployed.
+- For REST API deployments, you must have an existing virtual network in the resource group where you're deploying your scale set with Instance Mix.
+
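Before deploying, you can confirm that your subscription has quota for each VM size you plan to include, as noted in the limitations above. A minimal sketch using the standard quota listing command; the region is a placeholder:

```azurecli-interactive
# List current vCPU usage and limits for the target region.
# Replace eastus2 with the region you're deploying Instance Mix in.
az vm list-usage --location eastus2 --output table
```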
+## Deploy a scale set using Instance Mix
+The following example can be used to deploy a scale set using Instance Mix:
+
+### [REST API](#tab/arm-1)
+To deploy a scale set with Instance Mix through the REST API, send a `PUT` call to the following URI and include the sections shown in the snippets that follow in your request body:
+```json
+PUT https://management.azure.com/subscriptions/{YourSubscriptionId}/resourceGroups/{YourResourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{yourScaleSetName}?api-version=2023-09-01
+```
+
+In the request body, ensure `sku.name` is set to Mix:
+```json
+ "sku": {
+ "name": "Mix",
+ "capacity": {TotalNumberVms}
+ },
+```
+Ensure you reference your existing subnet:
+```json
+"subnet": {
+ "id": "/subscriptions/{YourSubscriptionId}/resourceGroups/{YourResourceGroupName}/providers/Microsoft.Network/virtualNetworks/{YourVnetName}/subnets/default"
+},
+```
+Lastly, be sure to specify the `skuProfile` with **up to five** VM sizes. This sample uses three:
+```json
+ "skuProfile": {
+ "vmSizes": [
+ {
+ "name": "Standard_D8s_v5"
+ },
+ {
+ "name": "Standard_E16s_v5"
+ },
+ {
+ "name": "Standard_D2s_v5"
+ }
+ ],
+ "allocationStrategy": "lowestPrice"
+ },
+```
+++
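After the `PUT` succeeds, you can inspect the scale set to confirm the deployment. This is a minimal sketch using the generic scale set show command with placeholder names; whether the preview `skuProfile` property appears in the output depends on the CLI and API version you're using:

```azurecli-interactive
# Inspect the deployed scale set; resource group and name are placeholders.
az vmss show \
  --resource-group {YourResourceGroupName} \
  --name {yourScaleSetName} \
  --output json
```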
+## Troubleshooting
+| Error Code | Error Message | Troubleshooting options |
+|--|-|-|
+| SkuProfileAllocationStrategyInvalid | Sku Profile's Allocation Strategy is invalid. | Ensure that you're using either `CapacityOptimized` or `LowestPrice` as the `allocationStrategy`. |
+| SkuProfileVMSizesCannotBeNullOrEmpty | Sku Profile VM Sizes cannot be null or empty. Please provide a valid list of VM Sizes and retry. | Provide at least one VM size in the `skuProfile`. |
+| SkuProfileHasTooManyVMSizesInRequest | Too many VM Sizes were specified in the request. Please provide no more than 5 VM Sizes. | At this time, you can specify up to five VM sizes with Instance Mix. |
+| SkuProfileVMSizesCannotHaveDuplicates | Sku Profile contains duplicate VM Size: {duplicateVmSize}. Please remove any duplicates and retry. | Check the VM SKUs listed in the `skuProfile` and remove the duplicate VM size. |
+| SkuProfileUpdateNotAllowed | Virtual Machine Scale Sets with Sku Profile property cannot be updated. | At this time, you can't update the `skuProfile` of a scale set using Instance Mix. |
+| SkuProfileScenarioNotSupported | {propertyName} is not supported on Virtual Machine Scale Sets with Sku Profile | Instance Mix doesn't support certain scenarios today, like Azure Dedicated Host (`properties.hostGroup`), Capacity Reservations (`properties.virtualMachineProfile.capacityReservation`), and StandbyPools (`properties.standbyPoolProfile`). Adjust the template to ensure you're not using unsupported properties. |
+| SkuNameMustBeMixIfSkuProfileIsSpecified | Sku name is {skuNameValue}. Virtual Machine Scale Sets with Sku Profile must have the Sku name property set to "Mix" | Ensure that the `sku.name` property is set to `"Mix"`. |
+| SkuTierMustNotBeSetIfSkuProfileIsSpecified | Sku tier is {skuTierValue}. Virtual Machine Scale Sets with Sku Profile must not have the Sku tier property set. | `sku.tier` is an optional property for scale sets. With Instance Mix, `sku.tier` must be set to `null` or not specified. |
+| InvalidParameter | The value of parameter skuProfile is invalid. | Your subscription isn't registered for the Instance Mix feature. Follow the enrollment instructions to register for the Preview. |
+| FleetRPInternalError | An unexpected error occurred while computing the desired sku split. | Instance Mix isn't supported in this region yet. Deploy only in supported regions. |
+
+## FAQs
+### Can I use Spot and Standard VMs with Instance Mix?
+Yes, you can use both Spot and Standard VMs in your scale set deployments using Instance Mix. To do so, use [Spot Priority Mix](./spot-priority-mix.md) to define a percentage split of Spot and Standard VMs.
+
+### My region doesn't support Instance Mix today. Will it support Instance Mix in the future?
+Instance Mix is rolling out to all Azure regions during Public Preview. Instance Mix is currently available in the following regions: West US, West US2, East US, and East US2.
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
For a group of virtual machines undergoing an update, the Azure platform will or
**Within an availability set:** - All VMs in a common availability set aren't updated concurrently. - VMs in a common availability set are updated within Update Domain boundaries and VMs across multiple Update Domains aren't updated concurrently.-- In an Update Domain, no more than 20% of the VMs within a resource group will be updated at a time. For resource groups with less than 10 VMs, VMs update one at a time within an Update Domain.
+- In an Update Domain, no more than 20% of the VMs within an availability set will be updated at a time. For availability sets with fewer than 10 VMs, VMs update one at a time within an Update Domain.
Restricting the number of concurrently patched VMs across regions, within a region, or within an availability set limits the impact of a faulty patch on a given set of VMs. With health monitoring, any potential issues are flagged before they impact the entire workload.
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/resize-vm.md
This article shows you how to change an existing virtual machine's [VM size](../sizes.md).
-After you create a virtual machine (VM), you can scale the VM up or down by changing the VM size. In some cases, you must deallocate the VM first. Deallocation may be necessary if the new size isn't available on the same hardware cluster that is currently hosting the VM.
+After you create a virtual machine (VM), you can scale the VM up or down by changing the VM size. In some cases, you must deallocate the VM first. Deallocation may be necessary if the new size isn't available on the same hardware cluster that is currently hosting the VM. Even when deallocation isn't necessary, resizing a running VM causes it to restart. For this reason, treat a VM size change as a disruptive operation, especially for stateful workloads hosted on the VM.
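To see which sizes are available without deallocating, and then perform the resize, a minimal Azure CLI sketch looks like the following; the resource group, VM name, and target size are placeholders:

```azurecli-interactive
# Sizes listed here are available on the VM's current hardware cluster.
az vm list-vm-resize-options --resource-group myResourceGroup --name myVM --output table

# Resize the VM. If the target size isn't listed above, deallocate first
# with 'az vm deallocate', resize, and then start the VM again.
az vm resize --resource-group myResourceGroup --name myVM --size Standard_D4s_v5
```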
![A diagram showing a smaller Azure VM icon with a growing arrow pointing to a new larger Azure VM icon.](./media/size-resize-vm.png "Resizing a VM")
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
Previously updated : 07/01/2024 Last updated : 08/15/2024 # Public IP addresses
->[!Important]
->On September 30, 2025, Basic SKU public IPs will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). If you are currently using Basic SKU public IPs, make sure to upgrade to Standard SKU public IPs prior to the retirement date. For guidance on upgrading, visit [Upgrading a basic public IP address to Standard SKU - Guidance](public-ip-basic-upgrade-guidance.md).
- Public IP addresses allow Internet resources to communicate inbound to Azure resources. Public IP addresses enable Azure resources to communicate to Internet and public-facing Azure services. You dedicate the address to the resource until you unassign it. A resource without an assigned public IP can still communicate outbound. Azure automatically assigns an available dynamic IP address for outbound communication. This address isn't dedicated to the resource and can change over time. For more information about outbound connections in Azure, see [Understand outbound connections](../../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json). In Azure Resource Manager, a [public IP](virtual-network-public-ip-address.md) address is a resource that has its own properties.
In Azure Resource Manager, a [public IP](virtual-network-public-ip-address.md) a
The following resources can be associated with a public IP address:

* Virtual machine network interfaces
* Virtual Machine Scale Sets
-* Public Load Balancers
-
+* Azure Load Balancers (public)
* Virtual Network Gateways (VPN/ER)
* NAT gateways
* Application Gateways
* Azure Firewalls
* Bastion Hosts
* Route Servers
* API Management

For Virtual Machine Scale Sets, use [Public IP Prefixes](public-ip-address-prefix.md).
The following table shows the property a public IP can be associated to a resour
Public IP addresses can be created with an IPv4 or IPv6 address. You may be given the option to create a dual-stack deployment with an IPv4 and IPv6 address.

## SKU
+>[!Important]
+>On September 30, 2025, Basic SKU public IPs will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). If you are currently using Basic SKU public IPs, make sure to upgrade to Standard SKU public IPs prior to the retirement date. For guidance on upgrading, visit [Upgrading a basic public IP address to Standard SKU - Guidance](public-ip-basic-upgrade-guidance.md).
+
+Public IP addresses are created with a SKU of **Standard** or **Basic**. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with.
-Public IP addresses are created with a SKU of **Standard** or **Basic**. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with. Full details are listed in the table below:
+Full details are listed in the table below:
| Public IP address | Standard | Basic |
| --- | --- | --- |
Public IP addresses are created with a SKU of **Standard** or **Basic**. The SK
| [Routing preference](routing-preference-overview.md) | Supported to enable more granular control of how traffic is routed between Azure and the Internet. | Not supported. |
| Global tier | Supported via [cross-region load balancers](../../load-balancer/cross-region-overview.md). | Not supported. |
-> [!NOTE]
-> Basic SKU IPv4 addresses can be upgraded after creation to Standard SKU. To learn about SKU upgrade, refer to [Public IP upgrade](public-ip-upgrade-portal.md).
->[!IMPORTANT]
-> Virtual machines attached to a backend pool do not need a public IP address to be attached to a public load balancer. But if they do, matching SKUs are required for load balancer and public IP resources. You can't have a mixture of basic SKU resources and standard SKU resources. You can't attach standalone virtual machines, virtual machines in an availability set resource, or a virtual machine scale set resources to both SKUs simultaneously. New designs should consider using Standard SKU resources. For more information about a standard load balancer, see [Standard Load Balancer](../../load-balancer/load-balancer-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+Virtual machines attached to a backend pool do not need a public IP address to be attached to a public load balancer. But if they do, matching SKUs are required for load balancer and public IP resources. You can't have a mixture of basic SKU resources and standard SKU resources. You can't attach standalone virtual machines, virtual machines in an availability set resource, or a virtual machine scale set resources to both SKUs simultaneously. New designs should consider using Standard SKU resources. For more information about a standard load balancer, see [Standard Load Balancer](../../load-balancer/load-balancer-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
## IP address assignment Public IPs have two types of assignments: -- **Static** - The resource is assigned an IP address at the time it's created. The IP address is released when the resource is deleted. --- **Dynamic** - The IP address **isn't** given to the resource at the time of creation when selecting dynamic. The IP is assigned when you associate the public IP address with a resource. The IP address is released when you stop, or delete the resource.-
-**Static public IP addresses** are commonly used in the following scenarios:
+- **Dynamic** - The IP address **isn't** given to the resource at the time of creation when selecting dynamic. The IP is assigned when you associate the public IP address with a resource. The IP address is released when you stop or delete the resource. Dynamic public IP addresses are commonly used when there's no dependency on the IP address. For example, a public IP resource is released from a VM upon stop and then start. Any associated IP address is released if the allocation method is **dynamic**. If you don't want the IP address to change, set the allocation method to **static** to ensure the IP address remains the same.
+
+- **Static** - The resource is assigned an IP address at the time it's created. The IP address is released when the resource is deleted. When you set the allocation method to **static**, you cannot specify the actual IP address assigned to the public IP address resource. Azure assigns the IP address from a pool of available IP addresses in the Azure location the resource is created in.
+Static public IP addresses are commonly used in the following scenarios:
* When you must update firewall rules to communicate with your Azure resources.
* DNS name resolution, where a change in IP address would require updating A records.
* Your Azure resources communicate with other apps or services that use an IP address-based security model.
* You use TLS/SSL certificates linked to an IP address.
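As a point of reference, a static Standard SKU public IP can be created with a single CLI call. This is a minimal sketch with placeholder names; Azure still picks the actual address from its pool, as noted above:

```azurecli-interactive
# Create a Standard SKU public IP with static allocation; names are placeholders.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myStaticPublicIP \
  --sku Standard \
  --allocation-method Static \
  --version IPv4
```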
-> [!NOTE]
-> Even when you set the allocation method to **static**, you cannot specify the actual IP address assigned to the public IP address resource. Azure assigns the IP address from a pool of available IP addresses in the Azure location the resource is created in.
-
-**Basic public IP addresses** are commonly used for when there's no dependency on the IP address.
-
-For example, a public IP resource is released from a resource named **Resource A**. **Resource A** receives a different IP on start-up if the public IP resource is reassigned. Any associated IP address is released if the allocation method is changed from **static** to **dynamic**. Any associated IP address is unchanged if the allocation method is changed from **dynamic** to **static**. Set the allocation method to **static** to ensure the IP address remains the same.
| Resource | Static | Dynamic |
| --- | --- | --- |
| Standard public IPv4 | :white_check_mark: | x |
For instance, creation of a public IP with the following settings:
* **West US** Azure **location**
-The fully qualified domain name (FQDN) **contoso.westus.cloudapp.azure.com** resolves to the public IP address of the resource.
-
-> [!IMPORTANT]
-> Each domain name label created must be unique within its Azure location.
+The fully qualified domain name (FQDN) **contoso.westus.cloudapp.azure.com** resolves to the public IP address of the resource. Each domain name label created must be unique within its Azure location.
If a custom domain is desired for services that use a public IP, you can use [Azure DNS](../../dns/dns-custom-domain.md?toc=%2fazure%2fvirtual-network%2ftoc.json#public-ip-address) or an external DNS provider for your DNS Record.

## Domain Name Label Scope (preview)
-Public IPs also have an optional parameter for **Domain Name Label Scope**, which defines what domain label an object with the same name will use. This feature can help to prevent "dangling DNS names" which can be reused by malicious actors. When this option is chosen, the public IP address' DNS name will have an additional string in between the **domainnamelabel** and **location** fields, e.g. **contoso.fjdng2acavhkevd8.westus.cloudapp.Azure.com**. (This string is a hash generated from input specific to your subscription, resource group, domain name label, and other properties.)
+Public IPs also have an optional parameter for **Domain Name Label Scope**, which defines what domain label an object with the same name will use. This feature can help to prevent "dangling DNS names" which can be reused by malicious actors. When this option is chosen, the public IP address' DNS name will have an additional string in between the **domainnamelabel** and **location** fields, e.g. **contoso.fjdng2acavhkevd8.westus.cloudapp.Azure.com**. (This string is a hash generated from input specific to your subscription, resource group, domain name label, and other properties).
+
+The domain name label scope can only be specified at the creation of a public IP address.
>[!Important] > Domain Name Label Scope is currently in public preview. It's provided without a service-level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
The value of the **Domain Name Label Scope** must match one of the options below
For example, if **SubscriptionReuse** is selected as the option, and a customer who has the example domain name label **contoso.fjdng2acavhkevd8.westus.cloudapp.Azure.com** deletes and re-deploys a public IP address using the same template as before, the domain name label will remain the same. If the customer deploys a public IP address using this same template under a different subscription, the domain name label would change (e.g. **contoso.c9ghbqhhbxevhzg9.westus.cloudapp.Azure.com**).
-> [!IMPORTANT]
-> The domain name label scope can only be specified at the creation of a public IP address.
- ## Availability Zone
-Public IP addresses with a standard SKU can be created as nonzonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md).
+Standard SKU Public IPs can be created as non-zonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md). Basic SKU Public IPs do not have any zones and are created as non-zonal.
+A public IP's availability zone can't be changed after the public IP's creation.
-A zone-redundant IP is created in all zones for a region and can survive any single zone failure. A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. A "nonzonal" public IP address is placed into a zone for you by Azure and doesn't give a guarantee of redundancy.
+| Value | Behavior |
+| | |
+| Non-zonal | A non-zonal public IP address is placed into a zone for you by Azure and doesn't give a guarantee of redundancy. |
+| Zonal | A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. |
+| Zone-redundant | A zone-redundant IP is created in all zones for a region and can survive any single zone failure. |
-In regions without availability zones, all public IP addresses are created as nonzonal. Public IP addresses created in a region that is later upgraded to have availability zones remain nonzonal. A public IP's availability zone can't be changed after the public IP's creation.
+In regions without availability zones, all public IP addresses are created as non-zonal. Public IP addresses created in a region that is later upgraded to have availability zones remain non-zonal.
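In a region that supports availability zones, a zone-redundant Standard public IP can be requested explicitly at creation time. A minimal sketch, with placeholder names and an assumed region:

```azurecli-interactive
# Create a zone-redundant Standard SKU public IP across zones 1, 2, and 3.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myZoneRedundantIP \
  --location westus2 \
  --sku Standard \
  --zone 1 2 3
```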
-> [!NOTE]
-> All basic SKU public IP addresses are created as non-zonal. Any IP that is upgraded from a basic SKU to standard SKU remains non-zonal.
+> [!IMPORTANT]
+> We are updating Standard non-zonal IPs to be zone-redundant by default on a region by region basis. This means that in the following 12 regions, all IPs created (except zonal) are zone-redundant.
+> Region availability: Canada Central, Poland Central, Israel Central, France Central, Qatar Central, Norway East, Italy North, Sweden Central, South Africa North, Brazil South, Germany West Central, West US 2.
## Other public IP address features
-There are other attributes that can be used for a public IP address.
+There are other attributes that can be used for a public IP address (Standard SKU only).
* The Global **Tier** option creates a global anycast IP that can be used with cross-region load balancers. * The Internet **Routing Preference** option minimizes the time that traffic spends on the Microsoft network, lowering the egress data transfer cost.
-> [!NOTE]
-> At this time, both the **Tier** and **Routing Preference** feature are available for standard SKU IPv4 addresses only. They can't be utilized on the same IP address concurrently.
->
## Limits

The limits for IP addressing are listed in the full set of [limits for networking](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits) in Azure. The limits are per region and per subscription.
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS
| Type |Area |Name |Description | Date added | Limitations |
|---|---|---|---|---|---|
+| Metric| Routing | [New Virtual Hub Metrics](monitor-virtual-wan-reference.md#hub-router-metrics)| There are now two additional Virtual WAN hub metrics that display the virtual hub's capacity and spoke VM utilization: **Routing Infrastructure Units** and **Spoke VM Utilization**.| August 2024 | The **Spoke VM Utilization** metric represents an approximate number of deployed spoke VMs as a percentage of the total number of spoke VMs that the hub's routing infrastructure units can support.
| Feature| Routing | [Routing intent](how-to-routing-policies.md)| Routing intent is the mechanism through which you can configure Virtual WAN to send private or internet traffic via a security solution deployed in the hub.|May 2023|Routing Intent is Generally Available in Azure public cloud. See documentation for [additional limitations](how-to-routing-policies.md#knownlimitations).|
|Feature| Routing |[Virtual hub routing preference](about-virtual-hub-routing-preference.md)|Hub routing preference gives you more control over your infrastructure by allowing you to select how your traffic is routed when a virtual hub router learns multiple routes across S2S VPN, ER, and SD-WAN NVA connections. |October 2022| |
|Feature| Routing|[Bypass next hop IP for workloads within a spoke VNet connected to the virtual WAN hub generally available](how-to-virtual-hub-routing.md)|Bypassing next hop IP for workloads within a spoke VNet connected to the virtual WAN hub lets you deploy and access other resources in the VNet with your NVA without any additional configuration.|October 2022| |
vpn-gateway About Gateway Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-gateway-skus.md
description: Learn about VPN Gateway SKUs.
Previously updated : 07/23/2024 Last updated : 08/15/2024
When you configure a virtual network gateway SKU, select the SKU that satisfies
| | | |**Basic** (**) | **Route-based VPN**: 10 tunnels for S2S/connections; no RADIUS authentication for P2S; no IKEv2 for P2S<br>**Policy-based VPN**: (IKEv1): 1 S2S/connection tunnel; no P2S| | **All Generation1 and Generation2 SKUs except Basic** | **Route-based VPN**: up to 100 tunnels (*), P2S, BGP, active-active, custom IPsec/IKE policy, ExpressRoute/VPN coexistence |
-| | |
(*) You can configure "PolicyBasedTrafficSelectors" to connect a route-based VPN gateway to multiple on-premises policy-based firewall devices. Refer to [Connect VPN gateways to multiple on-premises policy-based VPN devices using PowerShell](vpn-gateway-connect-multiple-policybased-rm-ps.md) for details.
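The linked article covers the PowerShell workflow. As an assumed Azure CLI equivalent (verify the flag against your installed CLI version; it is not documented in this article), enabling policy-based traffic selectors on an existing connection looks roughly like this, with placeholder names:

```azurecli-interactive
# Enable policy-based traffic selectors on an existing S2S connection.
# Resource names are placeholders; an IPsec policy must also be set on the connection.
az network vpn-connection update \
  --resource-group myResourceGroup \
  --name myVNetToSiteConnection \
  --use-policy-based-traffic-selectors true
```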
-(\*\*) The Basic SKU has certain feature and performance limitations and should not be used for production purposes. Verify that the feature that you need is supported before you use the Basic SKU. The Basic SKU doesn't support IPv6 and can only be configured using PowerShell or Azure CLI. Additionally, the Basic SKU doesn't support RADIUS authentication.
+(\*\*) The Basic SKU has certain feature and performance limitations and shouldn't be used for production purposes. Verify that the feature that you need is supported before you use the Basic SKU. The Basic SKU doesn't support IPv6 and can only be configured using PowerShell or Azure CLI. Additionally, the Basic SKU doesn't support RADIUS authentication.
## <a name="workloads"></a>Gateway SKUs - Production vs. Dev-Test workloads

Due to the differences in SLAs and feature sets, we recommend the following SKUs for production vs. dev-test:
-| **Workload** | **SKUs** |
-| | |
+| **Workload** | **SKUs** |
+| | |
| **Production, critical workloads** | All Generation1 and Generation2 SKUs, except Basic| | **Dev-test or proof of concept** | Basic (**) |
-| | |
-(\*\*) The Basic SKU has certain feature and performance limitations and should not be used for production purposes. Verify that the feature that you need is supported before you use the Basic SKU. The Basic SKU doesn't support IPv6 and can only be configured using PowerShell or Azure CLI. Additionally, the Basic SKU doesn't support RADIUS authentication.
+
+(\*\*) The Basic SKU has certain feature and performance limitations and shouldn't be used for production purposes. Verify that the feature that you need is supported before you use the Basic SKU. The Basic SKU doesn't support IPv6 and can only be configured using PowerShell or Azure CLI. Additionally, the Basic SKU doesn't support RADIUS authentication.
If you're using the old SKUs (legacy), the production SKU recommendations are Standard and HighPerformance. For information and instructions for old SKUs, see [Gateway SKUs (legacy)](vpn-gateway-about-skus-legacy.md). ## About legacy SKUs
-For information about working with the legacy gateway SKUs (Basic, Standard, and High Performance), including SKU deprecation, see [Managing legacy gateway SKUs](vpn-gateway-about-skus-legacy.md).
+For information about working with the legacy gateway SKUs (Standard and High Performance), including SKU deprecation, see [Managing legacy gateway SKUs](vpn-gateway-about-skus-legacy.md).
## Specify a SKU
vpn-gateway Create Gateway Basic Sku Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-gateway-basic-sku-powershell.md
description: Learn how to create a Basic SKU virtual network gateway for a VPN c
Previously updated : 07/17/2024 Last updated : 08/15/2024
This article helps you create a Basic SKU Azure VPN gateway using PowerShell. The VPN gateway you create can be either RouteBased, or PolicyBased, depending on your connection requirements. A VPN gateway is used when creating a VPN connection to your on-premises network. You can also use a VPN gateway to connect VNets.
+> [!IMPORTANT]
+> The Basic SKU has certain feature and performance limitations and shouldn't be used for production purposes. For more information about SKUs, see [About gateway SKUs](about-gateway-skus.md).
+ :::image type="content" source="./media/create-gateway-basic-sku/gateway-diagram.png" alt-text="Diagram that shows a virtual network and a VPN gateway." lightbox="./media/create-gateway-basic-sku/gateway-diagram-expand.png"::: * The left side of the diagram shows the virtual network and the VPN gateway that you create by using the steps in this article.
This article helps you create a Basic SKU Azure VPN gateway using PowerShell. Th
The steps in this article create a virtual network, a subnet, a gateway subnet, and a VPN gateway (virtual network gateway) using the Basic SKU. The article steps specify a **RouteBased** VPN type. You can also specify a **PolicyBased** VPN type using the steps in this article. Once the gateway creation completes, you can then create connections. If you want to create a gateway using a SKU other than the Basic SKU, see the [Portal article](tutorial-create-gateway-portal.md).
-Basic SKU VPN gateways have limitations. For more information about SKUs and Basic SKU limitations, see [About gateway SKUs](about-gateway-skus.md). A few of the limitations that affect the settings used in this article are:
+The Basic SKU has certain feature and performance limitations and shouldn't be used for production purposes. Some of the limitations of the Basic SKU are:
-* A Basic SKU VPN gateway must use the Dynamic allocation method for public IP address, not Static.
-* A Basic SKU VPN gateway uses a Basic SKU public IP address, not Standard.
-* You can't create a Basic SKU VPN gateway using the Azure portal.
## Before you begin
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
Site-to-site connections to an on-premises network require a VPN device. In this
## <a name="CreateConnection"></a>Create VPN connections
-Create a site-to-site VPN connection between your virtual network gateway and your on-premises VPN device. If you're using an active-active mode gateway (recommended), each gateway VM instance has a separate assigned IP address object. To properly configure [highly available connectivity](vpn-gateway-highlyavailable.md), you must connect each VM instance to your VPN device.
+Create a site-to-site VPN connection between your virtual network gateway and your on-premises VPN device. If you're using an active-active mode gateway (recommended), each gateway VM instance has a separate IP address. To properly configure [highly available connectivity](vpn-gateway-highlyavailable.md), you must establish a tunnel between each VM instance and your VPN device. Both tunnels are part of the same connection.
Create a connection by using the following values:
vpn-gateway Vpn Gateway About Skus Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-skus-legacy.md
For SKU deprecation, see the [SKU deprecation](#sku-deprecation) and SKU depreca
## <a name="agg"></a>Estimated aggregate throughput by SKU
+The following table shows the gateway types and the estimated aggregate throughput by gateway SKU. This table applies to the Resource Manager and classic deployment models.
+
+Pricing differs between gateway SKUs. For more information, see [VPN Gateway Pricing](https://azure.microsoft.com/pricing/details/vpn-gateway).
+
+The UltraPerformance gateway SKU isn't represented in this table. For information about the UltraPerformance SKU, see the [ExpressRoute](../expressroute/expressroute-about-virtual-network-gateways.md) documentation.
+
+| | **VPN Gateway throughput (1)** | **VPN Gateway max IPsec tunnels (2)** | **ExpressRoute Gateway throughput** | **VPN Gateway and ExpressRoute coexist** |
+|---|---|---|---|---|
+| **Standard SKU (3)(4)** |100 Mbps |10 |1000 Mbps |Yes |
+| **High Performance SKU (3)** |200 Mbps |30 |2000 Mbps |Yes |
+
+(1) The VPN throughput is a rough estimate based on the measurements between VNets in the same Azure region. It isn't a guaranteed throughput for cross-premises connections across the Internet. It's the maximum possible throughput measurement.
+
+(2) The number of tunnels refers to RouteBased VPNs. A PolicyBased VPN can only support one Site-to-Site VPN tunnel.
+
+(3) PolicyBased VPNs aren't supported for this SKU. They're supported for the Basic SKU.
+
+(4) Active-active S2S VPN Gateway connections aren't supported for this SKU. Active-active is supported on the HighPerformance SKU.
## <a name="config"></a>Supported configurations by SKU and VPN type
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
You pay for two things: the hourly compute costs for the virtual network gateway
* If you're sending traffic between virtual networks in different regions, the pricing is based on the region. * If you're sending traffic only between virtual networks that are in the same region, there are no data costs. Traffic between VNets in the same region is free.
-## <a name="new"></a>What's new?
+## <a name="new"></a>What's new in VPN Gateway?
Azure VPN Gateway is updated regularly. To stay current with the latest announcements, see the [What's new?](whats-new.md) article. The article highlights the following points of interest:
vpn-gateway Vpn Gateway Classic Resource Manager Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md
- Title: Migrate VPN gateways from Classic to Resource Manager-
-description: Learn about migrating VPN Gateway resources from the classic deployment model to the Resource Manager deployment model.
---- Previously updated : 11/02/2023---
-# VPN Gateway classic to Resource Manager migration
-
-VPN gateways can now be migrated from the classic deployment model to [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). For more information, see [Resource Manager deployment model](../azure-resource-manager/management/overview.md). In this article, we discuss how to migrate from classic deployments to the Resource Manager model.
-
-> [!IMPORTANT]
-> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
-
-VPN gateways are migrated as part of VNet migration from classic to Resource Manager. This migration is done one VNet at a time. There aren't additional requirements in terms of tools or prerequisites to migrate. Migration steps are identical to the existing VNet migration and are documented at [IaaS resources migration page](../virtual-machines/migration-classic-resource-manager-ps.md).
-
-There isn't a data path downtime during migration and thus existing workloads continue to function without the loss of on-premises connectivity during migration. The public IP address associated with the VPN gateway doesn't change during the migration process. This implies that you won't need to reconfigure your on-premises router once the migration is completed.
-
-The Resource Manager model is different from the classic model and is composed of virtual network gateways, local network gateways and connection resources. These represent the VPN gateway itself, the local-site representing on premises address space and connectivity between the two respectively. Once migration is completed, your gateways won't be available in the classic model and all management operations on virtual network gateways, local network gateways, and connection objects must be performed using the Resource Manager model.
-
-## Supported scenarios
-
-Most common VPN connectivity scenarios are covered by classic to Resource Manager migration. The supported scenarios include:
-
-* Point-to-site connectivity
-* Site-to-site connectivity with VPN Gateway connected to on premises location
-* VNet-to-VNet connectivity between two VNets using VPN gateways
-* Multiple VNets connected to same on-premises location
-* Multi-site connectivity
-* Forced tunneling enabled VNets
-
-Scenarios that aren't supported include:
-
-* VNet with both an ExpressRoute gateway and a VPN gateway isn't currently supported.
-* Transit scenarios where VM extensions are connected to on-premises servers. Transit VPN connectivity limitations are detailed in the next sections.
-
-> [!NOTE]
-> CIDR validation in the Resource Manager model is stricter than the one in the classic model. Before migrating, ensure that classic address ranges given conform to valid CIDR format before beginning the migration. CIDR can be validated using any common CIDR validators. VNet or local sites with invalid CIDR ranges when migrated result in a failed state.
->
-
-## VNet-to-VNet connectivity migration
-
-VNet-to-VNet connectivity in the classic deployment model was achieved by creating a local site representation of the connected VNet. Customers were required to create two local sites that represented the two VNets which needed to be connected together. These were then connected to the corresponding VNets using IPsec tunnel to establish connectivity between the two VNets. This model has manageability challenges, since any address range changes in one VNet must also be maintained in the corresponding local site representation. In the Resource Manager model, this workaround is no longer needed. The connection between the two VNets can be directly achieved using 'Vnet2Vnet' connection type in the Connection resource.
--
-During VNet migration, we detect that the connected entity to the current VNet's VPN gateway is another VNet. We ensure that once migration of both VNets is completed, you no longer see two local sites representing the other VNet. The classic model of two VPN gateways, two local sites, and two connections between them is transformed to the Resource Manager model with two VPN gateways and two connections of type Vnet2Vnet.
-
-## Transit VPN connectivity
-
-You can configure VPN gateways in a topology such that on-premises connectivity for a VNet is achieved by connecting to another VNet that is directly connected to on-premises. This is transit VPN connectivity, where instances in first VNet are connected to on-premises resources via transit to the VPN gateway in the connected VNet that's directly connected to on-premises. To achieve this configuration in classic deployment model, you need to create a local site that has aggregated prefixes representing both the connected VNet and on-premises address space. This representational local site is then connected to the VNet to achieve transit connectivity. The classic model also has similar manageability challenges since any change in the on-premises address range must also be maintained on the local site representing the aggregate of VNet and on-premises. Introduction of BGP support in Resource Manager supported gateways simplifies manageability, since the connected gateways can learn routes from on-premises without manual modification to prefixes.
--
-Since we transform VNet-to-VNet connectivity without requiring local sites, the transit scenario loses on-premises connectivity for the VNet that is indirectly connected to on-premises. The loss of connectivity can be mitigated in the following two ways, after migration is completed:
-
-* Enable BGP on VPN gateways that are connected together and to the on-premises location. Enabling BGP restores connectivity without any other configuration changes since routes are learned and advertised between VNet gateways. Note that the BGP option is only available on Standard and higher SKUs.
-* Establish an explicit connection from affected VNet to the local network gateway that represents the on-premises location. This would also require changing configuration on the on-premises router to create and configure the IPsec tunnel.
-
-## Next steps
-
-After learning about VPN gateway migration support, go to [platform-supported migration of IaaS resources from classic to Resource Manager](../virtual-machines/migration-classic-resource-manager-ps.md) to get started.
vpn-gateway Vpn Gateway Delete Vnet Gateway Classic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-classic-powershell.md
- Title: 'Delete a virtual network gateway: Azure classic'
-description: Learn how to delete a virtual network gateway using PowerShell in the classic deployment model.
----- Previously updated : 10/31/2023--
-# Delete a virtual network gateway using PowerShell (classic)
-
-This article helps you delete a VPN gateway in the classic (legacy) deployment model by using PowerShell. After the virtual network gateway is deleted, modify the network configuration file to remove elements that you're no longer using.
-
-The steps in this article apply to the classic deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-delete-vnet-gateway-powershell.md)**.
-
-> [!IMPORTANT]
-> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
-
-## <a name="connect"></a>Step 1: Connect to Azure
-
-### 1. Install the latest PowerShell cmdlets.
--
-### 2. Connect to your Azure account.
-
-Open your PowerShell console with elevated rights and connect to your account. Use the following example to help you connect:
-
-1. Open your PowerShell console with elevated rights.
-2. Connect to your account. Use the following example to help you connect:
-
- ```powershell
- Add-AzureAccount
- ```
-
-## <a name="export"></a>Step 2: Export and view the network configuration file
-
-Create a directory on your computer and then export the network configuration file to the directory. You use this file to both view the current configuration information, and also to modify the network configuration.
-
-In this example, the network configuration file is exported to C:\AzureNet.
-
-```powershell
-Get-AzureVNetConfig -ExportToFile C:\AzureNet\NetworkConfig.xml
-```
-
-Open the file with a text editor and view the name for your classic VNet. When you create a VNet in the Azure portal, the full name that Azure uses isn't visible in the portal. For example, a VNet that appears to be named 'ClassicVNet1' in the Azure portal, might have a longer name in the network configuration file. The name might look something like: 'Group ClassicRG1 ClassicVNet1'. Virtual network names are listed as **'VirtualNetworkSite name ='**. Use the names in the network configuration file when running your PowerShell cmdlets.
-
-## <a name="delete"></a>Step 3: Delete the virtual network gateway
-
-When you delete a virtual network gateway, all connections to the VNet through the gateway are disconnected. If you have P2S clients connected to the VNet, they'll be disconnected without warning.
-
-This example deletes the virtual network gateway. Make sure to use the full name of the virtual network from the network configuration file.
-
-```powershell
-Remove-AzureVNetGateway -VNetName "Group ClassicRG1 ClassicVNet1"
-```
-
-If successful, the return shows:
-
-```
-Status : Successful
-```
-
-## <a name="modify"></a>Step 4: Modify the network configuration file
-
-When you delete a virtual network gateway, the cmdlet doesn't modify the network configuration file. You need to modify the file to remove the elements that are no longer being used. The following sections help you modify the network configuration file that you downloaded.
-
-### <a name="lnsref"></a>Local Network Site References
-
-To remove site reference information, make configuration changes to **ConnectionsToLocalNetwork/LocalNetworkSiteRef**. Removing a local site reference triggers Azure to delete a tunnel. Depending on the configuration that you created, you might not have a **LocalNetworkSiteRef** listed.
-
-```
-<Gateway>
- <ConnectionsToLocalNetwork>
- <LocalNetworkSiteRef name="D1BFC9CB_Site2">
- <Connection type="IPsec" />
- </LocalNetworkSiteRef>
- </ConnectionsToLocalNetwork>
- </Gateway>
-```
-
-Example:
-
-```
-<Gateway>
- <ConnectionsToLocalNetwork>
- </ConnectionsToLocalNetwork>
- </Gateway>
-```
-
-### <a name="lns"></a>Local Network Sites
-
-Remove any local sites that you're no longer using. Depending on the configuration you created, it's possible that you don't have a **LocalNetworkSite** listed.
-
-```
-<LocalNetworkSites>
- <LocalNetworkSite name="Site1">
- <AddressSpace>
- <AddressPrefix>192.168.0.0/16</AddressPrefix>
- </AddressSpace>
- <VPNGatewayAddress>5.4.3.2</VPNGatewayAddress>
- </LocalNetworkSite>
- <LocalNetworkSite name="Site3">
- <AddressSpace>
- <AddressPrefix>192.168.0.0/16</AddressPrefix>
- </AddressSpace>
- <VPNGatewayAddress>57.179.18.164</VPNGatewayAddress>
- </LocalNetworkSite>
- </LocalNetworkSites>
-```
-
-In this example, we removed only Site3.
-
-```
-<LocalNetworkSites>
- <LocalNetworkSite name="Site1">
- <AddressSpace>
- <AddressPrefix>192.168.0.0/16</AddressPrefix>
- </AddressSpace>
- <VPNGatewayAddress>5.4.3.2</VPNGatewayAddress>
- </LocalNetworkSite>
- </LocalNetworkSites>
-```
-
-### <a name="clientaddresss"></a>Client AddressPool
-
-If you had a P2S connection to your VNet, you'll have a **VPNClientAddressPool**. Remove the client address pools that correspond to the virtual network gateway that you deleted.
-
-```
-<Gateway>
- <VPNClientAddressPool>
- <AddressPrefix>10.1.0.0/24</AddressPrefix>
- </VPNClientAddressPool>
- <ConnectionsToLocalNetwork />
- </Gateway>
-```
-
-Example:
-
-```
-<Gateway>
- <ConnectionsToLocalNetwork />
- </Gateway>
-```
-
-### <a name="gwsub"></a>GatewaySubnet
-
-Delete the **GatewaySubnet** that corresponds to the VNet.
-
-```
-<Subnets>
- <Subnet name="FrontEnd">
- <AddressPrefix>10.11.0.0/24</AddressPrefix>
- </Subnet>
- <Subnet name="GatewaySubnet">
- <AddressPrefix>10.11.1.0/29</AddressPrefix>
- </Subnet>
- </Subnets>
-```
-
-Example:
-
-```
-<Subnets>
- <Subnet name="FrontEnd">
- <AddressPrefix>10.11.0.0/24</AddressPrefix>
- </Subnet>
- </Subnets>
-```
-
-## <a name="upload"></a>Step 5: Upload the network configuration file
-
-Save your changes and upload the network configuration file to Azure. Make sure you change the file path as necessary for your environment.
-
-```powershell
-Set-AzureVNetConfig -ConfigurationPath C:\AzureNet\NetworkConfig.xml
-```
-
-If successful, the return shows something similar to this example:
-
-```
-OperationDescription OperationId OperationStatus
--
-Set-AzureVNetConfig e0ee6e66-9167-cfa7-a746-7casb9 Succeeded
-```
vpn-gateway Vpn Gateway Howto Point To Site Classic Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md
- Title: 'Connect a computer to a virtual network using P2S: certificate authentication: Azure portal classic'-
-description: Learn how to create a classic Point-to-Site VPN Gateway connection using the Azure portal.
--- Previously updated : 10/31/2023---
-# Configure a Point-to-Site connection by using certificate authentication (classic)
-
-This article shows you how to create a VNet with a Point-to-Site connection using the classic (legacy) deployment model. This configuration uses certificates to authenticate the connecting client, either self-signed or CA issued. These instructions are for the classic deployment model. You can no longer create a gateway using the classic deployment model. See the [Resource Manager version of this article](vpn-gateway-howto-point-to-site-resource-manager-portal.md) instead.
-
-> [!IMPORTANT]
-> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
-
-You use a Point-to-Site (P2S) VPN gateway to create a secure connection to your virtual network from an individual client computer. Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location. When you have only a few clients that need to connect to a VNet, a P2S VPN is a useful solution to use instead of a Site-to-Site VPN. A P2S VPN connection is established by starting it from the client computer.
-
-> [!IMPORTANT]
-> The classic deployment model supports Windows VPN clients only and uses the Secure Socket Tunneling Protocol (SSTP), an SSL-based VPN protocol. To support non-Windows VPN clients, you must create your VNet with the Resource Manager deployment model. The Resource Manager deployment model supports IKEv2 VPN in addition to SSTP. For more information, see [About P2S connections](point-to-site-about.md).
->
---
-## Settings and requirements
-
-### Requirements
-
-Point-to-Site certificate authentication connections require the following items. There are steps in this article that will help you create them.
-
-* A Dynamic VPN gateway.
-* The public key (.cer file) for a root certificate, which is uploaded to Azure. This key is considered a trusted certificate and is used for authentication.
-* A client certificate generated from the root certificate, and installed on each client computer that will connect. This certificate is used for client authentication.
-* A VPN client configuration package must be generated and installed on every client computer that connects. The client configuration package configures the native VPN client that's already on the operating system with the necessary information to connect to the VNet.
-
-Point-to-Site connections don't require a VPN device or an on-premises public-facing IP address. The VPN connection is created over SSTP (Secure Socket Tunneling Protocol). On the server side, we support SSTP versions 1.0, 1.1, and 1.2. The client decides which version to use. For Windows 8.1 and above, SSTP uses 1.2 by default.
-
-For more information, see [About Point-to-Site connections](point-to-site-about.md) and the [FAQ](#faq).
-
-### Example settings
-
-Use the following values to create a test environment, or refer to these values to better understand the examples in this article:
-
-* **Resource Group:** TestRG
-* **VNet Name:** VNet1
-* **Address space:** 192.168.0.0/16 <br>For this example, we use only one address space. You can have more than one address space for your VNet.
-* **Subnet name:** FrontEnd
-* **Subnet address range:** 192.168.1.0/24
-* **GatewaySubnet:** 192.168.200.0/24
-* **Region:** (US) East US
-* **Client address space:** 172.16.201.0/24 <br> VPN clients that connect to the VNet by using this Point-to-Site connection receive an IP address from the specified pool.
-* **Connection type**: Select **Point-to-site**.
-
-Before you begin, verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
-
-## <a name="vnet"></a>Create a virtual network
-
-If you already have a VNet, verify that the settings are compatible with your VPN gateway design. Pay particular attention to any subnets that might overlap with other networks.
---
-## <a name="gateway"></a>Create a VPN gateway
-
-1. Navigate to the VNet that you created.
-1. On the VNet page, under Settings, select **Gateway**. On the **Gateway** page, you can view the gateway for your virtual network. This virtual network doesn't yet have a gateway. Click the note that says **Click here to add a connection and a gateway**.
-1. On the **Configure a VPN connection and gateway** page, select the following settings:
-
- * Connection type: Point-to-site
- * Client address space: Add the IP address range from which the VPN clients receive an IP address when connecting. Use a private IP address range that doesn't overlap with the on-premises location that you connect from, or with the VNet that you connect to.
-1. Leave the checkbox for **Do not configure a gateway at this time** unselected. We will create a gateway.
-1. At the bottom of the page, select **Next: Gateway >**.
-1. On the **Gateway** tab, select the following values:
-
- * **Size:** The size is the gateway SKU for your virtual network gateway. In the Azure portal, the default SKU is **Default**. For more information about gateway SKUs, see [About VPN gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku).
- * **Routing Type:** You must select **Dynamic** for a point-to-site configuration. Static routing won't work.
- * **Gateway subnet:** This field is already autofilled. You can't change the name. If you try to change the name using PowerShell or any other means, the gateway won't work properly.
- * **Address range (CIDR block):** While it's possible to create a gateway subnet as small as /29, we recommend that you create a larger subnet that includes more addresses by selecting at least /28 or /27. Doing so will allow for enough addresses to accommodate possible additional configurations that you might want in the future. When working with gateway subnets, avoid associating a network security group (NSG) to the gateway subnet. Associating a network security group to this subnet might cause your VPN gateway to not function as expected.
-1. Select **Review + create** to validate your settings.
-1. Once validation passes, select **Create**. A VPN gateway can take up to 45 minutes to complete, depending on the gateway SKU that you select.
-
-## <a name="generatecerts"></a>Create certificates
-
-Azure uses certificates to authenticate VPN clients for Point-to-Site VPNs. You upload the public key information of the root certificate to Azure. The public key is then considered *trusted*. Client certificates must be generated from the trusted root certificate, and then installed on each client computer in the Certificates-Current User\Personal\Certificates certificate store. The certificate is used to authenticate the client when it connects to the VNet.
-
-If you use self-signed certificates, they must be created by using specific parameters. You can create a self-signed certificate by using the instructions for [PowerShell and Windows 10 or later](vpn-gateway-certificates-point-to-site.md), or [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md). It's important to follow the steps in these instructions when you use self-signed root certificates and generate client certificates from the self-signed root certificate. Otherwise, the certificates you create won't be compatible with P2S connections and you'll receive a connection error.
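For reference, the following PowerShell sketch shows the general shape of the commands described in the linked instructions for Windows 10 or later. The certificate names `P2SRootCert` and `P2SChildCert` are example values; follow the linked articles for the authoritative parameters.

```powershell
# Create a self-signed root certificate in the current user's certificate store (example name).
$rootCert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
    -Subject "CN=P2SRootCert" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign

# Generate a client certificate signed by the root certificate. The client authentication
# EKU (1.3.6.1.5.5.7.3.2) is required so the certificate can be used for P2S authentication.
New-SelfSignedCertificate -Type Custom -DnsName "P2SChildCert" -KeySpec Signature `
    -Subject "CN=P2SChildCert" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -Signer $rootCert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")
```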
-
-### Acquire the public key (.cer) for the root certificate
--
-### Generate a client certificate
--
-## Upload the root certificate .cer file
-
-After the gateway has been created, upload the .cer file (which contains the public key information) for a trusted root certificate to the Azure server. Don't upload the private key for the root certificate. After you upload the certificate, Azure uses it to authenticate clients that have installed a client certificate generated from the trusted root certificate. You can later upload additional trusted root certificate files (up to 20), if needed.
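If you created the root certificate with PowerShell as sketched earlier, one way to produce a Base-64 encoded .cer file for upload is shown below. The file paths are placeholders; the sketch assumes the `C:\certs` folder already exists and that only one certificate matches the example subject name.

```powershell
# Locate the example root certificate created earlier (adjust the subject if you used a different name).
$rootCert = Get-ChildItem -Path "Cert:\CurrentUser\My" | Where-Object { $_.Subject -eq "CN=P2SRootCert" }

# Export the public key only (no private key) as a DER-encoded .cer file.
Export-Certificate -Cert $rootCert -FilePath "C:\certs\P2SRootCert.cer"

# Convert the DER-encoded file to Base-64 encoding for upload.
certutil -encode "C:\certs\P2SRootCert.cer" "C:\certs\P2SRootCertBase64.cer"
```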
-
-1. Navigate to the virtual network you created.
-1. Under **Settings**, select **Point-to-site connections**.
-1. Select **Manage certificate**.
-1. Select **Upload**.
-1. On the **Upload a certificate** pane, select the folder icon and navigate to the certificate you want to upload.
-1. Select **Upload**.
-1. After the certificate has uploaded successfully, you can view it on the Manage certificate page. You might need to select **Refresh** to view the certificate you just uploaded.
-
-## Configure the client
-
-To connect to a VNet by using a Point-to-Site VPN, each client must install a package to configure the native Windows VPN client. The configuration package configures the native Windows VPN client with the settings necessary to connect to the virtual network.
-
-You can use the same VPN client configuration package on each client computer, as long as the version matches the architecture for the client. For the list of client operating systems that are supported, see [About Point-to-Site connections](point-to-site-about.md) and the [FAQ](#faq).
-
-### Generate and install a VPN client configuration package
-
-1. Navigate to the **Point-to-site connections** settings for your VNet.
-1. At the top of the page, select the download package that corresponds to the client operating system where it will be installed:
-
- * For 64-bit clients, select **VPN client (64-bit)**.
- * For 32-bit clients, select **VPN client (32-bit)**.
-
-1. Azure generates a package with the specific settings that the client requires. Each time you make changes to the VNet or gateway, you need to download a new client configuration package and install it on your client computers.
-1. After the package generates, select **Download**.
-1. Install the client configuration package on your client computer. When installing, if you see a SmartScreen popup saying Windows protected your PC, select **More info**, then select **Run anyway**. You can also save the package to install on other client computers.
-
-### Install a client certificate
-
-For this exercise, when you generated the client certificate, it was automatically installed on your computer. To create a P2S connection from a different client computer than the one used to generate the client certificates, you must install the generated client certificate on that computer.
-
-When you install a client certificate, you need the password that was created when the client certificate was exported. Typically, you can install the certificate by just double-clicking it. For more information, see [Install an exported client certificate](vpn-gateway-certificates-point-to-site.md#install).
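If you prefer to script the installation instead of double-clicking the .pfx file, something along these lines can be used. The file path is a placeholder for your exported client certificate.

```powershell
# Prompt for the password that was set when the client certificate was exported.
$pfxPassword = Read-Host -Prompt "Enter the .pfx password" -AsSecureString

# Install the exported client certificate into Certificates - Current User\Personal\Certificates.
Import-PfxCertificate -FilePath "C:\certs\P2SClientCert.pfx" `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -Password $pfxPassword
```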
-
-## Connect to your VNet
-
->[!NOTE]
->You must have Administrator rights on the client computer from which you are connecting.
->
-
-1. On the client computer, go to VPN settings.
-1. Select the VPN that you created. If you used the example settings, the connection will be labeled **Group TestRG VNet1**.
-1. Select **Connect**.
-1. In the Windows Azure Virtual Network box, select **Connect**. If a pop-up message about the certificate appears, select **Continue** to use elevated privileges and **Yes** to accept configuration changes.
-1. When your connection succeeds, you'll see a **Connected** notification.
--
-## Verify the VPN connection
-
-1. Verify that your VPN connection is active. Open an elevated command prompt on your client computer, and run **ipconfig /all**.
-1. View the results. Notice that the IP address you received is one of the addresses within the point-to-site client address pool that you specified in your configuration. The results should be similar to this example:
-
- ```
- PPP adapter VNet1:
- Connection-specific DNS Suffix .:
- Description.....................: VNet1
- Physical Address................:
- DHCP Enabled....................: No
- Autoconfiguration Enabled.......: Yes
- IPv4 Address....................: 172.16.201.11 (Preferred)
- Subnet Mask.....................: 255.255.255.255
- Default Gateway.................:
- NetBIOS over Tcpip..............: Enabled
- ```
-
-## To connect to a virtual machine
--
-## To add or remove trusted root certificates
-
-You can add and remove trusted root certificates from Azure. When you remove a root certificate, clients that have a certificate generated from that root can no longer authenticate and connect. For those clients to authenticate and connect again, you must install a new client certificate generated from a root certificate that's trusted by Azure.
-
-### Add a trusted root certificate
-
-You can add up to 20 trusted root certificate .cer files to Azure by using the same process that you used to add the first trusted root certificate.
-
-### Remove a trusted root certificate
-
-1. On the **Point-to-site connections** section of the page for your VNet, select **Manage certificate**.
-1. Select the ellipsis next to the certificate that you want to remove, then select **Delete**.
-
-## To revoke a client certificate
-
-If necessary, you can revoke a client certificate. The certificate revocation list allows you to selectively deny Point-to-Site connectivity based on individual client certificates. This method differs from removing a trusted root certificate. If you remove a trusted root certificate .cer from Azure, it revokes the access for all client certificates generated/signed by the revoked root certificate. Revoking a client certificate, rather than the root certificate, allows the other certificates that were generated from the root certificate to continue to be used for authentication for the Point-to-Site connection.
-
-The common practice is to use the root certificate to manage access at team or organization levels, while revoking individual client certificates for fine-grained access control over specific users.
-
-You can revoke a client certificate by adding the thumbprint to the revocation list.
-
-1. Retrieve the client certificate thumbprint. For more information, see [How to: Retrieve the Thumbprint of a Certificate](/dotnet/framework/wcf/feature-details/how-to-retrieve-the-thumbprint-of-a-certificate).
-1. Copy the information to a text editor and remove its spaces so that it's a continuous string.
-1. Navigate to **Point-to-site VPN connection**, then select **Manage certificate**.
-1. Select **Revocation list** to open the **Revocation list** page.
-1. In **Thumbprint**, paste the certificate thumbprint as one continuous line of text, with no spaces.
-1. Select **+ Add to list** to add the thumbprint to the certificate revocation list (CRL).
-
-After updating completes, the certificate can no longer be used to connect. Clients that try to connect by using this certificate receive a message saying that the certificate is no longer valid.
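Steps 1 and 2 can also be done from PowerShell. This sketch assumes the example client certificate name used earlier in this article and that only one certificate matches that subject.

```powershell
# Get the client certificate to revoke (example subject name).
$clientCert = Get-ChildItem -Path "Cert:\CurrentUser\My" | Where-Object { $_.Subject -eq "CN=P2SChildCert" }

# The Thumbprint property is already a continuous hex string with no spaces,
# so it can be pasted directly into the Thumbprint field on the Revocation list page.
$clientCert.Thumbprint
```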
-
-## <a name="faq"></a>FAQ
--
-## Next steps
-
-* After your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual Machines](../index.yml).
-
-* To understand more about networking and Linux virtual machines, see [Azure and Linux VM network overview](../virtual-network/network-overview.md).
-
-* For P2S troubleshooting information, [Troubleshoot Azure point-to-site connections](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).
vpn-gateway Vpn Gateway Howto Site To Site Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-classic-portal.md
- Title: 'Connect your on-premises network to a VNet: Site-to-Site VPN (classic): Portal'-
-description: Learn how to create an IPsec connection between your on-premises network and a classic Azure virtual network over the public Internet.
---- Previously updated : 10/31/2023--
-# Create a Site-to-Site connection using the Azure portal (classic)
-
-This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the **classic (legacy) deployment model** and don't apply to the current deployment model, Resource Manager. See the [Resource Manager version of this article](./tutorial-site-to-site-portal.md) instead.
-
-> [!IMPORTANT]
-> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
-
-A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md).
---
-## <a name="before"></a>Before you begin
-
-Verify that you have met the following criteria before beginning configuration:
-
-* Verify that you want to work in the classic deployment model. If you want to work in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), see [Create a Site-to-Site connection (Resource Manager)](./tutorial-site-to-site-portal.md). We recommend that you use the Resource Manager deployment model, as the classic model is legacy.
-* Make sure you have a compatible VPN device and someone who is able to configure it. For more information about compatible VPN devices and device configuration, see [About VPN Devices](vpn-gateway-about-vpn-devices.md).
-* Verify that you have an externally facing public IPv4 address for your VPN device.
-* If you're unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can overlap with the virtual network subnets that you want to connect to.
-* PowerShell is required in order to specify the shared key and create the VPN gateway connection. [!INCLUDE [vpn-gateway-classic-powershell](../../includes/vpn-gateway-powershell-classic-locally.md)]
-
-### <a name="values"></a>Sample configuration values for this exercise
-
-The examples in this article use the following values. You can use these values to create a test environment, or refer to them to better understand the examples in this article. Typically, when working with IP address values for Address space, you want to coordinate with your network administrator in order to avoid overlapping address spaces, which can affect routing. In this case, replace the IP address values with your own if you want to create a working connection.
-
-* **Resource Group:** TestRG1
-* **VNet Name:** TestVNet1
-* **Address space:** 10.11.0.0/16
-* **Subnet name:** FrontEnd
-* **Subnet address range:** 10.11.0.0/24
-* **GatewaySubnet:** 10.11.255.0/27
-* **Region:** (US) East US
-* **Local site name:** Site2
-* **Client address space:** The address space that is located on your on-premises site.
-
-## <a name="CreatVNet"></a>Create a virtual network
-
-When you create a virtual network to use for a S2S connection, you need to make sure that the address spaces that you specify don't overlap with any of the client address spaces for the local sites that you want to connect to. If you have overlapping subnets, your connection won't work properly.
-
-* If you already have a VNet, verify that the settings are compatible with your VPN gateway design. Pay particular attention to any subnets that might overlap with other networks.
-
-* If you don't already have a virtual network, create one. Screenshots are provided as examples. Be sure to replace the values with your own.
-
-### To create a virtual network
---
-## <a name="localsite"></a>Configure the site and gateway
-
-### To configure the site
-
-The local site typically refers to your on-premises location. It contains the IP address of the VPN device to which you'll create a connection, and the IP address ranges that will be routed through the VPN gateway to the VPN device.
-
-1. On the page for your VNet, under **Settings**, select **Site-to-site connections**.
-1. On the Site-to-site connections page, select **+ Add**.
-1. On the **Configure a VPN connection and gateway** page, for **Connection type**, leave **Site-to-site** selected. For this exercise, you'll need to use a combination of the [example values](#values) and your own values.
-
- * **VPN gateway IP address:** This is the public IP address of the VPN device for your on-premises network. The VPN device requires an IPv4 public IP address. Specify a valid public IP address for the VPN device to which you want to connect. It must be reachable by Azure. If you don't know the IP address of your VPN device, you can always put in a placeholder value (as long as it is in the format of a valid public IP address) and then change it later.
-
- * **Client Address space:** List the IP address ranges that you want routed to the local on-premises network through this gateway. You can add multiple address space ranges. Make sure that the ranges you specify here don't overlap with ranges of other networks your virtual network connects to, or with the address ranges of the virtual network itself.
-1. At the bottom of the page, DO NOT select Review + create. Instead, select **Next: Gateway>**.
-
-### <a name="sku"></a>To configure the virtual network gateway
-
-1. On the **Gateway** page, select the following values:
-
- * **Size:** This is the gateway SKU that you use to create your virtual network gateway. Classic VPN gateways use the old (legacy) gateway SKUs. For more information about the legacy gateway SKUs, see [Working with virtual network gateway SKUs (old SKUs)](vpn-gateway-about-skus-legacy.md). You can select **Standard** for this exercise.
-
- * **Gateway subnet:** The size of the gateway subnet that you specify depends on the VPN gateway configuration that you want to create. While it's possible to create a gateway subnet as small as /29, we recommend that you use /27 or /28. This creates a larger subnet that includes more addresses. Using a larger gateway subnet allows for enough IP addresses to accommodate possible future configurations.
-
-1. Select **Review + create** at the bottom of the page to validate your settings. Select **Create** to deploy. It can take up to 45 minutes to create a virtual network gateway, depending on the gateway SKU that you selected.
-
-## <a name="vpndevice"></a>Configure your VPN device
-
-Site-to-Site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following values:
-
-* A shared key. This is the same shared key that you specify when creating your Site-to-Site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.
-* The Public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI.
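Both values can also be obtained from PowerShell. The following is a sketch that assumes the legacy Service Management module and the example network name from this exercise; `VIPAddress` is assumed to be the property that exposes the classic gateway's public IP.

```powershell
# Generate a random 48-character hexadecimal pre-shared key (any sufficiently complex value works).
$rng   = [System.Security.Cryptography.RandomNumberGenerator]::Create()
$bytes = New-Object byte[] 24
$rng.GetBytes($bytes)
$sharedKey = -join ($bytes | ForEach-Object { $_.ToString('x2') })
$sharedKey

# Retrieve the public IP address of the classic virtual network gateway (Service Management module).
(Get-AzureVNetGateway -VNetName 'Group TestRG1 TestVNet1').VIPAddress
```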
--
-## <a name="getvalues"></a>Retrieve values
--
-## <a name="CreateConnection"></a>Create the connection
-
-> [!NOTE]
-> For the classic deployment model, this step is not available in the Azure portal or via Azure Cloud Shell. You must use the Service Management (SM) version of the Azure PowerShell cmdlets locally from your desktop.
->
-
-In this step, using the values from the previous steps, you set the shared key and create the connection. The key you set must be the same key that was used in your VPN device configuration.
-
-1. Set the shared key and create the connection.
-
- * Change the -VNetName value and the -LocalNetworkSiteName value. When specifying a name that contains spaces, use single quotation marks around the value.
- * The '-SharedKey' is a value that you generate, and then specify. In the example, we used 'abc123', but you can (and should) generate something more complex. The important thing is that the value you specify here must be the same value that you specified when configuring your VPN device.
-
- ```powershell
- Set-AzureVNetGatewayKey -VNetName 'Group TestRG1 TestVNet1' `
- -LocalNetworkSiteName '6C74F6E6_Site2' -SharedKey abc123
- ```
-
-1. When the connection is created, the result is: **Status: Successful**.
-
-## <a name="verify"></a>Verify your connection
--
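From PowerShell, you can also check the connection status with the Service Management cmdlets. A sketch using the example names from this article:

```powershell
# List the tunnels for the virtual network and check their connectivity state.
# Classic VNets created in the portal use the long 'Group <resource group> <VNet name>' format.
Get-AzureVnetConnection -VNetName 'Group TestRG1 TestVNet1' |
    Select-Object LocalNetworkSiteName, ConnectivityState, LastConnectionEstablished
```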
-If you're having trouble connecting, see the **Troubleshoot** section of the table of contents in the left pane.
-
-## <a name="reset"></a>How to reset a VPN gateway
-
-Resetting an Azure VPN gateway is helpful if you lose cross-premises VPN connectivity on one or more Site-to-Site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly, but aren't able to establish IPsec tunnels with the Azure VPN gateways.
-
-The cmdlet for resetting a classic gateway is **Reset-AzureVNetGateway**. The Azure PowerShell cmdlets for Service Management must be installed locally on your desktop. You can't use Azure Cloud Shell. Before performing a reset, make sure you have the latest version of the [Service Management (SM) PowerShell cmdlets](/powershell/azure/servicemanagement/install-azure-ps#azure-service-management-cmdlets).
-
-When using this command, make sure you're using the full name of the virtual network. Classic VNets that were created using the portal have a long name that is required for PowerShell. You can view the long name by using `Get-AzureVNetConfig -ExportToFile C:\Myfoldername\NetworkConfig.xml`.
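After exporting the file, you can also read the long names directly from the exported XML, as in this sketch. The element names match the classic network configuration schema; adjust the file path to the location you exported to.

```powershell
# Read the exported network configuration and list the full virtual network site names.
[xml]$netcfg = Get-Content -Path 'C:\Myfoldername\NetworkConfig.xml'
$netcfg.NetworkConfiguration.VirtualNetworkConfiguration.VirtualNetworkSites.VirtualNetworkSite |
    Select-Object -ExpandProperty name
```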
-
-The following example resets the gateway for a virtual network named "Group TestRG1 TestVNet1" (which shows as simply "TestVNet1" in the portal):
-
-```powershell
-Reset-AzureVNetGateway -VNetName 'Group TestRG1 TestVNet1'
-```
-
-Result:
-
-```powershell
-Error :
-HttpStatusCode : OK
-Id : f1600632-c819-4b2f-ac0e-f4126bec1ff8
-Status : Successful
-RequestId : 9ca273de2c4d01e986480ce1ffa4d6d9
-StatusCode : OK
-```
-
-## <a name="changesku"></a>How to resize a gateway SKU
-
-To resize a gateway for the [classic deployment model](../azure-resource-manager/management/deployment-models.md), you must use the Service Management PowerShell cmdlets. Use the following command:
-
-```powershell
-Resize-AzureVirtualNetworkGateway -GatewayId <Gateway ID> -GatewaySKU HighPerformance
-```
-
-## Next steps
-
-* Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual Machines](../index.yml).
-* For information about Forced Tunneling, see [About Forced Tunneling](vpn-gateway-about-forced-tunneling.md).
vpn-gateway Vpn Gateway Howto Vnet Vnet Portal Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-portal-classic.md
- Title: 'Create a connection between VNets: classic: Azure portal'
-description: Learn how to connect classic Azure virtual networks together using PowerShell and the Azure portal.
----- Previously updated : 10/31/2023--
-# Configure a VNet-to-VNet connection (classic)
-
-This article helps you create a VPN gateway connection between virtual networks. The virtual networks can be in the same or different regions, and from the same or different subscriptions.
-
-The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. You can no longer create a gateway using the classic deployment model. See the [Resource Manager version of this article](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) instead.
-
-> [!IMPORTANT]
-> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
---
-## About VNet-to-VNet connections
-
-Connecting a virtual network to another virtual network (VNet-to-VNet) in the classic deployment model using a VPN gateway is similar to connecting a virtual network to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE.
-
-The VNets you connect can be in different subscriptions and different regions. You can combine VNet to VNet communication with multi-site configurations. This lets you establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity.
--
-### <a name="why"></a>Why connect virtual networks?
-
-You might want to connect virtual networks for the following reasons:
-
-* **Cross region geo-redundancy and geo-presence**
-
- * You can set up your own geo-replication or synchronization with secure connectivity without going over Internet-facing endpoints.
- * With Azure Load Balancer and Microsoft or third-party clustering technology, you can set up highly available workload with geo-redundancy across multiple Azure regions. One important example is to set up SQL Always On with Availability Groups spreading across multiple Azure regions.
-* **Regional multi-tier applications with strong isolation boundary**
-
- * Within the same region, you can set up multi-tier applications with multiple VNets connected together with strong isolation and secure inter-tier communication.
-* **Cross subscription, inter-organization communication in Azure**
-
- * If you have multiple Azure subscriptions, you can connect workloads from different subscriptions together securely between virtual networks.
- * For enterprises or service providers, you can enable cross-organization communication with secure VPN technology within Azure.
-
-For more information about VNet-to-VNet connections, see [VNet-to-VNet considerations](#faq) at the end of this article.
-
-## Prerequisites
-
-We use the portal for most of the steps, but you must use PowerShell to create the connections between the VNets. You can't create the connections using the Azure portal because there's no way to specify the shared key in the portal. [!INCLUDE [vpn-gateway-classic-powershell](../../includes/vpn-gateway-powershell-classic-locally.md)]
-
-## <a name="planning"></a>Planning
-
-It's important to decide the ranges that you'll use to configure your virtual networks. For this configuration, you must make sure that none of your VNet ranges overlap with each other, or with any of the local networks that they connect to.
-
-### <a name="vnet"></a>VNets
-
-For this exercise, we use the following example values:
-
-**Values for TestVNet1**
-
-Name: TestVNet1<br>
-Address space: 10.11.0.0/16, 10.12.0.0/16 (optional)<br>
-Subnet name: default<br>
-Subnet address range: 10.11.0.0/24<br>
-Resource group: ClassicRG<br>
-Location: East US<br>
-GatewaySubnet: 10.11.1.0/27
-
-**Values for TestVNet4**
-
-Name: TestVNet4<br>
-Address space: 10.41.0.0/16, 10.42.0.0/16 (optional)<br>
-Subnet name: default<br>
-Subnet address range: 10.41.0.0/24<br>
-Resource group: ClassicRG<br>
-Location: West US<br>
-GatewaySubnet: 10.41.1.0/27
-
-### <a name="plan"></a>Connections
-
-The following table shows an example of how you connect your VNets. Use the ranges as a guideline only. Write down the ranges for your virtual networks. You need this information for later steps.
-
-In this example, TestVNet1 connects to a local network site that you create named 'VNet4Local'. The settings for VNet4Local contain the address prefixes for TestVNet4.
-The local site for each VNet is the other VNet. The following example values are used for our configuration:
-
-**Example**
-
-| Virtual Network | Address Space | Location | Connects to local network site |
-|:--- |:--- |:--- |:--- |
-| TestVNet1 |TestVNet1<br>(10.11.0.0/16)<br>(10.12.0.0/16) |East US |SiteVNet4<br>(10.41.0.0/16)<br>(10.42.0.0/16) |
-| TestVNet4 |TestVNet4<br>(10.41.0.0/16)<br>(10.42.0.0/16) |West US |SiteVNet1<br>(10.11.0.0/16)<br>(10.12.0.0/16) |
-
-## <a name="vnetvalues"></a>Create virtual networks
-
-In this step, you create two classic virtual networks, TestVNet1 and TestVNet4. If you're using this article as an exercise, use the [example values](#vnet).
-
-**When creating your VNets, keep in mind the following settings:**
-
-* **Virtual Network Address Spaces** – On the Virtual Network Address Spaces page, specify the address range that you want to use for your virtual network. These are the dynamic IP addresses that will be assigned to the VMs and other role instances that you deploy to this virtual network.<br>The address spaces you select can't overlap with the address spaces for any of the other VNets or on-premises locations that this VNet will connect to.
-
-* **Location** – When you create a virtual network, you associate it with an Azure location (region). For example, if you want your VMs that are deployed to your virtual network to be physically located in West US, select that location. You can't change the location associated with your virtual network after you create it.
-
-**After creating your VNets, you can add the following settings:**
-
-* **Address space** – Additional address space isn't required for this configuration, but you can add additional address space after creating the VNet.
-
-* **Subnets** – Additional subnets aren't required for this configuration, but you might want to have your VMs in a subnet that is separate from your other role instances.
-
-* **DNS servers** – Enter the DNS server name and IP address. This setting doesn't create a DNS server. It allows you to specify the DNS servers that you want to use for name resolution for this virtual network.
-
-### To create a classic virtual network
---
-## <a name="localsite"></a>Configure sites and gateways
-
-Azure uses the settings specified in each local network site to determine how to route traffic between the VNets. Each VNet must point to the respective local network that you want to route traffic to. You determine the name you want to use to refer to each local network site. It's best to use something descriptive.
-
-For example, TestVNet1 connects to a local network site that you create named 'VNet4Local'. The settings for VNet4Local contain the address prefixes for TestVNet4.
-
-Keep in mind, the local site for each VNet is the other VNet.
-
-| Virtual Network | Address Space | Location | Connects to local network site |
-|:--- |:--- |:--- |:--- |
-| TestVNet1 |TestVNet1<br>(10.11.0.0/16)<br>(10.12.0.0/16) |East US |SiteVNet4<br>(10.41.0.0/16)<br>(10.42.0.0/16) |
-| TestVNet4 |TestVNet4<br>(10.41.0.0/16)<br>(10.42.0.0/16) |West US |SiteVNet1<br>(10.11.0.0/16)<br>(10.12.0.0/16) |
-
-### <a name="site"></a>To configure a site
-
-The local site typically refers to your on-premises location. It contains the IP address of the VPN device to which you'll create a connection, and the IP address ranges that are routed through the VPN gateway to the VPN device.
-
-1. On the page for your VNet, under **Settings**, select **Site-to-site connections**.
-1. On the Site-to-site connections page, select **+ Add**.
-1. On the **Configure a VPN connection and gateway** page, for **Connection type**, leave **Site-to-site** selected.
-
- * **VPN gateway IP address:** This is the public IP address of the VPN device for your on-premises network. For this exercise, you can put in a dummy address because you don't yet have the IP address for the VPN gateway for the other site. For example, 5.4.3.2. Later, once you have configured the gateway for the other VNet, you can adjust this value.
-
- * **Client Address space:** List the IP address ranges that you want routed to the other VNet through this gateway. You can add multiple address space ranges. Make sure that the ranges you specify here don't overlap with ranges of other networks your virtual network connects to, or with the address ranges of the virtual network itself.
-1. At the bottom of the page, DO NOT select Review + create. Instead, select **Next: Gateway>**.
-
-### <a name="sku"></a>To configure a virtual network gateway
-
-1. On the **Gateway** page, select the following values:
-
- * **Size:** This is the gateway SKU that you use to create your virtual network gateway. Classic VPN gateways use the old (legacy) gateway SKUs. For more information about the legacy gateway SKUs, see [Working with virtual network gateway SKUs (old SKUs)](vpn-gateway-about-skus-legacy.md). You can select **Standard** for this exercise.
-
- * **Gateway subnet:** The size of the gateway subnet that you specify depends on the VPN gateway configuration that you want to create. While it's possible to create a gateway subnet as small as /29, we recommend that you use /27 or /28. This creates a larger subnet that includes more addresses. Using a larger gateway subnet allows for enough IP addresses to accommodate possible future configurations.
-
-1. Select **Review + create** at the bottom of the page to validate your settings. Select **Create** to deploy. It can take up to 45 minutes to create a virtual network gateway, depending on the gateway SKU that you selected.
-1. You can proceed to the next step while the gateway is creating.
-
-### Configure TestVNet4 settings
-
-Repeat the steps for [Create a site and gateway](#localsite) to configure TestVNet4, substituting the values when necessary. If you're doing this as an exercise, use the [example values](#planning).
-
-## <a name="updatelocal"></a>Update local sites
-
-After your virtual network gateways have been created for both VNets, you must adjust the local site properties for **VPN gateway IP address**.
-
-|VNet name|Connected site|Gateway IP address|
-|: |: |: |
-|TestVNet1|VNet4Local|VPN gateway IP address for TestVNet4|
-|TestVNet4|VNet1Local|VPN gateway IP address for TestVNet1|
-
-### Part 1 - Get the virtual network gateway public IP address
-
-1. Navigate to your VNet by going to the **Resource group** and selecting the virtual network.
-1. On the page for your virtual network, in the **Essentials** pane on the right, locate the **Gateway IP address** and copy it to the clipboard.
-
-### Part 2 - Modify the local site properties
-
-1. Under Site-to-site connections, select the connection. For example, SiteVNet4.
-1. On the **Properties** page for the Site-to-site connection, select **Edit local site**.
-1. In the **VPN gateway IP address** field, paste the VPN gateway IP address you copied in the previous section.
-1. Select **OK**.
-1. The field is updated in the system. You can also use this method to add additional IP addresses that you want to route to this site.
-
-### Part 3 - Repeat steps for the other VNet
-
-Repeat the steps for TestVNet4.
-
-## <a name="getvalues"></a>Retrieve configuration values
--
-## <a name="createconnections"></a>Create connections
-
-When all the previous steps have been completed, you can set the IPsec/IKE preshared keys and create the connection. This set of steps uses PowerShell. VNet-to-VNet connections for the classic deployment model can't be configured in the Azure portal because the shared key can't be specified in the portal.
-
-In the examples, notice that the shared key is exactly the same. The shared key must always match. Be sure to replace the values in these examples with the exact names for your VNets and Local Network Sites.
-
-1. Create the TestVNet1 to TestVNet4 connection. Make sure to change the values.
-
- ```powershell
- Set-AzureVNetGatewayKey -VNetName 'Group ClassicRG TestVNet1' `
- -LocalNetworkSiteName 'value for _VNet4Local' -SharedKey A1b2C3D4
- ```
-2. Create the TestVNet4 to TestVNet1 connection.
-
- ```powershell
- Set-AzureVNetGatewayKey -VNetName 'Group ClassicRG TestVNet4' `
- -LocalNetworkSiteName 'value for _VNet1Local' -SharedKey A1b2C3D4
- ```
-3. Wait for the connections to initialize. Once the gateway has initialized, the Status is 'Successful'.
-
- ```
- Error :
- HttpStatusCode : OK
- Id :
- Status : Successful
- RequestId :
- StatusCode : OK
- ```
-
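To confirm the state of each tunnel from PowerShell, you can query the connections for each classic VNet, as in this sketch. The long 'Group ClassicRG ...' names apply to classic VNets created through the portal.

```powershell
# Check the VNet-to-VNet tunnel from each side; ConnectivityState should show 'Connected'.
Get-AzureVnetConnection -VNetName 'Group ClassicRG TestVNet1' |
    Select-Object LocalNetworkSiteName, ConnectivityState
Get-AzureVnetConnection -VNetName 'Group ClassicRG TestVNet4' |
    Select-Object LocalNetworkSiteName, ConnectivityState
```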
-## <a name="faq"></a>FAQ and considerations
-
-These considerations apply to classic virtual networks and classic virtual network gateways.
-
-* The virtual networks can be in the same or different subscriptions.
-* The virtual networks can be in the same or different Azure regions (locations).
-* A cloud service or a load-balancing endpoint can't span across virtual networks, even if they're connected together.
-* Connecting multiple virtual networks together doesn't require any VPN devices.
-* VNet-to-VNet supports connecting Azure Virtual Networks. It doesn't support connecting virtual machines or cloud services that aren't deployed to a virtual network.
-* VNet-to-VNet requires dynamic routing gateways. Azure static routing gateways aren't supported.
-* Virtual network connectivity can be used simultaneously with multi-site VPNs. There is a maximum of 10 VPN tunnels for a virtual network VPN gateway connecting to either other virtual networks, or on-premises sites.
-* The address spaces of the virtual networks and on-premises local network sites must not overlap. Overlapping address spaces cause the creation of virtual networks or uploading netcfg configuration files to fail.
-* Redundant tunnels between a pair of virtual networks aren't supported.
-* All VPN tunnels for the VNet, including P2S VPNs, share the available bandwidth for the VPN gateway, and the same VPN gateway uptime SLA in Azure.
-* VNet-to-VNet traffic travels across the Azure backbone.
-
-## Next steps
-
-Verify your connections. See [Verify a VPN Gateway connection](vpn-gateway-verify-connection-resource-manager.md).
vpn-gateway Vpn Gateway Multi Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-multi-site.md
- Title: 'Connect a VNet to multiple sites using VPN Gateway: Classic'
-description: Learn how to connect multiple on-premises sites to a classic virtual network using a VPN gateway.
----- Previously updated : 08/21/2023--
-# Add a Site-to-Site connection to a VNet with an existing VPN gateway connection (classic)
-
-This article walks you through using PowerShell to add Site-to-Site (S2S) connections to a VPN gateway that has an existing connection using the classic (legacy) deployment model. This type of connection is sometimes referred to as a "multi-site" configuration. These steps don't apply to ExpressRoute/Site-to-Site coexisting connection configurations.
-
-The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](add-remove-site-to-site-connections.md)**.
--
-## About connecting
-
-You can connect multiple on-premises sites to a single virtual network. This is especially attractive for building hybrid cloud solutions. Creating a multi-site connection to your Azure virtual network gateway is similar to creating other Site-to-Site connections. In fact, you can use an existing Azure VPN gateway, as long as the gateway is dynamic (route-based).
-
-If you already have a static gateway connected to your virtual network, you can change the gateway type to dynamic without needing to rebuild the virtual network in order to accommodate multi-site. Before changing the routing type, make sure that your on-premises VPN gateway supports route-based VPN configurations.
--
-## Points to consider
-
-**You won't be able to use the portal to make changes to this virtual network.** You need to make changes to the network configuration file instead of using the portal. If you make changes in the portal, they'll overwrite your multi-site reference settings for this virtual network.
-
-You should feel comfortable using the network configuration file by the time you've completed the multi-site procedure. However, if you have multiple people working on your network configuration, you'll need to make sure that everyone knows about this limitation. This doesn't mean that you can't use the portal at all. You can use it for everything else, except making configuration changes to this particular virtual network.
-
-## Before you begin
-
-Before you begin configuration, verify that you have the following:
-
-* Compatible VPN hardware for each on-premises location. Check [About VPN Devices for Virtual Network Connectivity](vpn-gateway-about-vpn-devices.md) to verify if the device that you want to use is something that is known to be compatible.
-* An externally facing public IPv4 address for each VPN device. The IP address can't be located behind a NAT. This is a requirement.
-* Someone who is proficient at configuring your VPN hardware. You need a strong understanding of how to configure your VPN device, or you need to work with someone who does.
-* The IP address ranges that you want to use for your virtual network (if you haven't already created one).
-* The IP address ranges for each of the local network sites that you'll be connecting to. Make sure that the IP address ranges for the local network sites don't overlap with each other; otherwise, the portal or the REST API rejects the configuration being uploaded.<br>For example, if you have two local network sites that both contain the IP address range 10.2.3.0/24 and you have a packet with a destination address of 10.2.3.3, Azure wouldn't know which site to send the packet to because the address ranges overlap. To prevent routing issues, Azure doesn't allow you to upload a configuration file that has overlapping ranges. A quick way to check two ranges for overlap is shown after this list.
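As a quick sanity check before uploading a configuration, the following PowerShell sketch tests whether two IPv4 ranges overlap. It's an illustrative helper only, not part of any Azure module.

```powershell
function Test-CidrOverlap {
    param([string]$CidrA, [string]$CidrB)

    function Get-CidrRange([string]$cidr) {
        $ip, $prefix = $cidr.Split('/')
        $bytes = [System.Net.IPAddress]::Parse($ip).GetAddressBytes()
        [Array]::Reverse($bytes)                         # network order -> little-endian
        $addr    = [int64][BitConverter]::ToUInt32($bytes, 0)
        $allOnes = [int64][uint32]::MaxValue             # 0xFFFFFFFF as a positive 64-bit value
        $mask    = ($allOnes -shl (32 - [int]$prefix)) -band $allOnes
        $start   = $addr -band $mask
        $end     = $start -bor ($allOnes -bxor $mask)
        [pscustomobject]@{ Start = $start; End = $end }
    }

    $a = Get-CidrRange $CidrA
    $b = Get-CidrRange $CidrB
    ($a.Start -le $b.End) -and ($b.Start -le $a.End)
}

Test-CidrOverlap '10.2.3.0/24' '10.2.0.0/16'    # True  - these ranges overlap
Test-CidrOverlap '10.20.0.0/16' '10.2.0.0/16'   # False - these ranges don't overlap
```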
-
-### Working with Azure PowerShell
--
-## 1. Create a Site-to-Site VPN
-If you already have a Site-to-Site VPN with a dynamic routing gateway, great! You can proceed to [Export the virtual network configuration settings](#export). If not, do the following:
-
-### If you already have a Site-to-Site virtual network, but it has a static (policy-based) routing gateway:
-1. Change your gateway type to dynamic routing. A multi-site VPN requires a dynamic (also known as route-based) routing gateway. To change your gateway type, you'll need to first delete the existing gateway, then create a new one.
-2. Configure your new gateway and create your VPN tunnel. For instructions, see [Specify the SKU and VPN type](vpn-gateway-howto-site-to-site-classic-portal.md#sku). Make sure you specify the Routing Type as 'Dynamic'.
-
-### If you don't have a Site-to-Site virtual network:
-1. Create your Site-to-Site virtual network using these instructions: [Create a Virtual Network with a Site-to-Site VPN Connection](./vpn-gateway-howto-site-to-site-classic-portal.md).
-2. Configure a dynamic routing gateway using these instructions: [Configure a VPN Gateway](./vpn-gateway-howto-site-to-site-classic-portal.md). Be sure to select **dynamic routing** for your gateway type.
-
-## <a name="export"></a>2. Export the network configuration file
-
-Open your PowerShell console with elevated rights. To switch to service management, use this command:
-
-```powershell
-azure config mode asm
-```
-
-Connect to your account. Use the following example to help you connect:
-
-```powershell
-Add-AzureAccount
-```
-
-Export your Azure network configuration file by running the following command. You can change the location of the file to export to a different location if necessary.
-
-```powershell
-Get-AzureVNetConfig -ExportToFile C:\AzureNet\NetworkConfig.xml
-```
-
-## 3. Open the network configuration file
-Open the network configuration file that you downloaded in the last step. Use any XML editor that you like. The file should look similar to the following:
-
-```xml
-<NetworkConfiguration xmlns:xsd="https://www.w3.org/2001/XMLSchema" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
- <VirtualNetworkConfiguration>
- <LocalNetworkSites>
- <LocalNetworkSite name="Site1">
- <AddressSpace>
- <AddressPrefix>10.0.0.0/16</AddressPrefix>
- <AddressPrefix>10.1.0.0/16</AddressPrefix>
- </AddressSpace>
- <VPNGatewayAddress>131.2.3.4</VPNGatewayAddress>
- </LocalNetworkSite>
- <LocalNetworkSite name="Site2">
- <AddressSpace>
- <AddressPrefix>10.2.0.0/16</AddressPrefix>
- <AddressPrefix>10.3.0.0/16</AddressPrefix>
- </AddressSpace>
- <VPNGatewayAddress>131.4.5.6</VPNGatewayAddress>
- </LocalNetworkSite>
- </LocalNetworkSites>
- <VirtualNetworkSites>
- <VirtualNetworkSite name="VNet1" AffinityGroup="USWest">
- <AddressSpace>
- <AddressPrefix>10.20.0.0/16</AddressPrefix>
- <AddressPrefix>10.21.0.0/16</AddressPrefix>
- </AddressSpace>
- <Subnets>
- <Subnet name="FE">
- <AddressPrefix>10.20.0.0/24</AddressPrefix>
- </Subnet>
- <Subnet name="BE">
- <AddressPrefix>10.20.1.0/24</AddressPrefix>
- </Subnet>
- <Subnet name="GatewaySubnet">
- <AddressPrefix>10.20.2.0/29</AddressPrefix>
- </Subnet>
- </Subnets>
- <Gateway>
- <ConnectionsToLocalNetwork>
- <LocalNetworkSiteRef name="Site1">
- <Connection type="IPsec" />
- </LocalNetworkSiteRef>
- </ConnectionsToLocalNetwork>
- </Gateway>
- </VirtualNetworkSite>
- </VirtualNetworkSites>
- </VirtualNetworkConfiguration>
-</NetworkConfiguration>
-```
-
-## 4. Add multiple site references
-When you add or remove site reference information, you'll make configuration changes to the ConnectionsToLocalNetwork/LocalNetworkSiteRef. Adding a new local site reference triggers Azure to create a new tunnel. In the example below, the network configuration is for a single-site connection. Save the file once you have finished making your changes.
-
-```xml
- <Gateway>
- <ConnectionsToLocalNetwork>
- <LocalNetworkSiteRef name="Site1"><Connection type="IPsec" /></LocalNetworkSiteRef>
- </ConnectionsToLocalNetwork>
- </Gateway>
-```
-
-To add additional site references (create a multi-site configuration), simply add additional "LocalNetworkSiteRef" lines, as shown in the example below:
-
-```xml
- <Gateway>
- <ConnectionsToLocalNetwork>
- <LocalNetworkSiteRef name="Site1"><Connection type="IPsec" /></LocalNetworkSiteRef>
- <LocalNetworkSiteRef name="Site2"><Connection type="IPsec" /></LocalNetworkSiteRef>
- </ConnectionsToLocalNetwork>
- </Gateway>
-```
-
-## 5. Import the network configuration file
-Import the network configuration file. When you import this file with the changes, the new tunnels are added. The tunnels use the dynamic gateway that you created earlier. You can use PowerShell to import the file.
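A sketch of the import step, assuming the file path used earlier in the export step:

```powershell
# Upload the modified network configuration file. Azure creates a tunnel for each
# LocalNetworkSiteRef that was added to the file.
Set-AzureVNetConfig -ConfigurationPath C:\AzureNet\NetworkConfig.xml
```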
-
-## 6. Download keys
-Once your new tunnels have been added, use the PowerShell cmdlet 'Get-AzureVNetGatewayKey' to get the IPsec/IKE preshared keys for each tunnel.
-
-For example:
-
-```powershell
-Get-AzureVNetGatewayKey -VNetName "VNet1" -LocalNetworkSiteName "Site1"
-Get-AzureVNetGatewayKey -VNetName "VNet1" -LocalNetworkSiteName "Site2"
-```
-
-If you prefer, you can also use the *Get Virtual Network Gateway Shared Key* REST API to get the preshared keys.
-
-## 7. Verify your connections
-Check the multi-site tunnel status. After downloading the keys for each tunnel, you'll want to verify connections. Use 'Get-AzureVnetConnection' to get a list of virtual network tunnels, as shown in the following example. VNet1 is the name of the VNet.
-
-```powershell
-Get-AzureVnetConnection -VNetName VNET1
-```
-
-Example return:
-
-```
- ConnectivityState : Connected
- EgressBytesTransferred : 661530
- IngressBytesTransferred : 519207
- LastConnectionEstablished : 5/2/2014 2:51:40 PM
- LastEventID : 23401
- LastEventMessage : The connectivity state for the local network site 'Site1' changed from Not Connected to Connected.
- LastEventTimeStamp : 5/2/2014 2:51:40 PM
- LocalNetworkSiteName : Site1
- OperationDescription : Get-AzureVNetConnection
- OperationId : 7f68a8e6-51e9-9db4-88c2-16b8067fed7f
- OperationStatus : Succeeded
-
- ConnectivityState : Connected
- EgressBytesTransferred : 789398
- IngressBytesTransferred : 143908
- LastConnectionEstablished : 5/2/2014 3:20:40 PM
- LastEventID : 23401
- LastEventMessage : The connectivity state for the local network site 'Site2' changed from Not Connected to Connected.
- LastEventTimeStamp : 5/2/2014 2:51:40 PM
- LocalNetworkSiteName : Site2
- OperationDescription : Get-AzureVNetConnection
- OperationId : 7893b329-51e9-9db4-88c2-16b8067fed7f
- OperationStatus : Succeeded
-```
-
-## Next steps
-
-To learn more about VPN Gateways, see [About VPN Gateways](vpn-gateway-about-vpngateways.md).
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
For non-zone-redundant and non-zonal gateways that were previously created (gate
### How does the retirement of Basic SKU public IP addresses affect my VPN gateways?
-We're taking action to ensure the continued operation of deployed VPN gateways that use Basic SKU public IP addresses. If you already have VPN gateways with Basic SKU public IP addresses, there's no need for you to take any action.
+We're taking action to ensure the continued operation of deployed VPN gateways that use Basic SKU public IP addresses until Basic SKU public IP addresses are retired in September 2025. Before the retirement, we'll provide customers with a migration path from Basic SKU to Standard SKU public IP addresses.
However, Basic SKU public IP addresses are being phased out. Going forward, when you create a VPN gateway, you must use the Standard SKU public IP address. You can find details on the retirement of Basic SKU public IP addresses in the [Azure Updates announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired).
web-application-firewall Waf Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/waf-sentinel.md
WAF log analytics are broken down into the following categories:
The following WAF workbook examples show sample data:

## Launch a WAF workbook