Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-domain-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md | Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/15/2023 Last updated : 09/19/2023 |
ai-services | Generate Thumbnail | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/generate-thumbnail.md | To call the API, do the following steps: 1. Replace the value of `<thumbnailFile>` with the path and name of the file in which to save the returned thumbnail image. 1. Replace the first part of the request URL (`westcentralus`) with the text in your own endpoint URL. [!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]- 1. Optionally, change the image URL in the request body (`https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Shorkie_Poo_Puppy.jpg/1280px-Shorkie_Poo_Puppy.jpg\`) to the URL of a different image from which to generate a thumbnail. + 1. Optionally, change the image URL in the request body (`https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png`) to the URL of a different image from which to generate a thumbnail. 1. Open a command prompt window. 1. Paste the command from the text editor into the command prompt window. 1. Press enter to run the program. ```bash- curl -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -o <thumbnailFile> -H "Content-Type: application/json" "https://westus.api.cognitive.microsoft.com/vision/v3.2/generateThumbnail?width=100&height=100&smartCropping=true" -d "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Shorkie_Poo_Puppy.jpg/1280px-Shorkie_Poo_Puppy.jpg\"}" + curl -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -o <thumbnailFile> -H "Content-Type: application/json" "https://westus.api.cognitive.microsoft.com/vision/v3.2/generateThumbnail?width=100&height=100&smartCropping=true" -d "{\"url\":\"https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png\"}" ``` ## Examine the response |
ai-services | Manage Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/manage-resources.md | This article provides instructions on how to recover an Azure AI services resour Your subscription must have `Microsoft.CognitiveServices/locations/resourceGroups/deletedAccounts/delete` permissions to purge resources, such as [Cognitive Services Contributor](../role-based-access-control/built-in-roles.md#cognitive-services-contributor) or [Contributor](../role-based-access-control/built-in-roles.md#contributor). -## Recover a deleted resource +When using `Contributor` to purge a resource, the role must be assigned at the subscription level. If the role assignment is present only at the resource or resource group level, you can't access the purge functionality. ++## Recover a deleted resource To recover a deleted Azure AI services resource, use the following commands. Where applicable, replace: |
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | There is an [upload limit](../quotas-limits.md), and there are some caveats abou ### Azure OpenAI resources You can protect Azure OpenAI resources in [virtual networks and private endpoints](/azure/ai-services/cognitive-services-virtual-networks) the same way as any Azure AI service.+> [!NOTE] +> If you disable public network access for your Azure OpenAI resources, you can still call the `/extensions/chat/completions` API or chat with your existing index in Azure OpenAI Studio. However, vector search and blob/file ingestion in the studio aren't supported. ### Azure Cognitive Search resources -If you have an Azure Cognitive Search resource protected by a private network, and want to allow Azure OpenAI on your data to access your search service, complete [an application form](https://aka.ms/applyacsvpnaoaionyourdata). The application will be reviewed in five business days and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request. +If you have an Azure Cognitive Search resource protected by a private network, and want to allow Azure OpenAI on your data to access your search service, complete [an application form](https://aka.ms/applyacsvpnaoaionyourdata). The application will be reviewed in ten business days and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request. :::image type="content" source="../media/use-your-data/approve-private-endpoint.png" alt-text="A screenshot showing private endpoint approval screen." 
lightbox="../media/use-your-data/approve-private-endpoint.png"::: To add a new data source to your Azure OpenAI resource, you need the following A ||| |[Cognitive Services Contributor](../how-to/role-based-access-control.md#cognitive-services-contributor) | You want to use Azure OpenAI on your data. | |[Search Index Data Contributor](/azure/role-based-access-control/built-in-roles#search-index-data-contributor) | You have an existing Azure Cognitive Search index that you want to use, instead of creating a new one. |+|[Search Service Contributor](/azure/role-based-access-control/built-in-roles#search-service-contributor) | You plan to create a new Azure Cognitive Search index. | |[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | You have an existing Blob storage container that you want to use, instead of creating a new one. | ## Document-level access control Azure OpenAI on your data provides several search options you can use when you a > [!IMPORTANT] > * [Semantic search](/azure/search/semantic-search-overview#availability-and-pricing) and [vector search](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) are subject to additional pricing. You need to choose **Basic or higher SKU** to enable semantic search or vector search. See [pricing tier difference](/azure/search/search-sku-tier) and [service limits](/azure/search/search-limits-quotas-capacity) for more information. > * To help improve the quality of the information retrieval and model response, we recommend enabling [semantic search](/azure/search/semantic-search-overview) for the following languages: English, French, Spanish, Portuguese, Italian, German, Chinese (Zh), Japanese, Korean, Russian, Arabic+> * If you enable vector search, you need to enable public network access for your Azure OpenAI resources. | Search option | Retrieval type | Additional pricing? |Benefits| |||| -- | |
ai-services | Use Your Data Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md | In this quickstart you can use your own data with Azure OpenAI models. Using Azu - An Azure OpenAI resource with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md). - - Your chat model must use version `0301`. You can view or change your model version in [Azure OpenAI Studio](./concepts/models.md#model-updates). + - Your chat model can be `gpt-35-turbo (0301)`, `gpt-35-turbo-16k`, `gpt-4`, or `gpt-4-32k`. You can view or change your model version in [Azure OpenAI Studio](./concepts/models.md#model-updates). - Be sure that you are assigned at least the [Cognitive Services Contributor](./how-to/role-based-access-control.md#cognitive-services-contributor) role for the Azure OpenAI resource. |
ai-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md | Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
ai-services | Language Identification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-identification.md | var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fr You use Speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see [Speech translation overview](speech-translation.md). > [!NOTE]-> Speech translation with language identification is only supported with Speech SDKs in C#, C++, and Python. +> Speech translation with language identification is only supported with Speech SDKs in C#, C++, JavaScript, and Python. > Currently for speech translation with language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it. ::: zone pivot="programming-language-csharp" |
ai-services | Rest Api Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/rest-api-guide.md | -Text Translation is a cloud-based feature of the Azure AI Translator service and is part of the Azure AI service family of REST APIs. The Text Translation API translates text between language pairs across all [supported languages and dialects](../../language-support.md). The available methods are listed in the table below: +Text Translation is a cloud-based feature of the Azure AI Translator service and is part of the Azure AI service family of REST APIs. The Text Translation API translates text between language pairs across all [supported languages and dialects](../../language-support.md). The available methods are listed in the following table: | Request| Method| Description| ||--|| |
ai-services | V3 0 Break Sentence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-break-sentence.md | Send a `POST` request to: ```HTTP https://api.cognitive.microsofttranslator.com/breaksentence?api-version=3.0+ ``` +_See_ [**Virtual Network Support**](v3-0-reference.md#virtual-network-support) for Translator service selected network and private endpoint configuration and support. + ## Request parameters Request parameters passed on the query string are: Request parameters passed on the query string are: | Query Parameter | Description | | -| -- | | api-version <img width=200/> | **Required query parameter**.<br/>Version of the API requested by the client. Value must be `3.0`. |-| language | **Optional query parameter**.<br/>Language tag identifying the language of the input text. If a code isn't specified, automatic language detection will be applied. | -| script | **Optional query parameter**.<br/>Script tag identifying the script used by the input text. If a script isn't specified, the default script of the language will be assumed. | +| language | **Optional query parameter**.<br/>Language tag identifying the language of the input text. If a code isn't specified, automatic language detection is applied. | +| script | **Optional query parameter**.<br/>Script tag identifying the script used by the input text. If a script isn't specified, the default script of the language is assumed. | Request headers include: Request headers include: | - | -- | | Authentication header(s) <img width=200/> | **Required request header**.<br/>See <a href="v3-0-reference.md#authentication">available options for authentication</a>. | | Content-Type | **Required request header**.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |-| Content-Length | **Required request header**.<br/>The length of the request body. 
| -| X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. | +| Content-Length | **Required request header**.<br/>The length of the request body. | +| X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. | ## Request body The following limitations apply: * The array can have at most 100 elements. * The text value of an array element can't exceed 50,000 characters including spaces. * The entire text included in the request can't exceed 50,000 characters including spaces.-* If the `language` query parameter is specified, then all array elements must be in the same language. Otherwise, language auto-detection is applied to each array element independently. +* If the `language` query parameter is specified, then all array elements must be in the same language. Otherwise, language autodetection is applied to each array element independently. ## Response body A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties: -* `sentLen`: An array of integers representing the lengths of the sentences in the text element. The length of the array is the number of sentences, and the values are the length of each sentence. +* `sentLen`: An array of integers representing the lengths of the sentences in the text element. The length of the array is the number of sentences, and the values are the length of each sentence. * `detectedLanguage`: An object describing the detected language through the following properties: A successful response is a JSON array with one result for each string in the inp * `score`: A float value indicating the confidence in the result. 
The score is between zero (0) and one (1.0). A low score (<= 0.4) indicates a low confidence. -The `detectedLanguage` property is only present in the result object when language auto-detection is requested. +The `detectedLanguage` property is only present in the result object when language autodetection is requested. An example JSON response is: An example JSON response is: ## Response headers -<table width="100%"> - <th width="20%">Headers</th> - <th>Description</th> - <tr> - <td>X-RequestId</td> - <td>Value generated by the service to identify the request. It's used for troubleshooting purposes.</td> - </tr> -</table> +|Headers|Description| +| | | +|X-RequestId|Value generated by the service to identify the request. It's used for troubleshooting purposes.| ## Response status codes -The following are the possible HTTP status codes that a request returns. --<table width="100%"> - <th width="20%">Status Code</th> - <th>Description</th> - <tr> - <td>200</td> - <td>Success.</td> - </tr> - <tr> - <td>400</td> - <td>One of the query parameters is missing or not valid. Correct request parameters before retrying.</td> - </tr> - <tr> - <td>401</td> - <td>The request couldn't be authenticated. Check that credentials are specified and valid.</td> - </tr> - <tr> - <td>403</td> - <td>The request isn't authorized. Check the details error message. This response code often indicates that all free translations provided with a trial subscription have been used up.</td> - </tr> - <tr> - <td>429</td> - <td>The server rejected the request because the client has exceeded request limits.</td> - </tr> - <tr> - <td>500</td> - <td>An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td> - </tr> - <tr> - <td>503</td> - <td>Server temporarily unavailable. Retry the request. 
If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td> - </tr> -</table> --If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). +The following are the possible HTTP status codes that a request returns. ++|Status Code|Description| +| | | +|200|Success.| +|400|One of the query parameters is missing or not valid. Correct request parameters before retrying.| +|401|The request couldn't be authenticated. Check that credentials are specified and valid.| +|403|The request isn't authorized. Check the details error message. This response code often indicates that all free translations provided with a trial subscription have been used up.| +|429|The server rejected the request because the client has exceeded request limits.| +|500|An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.| +|503|Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.| +++If an error occurs, the request returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). ## Examples -The following example shows how to obtain sentence boundaries for a single sentence. 
The language of the sentence is automatically detected by the service. +The following example shows how to obtain sentence boundaries for a single sentence. The service automatically detects the sentence language. ```curl curl -X POST "https://api.cognitive.microsofttranslator.com/breaksentence?api-version=3.0" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'How are you? I am fine. What did you do today?'}]" |
ai-services | V3 0 Detect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-detect.md | +<!-- markdownlint-disable MD033 --> # Translator 3.0: Detect Identifies the language of a piece of text. Send a `POST` request to: ```HTTP https://api.cognitive.microsofttranslator.com/detect?api-version=3.0+ ``` +_See_ [**Virtual Network Support**](v3-0-reference.md#virtual-network-support) for Translator service selected network and private endpoint configuration and support. + ## Request parameters Request parameters passed on the query string are: | Query parameter | Description | | | |-| api-version | *Required parameter*.<br/>Version of the API requested by the client. Value must be `3.0`. | +| api-version | *Required parameter*.<br>Version of the API requested by the client. Value must be `3.0`. | Request headers include: | Headers | Description | | | |-| Authentication header(s) | <em>Required request header</em>.<br/>See [available options for authentication](./v3-0-reference.md#authentication)</a>. | -| Content-Type | *Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json`. | -| Content-Length | *Required request header*.<br/>The length of the request body. | -| X-ClientTraceId | *Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. | +| Authentication header(s) | <em>Required request header</em>.<br>See [available options for authentication](./v3-0-reference.md#authentication). | +| Content-Type | *Required request header*.<br>Specifies the content type of the payload. Possible values are: `application/json`. | +| Content-Length | *Required request header*.<br>The length of the request body. | +| X-ClientTraceId | *Optional*.<br>A client-generated GUID to uniquely identify the request. 
You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. | ## Request body -The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`. Language detection is applied to the value of the `Text` property. The language auto-detection works better with longer input text. A sample request body looks like that: +The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`. Language detection is applied to the value of the `Text` property. The language autodetection works better with longer input text. A sample request body looks like this: ```json [ The body of the request is a JSON array. Each array element is a JSON object wit The following limitations apply: * The array can have at most 100 elements.-* The entire text included in the request cannot exceed 50,000 characters including spaces. +* The entire text included in the request can't exceed 50,000 characters including spaces. ## Response body A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties: - * `language`: Code of the detected language. +* `language`: Code of the detected language. ++* `score`: A float value indicating the confidence in the result. The score is between zero and one and a low score indicates a low confidence. - * `score`: A float value indicating the confidence in the result. The score is between zero and one and a low score indicates a low confidence. +* `isTranslationSupported`: A boolean value that is true if the detected language is one of the languages supported for text translation. - * `isTranslationSupported`: A boolean value which is true if the detected language is one of the languages supported for text translation. 
+* `isTransliterationSupported`: A boolean value that is true if the detected language is one of the languages supported for transliteration. - * `isTransliterationSupported`: A boolean value which is true if the detected language is one of the languages supported for transliteration. - - * `alternatives`: An array of other possible languages. Each element of the array is another object with the same properties listed above: `language`, `score`, `isTranslationSupported` and `isTransliterationSupported`. +* `alternatives`: An array of other possible languages. Each element of the array is another object with the following properties: `language`, `score`, `isTranslationSupported` and `isTransliterationSupported`. An example JSON response is: An example JSON response is: | Headers | Description | | | |-| X-RequestId | Value generated by the service to identify the request. It is used for troubleshooting purposes. | +| X-RequestId | Value generated by the service to identify the request. It's used for troubleshooting purposes. | ## Response status codes -The following are the possible HTTP status codes that a request returns. +The following are the possible HTTP status codes that a request returns. | Status Code | Description | | | | | 200 | Success. | | 400 | One of the query parameters is missing or not valid. Correct request parameters before retrying. |-| 401 | The request could not be authenticated. Check that credentials are specified and valid. | -| 403 | The request is not authorized. Check the details error message. This often indicates that all free translations provided with a trial subscription have been used up. | +| 401 | The request couldn't be authenticated. Check that credentials are specified and valid. | +| 403 | The request isn't authorized. Check the details error message. This code often indicates that all free translations provided with a trial subscription have been used up. 
| | 429 | The server rejected the request because the client has exceeded request limits. | | 500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. | | 503 | Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. | -If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). +If an error occurs, the request returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). ## Examples |
ai-services | V3 0 Dictionary Examples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-dictionary-examples.md | +<!-- markdownlint-disable MD033 --> # Translator 3.0: Dictionary Examples Send a `POST` request to: https://api.cognitive.microsofttranslator.com/dictionary/examples?api-version=3.0 ``` +_See_ [**Virtual Network Support**](v3-0-reference.md#virtual-network-support) for Translator service selected network and private endpoint configuration and support. + ## Request parameters Request parameters passed on the query string are: | Query Parameter | Description | | | -- |-| api-version <img width=200/> | **Required parameter**.<br/>Version of the API requested by the client. Value must be `3.0`. | -| from | **Required parameter**.<br/>Specifies the language of the input text. The source language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. | -| to | **Required parameter**.<br/>Specifies the language of the output text. The target language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. | +| api-version <img width=200/> | **Required parameter**.<br>Version of the API requested by the client. Value must be `3.0`. | +| from | **Required parameter**.<br>Specifies the language of the input text. The source language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. | +| to | **Required parameter**.<br>Specifies the language of the output text. The target language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. | Request headers include: | Headers | Description | | | -- |-| Authentication header(s) <img width=200/> | **Required request header**.<br/>See <a href="v3-0-reference.md#authentication">available options for authentication</a>. 
| -| Content-Type | **Required request header**.<br/>Specifies the content type of the payload. Possible values are: `application/json`. | -| Content-Length | **Required request header**.<br/>The length of the request body. | -| X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. | +| Authentication header(s) <img width=200/> | **Required request header**.<br>See [available options for authentication](v3-0-reference.md#authentication). | +| Content-Type | **Required request header**.<br>Specifies the content type of the payload. Possible values are: `application/json`. | +| Content-Length | **Required request header**.<br>The length of the request body. | +| X-ClientTraceId | **Optional**.<br>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. | ## Request body The body of the request is a JSON array. Each array element is a JSON object with the following properties: - * `Text`: A string specifying the term to lookup. This should be the value of a `normalizedText` field from the back-translations of a previous [Dictionary lookup](./v3-0-dictionary-lookup.md) request. It can also be the value of the `normalizedSource` field. +* `Text`: A string specifying the term to look up. This property should be the value of a `normalizedText` field from the back-translations of a previous [Dictionary lookup](./v3-0-dictionary-lookup.md) request. It can also be the value of the `normalizedSource` field. - * `Translation`: A string specifying the translated text previously returned by the [Dictionary lookup](./v3-0-dictionary-lookup.md) operation. 
This should be the value from the `normalizedTarget` field in the `translations` list of the [Dictionary lookup](./v3-0-dictionary-lookup.md) response. The service will return examples for the specific source-target word-pair. +* `Translation`: A string specifying the translated text previously returned by the [Dictionary lookup](./v3-0-dictionary-lookup.md) operation. This property should be the value from the `normalizedTarget` field in the `translations` list of the [Dictionary lookup](./v3-0-dictionary-lookup.md) response. The service returns examples for the specific source-target word-pair. An example is: An example is: The following limitations apply: * The array can have at most 10 elements.-* The text value of an array element cannot exceed 100 characters including spaces. +* The text value of an array element can't exceed 100 characters including spaces. ## Response body A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties: - * `normalizedSource`: A string giving the normalized form of the source term. Generally, this should be identical to the value of the `Text` field at the matching list index in the body of the request. - - * `normalizedTarget`: A string giving the normalized form of the target term. Generally, this should be identical to the value of the `Translation` field at the matching list index in the body of the request. - - * `examples`: A list of examples for the (source term, target term) pair. Each element of the list is an object with the following properties: +* `normalizedSource`: A string giving the normalized form of the source term. Generally, this property should be identical to the value of the `Text` field at the matching list index in the body of the request. ++* `normalizedTarget`: A string giving the normalized form of the target term. 
Generally, this property should be identical to the value of the `Translation` field at the matching list index in the body of the request. ++* `examples`: A list of examples for the (source term, target term) pair. Each element of the list is an object with the following properties: - * `sourcePrefix`: The string to concatenate _before_ the value of `sourceTerm` to form a complete example. Do not add a space character, since it is already there when it should be. This value may be an empty string. +* `sourcePrefix`: The string to concatenate _before_ the value of `sourceTerm` to form a complete example. Don't add a space character, since it's already there when it should be. This value may be an empty string. - * `sourceTerm`: A string equal to the actual term looked up. The string is added with `sourcePrefix` and `sourceSuffix` to form the complete example. Its value is separated so it can be marked in a user interface, e.g., by bolding it. +* `sourceTerm`: A string equal to the actual term looked up. The string is added with `sourcePrefix` and `sourceSuffix` to form the complete example. Its value is separated so it can be marked in a user interface, for example, by bolding it. - * `sourceSuffix`: The string to concatenate _after_ the value of `sourceTerm` to form a complete example. Do not add a space character, since it is already there when it should be. This value may be an empty string. + * `sourceSuffix`: The string to concatenate _after_ the value of `sourceTerm` to form a complete example. Don't add a space character, since it's already there when it should be. This value may be an empty string. - * `targetPrefix`: A string similar to `sourcePrefix` but for the target. + * `targetPrefix`: A string similar to `sourcePrefix` but for the target. - * `targetTerm`: A string similar to `sourceTerm` but for the target. + * `targetTerm`: A string similar to `sourceTerm` but for the target. 
- * `targetSuffix`: A string similar to `sourceSuffix` but for the target. + * `targetSuffix`: A string similar to `sourceSuffix` but for the target. > [!NOTE] > If there are no examples in the dictionary, the response is 200 (OK) but the `examples` list is an empty list. ## Examples -This example shows how to lookup examples for the pair made up of the English term `fly` and its Spanish translation `volar`. +This example shows how to look up examples for the pair made up of the English term `fly` and its Spanish translation `volar`. ```curl curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/examples?api-version=3.0&from=en&to=es" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'fly', 'Translation':'volar'}]" curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/examples? The response body (abbreviated for clarity) is: -``` +```json [ { "normalizedSource":"fly", The response body (abbreviated for clarity) is: "targetPrefix":"Necesitan máquinas para ", "targetTerm":"volar", "targetSuffix":"."- }, + }, { "sourcePrefix":"That should really ", "sourceTerm":"fly", |
ai-services | V3 0 Dictionary Lookup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-dictionary-lookup.md | Title: Translator Dictionary Lookup Method -description: The Dictionary Lookup method provides alternative translations for a word and a small number of idiomatic phrases. +description: The Dictionary Lookup method provides alternative translations for a word and a few idiomatic phrases. +<!-- markdownlint-disable MD033 --> # Translator 3.0: Dictionary Lookup -Provides alternative translations for a word and a small number of idiomatic phrases. Each translation has a part-of-speech and a list of back-translations. The back-translations enable a user to understand the translation in context. The [Dictionary Example](./v3-0-dictionary-examples.md) operation allows further drill down to see example uses of each translation pair. +Provides alternative translations for a word and a few idiomatic phrases. Each translation has a part-of-speech and a list of back-translations. The back-translations enable a user to understand the translation in context. The [Dictionary Example](./v3-0-dictionary-examples.md) operation allows further drill down to see example uses of each translation pair. ## Request URL Send a `POST` request to: https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0 ``` +_See_ [**Virtual Network Support**](v3-0-reference.md#virtual-network-support) for Translator service selected network and private endpoint configuration and support. + ## Request parameters Request parameters passed on the query string are: | Query Parameter | Description | | | -- |-| api-version <img width=200/> | **Required parameter**.<br/>Version of the API requested by the client. Value must be `3.0` | +| api-version| **Required parameter**.<br/>Version of the API requested by the client. Value must be `3.0` | | from | **Required parameter**.<br/>Specifies the language of the input text. 
The source language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. | | to | **Required parameter**.<br/>Specifies the language of the output text. The target language must be one of the [supported languages](v3-0-languages.md) included in the `dictionary` scope. | Request headers include: | Headers | Description | | | -- |-| Authentication header(s) <img width=200/> | **Required request header**.<br/>See <a href="v3-0-reference.md#authentication">available options for authentication</a>. | -| Content-Type | **Required request header**.<br/>Specifies the content type of the payload. Possible values are: `application/json`. | -| Content-Length | **Required request header**.<br/>The length of the request body. | +| Authentication header(s) | **Required request header**.<br/>See [Authentication](v3-0-reference.md#authentication).| +| Content-Type | **Required request header**.<br>Specifies the content type of the payload. Possible values are: `application/json`. | +| Content-Length | **Required request header**.<br>The length of the request body. | | X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. | ## Request body -The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the term to lookup. +The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the term to look up. ```json [ The body of the request is a JSON array. Each array element is a JSON object wit The following limitations apply: * The array can have at most 10 elements.-* The text value of an array element cannot exceed 100 characters including spaces. +* The text value of an array element can't exceed 100 characters including spaces. 
## Response body A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties: - * `normalizedSource`: A string giving the normalized form of the source term. For example, if the request is "JOHN", the normalized form will be "john". The content of this field becomes the input to [lookup examples](./v3-0-dictionary-examples.md). - - * `displaySource`: A string giving the source term in a form best suited for end-user display. For example, if the input is "JOHN", the display form will reflect the usual spelling of the name: "John". +* `normalizedSource`: A string giving the normalized form of the source term. For example, if the request is `JOHN`, the normalized form is `john`. The content of this field becomes the input to [lookup examples](./v3-0-dictionary-examples.md). ++* `displaySource`: A string giving the source term in a form best suited for end-user display. For example, if the input is `JOHN`, the display form reflects the usual spelling of the name: `John`. ++* `translations`: A list of translations for the source term. Each element of the list is an object with the following properties: ++* `normalizedTarget`: A string giving the normalized form of this term in the target language. This value should be used as input to [lookup examples](./v3-0-dictionary-examples.md). - * `translations`: A list of translations for the source term. Each element of the list is an object with the following properties: +* `displayTarget`: A string giving the term in the target language and in a form best suited for end-user display. Generally, this property only differs from the `normalizedTarget` in terms of capitalization. For example, a proper noun like `Juan` has `normalizedTarget = "juan"` and `displayTarget = "Juan"`. - * `normalizedTarget`: A string giving the normalized form of this term in the target language. 
This value should be used as input to [lookup examples](./v3-0-dictionary-examples.md). +* `posTag`: A string associating this term with a part-of-speech tag. - * `displayTarget`: A string giving the term in the target language and in a form best suited for end-user display. Generally, this will only differ from the `normalizedTarget` in terms of capitalization. For example, a proper noun like "Juan" will have `normalizedTarget = "juan"` and `displayTarget = "Juan"`. + | Tag name | Description | + |-|--| + | ADJ | Adjectives | + | ADV | Adverbs | + | CONJ | Conjunctions | + | DET | Determiners | + | MODAL | Verbs | + | NOUN | Nouns | + | PREP | Prepositions | + | PRON | Pronouns | + | VERB | Verbs | + | OTHER | Other | - * `posTag`: A string associating this term with a part-of-speech tag. + As an implementation note, these tags are determined by part-of-speech tagging the English side and then taking the most frequent tag for each source/target pair. So if people frequently translate a Spanish word to a different part-of-speech tag in English, tags may end up being wrong (with respect to the Spanish word). - | Tag name | Description | - |-|--| - | ADJ | Adjectives | - | ADV | Adverbs | - | CONJ | Conjunctions | - | DET | Determiners | - | MODAL | Verbs | - | NOUN | Nouns | - | PREP | Prepositions | - | PRON | Pronouns | - | VERB | Verbs | - | OTHER | Other | +* `confidence`: A value between 0.0 and 1.0 that represents the "confidence" (or more accurately, "probability in the training data") of that translation pair. The sum of confidence scores for one source word may or may not sum to 1.0. - As an implementation note, these tags were determined by part-of-speech tagging the English side, and then taking the most frequent tag for each source/target pair. So if people frequently translate a Spanish word to a different part-of-speech tag in English, tags may end up being wrong (with respect to the Spanish word). 
+* `prefixWord`: A string giving the word to display as a prefix of the translation. Currently, this property is the gendered determiner of nouns, in languages that have gendered determiners. For example, the prefix of the Spanish word `mosca` is `la`, since `mosca` is a feminine noun in Spanish. This value is only dependent on the translation, and not on the source. If there's no prefix, it's the empty string. - * `confidence`: A value between 0.0 and 1.0 which represents the "confidence" (or perhaps more accurately, "probability in the training data") of that translation pair. The sum of confidence scores for one source word may or may not sum to 1.0. +* `backTranslations`: A list of "back translations" of the target. That is, source words that the target can translate to. The list is guaranteed to contain the source word that was requested (for example, if the source word being looked up is `fly`, then it's guaranteed that `fly` is in the `backTranslations` list). However, it isn't guaranteed to be in the first position, and often isn't. Each element of the `backTranslations` list is an object described by the following properties: - * `prefixWord`: A string giving the word to display as a prefix of the translation. Currently, this is the gendered determiner of nouns, in languages that have gendered determiners. For example, the prefix of the Spanish word "mosca" is "la", since "mosca" is a feminine noun in Spanish. This is only dependent on the translation, and not on the source. If there is no prefix, it will be the empty string. - - * `backTranslations`: A list of "back translations" of the target. For example, source words that the target can translate to. The list is guaranteed to contain the source word that was requested (e.g., if the source word being looked up is "fly", then it is guaranteed that "fly" will be in the `backTranslations` list). However, it is not guaranteed to be in the first position, and often will not be. 
Each element of the `backTranslations` list is an object described by the following properties: + * `normalizedText`: A string giving the normalized form of the source term that is a back-translation of the target. This value should be used as input to [lookup examples](./v3-0-dictionary-examples.md). - * `normalizedText`: A string giving the normalized form of the source term that is a back-translation of the target. This value should be used as input to [lookup examples](./v3-0-dictionary-examples.md). + * `displayText`: A string giving the source term that is a back-translation of the target in a form best suited for end-user display. - * `displayText`: A string giving the source term that is a back-translation of the target in a form best suited for end-user display. + * `numExamples`: An integer representing the number of examples that are available for this translation pair. Actual examples must be retrieved with a separate call to [lookup examples](./v3-0-dictionary-examples.md). The number is mostly intended to facilitate display in a UX. For example, a user interface may add a hyperlink to the back-translation if the number of examples is greater than zero, and show the back-translation as plain text if there are no examples. The actual number of examples returned by a call to [lookup examples](./v3-0-dictionary-examples.md) may be less than `numExamples`, because more filtering may be applied on the fly to remove "bad" examples. - * `numExamples`: An integer representing the number of examples that are available for this translation pair. Actual examples must be retrieved with a separate call to [lookup examples](./v3-0-dictionary-examples.md). The number is mostly intended to facilitate display in a UX. For example, a user interface may add a hyperlink to the back-translation if the number of examples is greater than zero and show the back-translation as plain text if there are no examples. 
Note that the actual number of examples returned by a call to [lookup examples](./v3-0-dictionary-examples.md) may be less than `numExamples`, because additional filtering may be applied on the fly to remove "bad" examples. - - * `frequencyCount`: An integer representing the frequency of this translation pair in the data. The main purpose of this field is to provide a user interface with a means to sort back-translations so the most frequent terms are first. + * `frequencyCount`: An integer representing the frequency of this translation pair in the data. The main purpose of this field is to provide a user interface with a means to sort back-translations so the most frequent terms are first. > [!NOTE] > If the term being looked up does not exist in the dictionary, the response is 200 (OK) but the `translations` list is an empty list. ## Examples -This example shows how to lookup alternative translations in Spanish of the English term `fly` . +This example shows how to look up alternative translations in Spanish of the English term `fly`. ```curl curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0&from=en&to=es" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'fly'}]" curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/lookup?ap The response body (abbreviated for clarity) is: -``` +```json [ { "normalizedSource":"fly", The response body (abbreviated for clarity) is: ] ``` -This example shows what happens when the term being looked up does not exist for the valid dictionary pair. +This example shows what happens when the term being looked up doesn't exist for the valid dictionary pair. 
```curl curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0&from=en&to=es" -H "X-ClientTraceId: 875030C7-5380-40B8-8A03-63DACCF69C11" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'fly123456'}]" ``` -Since the term is not found in the dictionary, the response body includes an empty `translations` list. +Since the term isn't found in the dictionary, the response body includes an empty `translations` list. -``` +```json [ { "normalizedSource":"fly123456", |
ai-services | V3 0 Languages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-languages.md | +<!-- markdownlint-disable MD033 --> + # Translator 3.0: Languages Gets the set of languages currently supported by other operations of the Translator. Gets the set of languages currently supported by other operations of the Transla ## Request URL Send a `GET` request to:+ ```HTTP https://api.cognitive.microsofttranslator.com/languages?api-version=3.0 ``` +_See_ [**Virtual Network Support**](v3-0-reference.md#virtual-network-support) for Translator service selected network and private endpoint configuration and support. + ## Request parameters Request parameters passed on the query string are: -<table width="100%"> - <th width="20%">Query parameter</th> - <th>Description</th> - <tr> - <td>api-version</td> - <td><em>Required parameter</em>.<br/>Version of the API requested by the client. Value must be `3.0`.</td> - </tr> - <tr> - <td>scope</td> - <td>*Optional parameter*.<br/>A comma-separated list of names defining the group of languages to return. Allowed group names are: `translation`, `transliteration` and `dictionary`. If no scope is given, then all groups are returned, which is equivalent to passing `scope=translation,transliteration,dictionary`.</td> - </tr> -</table> +|Query parameters|Description| +||| +|api-version|**Required parameter**<br><br>The version of the API requested by the client. Value must be `3.0`.| +|scope|**Optional parameter**.<br><br>A comma-separated list of names defining the group of languages to return. Allowed group names are: `translation`, `transliteration`, and `dictionary`. If no scope is given, then all groups are returned, which is equivalent to passing `scope=translation,transliteration,dictionary`.| *See* [response body](#response-body). 
Request headers are: -<table width="100%"> - <th width="20%">Headers</th> - <th>Description</th> - <tr> - <td>Accept-Language</td> - <td>*Optional request header*.<br/>The language to use for user interface strings. Some of the fields in the response are names of languages or names of regions. Use this parameter to define the language in which these names are returned. The language is specified by providing a well-formed BCP 47 language tag. For instance, use the value `fr` to request names in French or use the value `zh-Hant` to request names in Chinese Traditional.<br/>Names are provided in the English language when a target language isn't specified or when localization isn't available. - </td> - </tr> - <tr> - <td>X-ClientTraceId</td> - <td>*Optional request header*.<br/>A client-generated GUID to uniquely identify the request.</td> - </tr> -</table> +|Headers|Description| +||| +|Accept-Language|**Optional request header**.<br><br>The language to use for user interface strings. Some of the fields in the response are names of languages or names of regions. Use this parameter to define the language in which these names are returned. The language is specified by providing a well-formed BCP 47 language tag. For instance, use the value `fr` to request names in French or use the value `zh-Hant` to request names in Chinese Traditional.<br/>Names are provided in the English language when a target language isn't specified or when localization isn't available.| +|X-ClientTraceId|**Optional request header**.<br>A client-generated GUID to uniquely identify the request.| Authentication isn't required to get language resources. The list of supported languages doesn't change frequently. To save network bandw ## Response headers -<table width="100%"> - <th width="20%">Headers</th> - <th>Description</th> - <tr> - <td>ETag</td> - <td>Current value of the entity tag for the requested groups of supported languages. 
To make subsequent requests more efficient, the client may send the `ETag` value in an `If-None-Match` header field. - </td> - </tr> - <tr> - <td>X-RequestId</td> - <td>Value generated by the service to identify the request. It's used for troubleshooting purposes.</td> - </tr> -</table> +|Headers|Description| +| | | +|ETag|Current value of the entity tag for the requested groups of supported languages. To make subsequent requests more efficient, the client may send the `ETag` value in an `If-None-Match` header field.| +|X-RequestId|Value generated by the service to identify the request. It's used for troubleshooting purposes.| ## Response status codes The following are the possible HTTP status codes that a request returns. -<table width="100%"> - <th width="20%">Status Code</th> - <th>Description</th> - <tr> - <td>200</td> - <td>Success.</td> - </tr> - <tr> - <td>304</td> - <td>The resource hasn't been modified since the version specified by request headers `If-None-Match`.</td> - </tr> - <tr> - <td>400</td> - <td>One of the query parameters is missing or not valid. Correct request parameters before retrying.</td> - </tr> - <tr> - <td>429</td> - <td>The server rejected the request because the client has exceeded request limits.</td> - </tr> - <tr> - <td>500</td> - <td>An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td> - </tr> - <tr> - <td>503</td> - <td>Server temporarily unavailable. Retry the request. 
If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td> - </tr> -</table> +|Status Code|Description| +| | | +|200|Success.| +|304|The resource hasn't been modified since the version specified by request headers `If-None-Match`.| +|400|One of the query parameters is missing or not valid. Correct request parameters before retrying.| +|429|The server rejected the request because the client has exceeded request limits.| +|500|An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.| +|503|Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.| If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). |
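The `ETag` / `If-None-Match` flow described above can be sketched as a small header builder; the header names come from the tables above, while the caching logic is an assumption about how a client might use them:

```python
def languages_headers(cached_etag=None, ui_language=None):
    """Build request headers for GET /languages?api-version=3.0.

    Sending a previously returned ETag in If-None-Match lets the service
    answer 304 (Not Modified) instead of resending the full language list.
    Accept-Language (a BCP 47 tag) controls the language of display names.
    """
    headers = {}
    if cached_etag is not None:
        headers["If-None-Match"] = cached_etag
    if ui_language is not None:
        headers["Accept-Language"] = ui_language  # e.g. "fr" or "zh-Hant"
    return headers

print(languages_headers(cached_etag='"abc123"', ui_language="fr"))
```

On a 304 response the client keeps its cached copy; on a 200 it stores the new body together with the new `ETag` value.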
ai-services | V3 0 Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-reference.md | Title: Translator V3.0 Reference -description: Reference documentation for the Translator V3.0. Version 3 of the Translator provides a modern JSON-based Web API. +description: Reference documentation for the Translator V3.0. Version 3.0 of the Translator provides a modern JSON-based Web API. Previously updated : 07/18/2023 Last updated : 09/19/2023 -Version 3 of the Translator provides a modern JSON-based Web API. It improves usability and performance by consolidating existing features into fewer operations and it provides new features. +Version 3.0 of the Translator provides a modern JSON-based Web API. It improves usability and performance by consolidating existing features into fewer operations and it provides new features. * Transliteration to convert text in one language from one script to another script. * Translation to multiple languages in one request. To force the request to be handled within a specific geography, use the desired |Europe| api-eur.cognitive.microsofttranslator.com|North Europe, West Europe| |United States| api-nam.cognitive.microsofttranslator.com|East US, South Central US, West Central US, and West US 2| -<sup>`1`</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-swiss-n', then your custom endpoint is "https://my-swiss-n.cognitiveservices.azure.com". 
And a sample request to translate is: +<sup>`1`</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' `Switzerland North` or `Switzerland West`, then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as `Switzerland North` and your resource name is `my-swiss-n`, then your custom endpoint is `https://my-swiss-n.cognitiveservices.azure.com`. And a sample request to translate is: + ```curl // Pass secret key and region using headers to a custom endpoint curl -X POST "https://my-swiss-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \ curl -X POST "https://my-swiss-n.cognitiveservices.azure.com/translator/text/v3. -H "Content-Type: application/json" \ -d "[{'Text':'Hello'}]" -v ```+ <sup>`2`</sup> Custom Translator isn't currently available in Switzerland. ## Authentication |
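The custom-endpoint pattern in the curl sample above can be sketched as a request builder. The URL shape mirrors the sample; the key/region header names are the standard Translator subscription headers (the resource name, key, and region values below are placeholders):

```python
def custom_endpoint_request(resource_name, key, region, to_lang, text):
    """Assemble a Translate call against a resource's custom endpoint so the
    request is served within the resource's own geography. Returns
    (url, headers, body) without sending anything."""
    url = (f"https://{resource_name}.cognitiveservices.azure.com"
           f"/translator/text/v3.0/translate?to={to_lang}")
    headers = {
        "Ocp-Apim-Subscription-Key": key,        # the resource's secret key
        "Ocp-Apim-Subscription-Region": region,  # e.g. "switzerlandnorth"
        "Content-Type": "application/json",
    }
    body = [{"Text": text}]
    return url, headers, body

url, headers, body = custom_endpoint_request(
    "my-swiss-n", "<your-key>", "switzerlandnorth", "fr", "Hello")
print(url)
```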
ai-services | V3 0 Translate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-translate.md | Send a `POST` request to: https://api.cognitive.microsofttranslator.com/translate?api-version=3.0 ``` +_See_ [**Virtual Network Support**](v3-0-reference.md#virtual-network-support) for Translator service selected network and private endpoint configuration and support. + ## Request parameters Request parameters passed on the query string are: Request parameters passed on the query string are: | profanityMarker | _Optional parameter_. <br>Specifies how profanities should be marked in translations. Possible values are: `Asterisk` (default) or `Tag`. To understand ways to treat profanity, see [Profanity handling](#handle-profanity). | | includeAlignment | _Optional parameter_. <br>Specifies whether to include alignment projection from source text to translated text. Possible values are: `true` or `false` (default). | | includeSentenceLength | _Optional parameter_. <br>Specifies whether to include sentence boundaries for the input text and the translated text. Possible values are: `true` or `false` (default). |-| suggestedFrom | _Optional parameter_. <br>Specifies a fallback language if the language of the input text can't be identified. Language autodetection is applied when the `from` parameter is omitted. If detection fails, the `suggestedFrom` language will be assumed. | +| suggestedFrom | _Optional parameter_. <br>Specifies a fallback language if the language of the input text can't be identified. Language autodetection is applied when the `from` parameter is omitted. If detection fails, the `suggestedFrom` language is assumed. | | fromScript | _Optional parameter_. <br>Specifies the script of the input text. | | toScript | _Optional parameter_. <br>Specifies the script of the translated text. |-| allowFallback | _Optional parameter_. 
<br>Specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. Possible values are: `true` (default) or `false`. <br> <br>`allowFallback=false` specifies that the translation should only use systems trained for the `category` specified by the request. If a translation for language X to language Y requires chaining through a pivot language E, then all the systems in the chain (X → E and E → Y) will need to be custom and have the same category. If no system is found with the specific category, the request will return a 400 status code. `allowFallback=true` specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. | +| allowFallback | _Optional parameter_. <br>Specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. Possible values are: `true` (default) or `false`. <br> <br>`allowFallback=false` specifies that the translation should only use systems trained for the `category` specified by the request. If a translation from language X to language Y requires chaining through a pivot language E, then all the systems in the chain (X → E and E → Y) need to be custom and have the same category. If no system is found with the specific category, the request returns a 400 status code. `allowFallback=true` specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. | Request headers include: A successful response is a JSON array with one result for each string in the inp * `score`: A float value indicating the confidence in the result. The score is between zero and one and a low score indicates a low confidence. - The `detectedLanguage` property is only present in the result object when language auto-detection is requested. + The `detectedLanguage` property is only present in the result object when language autodetection is requested. 
* `translations`: An array of translation results. The size of the array matches the number of target languages specified through the `to` query parameter. Each element in the array includes: A successful response is a JSON array with one result for each string in the inp The `transliteration` object isn't included if transliteration doesn't take place. - * `alignment`: An object with a single string property named `proj`, which maps input text to translated text. The alignment information is only provided when the request parameter `includeAlignment` is `true`. Alignment is returned as a string value of the following format: `[[SourceTextStartIndex]:[SourceTextEndIndex]–[TgtTextStartIndex]:[TgtTextEndIndex]]`. The colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the alignment element will be empty. See [Obtain alignment information](#obtain-alignment-information) for an example and restrictions. + * `alignment`: An object with a single string property named `proj`, which maps input text to translated text. The alignment information is only provided when the request parameter `includeAlignment` is `true`. Alignment is returned as a string value of the following format: `[[SourceTextStartIndex]:[SourceTextEndIndex]–[TgtTextStartIndex]:[TgtTextEndIndex]]`. The colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be noncontiguous. When no alignment information is available, the alignment element is empty. See [Obtain alignment information](#obtain-alignment-information) for an example and restrictions. * `sentLen`: An object returning sentence boundaries in the input and output texts. 
Examples of JSON responses are provided in the [examples](#examples) section. | Headers | Description | | | | | X-requestid | Value generated by the service to identify the request. It's used for troubleshooting purposes. |-| X-mt-system | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: <br><br>* Custom - Request includes a custom system and at least one custom system was used during translation.<br>* Team - All other requests | -| X-metered-usage |Specifies consumption (the number of characters for which the user will be charged) for the translation job request. For example, if the word "Hello" is translated from English (en) to French (fr), this field will return the value '5'.| +| X-mt-system | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: <br><br>* Custom - Request includes a custom system and at least one custom system was used during translation.<br>* Team - All other requests | +| X-metered-usage |Specifies consumption (the number of characters for which the user is charged) for the translation job request. For example, if the word "Hello" is translated from English (en) to French (fr), this field returns the value `5`.| ## Response status codes The following are the possible HTTP status codes that a request returns. |500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. | |503 |Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. 
| -If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). +If an error occurs, the request returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). ## Examples curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio The response body is: -``` +```json [ { "translations":[ curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio The response body is: -``` +```json [ { "detectedLanguage": {"language": "en", "score": 1.0}, curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio The response body is: -``` +```json [ { "detectedLanguage":{"language":"en","score":1.0}, curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio The response contains the translation of all pieces of text in the exact same order as in the request. The response body is: -``` +```json [ { "translations":[ curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio The response body is: -``` +```json [ { "translations":[ The response body is: ### Handle profanity -Normally the Translator service will retain profanity that is present in the source in the translation. The degree of profanity and the context that makes words profane differ between cultures, and as a result the degree of profanity in the target language may be amplified or reduced. +Normally, the Translator service retains profanity that is present in the source in the translation. 
The degree of profanity and the context that makes words profane differ between cultures, and as a result the degree of profanity in the target language may be amplified or reduced. -If you want to avoid getting profanity in the translation, regardless of the presence of profanity in the source text, you can use the profanity filtering option. The option allows you to choose whether you want to see profanity deleted, whether you want to mark profanities with appropriate tags (giving you the option to add your own post-processing), or you want no action taken. The accepted values of `ProfanityAction` are `Deleted`, `Marked` and `NoAction` (default). +If you want to avoid getting profanity in the translation, regardless of the presence of profanity in the source text, you can use the profanity filtering option. The option allows you to choose whether you want to see profanity deleted, marked with appropriate tags (giving you the option to add your own post-processing), or with no action taken. The accepted values of `ProfanityAction` are `Deleted`, `Marked` and `NoAction` (default). | ProfanityAction | Action | | | |-| `NoAction` | NoAction is the default behavior. Profanity will pass from source to target. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a jack. | -| `Deleted` | Profane words will be removed from the output without replacement. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a** | -| `Marked` | Profane words are replaced by a marker in the output. The marker depends on the `ProfanityMarker` parameter. <br> <br>For `ProfanityMarker=Asterisk`, profane words are replaced with `***`: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a \\*\\*\\*. <br> <br>For `ProfanityMarker=Tag`, profane words are surrounded by XML tags <profanity> and </profanity>: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a <profanity>jack</profanity>. | +| `NoAction` | NoAction is the default behavior. Profanity passes from source to target. <br><br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a jack. | +| `Deleted` | Profane words are removed from the output without replacement. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a** | +| `Marked` | A marker replaces profane words in the output. The marker depends on the `ProfanityMarker` parameter. <br> <br>For `ProfanityMarker=Asterisk`, profane words are replaced with `***`: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a \\*\\*\\*. <br> <br>For `ProfanityMarker=Tag`, profane words are surrounded by XML tags <profanity> and </profanity>: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a <profanity>jack</profanity>. | For example:
You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element won't be translated, while the content in the second `div` element will be translated. +It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element isn't translated, while the content in the second `div` element is translated. -``` +```html <div class="notranslate">This will not be translated.</div> <div>This will be translated. </div> ``` curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio The response is: -``` +```json [ { "translations":[ Alignment is returned as a string value of the following format for every word o Example alignment string: "0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21". -In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the Alignment element will be empty. The method returns no error in that case. +In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be noncontiguous. When no alignment information is available, the Alignment element is empty. The method returns no error in that case. 
To receive alignment information, specify `includeAlignment=true` on the query string. curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio The response is: -``` +```json [ { "translations":[ Obtaining alignment information is an experimental feature that we've enabled fo * from Japanese to Korean or from Korean to Japanese. * from Japanese to Chinese Simplified and Chinese Simplified to Japanese. * from Chinese Simplified to Chinese Traditional and Chinese Traditional to Chinese Simplified.-* You won't receive alignment if the sentence is a canned translation. Example of a canned translation is "This is a test", "I love you" and other high frequency sentences. +* You don't receive alignment if the sentence is a canned translation. Examples of canned translations are `This is a test`, `I love you`, and other high-frequency sentences. * Alignment isn't available when you apply any of the approaches to prevent translation as described [here](../prevent-translation.md) ### Obtain sentence boundaries If you already know the translation you want to apply to a word or a phrase, you The markup to supply uses the following syntax. -``` +```html <mstrans:dictionary translation="translation of phrase">phrase</mstrans:dictionary> ``` For example, consider the English sentence "The word wordomatic is a dictionary entry." To preserve the word _wordomatic_ in the translation, send the request: -``` +```http curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The word <mstrans:dictionary translation=\"wordomatic\">wordomatic</mstrans:dictionary> is a dictionary entry.'}]" ``` The result is: -``` +```json [ { "translations":[ The result is: ] ``` -This dynamic-dictionary feature works the same way with `textType=text` or with `textType=html`. The feature should be used sparingly. 
The appropriate and far better way of customizing translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you can create training data that shows your work or phrase in context, you'll get much better results. [Learn more about Custom Translator](../custom-translator/concepts/customization.md). +This dynamic-dictionary feature works the same way with `textType=text` or with `textType=html`. The feature should be used sparingly. The appropriate and far better way of customizing translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you can create training data that shows your word or phrase in context, you get better results. [Learn more about Custom Translator](../custom-translator/concepts/customization.md). ## Next steps |
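The alignment `proj` string format documented in this article (space-separated `SrcStart:SrcEnd-TgtStart:TgtEnd` pairs) can be unpacked with a few lines; a minimal sketch, where the function name is my own:

```python
def parse_alignment(proj: str) -> list[tuple[int, int, int, int]]:
    """Split an alignment 'proj' string into (src_start, src_end, tgt_start, tgt_end) tuples."""
    pairs = []
    for pair in proj.split():                      # space separates the word mappings
        src, tgt = pair.split("-")                 # dash separates source from target
        s0, s1 = (int(n) for n in src.split(":"))  # colon separates start and end index
        t0, t1 = (int(n) for n in tgt.split(":"))
        pairs.append((s0, s1, t0, t1))
    return pairs

# Example alignment string from the article
print(parse_alignment("0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21"))
```

Note that the same source span can appear more than once (`3:4` above aligns with two target spans), matching the one-to-many behavior the article describes.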
ai-services | V3 0 Transliterate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-transliterate.md | Send a `POST` request to: https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0 ``` +_See_ [**Virtual Network Support**](v3-0-reference.md#virtual-network-support) for Translator service selected network and private endpoint configuration and support. + ## Request parameters Request parameters passed on the query string are: Request headers include: | Authentication header(s) | <em>Required request header</em>.<br/>See [available options for authentication](./v3-0-reference.md#authentication). | | Content-Type | *Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json` | | Content-Length | *Required request header*.<br/>The length of the request body. |-| X-ClientTraceId | *Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. | +| X-ClientTraceId | *Optional*.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. | ## Request body The body of the request is a JSON array. Each array element is a JSON object wit The following limitations apply: * The array can have at most 10 elements.-* The text value of an array element cannot exceed 1,000 characters including spaces. -* The entire text included in the request cannot exceed 5,000 characters including spaces. +* The text value of an array element can't exceed 1,000 characters including spaces. +* The entire text included in the request can't exceed 5,000 characters including spaces. ## Response body A successful response is a JSON array with one result for each element in the input array. 
A result object includes the following properties: - * `text`: A string which is the result of converting the input string to the output script. - - * `script`: A string specifying the script used in the output. +* `text`: A string that results from converting the input string to the output script. ++* `script`: A string specifying the script used in the output. An example JSON response is: An example JSON response is: | Headers | Description | | | |-| X-RequestId | Value generated by the service to identify the request. It is used for troubleshooting purposes. | +| X-RequestId | Value generated by the service to identify the request. It's used for troubleshooting purposes. | ## Response status codes -The following are the possible HTTP status codes that a request returns. +The following are the possible HTTP status codes that a request returns. | Status Code | Description | | | | | 200 | Success. | | 400 | One of the query parameters is missing or not valid. Correct request parameters before retrying. |-| 401 | The request could not be authenticated. Check that credentials are specified and valid. | -| 403 | The request is not authorized. Check the details error message. This often indicates that all free translations provided with a trial subscription have been used up. | +| 401 | The request couldn't be authenticated. Check that credentials are specified and valid. | +| 403 | The request isn't authorized. Check the detailed error message. This code often indicates that all free translations provided with a trial subscription have been used up. | | 429 | The server rejected the request because the client has exceeded request limits. | | 500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. | | 503 | Server temporarily unavailable. Retry the request. 
If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. | -If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). +If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). ## Examples The JSON payload for the request in this example: [{"text":"こんにちは","script":"jpan"},{"text":"さようなら","script":"jpan"}] ``` -If you are using cURL in a command-line window that does not support Unicode characters, take the following JSON payload and save it into a file named `request.txt`. Be sure to save the file with `UTF-8` encoding. +If you're using cURL in a command-line window that doesn't support Unicode characters, take the following JSON payload and save it into a file named `request.txt`. Be sure to save the file with `UTF-8` encoding. ``` curl -X POST "https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0&language=ja&fromScript=Jpan&toScript=Latn" -H "X-ClientTraceId: 875030C7-5380-40B8-8A03-63DACCF69C11" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d @request.txt |
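The request-body limits listed above (at most 10 array elements, 1,000 characters per element, 5,000 characters in total) can be enforced client-side before the call; a minimal sketch, where the helper name is my own invention:

```python
def build_transliterate_body(texts):
    """Build the JSON array for /transliterate, enforcing the documented limits."""
    if len(texts) > 10:
        raise ValueError("the array can have at most 10 elements")
    if any(len(t) > 1000 for t in texts):
        raise ValueError("a text value can't exceed 1,000 characters including spaces")
    if sum(len(t) for t in texts) > 5000:
        raise ValueError("the entire text can't exceed 5,000 characters including spaces")
    return [{"text": t} for t in texts]

# The two strings from the article's example payload
print(build_transliterate_body(["こんにちは", "さようなら"]))
```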
aks | Auto Upgrade Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md | AKS follows a strict supportability versioning window. With properly selected au You can specify cluster auto-upgrade specifics using the following guidance. The upgrades occur based on your specified cadence and are recommended to remain on supported Kubernetes versions. -AKS also initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain in an AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform supported cluster to a supported version is enabled by default. +AKS also initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain in an AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform supported cluster to a supported version is enabled by default. Stopped node pools are upgraded during an auto-upgrade operation. The upgrade is applied to nodes when the node pool is started. For example, Kubernetes v1.25 upgrades to v1.26 during the v1.29 GA release. To minimize disruptions, set up [maintenance windows][planned-maintenance]. 
The following upgrade channels are available: | `patch`| automatically upgrades the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster runs version *1.17.7*, and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.17.9*.| | `stable`| automatically upgrades the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.18.6*.| | `rapid`| automatically upgrades the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster's Kubernetes version is an *N-2* minor version, where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster first upgrades to *1.18.6*, then upgrades to *1.19.1*.|-| `node-image`| automatically upgrades the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default.| +| `node-image`| automatically upgrades the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. 
Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default. Node image upgrades will work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.| > [!NOTE] > |
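As a sketch of selecting one of the channels above on an existing cluster, `az aks update` accepts an `--auto-upgrade-channel` parameter (the resource group and cluster names here are placeholders):

```azurecli-interactive
# Set the cluster auto-upgrade channel to 'stable' (names are placeholders)
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --auto-upgrade-channel stable
```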
aks | Auto Upgrade Node Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md | The following upgrade channels are available. You're allowed to choose one of th ||| | `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates.|N/A| | `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially and will be patched at some point by the OS's infrastructure.|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`. Azure Linux CPU node pools don't automatically apply security patches, so this option behaves equivalently to `None`.|-| `SecurityPatch`|This channel is in preview and requires enabling the feature flag `NodeOsUpgradeChannelPreview`. Refer to the prerequisites section for details. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." There may be disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs.| -| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. 
If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.| +| `SecurityPatch`|This channel is in preview and requires enabling the feature flag `NodeOsUpgradeChannelPreview`. Refer to the prerequisites section for details. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." There may be disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs. `SecurityPatch` will work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.| +| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. Node image upgrades will work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.| To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example. For more information on Planned Maintenance, see [Use Planned Maintenance to sch * How can I check the current nodeOsUpgradeChannel value on a cluster? -Run the `az aks show` command and check the "autoUpgradeProfile" to determine what value the `nodeOsUpgradeChannel` is set to. 
+Run the `az aks show` command and check the "autoUpgradeProfile" to determine what value the `nodeOsUpgradeChannel` is set to: ++```azurecli-interactive +az aks show --resource-group myResourceGroup --name myAKSCluster --query "autoUpgradeProfile" +``` * How can I monitor the status of node OS auto-upgrades? |
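The node OS channel itself can be changed the same way, with the `--node-os-upgrade-channel` parameter of `az aks update`; a sketch with placeholder names:

```azurecli-interactive
# Switch the node OS auto-upgrade channel (names are placeholders)
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-os-upgrade-channel SecurityPatch
```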
aks | Create K8s Cluster With Aks Application Gateway Ingress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-k8s-cluster-with-aks-application-gateway-ingress.md | - Title: Create an Application Gateway Ingress Controller (AGIC) in Azure Kubernetes Service (AKS) using Terraform -description: Learn how to create an Application Gateway Ingress Controller (AGIC) in Azure Kubernetes Service (AKS) using Terraform. - Previously updated : 09/05/2023--content_well_notification: - - AI-contribution ---# Create an Application Gateway Ingress Controller (AGIC) in Azure Kubernetes Service (AKS) using Terraform --[Azure Kubernetes Service (AKS)](/azure/aks/) manages your hosted Kubernetes environment. AKS makes it quick and easy to deploy and manage containerized applications without container orchestration expertise. AKS also eliminates the burden of taking applications offline for operational and maintenance tasks. With AKS, you can provision, upgrade, and scale resources on-demand. --[Azure Application Gateway](/azure/Application-Gateway/) provides Application Gateway Ingress Controller (AGIC). AGIC enables various features for Kubernetes services, including reverse proxy, configurable traffic routing, and TLS termination. Kubernetes Ingress resources help configure the ingress rules for individual Kubernetes services. An ingress controller allows a single IP address to route traffic to multiple services in a Kubernetes cluster. ---In this article, you learn how to: --> [!div class="checklist"] -> -> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet). -> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group). 
-> * Create a User Assigned Identity using [azurerm_user_assigned_identity](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/user_assigned_identity). -> * Create a virtual network (VNet) using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network). -> * Create a subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet). -> * Create a public IP using [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip). -> * Create a Application Gateway using [azurerm_application_gateway](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/application_gateway). -> * Create a Kubernetes cluster using [azurerm_kubernetes_cluster](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster). -> * Install and run a sample web app to test the availability of the Kubernetes cluster you create. --## Prerequisites --Before you get started, you need to install and configure the following tools: --* [Terraform](/azure/developer/terraform/quickstart-configure) -* [kubectl command-line tool](https://kubernetes.io/docs/tasks/tools/) -* [Helm package manager](https://helm.sh/docs/intro/install/) -* [GNU wget command-line tool](http://www.gnu.org/software/wget/) --## Implement the Terraform code --> [!NOTE] -> You can find the sample code from this article in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/TestRecord.md). 
-> -> For more information, see [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform). --1. Create a directory to test sample Terraform code and make it your working directory. -2. Create a file named `providers.tf` and copy in the following code: -- :::code language="Terraform" source="~/terraform_samples/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/providers.tf"::: --3. Create a file named `main.tf` and copy in the following code: -- :::code language="Terraform" source="~/terraform_samples/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/main.tf"::: --4. Create a file named `variables.tf` and copy in the following code: -- :::code language="Terraform" source="~/terraform_samples/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/variables.tf"::: --5. Create a file named `outputs.tf` and copy in the following code: -- :::code language="Terraform" source="~/terraform_samples/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/outputs.tf"::: --## Initialize Terraform ---## Create a Terraform execution plan ---## Apply a Terraform execution plan ---## Test the Kubernetes cluster --1. Get the Azure resource group name. -- ```console - resource_group_name=$(terraform output -raw resource_group_name) - ``` --2. Get the AKS cluster name. -- ```console - aks_cluster_name=$(terraform output -raw aks_cluster_name) - ``` --3. Get the Kubernetes configuration and access credentials for the cluster using the [`az aks get-credentials`](/cli/azure/aks#az-aks-get-credentials) command. -- ```azurecli-interactive - az aks get-credentials \ - --name $aks_cluster_name \ - --resource-group $resource_group_name \ - --overwrite-existing - ``` --4. Verify the health of the cluster using the [`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command. 
-- ```console - kubectl get nodes - ``` -- **Key points:** -- * The details of your worker nodes display with a status of **Ready**. -- :::image type="content" source="media/create-k8s-cluster-with-aks-application-gateway-ingress/kubectl-get-nodes.png" alt-text="Screenshot of kubectl showing the health of your Kubernetes cluster."::: --## Install Azure Active Directory Pod Identity --Azure Active Directory (Azure AD) Pod Identity provides token-based access to [Azure Resource Manager](/azure/azure-resource-manager/resource-group-overview). --[Azure AD Pod Identity](https://github.com/Azure/aad-pod-identity) adds the following components to your Kubernetes cluster: --* Kubernetes [CRDs](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/): `AzureIdentity`, `AzureAssignedIdentity`, `AzureIdentityBinding` -* [Managed Identity Controller (MIC)](https://github.com/Azure/aad-pod-identity#managed-identity-controllermic) component -* [Node Managed Identity (NMI)](https://github.com/Azure/aad-pod-identity#node-managed-identitynmi) component --To install Azure AD Pod Identity on your cluster, you need to know if RBAC is enabled or disabled. RBAC is disabled by default for this demo. Enabling or disabling RBAC is done in the `variables.tf` file via the `aks_enable_rbac` block's `default` value. --* If RBAC is **enabled**, run the following `kubectl create` command. -- ```console - kubectl create -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml - ``` --* If RBAC is **disabled**, run the following `kubectl create` command. -- ```console - kubectl create -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment.yaml - ``` --## Install the AGIC Helm repo --1. Add the AGIC Helm repo using the [`helm repo add`](https://helm.sh/docs/helm/helm_repo_add/) command. 
-- ```console - helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ - ``` --2. Update the AGIC Helm repo using the [`helm repo update`](https://helm.sh/docs/helm/helm_repo_update/) command. -- ```console - helm repo update - ``` --## Configure AGIC using Helm --1. Download `helm-config.yaml` to configure AGIC using the [`wget`](https://www.gnu.org/software/wget/) command. -- ```console - wget https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-helm-config.yaml -O helm-config.yaml - ``` --2. Open `helm-config.yaml` in a text editor. -3. Enter the following value for the top level keys: -- * `verbosityLevel`: Specify the *verbosity level* of the AGIC logging infrastructure. For more information about logging levels, see [logging Levels section of Application Gateway Kubernetes Ingress](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/463a87213bbc3106af6fce0f4023477216d2ad78/docs/troubleshooting.md). --4. Enter the following values for the `appgw` block: -- * `appgw.subscriptionId`: Specify the Azure subscription ID used to create the App Gateway. - * `appgw.resourceGroup`: Get the resource group name using the `echo "$(terraform output -raw resource_group_name)"` command. - * `appgw.name`: Get the Application Gateway name using the `echo "$(terraform output -raw application_gateway_name)"` command. - * `appgw.shared`: This boolean flag defaults to `false`. Set it to `true` if you need a [Shared App Gateway](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/072626cb4e37f7b7a1b0c4578c38d1eadc3e8701/docs/setup/install-existing.md#multi-cluster--shared-app-gateway). --5. Enter the following value for the `kubernetes` block: -- * `kubernetes.watchNamespace`: Specify the name space, which AGIC should watch. The namespace can be a single string value or a comma-separated list of namespaces. 
Leaving this variable commented out or setting it to a blank or an empty string results in the Ingress controller observing all accessible namespaces. --6. Enter the following values for the `armAuth` block: -- * If you specify `armAuth.type` as `aadPodIdentity`: - * `armAuth.identityResourceID`: Get the Identity resource ID by running `echo "$(terraform output -raw identity_resource_id)"`. - * `armAuth.identityClientId`: Get the Identity client ID by running `echo "$(terraform output -raw identity_client_id)"`. -- * If you specify `armAuth.type` as `servicePrincipal`, see [Using a service principal](/azure/application-gateway/ingress-controller-install-existing#using-a-service-principal). --## Install the AGIC package --1. Install the AGIC package using the [`helm install`](https://helm.sh/docs/helm/helm_install/) command. -- ```console - helm install -f helm-config.yaml application-gateway-kubernetes-ingress/ingress-azure --generate-name - ``` --2. Get the Azure resource group name. -- ```console - resource_group_name=$(terraform output -raw resource_group_name) - ``` --3. Get the identity name. -- ```console - identity_name=$(terraform output -raw identity_name) - ``` --4. Get the key values from your identity using the [`az identity show`](/cli/azure/identity#az-identity-show) command. -- ```azurecli-interactive - az identity show -g $resource_group_name -n $identity_name - ``` --## Install a sample app --1. Download the YAML file using the [`curl`](https://curl.se/) command. -- ```console - curl https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml -o aspnetapp.yaml - ``` --2. Apply the YAML file using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command. -- ```console - kubectl apply -f aspnetapp.yaml - ``` --## Test the sample app --1. Get the app IP address. -- ```console - echo "$(terraform output -raw application_ip_address)" - ``` --2. 
In a browser, navigate to the IP address from the output of the previous step. -- :::image type="content" source="media/create-k8s-cluster-with-aks-application-gateway-ingress/sample-app.png" alt-text="Screenshot of sample app."::: --## Clean up resources ---## Troubleshoot Terraform on Azure --[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot) --## Next steps --> [!div class="nextstepaction"] -> [Application Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/) |
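The `helm-config.yaml` keys described in the AGIC row above can be sketched as a minimal file. All values below are illustrative placeholders, not a verified copy of the upstream sample: fill the `appgw.*` and `armAuth.*` values from your own terraform outputs, and adjust `rbac.enabled` to match your cluster (the demo's `variables.tf` defaults RBAC to disabled).

```shell
# Sketch of a minimal AGIC helm-config.yaml. Key names follow the article's
# description of the sample config; all values are placeholders.
cat > helm-config.yaml <<'EOF'
verbosityLevel: 3
appgw:
  subscriptionId: <subscription-id>
  resourceGroup: <resource-group-name>
  name: <application-gateway-name>
  shared: false
kubernetes:
  watchNamespace: ""   # empty string = watch all accessible namespaces
armAuth:
  type: aadPodIdentity
  identityResourceID: <identity-resource-id>
  identityClientID: <identity-client-id>
rbac:
  enabled: false       # matches the demo default in variables.tf
EOF

# Quick sanity check that the required top-level keys are present.
grep -c -E '^(verbosityLevel|appgw|kubernetes|armAuth|rbac):' helm-config.yaml  # → 5
```

In practice you would replace the placeholders with the `terraform output -raw ...` values shown in the article before running `helm install -f helm-config.yaml`.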
aks | Ingress Basic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md | SOURCE_REGISTRY=registry.k8s.io CONTROLLER_IMAGE=ingress-nginx/controller CONTROLLER_TAG=v1.8.1 PATCH_IMAGE=ingress-nginx/kube-webhook-certgen-PATCH_TAG=v2023040 +PATCH_TAG=v20230407 DEFAULTBACKEND_IMAGE=defaultbackend-amd64 DEFAULTBACKEND_TAG=1.5 |
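The environment variables in the row above (including the corrected `PATCH_TAG`) are typically combined into fully qualified image references, for example when importing the images into a private registry. A minimal sketch, assuming the common `registry/image:tag` composition pattern; the variable values are taken from the row itself:

```shell
# Compose fully qualified ingress-nginx image references from the variables
# shown in the diff above. The composition pattern is an assumption based on
# how such variables are normally used; the values come from the article.
SOURCE_REGISTRY=registry.k8s.io
CONTROLLER_IMAGE=ingress-nginx/controller
CONTROLLER_TAG=v1.8.1
PATCH_IMAGE=ingress-nginx/kube-webhook-certgen
PATCH_TAG=v20230407
DEFAULTBACKEND_IMAGE=defaultbackend-amd64
DEFAULTBACKEND_TAG=1.5

echo "${SOURCE_REGISTRY}/${CONTROLLER_IMAGE}:${CONTROLLER_TAG}"
echo "${SOURCE_REGISTRY}/${PATCH_IMAGE}:${PATCH_TAG}"
echo "${SOURCE_REGISTRY}/${DEFAULTBACKEND_IMAGE}:${DEFAULTBACKEND_TAG}"
```

Note how the original bug joined `PATCH_IMAGE` and `PATCH_TAG` into one broken line and dropped the final `7` of the tag; composing the references makes such breakage immediately visible.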
aks | Quick Windows Container Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md | In this section, we create an AKS cluster with the following configuration: After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally, the cluster can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning. -## Add a Windows node pool +## Add a node pool ++### [Add a Windows node pool](#tab/add-windows-node-pool) By default, an AKS cluster is created with a node pool that can run Linux containers. You have to add another node pool that can run Windows Server containers alongside the Linux node pool. By default, an AKS cluster is created with a node pool that can run Linux contai --node-count 1 ``` -## Add a Windows Server 2019 or Windows Server 2022 node pool +### [Add a Windows Server 2019 node pool](#tab/add-windows-server-2019-node-pool) -AKS supports Windows Server 2019 and 2022 node pools. Windows Server 2022 is the default operating system for Kubernetes versions 1.25.0 and higher. Windows Server 2019 is the default OS for earlier versions. To use Windows Server 2019 or Windows Server 2022, you need to specify the following parameters: +Windows Server 2022 is the default operating system for Kubernetes versions 1.25.0 and higher. Windows Server 2019 is the default OS for earlier versions. To use Windows Server 2019, you need to specify the following parameters: - `os-type` set to `Windows`-- `os-sku` set to `Windows2019` *or* `Windows2022`+- `os-sku` set to `Windows2019` > [!NOTE]-> -> - Windows Server 2022 requires Kubernetes version 1.23.0 or higher. -> - Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes]. 
+> Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes]. ++- Add a Windows Server 2019 node pool using the `az aks nodepool add` command. ++ ```azurecli-interactive + az aks nodepool add \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --os-type Windows \ + --os-sku Windows2019 \ + --name npwin \ + --node-count 1 + ``` ++### [Add a Windows Server 2022 node pool](#tab/add-windows-server-2022-node-pool) ++Windows Server 2022 is the default operating system for Kubernetes versions 1.25.0 and higher. Windows Server 2019 is the default OS for earlier versions. To use Windows Server 2022, you need to specify the following parameters: ++- `os-type` set to `Windows` +- `os-sku` set to `Windows2022` ++> [!NOTE] +> Windows Server 2022 requires Kubernetes version 1.23.0 or higher. - Add a Windows Server 2022 node pool using the `az aks nodepool add` command. AKS supports Windows Server 2019 and 2022 node pools. Windows Server 2022 is the --node-count 1 ``` ++ ## Connect to the cluster You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. If you want to install `kubectl` locally, you can use the [`az aks install-cli`][az-aks-install-cli] command. |
aks | Quick Windows Container Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md | In this section, we create an AKS cluster with the following configuration: After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally, the cluster can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning. -## Add a Windows node pool +## Add a node pool ++### [Add a Windows node pool](#tab/add-windows-node-pool) By default, an AKS cluster is created with a node pool that can run Linux containers. You have to add another node pool that can run Windows Server containers alongside the Linux node pool. By default, an AKS cluster is created with a node pool that can run Linux contai New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -Name npwin ``` -## Add a Windows Server 2019 or Windows Server 2022 node pool +### [Add a Windows Server 2019 node pool](#tab/add-windows-server-2019-node-pool) -AKS supports Windows Server 2019 and 2022 node pools. Windows Server 2022 is the default operating system for Kubernetes versions 1.25.0 and higher. Windows Server 2019 is the default OS for earlier versions. To use Windows Server 2019 or Windows Server 2022, you need to specify the following parameters: +Windows Server 2022 is the default operating system for Kubernetes versions 1.25.0 and higher. Windows Server 2019 is the default OS for earlier versions. To use Windows Server 2019, you need to specify the following parameters: * `OsType` set to `Windows`-* `OsSKU` set to `Windows2019` *or* `Windows2022` +* `OsSKU` set to `Windows2019` > [!NOTE] > > * `OsSKU` requires PowerShell Az module version 9.2.0 or higher.-> * Windows Server 2022 requires Kubernetes version 1.23.0 or higher. 
> * Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes]. +* Add a Windows Server 2019 node pool using the [`New-AzAksNodePool`][new-azaksnodepool] cmdlet. ++ ```azurepowershell-interactive + New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -OsSKU Windows2019 -Name npwin + ``` ++### [Add a Windows Server 2022 node pool](#tab/add-windows-server-2022-node-pool) ++Windows Server 2022 is the default operating system for Kubernetes versions 1.25.0 and higher. Windows Server 2019 is the default OS for earlier versions. To use Windows Server 2022, you need to specify the following parameters: ++* `OsType` set to `Windows` +* `OsSKU` set to `Windows2022` ++> [!NOTE] +> +> * `OsSKU` requires PowerShell Az module version 9.2.0 or higher. +> * Windows Server 2022 requires Kubernetes version 1.23.0 or higher. + * Add a Windows Server 2022 node pool using the [`New-AzAksNodePool`][new-azaksnodepool] cmdlet. ```azurepowershell-interactive- New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -OsSKU Windows2019 Windows -Name npwin + New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -OsSKU Windows2022 -Name npwin ``` ++ ## Connect to the cluster You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. If you want to install `kubectl` locally, you can use the `Install-AzAksCliTool` cmdlet. |
aks | Open Ai Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-quickstart.md | Title: Deploy an application that uses OpenAI on Azure Kubernetes Service (AKS) description: Learn how to deploy an application that uses OpenAI on Azure Kubernetes Service (AKS). #Required; article description that is displayed in search results. Previously updated : 6/29/2023 Last updated : 09/18/2023 -# Deploy an application that uses OpenAI on Azure Kubernetes Service (AKS) +# Deploy an application that uses OpenAI on Azure Kubernetes Service (AKS) -In this article, you will learn how to deploy an application that uses Azure OpenAI or OpenAI on AKS. With OpenAI, you can easily adapt different AI models, such as content generation, summarization, semantic search, and natural language to code generation, for your specific tasks. You will start by deploying an AKS cluster in your Azure subscription. Then you will deploy your OpenAI service and the sample application. +In this article, you learn how to deploy an application that uses Azure OpenAI or OpenAI on AKS. With OpenAI, you can easily adapt different AI models, such as content generation, summarization, semantic search, and natural language to code generation, for your specific tasks. You start by deploying an AKS cluster in your Azure subscription. Then, you deploy your OpenAI service and the sample application. ++The sample cloud native application is representative of real-world implementations. The multi-container application is comprised of applications written in multiple languages and frameworks, including: -The sample cloud native application is representative of real-world implementations. 
The multi-container application is comprised of applications written in multiple languages and frameworks, including: - Golang with Gin - Rust with Actix-Web - JavaScript with Vue.js and Fastify The sample cloud native application is representative of real-world implementati These applications provide front ends for customers and store admins, REST APIs for sending data to RabbitMQ message queue and MongoDB database, and console apps to simulate traffic. > [!NOTE]-> We don't recommend running stateful containers, such as MongoDB and Rabbit MQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure CosmosDB or Azure Service Bus. +> We don't recommend running stateful containers, such as MongoDB and Rabbit MQ, without persistent storage for production. We use them here for simplicity, but we recommend using managed services, such as Azure CosmosDB or Azure Service Bus. -The codebase for [AKS Store Demo][aks-store-demo] can be found on GitHub. +To access the GitHub codebase for the sample application, see [AKS Store Demo][aks-store-demo]. ## Before you begin - You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- For this demo, you can either use Azure OpenAI service or OpenAI service. If you plan on using Azure OpenAI service, you need to enable it for your Azure subscription by filling out the [Request Access to Azure OpenAI Service][aoai-access] form.-- If you plan on using OpenAI, sign up on the [OpenAI website][open-ai-landing].+- For this demo, you can either use Azure OpenAI service or OpenAI service. + - If you plan on using Azure OpenAI service, you need to request access to enable it on your Azure subscription using the [Request access to Azure OpenAI Service form][aoai-access]. + - If you plan on using OpenAI, sign up on the [OpenAI website][open-ai-landing]. 
## Create a resource group-An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation. ++An [Azure resource group][azure-resource-group] is a logical group in which you deploy and manage Azure resources. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation. The following example creates a resource group named *myResourceGroup* in the *eastus* location. -* Create a resource group using the [`az group create`][az-group-create] command. +- Create a resource group using the [`az group create`][az-group-create] command. ```azurecli-interactive az group create --name myResourceGroup --location eastus ```- - The following output example resembles the successful creation of the resource group: - ```json + The following example output shows successful creation of the resource group: ++ ```output { "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup", "location": "eastus", The following example creates a resource group named *myResourceGroup* in the *e ``` ## Create an AKS cluster-The following example creates a cluster named *myAKSCluster* in the resource group *myResourceGroup* created earlier. -* Create an AKS cluster using the [`az aks create`][az-aks-create] command. +The following example creates a cluster named *myAKSCluster* in *myResourceGroup*. ++- Create an AKS cluster using the [`az aks create`][az-aks-create] command. 
```azurecli-interactive az aks create --resource-group myResourceGroup --name myAKSCluster --generate-ssh-keys The following example creates a cluster named *myAKSCluster* in the resource gro ## Connect to the cluster -To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. +To manage a Kubernetes cluster, you use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. 1. Install `kubectl` locally using the [`az aks install-cli`][az-aks-install-cli] command. - ```azurecli + ```azurecli-interactive az aks install-cli ```- Use `sudo az aks install-cli` if elevated permission is required on Linux-based system. ++ > [!NOTE] + > If your Linux-based system requires elevated permissions, you can use the `sudo az aks install-cli` command. 2. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command executes the following operations: - * Downloads credentials and configures the Kubernetes CLI to use them. - * Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using *--file* argument. + - Downloads credentials and configures the Kubernetes CLI to use them. + - Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using *--file* argument. ```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl 3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes. 
- ```bash + ```azurecli-interactive kubectl get nodes ``` - The following output example shows the single node created in the previous steps. Make sure the node status is *Ready*. + The following example output shows the nodes created in the previous steps. Make sure the node status is *Ready*. ```output NAME STATUS ROLES AGE VERSION To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl aks-nodepool1-31469198-vmss000002 Ready agent 3h29m v1.25.6 ``` -## Deploy the application +## Deploy the application :::image type="content" source="media/ai-walkthrough/aks-ai-demo-architecture.png" alt-text="Architecture diagram of AKS AI demo." lightbox="media/ai-walkthrough/aks-ai-demo-architecture.png"::: -For the [AKS Store application][aks-store-demo], this manifest includes the following Kubernetes deployments and -- Product Service: Shows product information-- Order Service: Places orders-- Makeline Service: Processes orders from the queue and completes the orders-- Store Front: Web application for customers to view products and place orders-- Store Admin: Web application for store employees to view orders in the queue and manage product information-- Virtual Customer: Simulates order creation on a scheduled basis-- Virtual Worker: Simulates order completion on a scheduled basis-- Mongo DB: NoSQL instance for persisted data-- Rabbit MQ: Message queue for an order queue+The [AKS Store application][aks-store-demo] manifest includes the following Kubernetes deployments and ++- **Product service**: Shows product information. +- **Order service**: Places orders. +- **Makeline service**: Processes orders from the queue and completes the orders. +- **Store front**: Web application for customers to view products and place orders. +- **Store admin**: Web application for store employees to view orders in the queue and manage product information. +- **Virtual customer**: Simulates order creation on a scheduled basis. 
+- **Virtual worker**: Simulates order completion on a scheduled basis. +- **Mongo DB**: NoSQL instance for persisted data. +- **Rabbit MQ**: Message queue for an order queue. > [!NOTE]-> We don't recommend running stateful containers, such as MongoDB and Rabbit MQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure CosmosDB or Azure Service Bus. +> We don't recommend running stateful containers, such as MongoDB and Rabbit MQ, without persistent storage for production. We use them here for simplicity, but we recommend using managed services, such as Azure CosmosDB or Azure Service Bus. -1. Review the [YAML manifest](https://github.com/Azure-Samples/aks-store-demo/blob/main/aks-store-all-in-one.yaml) for the application. You will see a series of deployments and services that make up the entire application. +1. Review the [YAML manifest](https://github.com/Azure-Samples/aks-store-demo/blob/main/aks-store-all-in-one.yaml) for the application. +2. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest. -1. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your yaml manifest. - ```bash + ```azurecli-interactive + kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/aks-store-demo/main/aks-store-all-in-one.yaml ``` - The following example resembles output showing successfully created deployments and services. + The following example output shows the successfully created deployments and ```output deployment.apps/mongodb created For the [AKS Store application][aks-store-demo], this manifest includes the foll ``` ## Deploy OpenAI+ You can either use Azure OpenAI or OpenAI and run your application on AKS. ### [Azure OpenAI](#tab/aoai)+ 1. Enable Azure OpenAI on your Azure subscription by filling out the [Request Access to Azure OpenAI Service][aoai-access] form.-1. 
In the Azure portal, create an Azure OpenAI instance. +1. In the Azure portal, create an Azure OpenAI instance. 1. Select the Azure OpenAI instance you created. 1. Select **Keys and Endpoints** to generate a key. 1. Select **Model Deployments** > **Managed Deployments** to open the [Azure OpenAI studio][aoai-studio].-1. Create a new deployment using the **gpt-35-turbo** model. +1. Create a new deployment using the **gpt-35-turbo** model. -For more information on how to create a deployment in Azure OpenAI, check out [Get started generating text using Azure OpenAI Service][aoai-get-started]. +For more information on how to create a deployment in Azure OpenAI, see [Get started generating text using Azure OpenAI Service][aoai-get-started]. ### [OpenAI](#tab/openai)-1. [Generate an OpenAI key][open-ai-new-key] by selecting **Create new secret key** and save the key. You will need this key in the [next step](#deploy-the-ai-service). -1. [Start a paid plan][openai-paid] to use OpenAI API. +1. [Generate an OpenAI key][open-ai-new-key] by selecting **Create new secret key** and save the key. You need this key in the [next step](#deploy-the-ai-service). +2. [Start a paid plan][openai-paid] to use OpenAI API. - + ## Deploy the AI service -Now that the application is deployed, you can deploy the Python-based microservice that uses OpenAI to automatically generate descriptions for new products being added to the store's catalog. +Now that the application is deployed, you can deploy the Python-based microservice that uses OpenAI to automatically generate descriptions for new products being added to the store's catalog. + ### [Azure OpenAI](#tab/aoai)-1. Create a file named `ai-service.yaml` and copy the following manifest into it. ++1. 
Create a file named `ai-service.yaml` and copy in the following manifest: + ```yaml apiVersion: apps/v1 kind: Deployment Now that the application is deployed, you can deploy the Python-based microservi resources: requests: cpu: 20m- memory: 46Mi + memory: 50Mi limits: cpu: 30m- memory: 50Mi + memory: 65Mi apiVersion: v1 kind: Service Now that the application is deployed, you can deploy the Python-based microservi selector: app: ai-service ```-1. Set the environment variable `USE_AZURE_OPENAI` to `"True"` -1. Get your Azure OpenAI Deployment name from [Azure OpenAI studio][aoai-studio], and fill in the `AZURE_OPENAI_DEPLOYMENT_NAME` value. -1. Get your Azure OpenAI endpoint and Azure OpenAI API key from the Azure portal by clicking on **Keys and Endpoint** in the left blade of the resource. Fill in your `AZURE_OPENAI_ENDPOINT` and `OPENAI_API_KEY` in the yaml accordingly. -1. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your yaml manifest. - ```bash ++2. Set the environment variable `USE_AZURE_OPENAI` to `"True"`. +3. Get your Azure OpenAI deployment name from [Azure OpenAI studio][aoai-studio] and fill in the `AZURE_OPENAI_DEPLOYMENT_NAME` value. +4. Get your Azure OpenAI endpoint and Azure OpenAI API key from the Azure portal by selecting **Keys and Endpoint** in the left blade of the resource. Update the `AZURE_OPENAI_ENDPOINT` and `OPENAI_API_KEY` in the YAML accordingly. +5. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest. ++ ```azurecli-interactive kubectl apply -f ai-service.yaml ```- The following example resembles output showing successfully created deployments and services. ++ The following example output shows the successfully created deployments and + ```output deployment.apps/ai-service created service/ai-service created ``` ### [OpenAI](#tab/openai)-1. Create a file named `ai-service.yaml` and copy the following manifest into it. 
++1. Create a file named `ai-service.yaml` and copy in the following manifest: + ```yaml apiVersion: apps/v1 kind: Deployment Now that the application is deployed, you can deploy the Python-based microservi memory: 46Mi limits: cpu: 30m- memory: 50Mi + memory: 65Mi apiVersion: v1 kind: Service Now that the application is deployed, you can deploy the Python-based microservi selector: app: ai-service ```-1. Set the environment variable `USE_AZURE_OPENAI` to `"False"` -1. Set the environment variable `OPENAI_API_KEY` by pasting in the OpenAI key you generated in the [last step](#deploy-openai). -1. [Find your OpenAI organization ID][open-ai-org-id], copy the value, and set the `OPENAI_ORG_ID` environment variable. -1. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your yaml manifest. - ```bash ++2. Set the environment variable `USE_AZURE_OPENAI` to `"False"`. +3. Set the environment variable `OPENAI_API_KEY` by pasting in the OpenAI key you generated in the [last step](#deploy-openai). +4. [Find your OpenAI organization ID][open-ai-org-id], copy the value, and set the `OPENAI_ORG_ID` environment variable. +5. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest. ++ ```azurecli-interactive kubectl apply -f ai-service.yaml ```- The following example resembles output showing successfully created deployments and services. ++ The following example output shows the successfully created deployments and + ```output deployment.apps/ai-service created service/ai-service created ``` - > [!NOTE] > Directly adding sensitive information, such as API keys, to your Kubernetes manifest files isn't secure and may accidentally get committed to code repositories. We added it here for simplicity. For production workloads, use [Managed Identity][managed-identity] to authenticate to Azure OpenAI service instead or store your secrets in [Azure Key Vault][key-vault]. 
## Test the application-1. See the status of the deployed pods using the [kubectl get pods][kubectl-get] command. - ```bash +1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. ++ ```azurecli-interactive kubectl get pods ```- Make sure all the pods are *Running* before continuing to the next step. ++ Make sure all the pods are *Running* before continuing to the next step. + ```output NAME READY STATUS RESTARTS AGE makeline-service-7db94dc7d4-8g28l 1/1 Running 0 99s Now that the application is deployed, you can deploy the Python-based microservi virtual-worker-6d77fff4b5-7g7rj 1/1 Running 0 99s ``` -1. To get the IP of the store admin web application and store front web application, use the `kubectl get service` command. - - ```bash +2. Get the IP of the store admin web application and store front web application using the `kubectl get service` command. ++ ```azurecli-interactive kubectl get service store-admin ```- The application exposes the Store Admin site to the internet via a public load balancer provisioned by the Kubernetes service. This process can take a few minutes to complete. **EXTERNAL IP** initially shows *pending*, until the service comes up and shows the IP address. ++ The application exposes the Store Admin site to the internet via a public load balancer provisioned by the Kubernetes service. This process can take a few minutes to complete. **EXTERNAL IP** initially shows *pending* until the service comes up and shows the IP address. + ```output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE store-admin LoadBalancer 10.0.142.228 40.64.86.161 80:32494/TCP 50m ```- Repeat the same step for the service named store-front. - -1. Open a web browser and browse to the external IP address of your service. In the example shown here, open 40.64.86.161 to see Store Admin in the browser. Repeat the same step for Store Front. -1. In store admin, click on the products tab, then select **Add Products**. -1. 
When the ai-service is running successfully, you should see the Ask OpenAI button next to the description field. Fill in the name, price, and keywords, then click Ask OpenAI to generate a product description. Then click save product. See the picture for an example of adding a new product. ++ Repeat the same step for the service named `store-front`. ++3. Open a web browser and browse to the external IP address of your service. In the example shown here, open *40.64.86.161* to see Store Admin in the browser. Repeat the same step for Store Front. +4. In store admin, select the products tab, then select **Add Products**. +5. When the `ai-service` is running successfully, you should see the Ask OpenAI button next to the description field. Fill in the name, price, and keywords, then generate a product description by selecting **Ask OpenAI** > **Save product**. :::image type="content" source="media/ai-walkthrough/ai-generate-description.png" alt-text="Screenshot of how to use openAI to generate a product description.":::- -1. You can now see the new product you created on Store Admin used by sellers. In the picture, you can see Jungle Monkey Chew Toy is added. ++6. You can now see the new product you created on Store Admin used by sellers. In the picture, you can see Jungle Monkey Chew Toy is added. :::image type="content" source="media/ai-walkthrough/new-product-store-admin.png" alt-text="Screenshot viewing the new product in the store admin page.":::- -1. You can also see the new product you created on Store Front used by buyers. In the picture, you can see Jungle Monkey Chew Toy is added. Remember to get the IP address of store front using the [`kubectl get service`][kubectl-get] command. 
:::image type="content" source="media/ai-walkthrough/new-product-store-front.png" alt-text="Screenshot viewing the new product in the store front page.":::

## Next steps

Now that you added OpenAI functionality to an AKS application, you can [Secure access to Azure OpenAI from Azure Kubernetes Service (AKS)](./open-ai-secure-access-quickstart.md).

To learn more about generative AI use cases, see the following resources:

- [Azure OpenAI Service Documentation][aoai]
- [Introduction to Azure OpenAI Services][learn-aoai]
- [OpenAI Platform][openai-platform]
- [Project Miyagi - Envisioning sample for Copilot stack][miyagi]

<!-- Links external -->
[aks-store-demo]: https://github.com/Azure-Samples/aks-store-demo
[kubectl]: https://kubernetes.io/docs/reference/kubectl/
[kubeconfig-file]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[aoai-studio]: https://oai.azure.com/portal/
[open-ai-landing]: https://openai.com/
[open-ai-new-key]: https://platform.openai.com/account/api-keys
[open-ai-org-id]: https://platform.openai.com/account/org-settings
[aoai-access]: https://aka.ms/oai/access
[openai-paid]: https://platform.openai.com/account/billing/overview
[openai-platform]: https://platform.openai.com/
[miyagi]: https://github.com/Azure-Samples/miyagi

<!-- Links internal -->
[azure-resource-group]: ../azure-resource-manager/management/overview.md
[az-group-create]: /cli/azure/group#az-group-create
[az-aks-create]: /cli/azure/aks#az-aks-create
[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
[aoai-get-started]: ../ai-services/openai/quickstart.md
[managed-identity]: /azure/ai-services/openai/how-to/managed-identity#authorize-access-to-managed-identities
[key-vault]: csi-secrets-store-driver.md
[aoai]: ../ai-services/openai/index.yml
[learn-aoai]: /training/modules/explore-azure-openai |
aks | Open Ai Secure Access Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-secure-access-quickstart.md | Title: Secure access to Azure OpenAI from Azure Kubernetes Service (AKS) description: Learn how to secure access to Azure OpenAI from Azure Kubernetes Service (AKS). Last updated : 09/18/2023

# Secure access to Azure OpenAI from Azure Kubernetes Service (AKS)

In this article, you learn how to secure access to Azure OpenAI from Azure Kubernetes Service (AKS) using Azure Active Directory (Azure AD) Workload Identity. You learn how to:

* Enable workload identities on an AKS cluster.
* Create an Azure user-assigned managed identity.
* Create an Azure AD federated credential.
* Enable workload identity on a Kubernetes Pod.

> [!NOTE]
> We recommend using Azure AD Workload Identity and managed identities on AKS for Azure OpenAI access because it enables a secure, passwordless authentication process for accessing Azure resources.

## Before you begin

* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* This article builds on [Deploy an application that uses OpenAI on AKS](./open-ai-quickstart.md). You should complete that article before you begin this one.
* You need a custom domain name enabled on your Azure OpenAI account to use for Azure AD authorization. For more information, see [Custom subdomain names for Azure AI services](../ai-services/cognitive-services-custom-subdomains.md).

## Enable Azure AD Workload Identity on an AKS cluster

The Azure AD Workload Identity and OIDC Issuer Endpoint features aren't enabled on AKS by default. You must enable them on your AKS cluster before you can use them.

1. Set the resource group name and AKS cluster resource group name variables.
    ```azurecli-interactive
    # Set the resource group variable
    RG_NAME=myResourceGroup

    # Set the AKS cluster resource group variable
    AKS_NAME=$(az resource list --resource-group $RG_NAME --resource-type Microsoft.ContainerService/managedClusters --query "[0].name" -o tsv)
    ```

2. Enable the Azure AD Workload Identity and OIDC Issuer Endpoint features on your existing AKS cluster using the [`az aks update`][az-aks-update] command.

    ```azurecli-interactive
    az aks update \
        --resource-group $RG_NAME \
        --name $AKS_NAME \
        --enable-workload-identity \
        --enable-oidc-issuer
    ```

3. Get the AKS OIDC Issuer Endpoint URL using the [`az aks show`][az-aks-show] command.

    ```azurecli-interactive
    AKS_OIDC_ISSUER=$(az aks show --resource-group $RG_NAME --name $AKS_NAME --query "oidcIssuerProfile.issuerUrl" -o tsv)
    ```

## Create an Azure user-assigned managed identity

1. Create an Azure user-assigned managed identity using the [`az identity create`][az-identity-create] command.

    ```azurecli-interactive
    # Set the managed identity name variable
    MANAGED_IDENTITY_NAME=myIdentity

    # Create the managed identity
    az identity create \
        --resource-group $RG_NAME \
        --name $MANAGED_IDENTITY_NAME
    ```

2. Get the managed identity client ID and object ID using the [`az identity show`][az-identity-show] command.

    ```azurecli-interactive
    # Get the managed identity client ID
    MANAGED_IDENTITY_CLIENT_ID=$(az identity show --resource-group $RG_NAME --name $MANAGED_IDENTITY_NAME --query clientId -o tsv)

    # Get the managed identity object ID
    MANAGED_IDENTITY_OBJECT_ID=$(az identity show --resource-group $RG_NAME --name $MANAGED_IDENTITY_NAME --query principalId -o tsv)
    ```

3. Get the Azure OpenAI resource ID using the [`az resource list`][az-resource-list] command.

    ```azurecli-interactive
    AOAI_RESOURCE_ID=$(az resource list --resource-group $RG_NAME --resource-type Microsoft.CognitiveServices/accounts --query "[0].id" -o tsv)
    ```

4. Grant the managed identity access to the Azure OpenAI resource using the [`az role assignment create`][az-role-assignment-create] command.

    ```azurecli-interactive
    az role assignment create \
        --role "Cognitive Services OpenAI User" \
        --assignee-object-id $MANAGED_IDENTITY_OBJECT_ID \
        --assignee-principal-type ServicePrincipal \
        --scope $AOAI_RESOURCE_ID
    ```

## Create an Azure AD federated credential

1. Set the federated credential, namespace, and service account variables.

    ```azurecli-interactive
    # Set the federated credential name variable
    FEDERATED_CREDENTIAL_NAME=myFederatedCredential

    # Set the namespace variable
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account variable
    SERVICE_ACCOUNT_NAME=ai-service-account
    ```

2. Create the federated credential using the [`az identity federated-credential create`][az-identity-federated-credential-create] command.

    ```azurecli-interactive
    az identity federated-credential create \
        --name ${FEDERATED_CREDENTIAL_NAME} \
        --resource-group ${RG_NAME} \
        --identity-name ${MANAGED_IDENTITY_NAME} \
        --issuer ${AKS_OIDC_ISSUER} \
        --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}
    ```

## Use Azure AD Workload Identity on AKS

To use Azure AD Workload Identity on AKS, you need to make a few changes to the `ai-service` deployment manifest.

### Create a ServiceAccount

1. Get the kubeconfig for your cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.

    ```azurecli-interactive
    az aks get-credentials \
        --resource-group $RG_NAME \
        --name $AKS_NAME
    ```

2. Create a Kubernetes ServiceAccount using the [`kubectl apply`][kubectl-apply] command.
    ```azurecli-interactive
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      annotations:
        azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
      name: ${SERVICE_ACCOUNT_NAME}
      namespace: ${SERVICE_ACCOUNT_NAMESPACE}
    EOF
    ```

### Enable Azure AD Workload Identity on the Pod

1. Set the Azure OpenAI resource name, endpoint, and deployment name variables.

    ```azurecli-interactive
    # Get the Azure OpenAI resource name
    AOAI_NAME=$(az resource list \
        --resource-group $RG_NAME \
        --resource-type Microsoft.CognitiveServices/accounts \
        --query "[0].name" -o tsv)

    # Get the Azure OpenAI endpoint
    AOAI_ENDPOINT=$(az cognitiveservices account show \
        --resource-group $RG_NAME \
        --name $AOAI_NAME \
        --query properties.endpoint -o tsv)

    # Get the Azure OpenAI deployment name
    AOAI_DEPLOYMENT_NAME=$(az cognitiveservices account deployment list \
        --resource-group $RG_NAME \
        --name $AOAI_NAME \
        --query "[0].name" -o tsv)
    ```

2. Redeploy the `ai-service` with the ServiceAccount and the `azure.workload.identity/use` annotation set to `true` using the [`kubectl apply`][kubectl-apply] command.
    ```azurecli-interactive
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ai-service
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ai-service
      template:
        metadata:
          labels:
            app: ai-service
            azure.workload.identity/use: "true"
        spec:
          serviceAccountName: $SERVICE_ACCOUNT_NAME
          nodeSelector:
            "kubernetes.io/os": linux
          containers:
          - name: ai-service
            image: ghcr.io/azure-samples/aks-store-demo/ai-service:latest
            ports:
            - containerPort: 5001
            env:
            - name: USE_AZURE_OPENAI
              value: "True"
            - name: USE_AZURE_AD
              value: "True"
            - name: AZURE_OPENAI_DEPLOYMENT_NAME
              value: "${AOAI_DEPLOYMENT_NAME}"
            - name: AZURE_OPENAI_ENDPOINT
              value: "${AOAI_ENDPOINT}"
            resources:
              requests:
                cpu: 20m
                memory: 50Mi
              limits:
                cpu: 30m
                memory: 65Mi
    EOF
    ```

### Test the application

1. Verify the new pod is running using the [`kubectl get pods`][kubectl-get-pods] command.

    ```azurecli-interactive
    kubectl get pods --selector app=ai-service -w
    ```

2. Get the pod logs using the [`kubectl logs`][kubectl-logs] command. It may take a few minutes for the pod to initialize.

    ```azurecli-interactive
    kubectl logs --selector app=ai-service -f
    ```

    The following example output shows the app has initialized and is ready to accept requests. The first line suggests the code is missing configuration variables. However, the Azure Identity SDK handles this process and sets the `AZURE_CLIENT_ID` and `AZURE_TENANT_ID` variables.

    ```output
    Incomplete environment configuration. These variables are set: AZURE_CLIENT_ID, AZURE_TENANT_ID
    INFO: Started server process [1]
    INFO: Waiting for application startup.
    INFO: Application startup complete.
    INFO: Uvicorn running on http://0.0.0.0:5001 (Press CTRL+C to quit)
    ```

3. Get the pod environment variables using the [`kubectl describe pod`][kubectl-describe-pod] command.
    The output demonstrates that the Azure OpenAI API key no longer exists in the Pod's environment variables.

    ```azurecli-interactive
    kubectl describe pod --selector app=ai-service
    ```

4. Open a new terminal and get the IP of the store admin service using the following `echo` command.

    ```azurecli-interactive
    echo "http://$(kubectl get svc/store-admin -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    ```

5. Open a web browser and navigate to the IP address from the previous step.
6. Select **Products**. You should be able to add a new product and get a description for it using Azure OpenAI.

## Next steps

In this article, you learned how to secure access to Azure OpenAI from Azure Kubernetes Service (AKS) using Azure Active Directory (Azure AD) Workload Identity.

For more information on Azure AD Workload Identity, see [Azure AD Workload Identity](./workload-identity-overview.md).

<!-- Links internal -->
[az-aks-update]: /cli/azure/aks#az_aks_update
[az-aks-show]: /cli/azure/aks#az_aks_show
[az-identity-create]: /cli/azure/identity#az_identity_create
[az-identity-show]: /cli/azure/identity#az_identity_show
[az-resource-list]: /cli/azure/resource#az_resource_list
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az_identity_federated_credential_create
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials

<!-- Links external -->
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[kubectl-get-pods]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
[kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
[kubectl-describe-pod]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe |
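The federated credential created in the quickstart above must match the Kubernetes service account exactly, using the `system:serviceaccount:<namespace>:<name>` subject format passed to `--subject`. As a minimal illustration (the helper name is an assumption for this sketch, not part of any Azure SDK), the subject string can be built like this:

```python
def service_account_subject(namespace: str, name: str) -> str:
    """Build the OIDC subject a federated credential must match.

    Mirrors the --subject value used with
    `az identity federated-credential create` in the steps above:
    system:serviceaccount:<namespace>:<service-account-name>.
    """
    if not namespace or not name:
        raise ValueError("namespace and service account name are required")
    return f"system:serviceaccount:{namespace}:{name}"

# Matches the quickstart's default namespace and service account name
print(service_account_subject("default", "ai-service-account"))
# system:serviceaccount:default:ai-service-account
```

If the namespace or service account name in the Deployment later changes, the federated credential's subject must be recreated to match, so keeping the construction in one place avoids drift.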
aks | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md | Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | View the upcoming version releases on the AKS Kubernetes release calendar. To se For the past release history, see [Kubernetes history](https://github.com/kubernetes/kubernetes/releases).

| K8s version | Upstream release | AKS preview | AKS GA | End of life | Platform support |
|--|--|--|--|--|--|
| 1.24 | Apr 2022 | May 2022 | Jul 2022 | Jul 2023 | Until 1.28 GA |
| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 | Until 1.29 GA |
| 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA |
| 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2026 | Until 1.31 GA |
| 1.28 | Aug 2023 | Sep 2023 | Oct 2023 | | Until 1.32 GA |

*\* Indicates the version is designated for Long Term Support*

### AKS Kubernetes release schedule Gantt chart

If you prefer to see this information visually, here's a Gantt chart with all the current releases displayed:

## AKS Components Breaking Changes by Version

When a new minor version is introduced, the oldest supported minor version and p When AKS releases 1.18.\*, all the 1.15.\* versions go out of support 30 days later.

AKS also supports a maximum of two **patch** releases of a given minor version.
For example, given the following supported versions: New Supported Version List

Platform support policy is a reduced support plan for certain unsupported Kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components aren't supported.

Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.25 is considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then auto-upgrade to v1.26. If you're running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter the platform support policy.

AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an open source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since there are no more patches being produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't support anything from relying on Kubernetes upstream. |
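The n-3 window described in this entry can be sketched as simple arithmetic. The following Python snippet is illustrative only (the helper name and the integer-minor representation, e.g. 1.26 → 26, are assumptions of this sketch, not AKS tooling), classifying a minor version relative to the latest GA version:

```python
def support_tier(version_minor: int, latest_ga_minor: int) -> str:
    """Classify an AKS minor version per the policy described above:
    n, n-1, and n-2 get full community support; n-3 falls under
    platform support; anything older is out of support.
    Illustrative sketch only; assumes all versions share the 1.x major.
    """
    delta = latest_ga_minor - version_minor
    if delta < 0:
        return "not yet GA"
    if delta <= 2:
        return "community support"
    if delta == 3:
        return "platform support"
    return "out of support"

# With 1.28 as the latest GA version, 1.25 is n-3 (platform support):
print(support_tier(25, 28))  # platform support
print(support_tier(26, 28))  # community support
```

This matches the worked example in the text: v1.25 is platform support while v1.28 is the latest GA version, and drops out once v1.29 goes GA (delta becomes 4).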
aks | Upgrade Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md | All of the following criteria must be met in order for the stop to occur:

* The upgrade operation is a Kubernetes minor version change for the cluster control plane.
* The Kubernetes version you're upgrading to is 1.26 or later.
* If performed via REST, the upgrade operation uses a preview API version of `2023-01-02-preview` or later.
* If performed via Azure CLI, the `aks-preview` CLI extension 0.5.154 or later must be installed.
* The last seen usage of deprecated APIs for the targeted version you're upgrading to must occur within 12 hours before the upgrade operation. AKS records usage hourly, so any usage of deprecated APIs within one hour isn't guaranteed to appear in the detection.
* Even API usage that is actually watching for deprecated resources is covered here. Look at the [Verb][k8s-api] for the distinction.

If you attempt an upgrade and all of the previous criteria are met, you receive

```output
Bad Request({
  "code": "ValidationError",
  "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set enable-force-upgrade in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n",
  "subcode": "UpgradeBlockedOnDeprecatedAPIUsage"
})
```

You can also check past API usage by enabling [Container Insights][container-ins

> [!NOTE]
> This method requires you to use the `aks-preview` Azure CLI extension version 0.5.134 or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version may not work long term. We recommend removing them as soon as possible after the upgrade completes.

Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command, specifying `enable-force-upgrade`, and setting the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.

```azurecli-interactive
az aks update --name myAKSCluster --resource-group myResourceGroup --enable-force-upgrade --upgrade-override-until 2023-10-01T13:00:00Z
```

> [!NOTE] |
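The entry above notes that when `--upgrade-override-until` is omitted, the bypass window defaults to three days from the current time. A small sketch of that arithmetic (the helper name and exact default computation are assumptions for illustration; only the three-day rule comes from the docs):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def default_override_until(now: Optional[datetime] = None) -> str:
    """End of the default validation-bypass window: three days from
    now, formatted like the timestamp passed to --upgrade-override-until."""
    now = now or datetime.now(timezone.utc)
    return (now + timedelta(days=3)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Starting from the example command's timestamp:
print(default_override_until(datetime(2023, 10, 1, 13, 0, 0, tzinfo=timezone.utc)))
# 2023-10-04T13:00:00Z
```

Since the CLI rejects dates in the past, computing the value this way (rather than hard-coding) avoids a stale `--upgrade-override-until` in automation scripts.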
aks | Use Azure Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-linux.md | Title: Use the Azure Linux container host on Azure Kubernetes Service (AKS) description: Learn how to use the Azure Linux container host on Azure Kubernetes Service (AKS) Previously updated : 04/19/2023 Last updated : 09/18/2023

# Use the Azure Linux container host for Azure Kubernetes Service (AKS)

az aks nodepool upgrade \ The Azure Linux container host is available for use in the same regions as AKS.

## Next steps

To learn more about Azure Linux, see the [Azure Linux documentation][azurelinuxdocumentation].
<!-- LINKS - Internal -->
[azurelinux-doc]: https://microsoft.github.io/CBL-Mariner/docs/#cbl-mariner-linux
[azurelinux-capabilities]: ../azure-linux/intro-azure-linux.md#azure-linux-container-host-key-benefits
[azurelinux-cluster-config]: cluster-configuration.md#azure-linux-container-host-for-aks
[azurelinux-node-pool]: create-node-pools.md#add-an-azure-linux-node-pool
[ubuntu-to-azurelinux]: create-node-pools.md#migrate-ubuntu-nodes-to-azure-linux-nodes
[auto-upgrade-aks]: auto-upgrade-cluster.md
[kured]: node-updates-kured.md
[azurelinuxdocumentation]: ../azure-linux/intro-azure-linux.md |
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
app-service | How To Create From Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-create-from-template.md | In addition to the core properties, there are other configuration options that y

* *networkingConfiguration -> inboundIpAddressOverride*: Optional. Allows you to create an App Service Environment with your own Azure Public IP address (specify the resource ID) or define a static IP for ILB deployments. This setting can't be changed after the App Service Environment is created.
* *customDnsSuffixConfiguration*: Optional. Allows you to specify a custom domain suffix for the App Service Environment. Requires a valid certificate from a Key Vault and access using a Managed Identity. For more information about the specific parameters, see [configuration custom domain suffix](./how-to-custom-domain-suffix.md).

> [!NOTE]
> The properties `dnsSuffix`, `multiSize`, `frontEndScaleFactor`, `userWhitelistedIpRanges`, and `ipSslAddressCount` are not supported when creating App Service Environment v3.

### Deploying the App Service Environment

After creating the ARM template, for example named *azuredeploy.json*, and optionally a parameters file, for example named *azuredeploy.parameters.json*, you can create the App Service Environment by using the Azure CLI code snippet. Change the file paths to match the Resource Manager template-file locations on your machine. Remember to supply your own value for the resource group name: |
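The note in the entry above lists properties that are no longer supported when creating App Service Environment v3. A hedged pre-deployment sketch (the helper name and the example `properties` fragment are assumptions for illustration) that scans a template resource's `properties` block for them:

```python
# Properties the note above calls out as unsupported for App Service Environment v3
UNSUPPORTED_ASEV3_PROPERTIES = {
    "dnsSuffix",
    "multiSize",
    "frontEndScaleFactor",
    "userWhitelistedIpRanges",
    "ipSslAddressCount",
}

def find_unsupported_properties(ase_properties: dict) -> list:
    """Return any v2-era properties present in an ASEv3 resource's
    `properties` block so they can be removed before deployment."""
    return sorted(UNSUPPORTED_ASEV3_PROPERTIES & ase_properties.keys())

# Example: a hypothetical fragment carried over from an older ASE template
props = {"internalLoadBalancingMode": "Web, Publishing", "multiSize": "Standard_D2d_v4"}
print(find_unsupported_properties(props))  # ['multiSize']
```

Running a check like this against *azuredeploy.json* before `az deployment` catches leftover v2 settings early instead of at deployment time.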
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md | Title: App Service Environment overview description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 08/30/2023 Last updated : 09/19/2023

App Service Environment v3 is available in the following regions:

| France South | ✅ | | ✅ |
| Germany North | ✅ | | ✅ |
| Germany West Central | ✅ | ✅ | ✅ |
| Italy North | ✅ | ✅** | |
| Japan East | ✅ | ✅ | ✅ |
| Japan West | ✅ | | ✅ |
| Jio India West | | | ✅ |
| West US 2 | ✅ | ✅ | ✅ |
| West US 3 | ✅ | ✅ | ✅ |

\* Limited availability and no support for dedicated host deployments.
\** To learn more about availability zones and available services support in these regions, contact your Microsoft sales or customer representative.

### Azure Government: |
app-service | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md | Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
application-gateway | How To Backend Mtls Gateway Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-backend-mtls-gateway-api.md | This document helps set up an example application that uses the following resour

- Create an [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/httproute/) resource that references a backend service.
- Create a [BackendTLSPolicy](api-specification-kubernetes.md#alb.networking.azure.io/v1.BackendTLSPolicy) resource that has a client and CA certificate for the backend service referenced in the HTTPRoute.

## Background

Mutual Transport Layer Security (MTLS) is a process that relies on certificate authentication to create an encrypted TLS connection. You can use MTLS to secure the connection from a client device to the Application Gateway for Containers backend target. If a client certificate is revoked or invalid, the connection is not secure.

See the following figure:

[ ![A diagram showing the Application Gateway for Containers backend MTLS process.](./media/how-to-backend-mtls-gateway-api/backend-mtls.png) ](./media/how-to-backend-mtls-gateway-api/backend-mtls.png#lightbox)

## Prerequisites

> [!IMPORTANT] |
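The MTLS handshake described in this entry has two halves: the client verifies the server against a CA, and the server demands a client certificate in return. As a generic illustration of the client side using Python's standard `ssl` module (the helper name and file paths are assumptions of this sketch, unrelated to the Application Gateway configuration itself):

```python
import ssl

def build_mtls_client_context(ca_file=None, cert_file=None, key_file=None):
    """Build an SSL context for a mutual-TLS connection: the CA bundle
    verifies the peer, and the client cert/key pair authenticates this
    side of the connection. File paths here are hypothetical."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    if cert_file and key_file:
        # Without a valid, unrevoked client certificate the peer
        # rejects the handshake, as described above.
        context.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return context

# Peer certificate verification is mandatory by default:
context = build_mtls_client_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

The BackendTLSPolicy resource plays the analogous role declaratively: it supplies the client certificate and CA that Application Gateway for Containers presents to and verifies against the backend.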
attestation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md | Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
automation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md | Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
automation | Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/managed-identity.md | CategoryInfo : CloseError: (:) [Connect-AzureRmAccount], HttpRequestException

The most common cause for this is that you didn't enable the identity before trying to use it. To verify this, run the following PowerShell runbook in the affected Automation account.

```powershell
$resource = "?resource=https://management.azure.com/"
$url = $env:IDENTITY_ENDPOINT + $resource
$Headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$Headers.Add("X-IDENTITY-HEADER", $env:IDENTITY_HEADER)
```

If this article doesn't resolve your issue, try one of the following channels fo

* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).
* Connect with [@AzureSupport](https://twitter.com/azuresupport). This is the official Microsoft Azure account for connecting the Azure community to the right resources: answers, support, and experts.
* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**. |
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 # |
azure-arc | Deliver Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md | The status of the selected machines changes to **Enabled**.

:::image type="content" source="media/deliver-extended-security-updates/extended-security-updates-enabled-resources.png" alt-text="Screenshot of eligible resources tab showing status of enabled for previously selected servers." lightbox="media/deliver-extended-security-updates/extended-security-updates-enabled-resources.png":::

If any problems occur during the enablement process, see [Troubleshoot delivery of Extended Security Updates for Windows Server 2012](troubleshoot-extended-security-updates.md) for assistance.

## Additional scenarios

There are several scenarios in which you may be eligible to receive Extended Security Updates patches at no additional cost. Three of these scenarios supported by Azure Arc include the following:

- Dev/Test
- Visual Studio
- Disaster Recovery

To qualify for these scenarios, you must have:

1. Provisioned and activated a WS2012 Arc ESU License intended to be linked to regular Azure Arc-enabled servers running in production environments (that is, normally billed ESU scenarios)

1. Onboarded your Windows Server 2012 and Windows Server 2012 R2 machines to Azure Arc-enabled servers for the purpose of Dev/Test, association with Visual Studio subscriptions, or Disaster Recovery

To enroll Azure Arc-enabled servers eligible for ESUs at no additional cost, follow these steps to tag and link:

1. Tag both the WS2012 Arc ESU License and the Azure Arc-enabled server with one of the following three name-value pairs, corresponding to the appropriate exception:

    1. Name: "ESU Usage"; Value: "WS2012 DEV TEST"
    1. Name: "ESU Usage"; Value: "WS2012 VISUAL STUDIO"
    1. Name: "ESU Usage"; Value: "WS2012 DISASTER RECOVERY"

    In the case that you're using the ESU License for multiple exception scenarios, mark the license with the tag: Name: "ESU Usage"; Value: "WS2012 MULTIPURPOSE"

1. Link the tagged license to your tagged Azure Arc-enabled Windows Server 2012 and Windows Server 2012 R2 machines.

    This linking will not trigger a compliance violation or enforcement block, allowing you to extend the application of a license beyond its provisioned cores.

> [!NOTE]
> The usage of these exception scenarios will be available for auditing purposes, and abuse of these exceptions may result in recusal of WS2012 ESU privileges. |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
azure-cache-for-redis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md | Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
azure-functions | Durable Functions Orchestrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-orchestrations.md | Each *instance* of an orchestration has an instance identifier (also known as an The following are some rules about instance IDs: -* Instance IDs must be between 1 and 256 characters. +* Instance IDs must be between 1 and 100 characters. * Instance IDs must not start with `@`. * Instance IDs must not contain `/`, `\`, `#`, or `?` characters. * Instance IDs must not contain control characters. > [!NOTE] > It is generally recommended to use autogenerated instance IDs whenever possible. User-generated instance IDs are intended for scenarios where there is a one-to-one mapping between an orchestration instance and some external application-specific entity, like a purchase order or a document.+> +> Also, the actual enforcement of character restriction rules may vary depending on the [storage provider](durable-functions-storage-providers.md) being used by the app. To ensure correct behavior and compatibility, it's strongly recommended that you follow the instance ID rules listed previously. An orchestration's instance ID is a required parameter for most [instance management operations](durable-functions-instance-management.md). They are also important for diagnostics, such as [searching through orchestration tracking data](durable-functions-diagnostics.md#application-insights) in Application Insights for troubleshooting or analytics purposes. For this reason, it is recommended to save generated instance IDs to some external location (for example, a database or in application logs) where they can be easily referenced later. |
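The instance ID rules in the row above can be collected into a small validation helper. This is a hypothetical sketch for illustration — `isValidInstanceId` is not part of the Durable Functions SDK, and as the article notes, actual enforcement can vary by storage provider, so treat a check like this as a conservative floor rather than a guarantee of acceptance:

```java
public class InstanceIdRules {
    // Checks a candidate orchestration instance ID against the documented rules:
    // between 1 and 100 characters, no leading '@', none of '/', '\', '#', '?',
    // and no control characters.
    public static boolean isValidInstanceId(String id) {
        if (id == null || id.isEmpty() || id.length() > 100) {
            return false;
        }
        if (id.charAt(0) == '@') {
            return false;
        }
        for (int i = 0; i < id.length(); i++) {
            char c = id.charAt(i);
            if (c == '/' || c == '\\' || c == '#' || c == '?' || Character.isISOControl(c)) {
                return false;
            }
        }
        return true;
    }
}
```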
azure-functions | Durable Functions Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-versions.md | Durable Functions 2.x is available starting in version 2.x of the [Azure Functio Python support in Durable Functions requires Durable Functions 2.x or greater. -To update the extension bundle version in your project, open host.json and update the `extensionBundle` section to use version 3.x (`[3.*, 4.0.0)`). +To update the extension bundle version in your project, open host.json and update the `extensionBundle` section to use version 4.x (`[4.*, 5.0.0)`). ```json { "version": "2.0", "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle",- "version": "[3.*, 4.0.0)" + "version": "[4.*, 5.0.0)" } } ``` To update the extension bundle version in your project, open host.json and updat > [!NOTE] > If Visual Studio Code is not displaying the correct templates after you change the extension bundle version, reload the window by running the *Developer: Reload Window* command (<kbd>Ctrl+R</kbd> on Windows and Linux, <kbd>Command+R</kbd> on macOS). -#### Java (preview) --Durable Functions 2.x is available starting in version 4.x of the [Azure Functions extension bundle](../functions-bindings-register.md#extension-bundles). You must use the Azure Functions 3.0 runtime or greater to execute Java functions. +#### Java -To update the extension bundle version in your project, open host.json and update the `extensionBundle` section to use version 4.x (`[4.*, 5.0.0)`). Because Java support is currently in preview, you must also use the `Microsoft.Azure.Functions.ExtensionBundle.Preview` bundle, which is different from product-ready bundles. +Durable Functions 2.x is available starting in version 4.x of the [Azure Functions extension bundle](../functions-bindings-register.md#extension-bundles). You must use the Azure Functions 4.0 runtime to execute Java functions. 
+To update the extension bundle version in your project, open host.json and update the `extensionBundle` section to use version 4.x (`[4.*, 5.0.0)`). ```json { "version": "2.0", "extensionBundle": {- "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview", + "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[4.*, 5.0.0)" } } |
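The host.json value `[4.*, 5.0.0)` above is interval notation for a half-open version range: at least 4.0.0 and strictly below 5.0.0. A minimal sketch of that containment check (a hypothetical helper, not part of the Functions tooling):

```java
public class BundleVersionRange {
    // Compare two major.minor.patch versions numerically, part by part.
    static int compare(int[] a, int[] b) {
        for (int i = 0; i < 3; i++) {
            if (a[i] != b[i]) {
                return Integer.compare(a[i], b[i]);
            }
        }
        return 0;
    }

    // True when v lies in the half-open interval [lower, upper);
    // for example, "[4.*, 5.0.0)" expresses [4.0.0, 5.0.0).
    public static boolean inRange(int[] v, int[] lower, int[] upper) {
        return compare(v, lower) >= 0 && compare(v, upper) < 0;
    }
}
```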
azure-maps | Power Bi Visual Add Bubble Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md | Title: Add a bubble layer to an Azure Maps Power BI visual -description: In this article, you'll learn how to use the bubble layer in an Azure Maps Power BI visual. +description: In this article, you learn how to use the bubble layer in an Azure Maps Power BI visual. Last updated 11/14/2022 The **Bubble layer** renders location data as scaled circles on the map. :::image type="content" source="./media/power-bi-visual/bubble-layer-with-legend-color.png" alt-text="A map displaying point data using the bubble layer"::: -Initially all bubbles have the same fill color. If a field is passed into the **Legend** bucket of the **Fields** pane, the bubbles will be colored based on their categorization. The outline of the bubbles is white be default but can be changed to a new color or by enabling the high-contrast outline option. The **High-contrast outline** option dynamically assigns an outline color that is a high-contrast variant of the fill color. This helps to ensure the bubbles are clearly visible regardless of the style of the map. The following are the primary settings in the **Format** pane that are available in the **Bubble layer** section. +Initially all bubbles have the same fill color. If a field is passed into the **Legend** bucket of the **Fields** pane, the bubbles are colored based on their categorization. The outline of the bubbles is white by default, but it can be changed to a new color, or overridden by enabling the high-contrast outline option. The **High-contrast outline** option dynamically assigns an outline color that is a high-contrast variant of the fill color. This helps to ensure the bubbles are clearly visible regardless of the style of the map. The following are the primary settings in the **Format** pane that are available in the **Bubble layer** section.
| Setting | Description | |-|-|-| Size | The size of each bubble. This option is hidden when a field is passed into the **Size** bucket of the **Fields** pane. More options will appear as outlined in the [Bubble size scaling](#bubble-size-scaling) section further down in this article. | +| Size | The size of each bubble. This option is hidden when a field is passed into the **Size** bucket of the **Fields** pane. More options appear as outlined in the [Bubble size scaling](#bubble-size-scaling) section further down in this article. | +| Range scaling | Used to define how the bubble layer scales the bubbles.<br><br>• **Magnitude**: Bubble size scales by magnitude. Negative values are automatically converted to positive values.<br>• **DataRange**: Bubble size scales from min-of-data to max-of-data. There's no anchoring to zero.<br>• **Automatic**: Bubble size scales automatically into one of the two types, as follows:<br> • **Magnitude**: Positive-only or negative-only data.<br> • **DataRange**: Data that contains both positive and negative values.<br>• **(Deprecated)**: Applies to reports created prior to the range scaling property, to provide backward compatibility. It's recommended that you change this to use any one of the three preceding options. | | Shape | Transparency. The fill transparency of each bubble. |-| Color | Fill color of each bubble. This option is hidden when a field is passed into the **Legend** bucket of the **Fields** pane and a separate **Data colors** section will appear in the **Format** pane. | +| Color | Fill color of each bubble. This option is hidden when a field is passed into the **Legend** bucket of the **Fields** pane and a separate **Data colors** section appears in the **Format** pane. | | Border | Settings for the border include color, width, transparency and blur.<br/> • Color specifies the color that outlines the bubble.
This option is hidden when the **High-contrast outline** option is enabled.<br/> • Width specifies the width of the outline in pixels.<br/> • Transparency specifies the transparency of each bubble.<br/> • Blur specifies the amount of blur applied to the outline of the bubble. A value of one blurs the bubbles such that only the center point has no transparency. A value of 0 doesn't apply any blur effect.|-| Zoom | Settings for the zoom property include scale, maximum and minimum.<br/> ΓÇó Zoom scale is the amount the bubbles should scale relative to the zoom level. A zoom scale of one means no scaling. Large values will make bubbles smaller when zoomed out and larger when zoomed in. This helps to reduce the clutter on the map when zoomed out, yet ensures points stand out more when zoomed in. A value of 1 doesn't apply any scaling.<br/> ΓÇó Maximum zoom level tiles are available.<br/> ΓÇó Minimum zoom level tiles are available. | +| Zoom | Settings for the zoom property include scale, maximum and minimum.<br/> • (Deprecated) Zoom scale is the amount the bubbles should scale relative to the zoom level. A zoom scale of one means no scaling. Large values make bubbles smaller when zoomed out and larger when zoomed in. This helps to reduce the clutter on the map when zoomed out, yet ensures points stand out more when zoomed in. A value of 1 doesn't apply any scaling.<br/> • Maximum zoom level tiles are available.<br/> • Minimum zoom level tiles are available. | | Options | Settings for the options property include pitch alignment and layer position.<br/> • Pitch alignment specifies how the bubbles look when the map is pitched.<br/> • Layer position specifies the position of the layer relative to other map layers. | ## Bubble size scaling -If a field is passed into the **Size** bucket of the **Fields** pane, the bubbles will be scaled relatively
The **Size** option in the **Bubble layer** section of the **Format** pane will disappear when a field is passed into the **Size** bucket, as the bubbles will have their radii scaled between a min and max value. The following options will appear in the **Bubble layer** section of the **Format** pane when a **Size** bucket has a field specified. +> [!NOTE] +> +> **Bubble size scaling retirement** +> +> The Power BI Visual bubble layer **Bubble size scaling** settings were deprecated starting in the September 2023 release of Power BI. You can no longer create reports using these settings, but existing reports will continue to work. It is recommended that you upgrade existing reports that use these settings to the new **range scaling** property. To upgrade to the new **range scaling** property, select the desired option in the **Range scaling** drop-down list: +> +> :::image type="content" source="./media/power-bi-visual/range-scaling-drop-down.png" alt-text="A screenshot of the range scaling drop-down"::: +> +> For more information on the range scaling settings, see **range scaling** in the [previous section](#add-a-bubble-layer). ++If a field is passed into the **Size** bucket of the **Fields** pane, the bubbles are scaled relative to the measure value of each data point. The **Size** option in the **Bubble layer** section of the **Format** pane disappears when a field is passed into the **Size** bucket, as the bubbles have their radii scaled between a min and max value. The following options appear in the **Bubble layer** section of the **Format** pane when a **Size** bucket has a field specified. | Setting | Description | ||--| If a field is passed into the **Size** bucket of the **Fields** pane, the bubble | Max size | Maximum bubble size when scaling the data.| | Size scaling method | Scaling algorithm used to determine relative bubble size.<br/><br/> • Linear: Range of input data linearly mapped to the min and max size.
(default)<br/> • Log: Range of input data logarithmically mapped to the min and max size.<br/> • Cubic-Bezier: Specify X1, Y1, X2, Y2 values of a Cubic-Bezier curve to create a custom scaling method. | -When the **Size scaling method** is set to **Log**, the following options will be made available. +When the **Size scaling method** is set to **Log**, the following options are made available. | Setting | Description | |--|| | Log scale | The logarithmic scale to apply when calculating the size of the bubbles. | -When the **Size scaling method** is set to **Cubic-Bezier**, the following options will be made available to customize the scaling curve. +When the **Size scaling method** is set to **Cubic-Bezier**, the following options are made available to customize the scaling curve. | Setting | Description | ||| When the **Size scaling method** is set to **Cubic-Bezier**, the following optio ## Category labels -When displaying a **bubble layer** map, the **Category labels** settings will become active in the **Format visual** pane. +When the **bubble layer** displays on a map, the **Category labels** settings become active in the **Format visual** pane. :::image type="content" source="./media/power-bi-visual/category-labels.png" alt-text="A screenshot showing the category labels settings in the format visual section of Power BI." lightbox="./media/power-bi-visual/category-labels.png"::: |
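The **Min size**/**Max size** and **Size scaling method** settings above amount to mapping each measure value from the data range onto a radius range. Here's a rough sketch of the linear and logarithmic mappings, using assumed formulas — the visual's exact implementation isn't published in this article:

```java
public class BubbleSizeScaling {
    // Linear: map value from [dataMin, dataMax] onto [minSize, maxSize].
    public static double linear(double value, double dataMin, double dataMax,
                                double minSize, double maxSize) {
        double t = (value - dataMin) / (dataMax - dataMin);
        return minSize + t * (maxSize - minSize);
    }

    // Log: the same idea, but the interpolation fraction is computed on a
    // logarithmic scale, so large values are compressed relative to linear.
    public static double logScale(double value, double dataMin, double dataMax,
                                  double minSize, double maxSize) {
        double t = (Math.log(value) - Math.log(dataMin))
                 / (Math.log(dataMax) - Math.log(dataMin));
        return minSize + t * (maxSize - minSize);
    }
}
```

Either way, clipping the data range with **Min data value**/**Max data value** first keeps outliers from dominating the radius scale.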
azure-maps | Power Bi Visual Understanding Layers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-understanding-layers.md | The general layer section of the **Format** pane contains common settings that apply | Setting | Description | |-|-| | Unselected transparency | The transparency of shapes that aren't selected, when one or more shapes are selected. |-| Show zeros | Specifies if points that have a size value of zero should be shown on the map using the minimum radius. | -| Show negatives | Specifies if absolute value of negative size values should be plotted. | +| Show zeros | (Deprecated) Specifies if points that have a size value of zero should be shown on the map using the minimum radius. | +| Show negatives | (Deprecated) Specifies if the absolute value of negative size values should be plotted. | | Min data value | The minimum value of the input data to scale against. Good for clipping outliers. | | Max data value | The maximum value of the input data to scale against. Good for clipping outliers. | +> [!NOTE] +> +> **General layer settings retirement** +> +> The **Show zeros** and **Show negatives** Power BI Visual General layer settings were deprecated starting in the September 2023 release of Power BI. You can no longer create new reports using these settings, but existing reports will continue to work. It is recommended that you upgrade existing reports. To upgrade to the new **range scaling** property, select the desired option in the **Range scaling** drop-down list: +> +> :::image type="content" source="./media/power-bi-visual/range-scaling-drop-down.png" alt-text="A screenshot of the range scaling drop-down"::: +> +> For more information on the range scaling option, see **Range scaling** in the properties table of the [Add a bubble layer] article.
+ ## Next steps Change how your data is displayed on the map: Add more context to the map: > [!div class="nextstepaction"] > [Show real-time traffic](power-bi-visual-show-real-time-traffic.md)++ [Add a bubble layer]: power-bi-visual-add-bubble-layer.md |
azure-monitor | Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md | The following example shows how to configure the Java agent to use system-assign ```JSON { "connectionString": "App Insights Connection String with IngestionEndpoint", - "preview": { - "authentication": { - "enabled": true, - "type": "SAMI" - } + "authentication": { + "enabled": true, + "type": "SAMI" } } ``` The following example shows how to configure the Java agent to use user-assigned ```JSON { "connectionString": "App Insights Connection String with IngestionEndpoint", - "preview": { - "authentication": { - "enabled": true, - "type": "UAMI", - "clientId":"<USER-ASSIGNED MANAGED IDENTITY CLIENT ID>" - } - } + "authentication": { + "enabled": true, + "type": "UAMI", + "clientId":"<USER-ASSIGNED MANAGED IDENTITY CLIENT ID>" + } } ``` :::image type="content" source="media/azure-ad-authentication/user-assigned-managed-identity.png" alt-text="Screenshot that shows user-assigned managed identity." lightbox="media/azure-ad-authentication/user-assigned-managed-identity.png"::: The following example shows how to configure the Java agent to use a service pri ```JSON { "connectionString": "App Insights Connection String with IngestionEndpoint",- "preview": { - "authentication": { - "enabled": true, - "type": "CLIENTSECRET", - "clientId":"<YOUR CLIENT ID>", - "clientSecret":"<YOUR CLIENT SECRET>", - "tenantId":"<YOUR TENANT ID>" - } + "authentication": { + "enabled": true, + "type": "CLIENTSECRET", + "clientId":"<YOUR CLIENT ID>", + "clientSecret":"<YOUR CLIENT SECRET>", + "tenantId":"<YOUR TENANT ID>" } } ``` |
azure-monitor | Java Get Started Supplemental | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md | Title: Application Insights with containers description: This article shows you how to set-up Application Insights Previously updated : 08/30/2023 Last updated : 09/18/2023 ms.devlang: java For more information, see [Use Application Insights Java In-Process Agent in Azu ### Docker entry point -If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example: +If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example: ```-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.16.jar", "-jar", "<myapp.jar>"] +ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.17.jar", "-jar", "<myapp.jar>"] ``` -If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` somewhere before `-jar`, for example: +If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` somewhere before `-jar`, for example: ```-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.16.jar" -jar <myapp.jar> +ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.17.jar" -jar <myapp.jar> ``` FROM ... 
COPY target/*.jar app.jar -COPY agent/applicationinsights-agent-3.4.16.jar applicationinsights-agent-3.4.16.jar +COPY agent/applicationinsights-agent-3.4.17.jar applicationinsights-agent-3.4.17.jar COPY agent/applicationinsights.json applicationinsights.json ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING" -ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.16.jar", "-jar", "app.jar"] +ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.17.jar", "-jar", "app.jar"] ``` -In this example we have copied the `applicationinsights-agent-3.4.16.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container. +In this example we have copied the `applicationinsights-agent-3.4.17.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container. ### Third-party container images The following sections show how to set the Application Insights Java agent path If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.16.jar" +JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.17.jar" ``` #### Tomcat installed via download and unzip JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.16.jar" If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. 
Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.16.jar" +CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.17.jar" ``` -If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to `CATALINA_OPTS`. +If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to `CATALINA_OPTS`. ### Tomcat 8 (Windows) If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `- Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.16.jar +set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.17.jar ``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.16.jar" +set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.17.jar" ``` -If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to `CATALINA_OPTS`. +If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to `CATALINA_OPTS`. #### Run Tomcat as a Windows service -Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the `Java Options` under the `Java` tab. +Locate the file `<tomcat>/bin/tomcat8w.exe`. 
Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the `Java Options` under the `Java` tab. ### JBoss EAP 7 #### Standalone server -Add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows): +Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows): ```java ...- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.16.jar -Xms1303m -Xmx1303m ..." + JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.17.jar -Xms1303m -Xmx1303m ..." ... ``` #### Domain server -Add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`: +Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`: ```xml ... Add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the existing `j <jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->- <option value="-javaagent:path/to/applicationinsights-agent-3.4.16.jar"/> + <option value="-javaagent:path/to/applicationinsights-agent-3.4.17.jar"/> <option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options> Add these lines to `start.ini`: ``` --exec--javaagent:path/to/applicationinsights-agent-3.4.16.jar+-javaagent:path/to/applicationinsights-agent-3.4.17.jar ``` ### Payara 5 -Add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`: +Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`: ```xml ... 
<java-config ...> <!--Edit the JVM options here--> <jvm-options>- -javaagent:path/to/applicationinsights-agent-3.4.16.jar> + -javaagent:path/to/applicationinsights-agent-3.4.17.jar> </jvm-options> ... </java-config> Add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the existing `j 1. In `Generic JVM arguments`, add the following JVM argument: ```- -javaagent:path/to/applicationinsights-agent-3.4.16.jar + -javaagent:path/to/applicationinsights-agent-3.4.17.jar ``` 1. Save and restart the application server. Add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the existing `j Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.16.jar+-javaagent:path/to/applicationinsights-agent-3.4.17.jar ``` ### Others |
azure-monitor | Java Spring Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md | Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 08/30/2023 Last updated : 09/18/2023 ms.devlang: java There are two options for enabling Application Insights Java with Spring Boot: J ## Enabling with JVM argument -Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` somewhere before `-jar`, for example: +Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` somewhere before `-jar`, for example: ```-java -javaagent:"path/to/applicationinsights-agent-3.4.16.jar" -jar <myapp.jar> +java -javaagent:"path/to/applicationinsights-agent-3.4.17.jar" -jar <myapp.jar> ``` ### Spring Boot via Docker entry point -If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example: +If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example: ```-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.16.jar", "-jar", "<myapp.jar>"] +ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.17.jar", "-jar", "<myapp.jar>"] ``` -If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` somewhere before `-jar`, for example: +If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` somewhere before `-jar`, for example: ```-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.16.jar" -jar <myapp.jar> +ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.17.jar" -jar 
<myapp.jar> ``` ### Configuration To enable Application Insights Java programmatically, you must add the following <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>- <version>3.4.16</version> + <version>3.4.17</version> </dependency> ``` |
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 08/30/2023 Last updated : 09/18/2023 ms.devlang: java More information and configuration options are provided in the following section ## Configuration file path -By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.16.jar`. +By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.17.jar`. You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property -If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.16.jar` is located. +If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.17.jar` is located. Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`. Or you can set the connection string by using the Java system property `applicat You can also set the connection string by specifying a file to load the connection string from. -If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.16.jar` is located. 
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.17.jar` is located. ```json { and add `applicationinsights-core` to your application: <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>- <version>3.4.16</version> + <version>3.4.17</version> </dependency> ``` By default, Application Insights Java 3.x sends a heartbeat metric once every 15 > [!NOTE] > You can't increase the interval to longer than 15 minutes because the heartbeat data is also used to track Application Insights usage. -## Authentication (preview) +## Authentication > [!NOTE]-> The authentication feature is available starting from version 3.2.0. +> The authentication feature is GA since version 3.4.17. You can use authentication to configure the agent to generate [token credentials](/java/api/overview/azure/identity-readme#credentials) that are required for Azure Active Directory authentication. For more information, see the [Authentication](./azure-ad-authentication.md) documentation. In the preceding configuration example: * `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where-`applicationinsights-agent-3.4.16.jar` is located. +`applicationinsights-agent-3.4.17.jar` is located. Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration. |
azure-monitor | Java Standalone Upgrade From 2X | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md | Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 08/30/2023 Last updated : 09/18/2023 ms.devlang: java There are typically no code changes when upgrading to 3.x. The 3.x SDK dependenc Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.16.jar+-javaagent:path/to/applicationinsights-agent-3.4.17.jar ``` If you're using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the aforementioned example. |
azure-monitor | Opentelemetry Add Modify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md | To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `Co The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics. ```csharp+// Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); +// Configure the OpenTelemetry meter provider to add runtime instrumentation. builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddRuntimeInstrumentation());++// Add the Azure Monitor telemetry service to the application. +// This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(); +// Build the ASP.NET Core web application. var app = builder.Build(); +// Start the ASP.NET Core web application. app.Run(); ``` app.Run(); The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics. ```csharp+// Create a new OpenTelemetry meter provider and add runtime instrumentation and the Azure Monitor metric exporter. +// It is important to keep the MetricsProvider instance active throughout the process lifetime. var metricsProvider = Sdk.CreateMeterProviderBuilder() .AddRuntimeInstrumentation() .AddAzureMonitorMetricExporter(); describes the instruments and provides examples of when you might use each one. Application startup must subscribe to a Meter by name. ```csharp+// Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); +// Configure the OpenTelemetry meter provider to add a meter named "OTel.AzureMonitor.Demo". 
builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));++// Add the Azure Monitor telemetry service to the application. +// This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(); +// Build the ASP.NET Core web application. var app = builder.Build(); +// Start the ASP.NET Core web application. app.Run(); ``` The `Meter` must be initialized using that same name. ```csharp+// Create a new meter named "OTel.AzureMonitor.Demo". var meter = new Meter("OTel.AzureMonitor.Demo");++// Create a new histogram metric named "FruitSalePrice". Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice"); +// Create a new Random object. var rand = new Random();++// Record a few random sale prices for apples and lemons, with different colors. myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red")); myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", " ```csharp public class Program {+ // Create a static readonly Meter object named "OTel.AzureMonitor.Demo". + // This meter will be used to track metrics about the application. private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); public static void Main() {+ // Create a new MeterProvider object using the OpenTelemetry SDK. + // The MeterProvider object is responsible for managing meters and sending + // metric data to exporters. + // It is important to keep the MetricsProvider instance active + // throughout the process lifetime. + // + // The MeterProviderBuilder is configured to add a meter named + // "OTel.AzureMonitor.Demo" and an Azure Monitor metric exporter. 
using var meterProvider = Sdk.CreateMeterProviderBuilder() .AddMeter("OTel.AzureMonitor.Demo") .AddAzureMonitorMetricExporter() .Build(); + // Create a new Histogram metric named "FruitSalePrice". + // This metric will track the distribution of fruit sale prices. Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice"); + // Create a new Random object. This object will be used to generate random sale prices. var rand = new Random();+ + // Record a few random sale prices for apples and lemons, with different colors. + // Each record includes a timestamp, a value, and a set of attributes. + // The attributes can be used to filter and analyze the metric data. myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red")); myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); public class Program myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red")); myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); + // Display a message to the user and wait for them to press Enter. + // This allows the user to see the message and the console before the + // application exits. System.Console.WriteLine("Press Enter key to exit."); System.Console.ReadLine(); } input() Application startup must subscribe to a Meter by name. ```csharp+// Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); +// Configure the OpenTelemetry meter provider to add a meter named "OTel.AzureMonitor.Demo". builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));++// Add the Azure Monitor telemetry service to the application. +// This service will collect and send telemetry data to Azure Monitor. 
builder.Services.AddOpenTelemetry().UseAzureMonitor(); +// Build the ASP.NET Core web application. var app = builder.Build(); +// Start the ASP.NET Core web application. app.Run(); ``` The `Meter` must be initialized using that same name. ```csharp+// Create a new meter named "OTel.AzureMonitor.Demo". var meter = new Meter("OTel.AzureMonitor.Demo");++// Create a new counter metric named "MyFruitCounter". Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter"); +// Record the number of fruits sold, grouped by name and color. myFruitCounter.Add(1, new("name", "apple"), new("color", "red")); myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow")); myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow")); myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow")); ```csharp public class Program {+ // Create a static readonly Meter object named "OTel.AzureMonitor.Demo". + // This meter will be used to track metrics about the application. private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); public static void Main() {+ // Create a new MeterProvider object using the OpenTelemetry SDK. + // The MeterProvider object is responsible for managing meters and sending + // metric data to exporters. + // It is important to keep the MetricsProvider instance active + // throughout the process lifetime. + // + // The MeterProviderBuilder is configured to add a meter named + // "OTel.AzureMonitor.Demo" and an Azure Monitor metric exporter. using var meterProvider = Sdk.CreateMeterProviderBuilder() .AddMeter("OTel.AzureMonitor.Demo") .AddAzureMonitorMetricExporter() .Build(); + // Create a new counter metric named "MyFruitCounter". + // This metric will track the number of fruits sold. Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter"); + // Record the number of fruits sold, grouped by name and color. 
myFruitCounter.Add(1, new("name", "apple"), new("color", "red")); myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow")); myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow")); public class Program myFruitCounter.Add(5, new("name", "apple"), new("color", "red")); myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow")); + // Display a message to the user and wait for them to press Enter. + // This allows the user to see the message and the console before the + // application exits. System.Console.WriteLine("Press Enter key to exit."); System.Console.ReadLine(); } input() Application startup must subscribe to a Meter by name. ```csharp+// Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); +// Configure the OpenTelemetry meter provider to add a meter named "OTel.AzureMonitor.Demo". builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));++// Add the Azure Monitor telemetry service to the application. +// This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(); +// Build the ASP.NET Core web application. var app = builder.Build(); +// Start the ASP.NET Core web application. app.Run(); ``` The `Meter` must be initialized using that same name. ```csharp+// Get the current process. var process = Process.GetCurrentProcess(); +// Create a new meter named "OTel.AzureMonitor.Demo". var meter = new Meter("OTel.AzureMonitor.Demo");++// Create a new observable gauge metric named "Thread.State". +// This metric will track the state of each thread in the current process. ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process)); private static IEnumerable<Measurement<int>> GetThreadState(Process process) {+ // Iterate over all threads in the current process. 
foreach (ProcessThread thread in process.Threads) {+ // Create a measurement for each thread, including the thread state, process ID, and thread ID. yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id)); } } private static IEnumerable<Measurement<int>> GetThreadState(Process process) ```csharp public class Program {+ // Create a static readonly Meter object named "OTel.AzureMonitor.Demo". + // This meter will be used to track metrics about the application. private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); public static void Main() {+ // Create a new MeterProvider object using the OpenTelemetry SDK. + // The MeterProvider object is responsible for managing meters and sending + // metric data to exporters. + // It is important to keep the MetricsProvider instance active + // throughout the process lifetime. + // + // The MeterProviderBuilder is configured to add a meter named + // "OTel.AzureMonitor.Demo" and an Azure Monitor metric exporter. using var meterProvider = Sdk.CreateMeterProviderBuilder() .AddMeter("OTel.AzureMonitor.Demo") .AddAzureMonitorMetricExporter() .Build(); + // Get the current process. var process = Process.GetCurrentProcess(); + // Create a new observable gauge metric named "Thread.State". + // This metric will track the state of each thread in the current process. ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process)); + // Display a message to the user and wait for them to press Enter. + // This allows the user to see the message and the console before the + // application exits. System.Console.WriteLine("Press Enter key to exit."); System.Console.ReadLine(); } private static IEnumerable<Measurement<int>> GetThreadState(Process process) {+ // Iterate over all threads in the current process. 
foreach (ProcessThread thread in process.Threads) {+ // Create a measurement for each thread, including the thread state, process ID, and thread ID. yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id)); } } to draw attention in relevant experiences including the failures section and end - To log an Exception using an Activity: ```csharp+ // Start a new activity named "ExceptionExample". using (var activity = activitySource.StartActivity("ExceptionExample")) {+ // Try to execute some code. try { throw new Exception("Test exception"); }+ // If an exception is thrown, catch it and set the activity status to "Error". catch (Exception ex) { activity?.SetStatus(ActivityStatusCode.Error); to draw attention in relevant experiences including the failures section and end ``` - To log an Exception using `ILogger`: ```csharp+ // Create a logger using the logger factory. The logger category name is used to filter and route log messages. var logger = loggerFactory.CreateLogger(logCategoryName); + // Try to execute some code. try { throw new Exception("Test Exception"); } catch (Exception ex) {+ // Log an error message with the exception. The log level is set to "Error" and the event ID is set to 0. + // The log message includes a template and a parameter. The template will be replaced with the value of the parameter when the log message is written. logger.Log( logLevel: LogLevel.Error, eventId: 0, to draw attention in relevant experiences including the failures section and end - To log an Exception using an Activity: ```csharp+ // Start a new activity named "ExceptionExample". using (var activity = activitySource.StartActivity("ExceptionExample")) {+ // Try to execute some code. try { throw new Exception("Test exception"); }+ // If an exception is thrown, catch it and set the activity status to "Error". 
catch (Exception ex) { activity?.SetStatus(ActivityStatusCode.Error); to draw attention in relevant experiences including the failures section and end ``` - To log an Exception using `ILogger`: ```csharp+ // Create a logger using the logger factory. The logger category name is used to filter and route log messages. var logger = loggerFactory.CreateLogger("ExceptionExample"); try {+ // Try to execute some code. throw new Exception("Test Exception"); } catch (Exception ex) {+ // Log an error message with the exception. The log level is set to "Error" and the event ID is set to 0. + // The log message includes a template and a parameter. The template will be replaced with the value of the parameter when the log message is written. logger.Log( logLevel: LogLevel.Error, eventId: 0, You may want to add a custom span in two scenarios. First, when there's a depend ```csharp+// Define an activity source named "ActivitySourceName". This activity source will be used to create activities for all requests to the application. internal static readonly ActivitySource activitySource = new("ActivitySourceName"); +// Create an ASP.NET Core application builder. var builder = WebApplication.CreateBuilder(args); +// Configure the OpenTelemetry tracer provider to add a source named "ActivitySourceName". This will ensure that all activities created by the activity source are traced. builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddSource("ActivitySourceName"));++// Add the Azure Monitor telemetry service to the application. This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(); +// Build the ASP.NET Core application. var app = builder.Build(); +// Map a GET request to the root path ("/") to the specified action. app.MapGet("/", () => {+ // Start a new activity named "CustomActivity". This activity will be traced and the trace data will be sent to Azure Monitor. 
using (var activity = activitySource.StartActivity("CustomActivity")) { // your code here } + // Return a response message. return $"Hello World!"; }); +// Start the ASP.NET Core application. app.Run(); ``` app.Run(); > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api). ```csharp+// Create an OpenTelemetry tracer provider builder. +// It is important to keep the TracerProvider instance active throughout the process lifetime. using var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddSource("ActivitySourceName") .AddAzureMonitorTraceExporter() .Build(); +// Create an activity source named "ActivitySourceName". var activitySource = new ActivitySource("ActivitySourceName"); +// Start a new activity named "CustomActivity". This activity will be traced and the trace data will be sent to Azure Monitor. using (var activity = activitySource.StartActivity("CustomActivity")) { // your code here To add span attributes, use either of the following two ways: > Add the processor shown here *before* adding Azure Monitor. ```csharp+// Create an ASP.NET Core application builder. var builder = WebApplication.CreateBuilder(args); +// Configure the OpenTelemetry tracer provider to add a new processor named ActivityEnrichingProcessor. 
builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityEnrichingProcessor()));++// Add the Azure Monitor telemetry service to the application. This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(); +// Build the ASP.NET Core application. var app = builder.Build(); +// Start the ASP.NET Core application. app.Run(); ``` To add span attributes, use either of the following two ways: > Add the processor shown here *before* the Azure Monitor Exporter. ```csharp+// Create an OpenTelemetry tracer provider builder. +// It is important to keep the TracerProvider instance active throughout the process lifetime. using var tracerProvider = Sdk.CreateTracerProviderBuilder()- .AddSource("OTel.AzureMonitor.Demo") - .AddProcessor(new ActivityEnrichingProcessor()) - .AddAzureMonitorTraceExporter() + .AddSource("OTel.AzureMonitor.Demo") // Add a source named "OTel.AzureMonitor.Demo". + .AddProcessor(new ActivityEnrichingProcessor()) // Add a new processor named ActivityEnrichingProcessor. + .AddAzureMonitorTraceExporter() // Add the Azure Monitor trace exporter. .Build(); ``` Add `ActivityEnrichingProcessor.cs` to your project with the following code: ```csharp public class ActivityEnrichingProcessor : BaseProcessor<Activity> {+ // The OnEnd method is called when an activity is finished. This is the ideal place to enrich the activity with additional data. public override void OnEnd(Activity activity) {+ // Update the activity's display name. // The updated activity will be available to all processors which are called after this processor. activity.DisplayName = "Updated-" + activity.DisplayName;+ // Set custom tags on the activity.
activity.SetTag("CustomDimension1", "Value1"); activity.SetTag("CustomDimension2", "Value2"); } You can populate the _client_IP_ field for requests by setting the `http.client_ Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`: ```C#+// Add the client IP address to the activity as a tag. // only applicable in case of activity.Kind == Server activity.SetTag("http.client_ip", "<IP Address>"); ``` activity.SetTag("http.client_ip", "<IP Address>"); Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`: ```C#+// Add the client IP address to the activity as a tag. // only applicable in case of activity.Kind == Server activity.SetTag("http.client_ip", "<IP Address>"); ``` You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by u Use the add [custom property example](#add-a-custom-property-to-a-span). ```csharp+// Add the user ID to the activity as a tag, but only if the activity is not null. activity?.SetTag("enduser.id", "<User Id>"); ``` activity?.SetTag("enduser.id", "<User Id>"); Use the add [custom property example](#add-a-custom-property-to-a-span). ```csharp+// Add the user ID to the activity as a tag, but only if the activity is not null. activity?.SetTag("enduser.id", "<User Id>"); ``` You might use the following ways to filter out telemetry before it leaves your a > Add the processor shown here *before* adding Azure Monitor. ```csharp+ // Create an ASP.NET Core application builder. var builder = WebApplication.CreateBuilder(args); + // Configure the OpenTelemetry tracer provider to add a new processor named ActivityFilteringProcessor. 
builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityFilteringProcessor()));+ // Configure the OpenTelemetry tracer provider to add a new source named "ActivitySourceName". builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddSource("ActivitySourceName"));+ // Add the Azure Monitor telemetry service to the application. This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(); + // Build the ASP.NET Core application. var app = builder.Build(); + // Start the ASP.NET Core application. app.Run(); ``` You might use the following ways to filter out telemetry before it leaves your a ```csharp public class ActivityFilteringProcessor : BaseProcessor<Activity> {+ // The OnStart method is called when an activity is started. This is the ideal place to filter activities. public override void OnStart(Activity activity) { // prevents all exporters from exporting internal activities You might use the following ways to filter out telemetry before it leaves your a 1. Use a custom processor: ```csharp+ // Create an OpenTelemetry tracer provider builder. + // It is important to keep the TracerProvider instance active throughout the process lifetime. using var tracerProvider = Sdk.CreateTracerProviderBuilder()- .AddSource("OTel.AzureMonitor.Demo") - .AddProcessor(new ActivityFilteringProcessor()) - .AddAzureMonitorTraceExporter() + .AddSource("OTel.AzureMonitor.Demo") // Add a source named "OTel.AzureMonitor.Demo". + .AddProcessor(new ActivityFilteringProcessor()) // Add a new processor named ActivityFilteringProcessor. + .AddAzureMonitorTraceExporter() // Add the Azure Monitor trace exporter. .Build(); ``` You might use the following ways to filter out telemetry before it leaves your a ```csharp public class ActivityFilteringProcessor : BaseProcessor<Activity> {+ // The OnStart method is called when an activity is started. 
This is the ideal place to filter activities. public override void OnStart(Activity activity) { // prevents all exporters from exporting internal activities You might want to get the trace ID or span ID. If you have logs sent to a destin > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api). ```csharp+// Get the current activity. Activity activity = Activity.Current;+// Get the trace ID of the activity. string traceId = activity?.TraceId.ToHexString();+// Get the span ID of the activity. string spanId = activity?.SpanId.ToHexString(); ``` string spanId = activity?.SpanId.ToHexString(); > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api). ```csharp+// Get the current activity. Activity activity = Activity.Current;+// Get the trace ID of the activity. string traceId = activity?.TraceId.ToHexString();+// Get the span ID of the activity. string spanId = activity?.SpanId.ToHexString(); ``` |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | Use one of the following three ways to configure the connection string: - Add `UseAzureMonitor()` to your application startup. Depending on your version of .NET, it is in either your `startup.cs` or `program.cs` class. ```csharp+ // Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); + // Add the OpenTelemetry telemetry service to the application. + // This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(options => { options.ConnectionString = "<Your Connection String>"; }); + // Build the ASP.NET Core web application. var app = builder.Build(); + // Start the ASP.NET Core web application. app.Run(); ``` - Set an environment variable: Use one of the following two ways to configure the connection string: - Add the Azure Monitor Exporter to each OpenTelemetry signal in application startup. ```csharp+ // Create a new OpenTelemetry tracer provider. + // It is important to keep the TracerProvider instance active throughout the process lifetime. var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddAzureMonitorTraceExporter(options => { options.ConnectionString = "<Your Connection String>"; }); + // Create a new OpenTelemetry meter provider. + // It is important to keep the MetricsProvider instance active throughout the process lifetime. var metricsProvider = Sdk.CreateMeterProviderBuilder() .AddAzureMonitorMetricExporter(options => { options.ConnectionString = "<Your Connection String>"; }); + // Create a new logger factory. + // It is important to keep the LoggerFactory instance active throughout the process lifetime. 
var loggerFactory = LoggerFactory.Create(builder => { builder.AddOpenTelemetry(options => Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://githu ```csharp // Setting role name and role instance++// Create a dictionary of resource attributes. var resourceAttributes = new Dictionary<string, object> { { "service.name", "my-service" }, { "service.namespace", "my-namespace" }, { "service.instance.id", "my-instance" }}; +// Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); +// Add the OpenTelemetry telemetry service to the application. +// This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor();++// Configure the OpenTelemetry tracer provider to add the resource attributes to all traces. builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.ConfigureResource(resourceBuilder => resourceBuilder.AddAttributes(resourceAttributes))); +// Build the ASP.NET Core web application. var app = builder.Build(); +// Start the ASP.NET Core web application. app.Run(); ``` Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://githu ```csharp // Setting role name and role instance++// Create a dictionary of resource attributes. var resourceAttributes = new Dictionary<string, object> { { "service.name", "my-service" }, { "service.namespace", "my-namespace" }, { "service.instance.id", "my-instance" }};++// Create a resource builder. var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes); +// Create a new OpenTelemetry tracer provider and set the resource builder. +// It is important to keep the TracerProvider instance active throughout the process lifetime. var tracerProvider = Sdk.CreateTracerProviderBuilder() // Set ResourceBuilder on the TracerProvider. 
.SetResourceBuilder(resourceBuilder) .AddAzureMonitorTraceExporter(); +// Create a new OpenTelemetry meter provider and set the resource builder. +// It is important to keep the MetricsProvider instance active throughout the process lifetime. var metricsProvider = Sdk.CreateMeterProviderBuilder() // Set ResourceBuilder on the MeterProvider. .SetResourceBuilder(resourceBuilder) .AddAzureMonitorMetricExporter(); +// Create a new logger factory and add the OpenTelemetry logger provider with the resource builder. +// It is important to keep the LoggerFactory instance active throughout the process lifetime. var loggerFactory = LoggerFactory.Create(builder => { builder.AddOpenTelemetry(options => You may want to enable sampling to reduce your data ingestion volume, which redu The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent. ```csharp+// Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); +// Add the OpenTelemetry telemetry service to the application. +// This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(o => {+ // Set the sampling ratio to 10%. This means that 10% of all traces will be sampled and sent to Azure Monitor. o.SamplingRatio = 0.1F; }); +// Build the ASP.NET Core web application. var app = builder.Build(); +// Start the ASP.NET Core web application. app.Run(); ``` app.Run(); The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent. ```csharp+// Create a new OpenTelemetry tracer provider. +// It is important to keep the TracerProvider instance active throughout the process lifetime. var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddAzureMonitorTraceExporter(options =>- { + { + // Set the sampling ratio to 10%. This means that 10% of all traces will be sampled and sent to Azure Monitor. 
options.SamplingRatio = 0.1F; }); ``` We support the credential classes provided by [Azure Identity](https://github.co 1. Provide the desired credential class: ```csharp+ // Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); + // Add the OpenTelemetry telemetry service to the application. + // This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(options => {+ // Set the Azure Monitor credential to the DefaultAzureCredential. + // This credential will use the Azure identity of the current user or + // the service principal that the application is running as to authenticate + // to Azure Monitor. options.Credential = new DefaultAzureCredential(); }); + // Build the ASP.NET Core web application. var app = builder.Build(); + // Start the ASP.NET Core web application. app.Run(); ``` We support the credential classes provided by [Azure Identity](https://github.co 1. Provide the desired credential class: ```csharp+ // Create a DefaultAzureCredential. var credential = new DefaultAzureCredential(); + // Create a new OpenTelemetry tracer provider and set the credential. + // It is important to keep the TracerProvider instance active throughout the process lifetime. var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddAzureMonitorTraceExporter(options => { options.Credential = credential; }); + // Create a new OpenTelemetry meter provider and set the credential. + // It is important to keep the MetricsProvider instance active throughout the process lifetime. var metricsProvider = Sdk.CreateMeterProviderBuilder() .AddAzureMonitorMetricExporter(options => { options.Credential = credential; }); + // Create a new logger factory and add the OpenTelemetry logger provider with the credential. + // It is important to keep the LoggerFactory instance active throughout the process lifetime. 
var loggerFactory = LoggerFactory.Create(builder => { builder.AddOpenTelemetry(options => The Distro package includes the AzureMonitorExporter, which by default uses one To override the default directory, you should set `AzureMonitorOptions.StorageDirectory`. ```csharp+// Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); +// Add the OpenTelemetry telemetry service to the application. +// This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(options => {+ // Set the Azure Monitor storage directory to "C:\\SomeDirectory". + // This is the directory where the OpenTelemetry SDK will store any telemetry data that cannot be sent to Azure Monitor immediately. options.StorageDirectory = "C:\\SomeDirectory"; }); +// Build the ASP.NET Core web application. var app = builder.Build(); +// Start the ASP.NET Core web application. app.Run(); ``` By default, the AzureMonitorExporter uses one of the following locations for off To override the default directory, you should set `AzureMonitorExporterOptions.StorageDirectory`. ```csharp+// Create a new OpenTelemetry tracer provider and set the storage directory. +// It is important to keep the TracerProvider instance active throughout the process lifetime. var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddAzureMonitorTraceExporter(options => {+ // Set the Azure Monitor storage directory to "C:\\SomeDirectory". + // This is the directory where the OpenTelemetry SDK will store any trace data that cannot be sent to Azure Monitor immediately. options.StorageDirectory = "C:\\SomeDirectory"; }); +// Create a new OpenTelemetry meter provider and set the storage directory. +// It is important to keep the MetricsProvider instance active throughout the process lifetime. 
var metricsProvider = Sdk.CreateMeterProviderBuilder() .AddAzureMonitorMetricExporter(options => {+ // Set the Azure Monitor storage directory to "C:\\SomeDirectory". + // This is the directory where the OpenTelemetry SDK will store any metric data that cannot be sent to Azure Monitor immediately. options.StorageDirectory = "C:\\SomeDirectory"; }); +// Create a new logger factory and add the OpenTelemetry logger provider with the storage directory. +// It is important to keep the LoggerFactory instance active throughout the process lifetime. var loggerFactory = LoggerFactory.Create(builder => { builder.AddOpenTelemetry(options => { options.AddAzureMonitorLogExporter(options => {+ // Set the Azure Monitor storage directory to "C:\\SomeDirectory". + // This is the directory where the OpenTelemetry SDK will store any log data that cannot be sent to Azure Monitor immediately. options.StorageDirectory = "C:\\SomeDirectory"; }); }); You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside th 1. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/examples/Console/TestOtlpExporter.cs). ```csharp+ // Create a new ASP.NET Core web application builder. var builder = WebApplication.CreateBuilder(args); + // Add the OpenTelemetry telemetry service to the application. + // This service will collect and send telemetry data to Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor();+ + // Add the OpenTelemetry OTLP exporter to the application. + // This exporter will send telemetry data to an OTLP receiver, such as Prometheus builder.Services.AddOpenTelemetry().WithTracing(builder => builder.AddOtlpExporter()); builder.Services.AddOpenTelemetry().WithMetrics(builder => builder.AddOtlpExporter()); + // Build the ASP.NET Core web application. 
var app = builder.Build(); + // Start the ASP.NET Core web application. app.Run(); ``` You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside th 1. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/examples/Console/TestOtlpExporter.cs). ```csharp+ // Create a new OpenTelemetry tracer provider and add the Azure Monitor trace exporter and the OTLP trace exporter. + // It is important to keep the TracerProvider instance active throughout the process lifetime. var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddAzureMonitorTraceExporter() .AddOtlpExporter(); + // Create a new OpenTelemetry meter provider and add the Azure Monitor metric exporter and the OTLP metric exporter. + // It is important to keep the MetricsProvider instance active throughout the process lifetime. var metricsProvider = Sdk.CreateMeterProviderBuilder() .AddAzureMonitorMetricExporter() .AddOtlpExporter(); For more information about OpenTelemetry SDK configuration, see the [OpenTelemet |
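The `SamplingRatio` option shown in the snippets above is a fixed-rate probabilistic control. As a rough, language-agnostic sketch of what a ratio of 0.1 means (hypothetical helper; the Azure Monitor exporters implement their own sampler internally):

```python
import random

def should_sample(sampling_ratio: float) -> bool:
    """Keep approximately `sampling_ratio` of all traces (0 and 1 inclusive)."""
    if not 0.0 <= sampling_ratio <= 1.0:
        raise ValueError("sampling ratio must be between 0 and 1 inclusive")
    return random.random() < sampling_ratio

# With a ratio of 0.1, roughly 10% of traces are kept over a large sample.
random.seed(42)
kept = sum(should_sample(0.1) for _ in range(10_000))
print(kept)  # roughly 1,000 of 10,000
```

This only illustrates what the ratio means; the actual SDK sampler also keeps trace decisions consistent across a distributed operation.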
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 09/12/2023 Last updated : 09/18/2023 ms.devlang: csharp, javascript, typescript, python dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter #### [Java](#tab/java) -Download the [applicationinsights-agent-3.4.16.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.16/applicationinsights-agent-3.4.16.jar) file. +Download the [applicationinsights-agent-3.4.17.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.17/applicationinsights-agent-3.4.17.jar) file. > [!WARNING] > var loggerFactory = LoggerFactory.Create(builder => Java autoinstrumentation is enabled through configuration changes; no code changes are required. -Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` to your application's JVM args. +Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` to your application's JVM args. > [!TIP] > For scenario-specific guidance, see [Get Started (Supplemental)](./java-get-started-supplemental.md). To paste your Connection String, select from the following options: B. Set via Configuration File - Java Only (Recommended) - Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.16.jar` with the following content: + Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.17.jar` with the following content: ```json { |
azure-monitor | Migrate To Azure Storage Lifecycle Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md | Last updated 08/16/2023 # Migrate from diagnostic settings storage retention to Azure Storage lifecycle management -The Diagnostic Settings Storage Retention feature is being deprecated. To configure retention for logs and metrics use Azure Storage Lifecycle Management. +The Diagnostic Settings Storage Retention feature is being deprecated. To configure retention for logs and metrics sent to an Azure Storage account, use Azure Storage Lifecycle Management. This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal) for retention.+For logs sent to a Log Analytics workspace, retention is set for each table on the **Tables** page of your workspace. > [!IMPORTANT] > **Deprecation Timeline.** An existing diagnostic setting logging to a storage account. ## Migration Procedures ++## [Azure portal](#tab/portal) To migrate your diagnostic settings retention rules, follow the steps below: 1. Go to the Diagnostic Settings page for your logging resource and locate the diagnostic setting you wish to migrate To set the rule for a specific web app, use *insights-activity-logs/ResourceI 1. Select **Add** to save the rule.
:::image type="content" source="./media/retention-migration/lifecycle-management-add-rule-filter-set.png" lightbox="./media/retention-migration/lifecycle-management-add-rule-filter-set.png" alt-text="A screenshot showing the filters tab for adding a lifecycle rule."::: ++## [CLI](#tab/cli) ++Use the [az storage account management-policy create](https://docs.microsoft.com/cli/azure/storage/account/management-policy?view=azure-cli-latest#az-storage-account-management-policy-create) command to create a lifecycle management policy. You must still set the retention in your diagnostic settings to *0*. See the Azure portal section above for more information. ++++```azurecli ++az storage account management-policy create --account-name <storage account name> --resource-group <resource group name> --policy @<policy definition file> +``` ++The sample policy definition file below sets the retention for all blobs in the container *insights-activity-logs* for the given subscription ID. For more information, see [Lifecycle management policy definition](https://learn.microsoft.com/azure/storage/blobs/lifecycle-management-overview#lifecycle-management-policy-definition). ++```json +{ + "rules": [ + { + "enabled": true, + "name": "Subscription level lifecycle rule", + "type": "Lifecycle", + "definition": { + "actions": { + "version": { + "delete": { + "daysAfterCreationGreaterThan": 90 + } + }, + "baseBlob": { + "tierToCool": { + "daysAfterModificationGreaterThan": 30 + }, + "tierToArchive": { + "daysAfterModificationGreaterThan": 90, + "daysAfterLastTierChangeGreaterThan": 7 + }, + "delete": { + "daysAfterModificationGreaterThan": 2555 + } + } + }, + "filters": { + "blobTypes": [ + "blockBlob" + ], + "prefixMatch": [ + "insights-activity-logs/ResourceId=/SUBSCRIPTIONS/ABCD1234-5849-ABCD-1234-9876543210AB" + ] + } + } + } + ] +} ++``` ++## [Templates](#tab/templates) ++Apply the following template to create a lifecycle management policy.
You must still set the retention in your diagnostic settings to *0*. See the Azure portal section above for more information. ++```azurecli ++az deployment group create --resource-group <resource group name> --template-file <template file> ++``` ++The following template sets the retention for storage account *azmonstorageaccount001* for all blobs in the container *insights-activity-logs* for all resources for the subscription ID *ABCD1234-5849-ABCD-1234-9876543210AB*. ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "resources": [ + { + "type": "Microsoft.Storage/storageAccounts/managementPolicies", + "apiVersion": "2021-02-01", + "name": "azmonstorageaccount001/default", + "properties": { + "policy": { + "rules": [ + { + "enabled": true, + "name": "Edtest", + "type": "Lifecycle", + "definition": { + "filters": { + "blobTypes": [ + "blockBlob" + ], + "prefixMatch": [ + "insights-activity-logs/ResourceId=/SUBSCRIPTIONS/ABCD1234-5849-ABCD-1234-9876543210AB" + ] + }, + "actions": { + "baseBlob": { + "tierToCool": { + "daysAfterModificationGreaterThan": 30 + }, + "tierToArchive": { + "daysAfterModificationGreaterThan": 90 + }, + "delete": { + "daysAfterModificationGreaterThan": 1000 + } + } + } + } + } + ] + } + } + } + ] +} +``` +++ ## Next steps -[Configure a lifecycle management policy](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal). +[Configure a lifecycle management policy](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal). |
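Both the CLI and template rules above key off blob age relative to last modification. A minimal sketch of the comparison a `delete: daysAfterModificationGreaterThan` action implies (illustrative helper only, not how the Storage service is implemented):

```python
from datetime import datetime, timedelta, timezone

def delete_action_applies(last_modified: datetime, days_threshold: int, now: datetime) -> bool:
    """True once a blob's age since last modification exceeds the rule's day threshold."""
    return now - last_modified > timedelta(days=days_threshold)

now = datetime(2023, 9, 19, tzinfo=timezone.utc)
print(delete_action_applies(now - timedelta(days=3000), 2555, now))  # True
print(delete_action_applies(now - timedelta(days=90), 2555, now))    # False
```

The `tierToCool` and `tierToArchive` actions apply the same age comparison with their own thresholds.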
azure-monitor | Azure Monitor Data Explorer Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md | For example: union customEvents, adx('https://help.kusto.windows.net/Samples').StormEvents | take 10 ```+ ```kusto let CL1 = adx('https://help.kusto.windows.net/Samples').StormEvents; union customEvents, CL1 | take 10--```sql +``` > [!TIP] > Shorthand format is allowed: *ClusterName*/*InitialCatalog*. For example, `adx('help/Samples')` is translated to `adx('help.kusto.windows.net/Samples')`. When you use the [`join` operator](/azure/data-explorer/kusto/query/joinoperator For example: -kusto +```kusto AzureDiagnostics | join hint.remote=left adx("cluster=ClusterURI").AzureDiagnostics on (ColumnName) ``` Here are some sample Azure Log Analytics queries that use the new Azure Resource | project name ``` - - Retrieve performance data related to CPU utilization and filter to resources with the "prod" tag. + - Retrieve performance data related to CPU utilization and filter to resources with the "prod" tag. - ```kusto - InsightsMetrics - | where Name == "UtilizationPercentage" - | lookup ( - arg("").Resources - | where type == 'microsoft.compute/virtualmachines' - | project _ResourceId=tolower(id), tags - ) - on _ResourceId - | where tostring(tags.Env) == "Prod" - ``` + ```kusto + InsightsMetrics + | where Name == "UtilizationPercentage" + | lookup ( + arg("").Resources + | where type == 'microsoft.compute/virtualmachines' + | project _ResourceId=tolower(id), tags + ) + on _ResourceId + | where tostring(tags.Env) == "Prod" + ``` More use cases: - Use a tag to determine whether VMs should be running 24x7 or should be shut down at night.
let CL1 = arg("").Resources ; union AzureActivity, CL1 | take 10 ``` - When you use the [`join` operator](/azure/data-explorer/kusto/query/joinoperator) instead of union, you need to use a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to combine the data in Azure Resource Graph with data in the Log Analytics workspace. Use `Hint.remote={Direction of the Log Analytics Workspace}`. For example: ```kusto To create a new alert rule based on a cross-service query, follow the steps in [ ## Limitations * Database names are case sensitive.-* Identifying the Timestamp column in the cluster isn't supported. The Log Analytics Query API won't pass along the time filter. -* The cross-service query ability is used for data retrieval only. -* [Private Link](../logs/private-link-security.md) (private endpoints) and [IP restrictions](/azure/data-explorer/security-network-restrict-public-access) are not support cross-service queries. -* mv-expand is limited to 2000 records. --* The following operators do not work with the cross query with ability with Azure Resource Graph: smv-apply(), rand(), arg_max() , arg_min(), avg() , avg_if(), countif(), sumif(), percentile() , percentiles() , percentilew() , percentilesw(), stdev() , stdevif() , stdevp(), variance() , variancep() , varianceif(). +* Identifying the Timestamp column in the cluster isn't supported. The Log Analytics Query API won't pass the time filter. +* Cross-service queries support data retrieval only. +* [Private Link](../logs/private-link-security.md) (private endpoints) and [IP restrictions](/azure/data-explorer/security-network-restrict-public-access) do not support cross-service queries. +* `mv-expand` is limited to 2000 records. 
+* Azure Resource Graph cross-queries do not support these operators: `smv-apply()`, `rand()`, `arg_max()`, `arg_min()`, `avg()`, `avg_if()`, `countif()`, `sumif()`, `percentile()`, `percentiles()`, `percentilew()`, `percentilesw()`, `stdev()`, `stdevif()`, `stdevp()`, `variance()`, `variancep()`, `varianceif()`. ## Next steps * [Write queries](/azure/data-explorer/write-queries) |
azure-monitor | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md | Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md | Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md | Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md | Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
azure-signalr | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md | Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
batch | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md | Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
communication-services | Calling Sdk Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md | Key features of the Calling SDK: - **PSTN** - The Calling SDK can initiate voice calls with the traditional publicly switched telephone network, [using phone numbers you acquire in the Azure portal](../../quickstarts/telephony/get-phone-number.md) or programmatically. - **Teams Meetings** - The Calling SDK can [join Teams meetings](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with the Teams voice and video dataplane. - **Notifications** - The Calling SDK provides APIs allowing clients to be notified of an incoming call. In situations where your app isn't running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform end-users of an incoming call.+- **User Facing Diagnostics (UFD)** - The Calling SDK provides [events](user-facing-diagnostics.md) that are designed to provide insights into underlying issues that could affect call quality. Developers can subscribe to triggers such as weak network signals or muted microphones, ensuring that they're always aware of any factors impacting the calls. +- **Media Stats** - The Calling SDK provides comprehensive insights into [the metrics](media-quality-sdk.md) of your VoIP and video calls. With this information, developers have a clearer understanding of call quality and can make informed decisions to further enhance their communication experience. 
+- **Video Constraints** - The Calling SDK provides APIs to regulate [video quality among other parameters](../../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls by adjusting parameters such as resolution and frame rate, supporting different levels of video quality for different call situations ## Detailed capabilities The following list presents the set of features that are currently available in | | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | ✔️ | ✔️ | ✔️ | | | Show if a participant is muted | ✔️ | ✔️ | ✔️ | ✔️ | | | Show the reason why a participant left a call | ✔️ | ✔️ | ✔️ | ✔️ |-| Screen sharing | Share the entire screen from within the application | ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> | ✔️<sup>1</sup> | -| | Share a specific application (from the list of running applications) | ✔️ | ✔️<sup>1</sup> | ❌ | ❌ | -| | Share a web browser tab from the list of open tabs | ✔️ | | | | +| Screen sharing | Share the entire screen from within the application | ✔️ | ✔️<sup>1</sup>| ✔️<sup>1</sup> | ✔️<sup>1</sup> | +| | Share a specific application (from the list of running applications) | ✔️ | ✔️<sup>1</sup>| ❌ | ❌ | +| | Share a web browser tab from the list of open tabs | ✔️ | | | | | | Share system audio during screen sharing | ❌ | ❌ | ❌ | ❌ | | | Participant can view remote screen share | ✔️ | ✔️ | ✔️ | ✔️ | | Roster | List participants | ✔️ | ✔️ | ✔️ | ✔️ | The following list presents the set of features that are currently available in | | Get camera list | ✔️ | ✔️ | ✔️ | ✔️ | | | Set camera | ✔️ | ✔️ | ✔️ | ✔️ | | | Get selected camera | ✔️ | ✔️ | ✔️ | ✔️ |-| | Get microphone list | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌<sup>2</sup> | -| | Set microphone | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | -| | Get selected microphone | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | -| | Get speakers list | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup>
| -| | Set speaker | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | -| | Get selected speaker | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | +| | Get microphone list | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | +| | Set microphone | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | +| | Get selected microphone | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | +| | Get speakers list | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | +| | Set speaker | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | +| | Get selected speaker | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> | | Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | ✔️ | ✔️ | ✔️ | | | Set / update scaling mode | ✔️ | ✔️ | ✔️ | ✔️ | | | Render remote video stream | ✔️ | ✔️ | ✔️ | ✔️ |+| Video Effects | [Background Blur](../../quickstarts/voice-video-calling/get-started-video-effects.md) | ✔️ | ✔️ | ✔️ | ✔️ | +| | Custom background image | ✔️ | ❌ | ❌ | ❌ | -1. The Share Screen capability can be achieved using Raw Media, if you want to learn, **how to add Raw Media**, visit [the quickstart guide](../../quickstarts/voice-video-calling/get-started-raw-media-access.md). ++1. The Share screen capability can be achieved using Raw Media. To learn **how to add Raw Media**, visit [the quickstart guide](../../quickstarts/voice-video-calling/get-started-raw-media-access.md). 2. The Calling SDK doesn't have an explicit API; you need to use the OS (Android and iOS) APIs to achieve it. ## UI Library The following table represents the set of supported browsers, which are currentl - Outgoing Screen Sharing isn't supported on iOS or Android mobile browsers. - Firefox support is in public preview.-- ACS only supports Android System WebView on Android, iOS WebView(WKWebView) in public preview.
Other types of embedded browsers or WebView on other OS platforms aren't officially supported, for example, GeckoView, Chromium Embedded Framework (CEF), Microsoft Edge WebView2.+- Currently, the calling SDK only supports Android System WebView on Android, iOS WebView(WKWebView) in public preview. Other types of embedded browsers or WebView on other OS platforms aren't officially supported, for example, GeckoView, Chromium Embedded Framework (CEF), Microsoft Edge WebView2. Running the JavaScript Calling SDK on these platforms isn't actively tested; it may or may not work. - [An iOS app on Safari can't enumerate/select mic and speaker devices](../known-issues.md#enumerating-devices-isnt-possible-in-safari-when-the-application-runs-on-ios-or-ipados) (for example, Bluetooth); this issue is a limitation of the OS, and there's always only one device; the OS controls default device selection. |
communication-services | Video Constraints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-constraints.md | Another benefit of the Video Constraints API is that it enables developers to op ACS Web Calling SDK supports setting the maximum video resolution, framerate, or bitrate that a client sends. The sender video constraints are supported on Desktop browsers (Chrome, Edge, Firefox) and when using iOS Safari mobile browser or Android Chrome mobile browser. -The ACS native Calling SDK (Android, iOS, Windows) supports setting the maximum values of video resolution and framerate for outgoing video streams and setting the maximum resolution for incoming video streams. These constraints can be set at the start of the call and during the call. +The native Calling SDK (Android, iOS, Windows) supports setting the maximum values of video resolution and framerate for outgoing video streams and setting the maximum resolution for incoming video streams. These constraints can be set at the start of the call and during the call. ## Supported constraints |
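Conceptually, a sender-side maximum constraint simply caps whatever the app requests. A toy sketch of that clamping (hypothetical helper names, not the SDK API; see the video constraints quickstart for the real surface):

```python
def clamp_to_constraints(requested: dict, max_allowed: dict) -> dict:
    """Cap each requested video parameter at its configured maximum, if one exists."""
    return {name: min(value, max_allowed.get(name, value)) for name, value in requested.items()}

# A 1080p/30fps request under a 720p height constraint is sent at 720p, with frame rate untouched.
print(clamp_to_constraints({"height": 1080, "frame_rate": 30}, {"height": 720}))
```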
communication-services | Add Voip Push Notifications Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-voip-push-notifications-event-grid.md | Title: Using Event Grid Notifications to send VOIP push payload to ANH.- -description: Using Event Grid Notification from Azure Communication Services Native Calling to Incoming VOIP payload to devices via ANH. + Title: Using Event Grid Notifications to send VOIP call events push payload to Azure Notification Hub ++description: Using Event Grid Notification from Azure Communication Services Native Calling to Incoming VOIP call events payload to devices via Azure Notification Hub Last updated 07/25/2023 -# Deliver VOIP Push Notification to Devices without ACS Calling SDK --This tutorial explains how to deliver VOIP push notifications to native applications without using the Azure Communication Services register push notifications API [here](../how-tos/calling-sdk/push-notifications.md). --## Current Limitations -The current limitations of using the ACS Native Calling SDK are that - * There's a 24-hour limit after the register push notification API is called when the device token information is saved. - After 24 hours, the device endpoint information is deleted. Any incoming calls on those devices will not be delivered to the devices if those devices don't call the register push notification API again. - * Can't deliver push notifications using Baidu or any other notification types supported by Azure Notification Hub but not yet supported in the ACS SDK. --## Setup for listening the events from Event Grid Notification -1. Azure functions with APIs - 1. Save device endpoint information. - 2. Delete device endpoint information. - 3. Get device endpoint information for a given `CommunicationIdentifier`. -2. Azure function API with EventGridTrigger that listens to the `Microsoft.Communication.IncomingCall` event from the Azure Communication resource. -3. 
Some kind of database like MongoDb to save the device endpoint information. -4. Azure Notification Hub to deliver the VOIP notifications. --## Steps to deliver the Push Notifications -Here are the steps to deliver the push notification: -1. Instead of calling the API `CallAgent.registerPushNotifications` with device token when the application starts, send the device token to the Azure function app. -2. When there's an incoming call for an ACS user, Azure Communication calling resource will trigger the `EventGridTrigger` Azure function API with the incoming call payload. -3. Get all the device token information from the database. -4. Convert the payload to how the VOIP push notification payload is by `PushNotificationInfo.fromDictionary` API like in iOS SDK. -5. Send the push payload using the REST API provided by Azure Notification Hub. -6. VOIP push is successfully delivered to the device and `CallAgent.handlePush` API should be called. +# Connect Calling Native Push Notification with Azure Event Grid ++With Azure Communication Services, you can receive real-time event notifications in a dependable, expandable, and safe way by integrating it with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). This integration can be used to build a notification system that sends push notifications to your users on mobile devices. To achieve it, create an Event Grid subscription that triggers an [Azure Function](../../azure-functions/functions-overview.md) or webhook. +++In this tutorial, we explore how to implement Azure Communication Services Calling with Azure Event Grid to receive push notifications on native platforms. Azure Event Grid is a serverless event routing service that makes it easy to build event-driven applications. This tutorial helps you set up and understand how to receive push notifications for incoming calls. 
++You can take a look at [voice and video calling events](https://learn.microsoft.com/azure/event-grid/communication-services-voice-video-events) available using Event Grid. ++## Current limitations with the Push Notification model ++The current limitations of using the Native Calling SDK and [Push Notifications](../how-tos/calling-sdk/push-notifications.md) are: ++* There's a **24-hour limit** after the register push notification API is called when the device token information is saved. After 24 hours, the device endpoint information is deleted. Any incoming calls on those devices can't be delivered to the devices if those devices don't call the register push notification API again. +* Can't deliver push notifications using Baidu or any other notification types supported by Azure Notification Hub but not yet supported in the Calling SDK. ++## Prerequisites ++* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* A deployed Communication Services resource. [Create a Communication Services resource](../quickstarts/create-communication-resource.md). +* A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../quickstarts/identity/access-tokens.md) +* [The Azure Event Grid topic](https://learn.microsoft.com/azure/event-grid/custom-event-quickstart-portal): Create an Azure Event Grid topic in your Azure subscription, it's used to send events when incoming calls occur. +* Optional: Complete the quickstart for [getting started with adding calling to your application](../quickstarts/voice-video-calling/getting-started-with-calling.md) +* Optional [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) to build your own serverless applications. For example, you can host your authentication application in Azure Functions. 
+* Optional: Review the quickstart to learn how to [handle voice and video calling events](../quickstarts/voice-video-calling/handle-calling-events.md). ++Let's consider a scenario where you want to notify users on their mobile devices (iOS and Android) when they receive an incoming call through Azure Communication Services. We use Azure Event Grid to achieve this. ++## Implementation steps ++### Set up event registration in Event Grid ++#### Azure functions to handle device information ++Use Azure Functions to handle device registration data. Create three separate webhook endpoints, one for each registration task: ++* Store the device endpoint information. +* Delete the device endpoint information. +* Get the device endpoint information for a given `CommunicationIdentifier`. ++You should use a database to store device information. In this example, we're using MongoDB for simplicity. However, feel free to use any database you feel comfortable with. ++**You can use the code from [this class](https://github.com/Azure-Samples/azure-communication-services-calling-event-grid/blob/main/add-calling-push-notifications-event-grid/ACSCallingNativeRegistrarLite/Functions/ACSCallingNativeDeviceTokenRegistrar.cs).** ++#### Azure function to deliver the notifications ++```csharp + // Read all the required settings. + var anhHubConnectionString = Environment.GetEnvironmentVariable("ANH_Hub_Connection_String"); + var anhHubName = Environment.GetEnvironmentVariable("ANH_Hub_Name"); + var anhHubUrl = Environment.GetEnvironmentVariable("ANH_Hub_URL"); + var anhHubApiVersion = Environment.GetEnvironmentVariable("ANH_Hub_Api_Version") ?? Defaults.ANH_DEFAULT_REST_API_VERSION; +++ // Generate the SAS token for calling the Azure Notification Hub REST API. + var authorization = GenerateToken(anhHubConnectionString, anhHubName); ++ // Create the payload to send to ANH. + PushNotificationInfo? pushNotificationInfo = Helpers.ConvertToPNInfo(input, logger) ??
throw new Exception("Could not extract PN info"); + var body = new RootPayloadBody(pushNotificationInfo); ++ // Send the payload to all the registered devices. + // `deviceInfo` (platform and device token) for each device comes from the database. ++ using var client = new HttpClient(); + client.DefaultRequestHeaders.Add("Accept", "application/json"); + client.DefaultRequestHeaders.Add("Authorization", authorization); + client.DefaultRequestHeaders.Add("ServiceBusNotification-Format", deviceInfo.platform); + client.DefaultRequestHeaders.Add("ServiceBusNotification-Type", deviceInfo.platform); + client.DefaultRequestHeaders.Add("ServiceBusNotification-DeviceHandle", deviceInfo.deviceToken); + if (deviceInfo.platform.Equals(Platform.apple.ToString())) + { + client.DefaultRequestHeaders.Add("ServiceBusNotification-Apns-Push-Type", "voip"); + } ++ var payload = JsonConvert.SerializeObject(body); // serialize the payload model built above + + using var httpContent = new StringContent(payload, Encoding.UTF8, "application/json"); + var httpResponse = await client.PostAsync(new Uri(anhHubUrl), httpContent).ConfigureAwait(false); + +``` ++**You can use the code from [this class](https://github.com/Azure-Samples/azure-communication-services-calling-event-grid/blob/main/add-calling-push-notifications-event-grid/ACSCallingNativeRegistrarLite/Functions/IncomingCallEventHandler.cs).** ++#### Azure function to handle Event Grid Trigger ++After deploying the Azure functions, configure the Event Grid and Azure Communication Services resource to listen for the `IncomingCall` event. You can follow [these steps](https://github.com/Azure-Samples/azure-communication-services-calling-event-grid/tree/main/add-calling-push-notifications-event-grid#steps) to easily configure your resources.
++### Register the Push Notifications ++In your native Calling app, instead of calling the `CallAgent.registerPushNotifications` API (iOS SDK) with the device token when the application starts, send the device token to the Azure function app with a **POST** request to the `AddDeviceToken` function (the first register endpoint). ++### Test your implementation ++Test your implementation by placing calls to your Azure Communication Services application. Ensure that push notifications are received on your iOS and Android devices when incoming calls occur. ++## Summary workflow ++1. When there's an incoming call for an Azure Communication Services user, the Azure Communication Services calling resource triggers the `EventGridTrigger`, and the Azure function is executed with the incoming call payload. +2. The Azure function gets the device token information from the database. +3. The Azure function converts the payload to the VOIP push notification payload using `PushNotificationInfo.fromDictionary` (iOS SDK). +4. The Azure function sends the push payload using the REST API provided by Azure Notification Hub. +5. The push is delivered to the device, and the `CallAgent.handlePush` API should be called. ++In this tutorial, you learned how to implement Azure Communication Services Calling with Azure Event Grid for push notifications. By integrating Calling with Event Grid and handling events in your native platform apps, you can notify users about incoming calls in real time. Azure Event Grid can enhance the user experience and improve communication within your application. ## Sample-The sample provided below works for any Native platforms (iOS, Android , Windows). ++The sample provided below works on all native platforms (iOS, Android, Windows).
The code sample is provided [here](https://github.com/Azure-Samples/azure-communication-services-calling-event-grid/tree/main/add-calling-push-notifications-event-grid).++## Next steps ++* Learn more about [event handling in Azure Communication Services](../../event-grid/event-schema-communication-services.md). +* Learn more about [notification alternatives in Azure Communication Services](../concepts/notifications.md). +* Learn more about [adding traditional push notifications in Azure Communication Services](../how-tos/calling-sdk/push-notifications.md). |
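The registration step in the tutorial above sends the device token to the Azure function app instead of calling `CallAgent.registerPushNotifications` directly. As a sketch, the client-side request might be built like this; the `AddDeviceToken` route and body fields are assumptions for illustration and should be matched to your deployed function app:

```javascript
// Hypothetical sketch: build the POST request that registers a device token
// with the AddDeviceToken Azure Function. The route and body shape are assumed,
// not taken from the official sample.
function buildRegistrationRequest(functionAppBaseUrl, communicationUserId, deviceToken, platform) {
  return {
    url: `${functionAppBaseUrl}/api/AddDeviceToken`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ communicationUserId, deviceToken, platform }),
    },
  };
}

// In the app, instead of CallAgent.registerPushNotifications:
//   const { url, options } = buildRegistrationRequest(base, userId, token, 'apple');
//   await fetch(url, options);
```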
communication-services | Click To Call Widget | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/widgets/click-to-call-widget.md | - Title: Tutorial - Embed a Teams call widget into your web application- -description: Learn how to use Azure Communication Services to embed a calling widget into your web application. ---- Previously updated : 04/17/2023------# Embed a Teams call widget into your web application --Enable your customers to talk with your support agent on Teams through a call interface directly embedded into your web application. ---## Prerequisites -- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).-- An active Communication Services resource and connection string. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).--## Set up an Azure Function to provide access tokens --Follow instructions from our [trusted user access service tutorial](../trusted-service-tutorial.md) to deploy your Azure Function for access tokens. This service returns an access token that our widget uses to authenticate to Azure Communication Services and start the call to the Teams user we define. --## Set up boilerplate vanilla web application --1. Create an HTML file named `index.html` and add the following code to it: -- ``` html -- <!DOCTYPE html> - <html> - <head> - <meta charset="utf-8"> - <title>Call Widget App - Vanilla</title> - <link rel="stylesheet" href="style.css"> - </head> - <body> - <div id="call-widget"> - <div id="call-widget-header"> - <div id="call-widget-header-title">Call Widget App</div> - <button class='widget'> ? 
</button> - <div class='callWidget'></div> - </div> - </div> - </body> - </html> -- ``` --2. Create a CSS file named `style.css` and add the following code to it: -- ``` css -- .widget { - height: 75px; - width: 75px; - position: absolute; - right: 0; - bottom: 0; - background-color: blue; - margin-bottom: 35px; - margin-right: 35px; - border-radius: 50%; - text-align: center; - vertical-align: middle; - line-height: 75px; - color: white; - font-size: 30px; - } -- .callWidget { - height: 400px; - width: 600px; - background-color: blue; - position: absolute; - right: 35px; - bottom: 120px; - z-index: 10; - display: none; - border-radius: 5px; - border-style: solid; - border-width: 5px; - } -- ``` --3. Configure the call window to be hidden by default. We show it when the user clicks the button. -- ``` html -- <script> - var open = false; - const button = document.querySelector('.widget'); - const content = document.querySelector('.callWidget'); - button.addEventListener('click', async function() { - if(!open){ - open = !open; - content.style.display = 'block'; - button.innerHTML = 'X'; - //Add code to initialize call widget here - } else if (open) { - open = !open; - content.style.display = 'none'; - button.innerHTML = '?'; - } - }); -- async function getAccessToken(){ - //Add code to get access token here - } - </script> -- ``` --At this point, we have set up a static HTML page with a button that opens a call widget when clicked. Next, we add the widget script code. It makes a call to our Azure Function to get the access token and then uses it to initialize our call client for Azure Communication Services and start the call to the Teams user we define. 
--## Fetch an access token from your Azure Function --Add the following code to the `getAccessToken()` function: --``` javascript -- async function getAccessToken(){ - const response = await fetch('https://<your-function-name>.azurewebsites.net/api/GetAccessToken?code=<your-function-key>'); - const data = await response.json(); - return data.token; - } --``` - -You need to add the URL of your Azure Function. You can find these values in the Azure portal under your Azure Function resource. ---## Initialize the call widget --1. Add a script tag to load the call widget script: -- ``` html -- <script src="https://github.com/ddematheu2/ACS-UI-Library-Widget/releases/download/widget/callComposite.js"></script> -- ``` --We provide a test script hosted on GitHub for you to use for testing. For production scenarios, we recommend hosting the script on your own CDN. For more information on how to build your own bundle, see [this article](https://azure.github.io/communication-ui-library/?path=/docs/use-composite-in-non-react-environment--page#build-your-own-composite-js-bundle-files). --2. Add the following code under the button event listener: -- ``` javascript -- button.addEventListener('click', async function() { - if(!open){ - open = !open; - content.style.display = 'block'; - button.innerHTML = 'X'; - let response = await getChatContext(); - console.log(response); - const callAdapter = await callComposite.loadCallComposite( - { - displayName: "Test User", - locator: { participantIds: ['INSERT USER UNIQUE IDENTIFIER FROM MICROSOFT GRAPH']}, - userId: response.user, - token: response.userToken - }, - content, - { - formFactor: 'mobile', - key: new Date() - } - ); - } else if (open) { - open = !open; - content.style.display = 'none'; - button.innerHTML = '?'; - } - }); -- ``` --Add a Microsoft Graph [User](/graph/api/resources/user) ID to the `participantIds` array. 
You can find this value through [Microsoft Graph](/graph/api/user-get?tabs=http) or through [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) for testing purposes. There you can grab the `id` value from the response. --## Run code --Open `index.html` in a browser. This code initializes the call widget when the button is clicked. It makes a call to our Azure Function to get the access token and then uses it to initialize our call client for Azure Communication Services and start the call to the Teams user we define. |
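The tutorial's `getAccessToken` assumes the Azure Function call always succeeds and always returns a `{ token }` body. A slightly more defensive variant (the status and shape checks here are suggestions, not part of the original tutorial) might look like:

```javascript
// Sketch: fetch the access token from the Azure Function, failing loudly on a
// non-2xx status or an unexpected response body. Assumes the function returns
// JSON of the shape { token: "..." } as in the tutorial.
async function getAccessToken(functionUrl) {
  const response = await fetch(functionUrl);
  if (!response.ok) {
    throw new Error(`Token service returned HTTP ${response.status}`);
  }
  const data = await response.json();
  if (typeof data.token !== 'string' || data.token.length === 0) {
    throw new Error('Token service response did not include a token');
  }
  return data.token;
}
```

Failing early here gives a clearer error than letting the call client fail later with an invalid credential.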
container-apps | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md | In addition, a workload profiles environment reserves the following addresses: ### User defined routes (UDR) -User Defined Routes (UDR) and controlled egress through NAT Gateway are supported in the workload profiles environment, which is in preview. In the Consumption only environment, these features aren't supported. +User Defined Routes (UDR) and controlled egress through NAT Gateway are supported in the workload profiles environment. In the Consumption only environment, these features aren't supported. > [!NOTE] > When using UDR with Azure Firewall in Azure Container Apps, you need to add certain FQDNs and service tags to the allowlist for the firewall. To learn more, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewall). |
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
container-apps | Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md | The *Is Configurable* column in the following tables denotes a feature maximum m | Cores | Replica | Up to maximum cores a workload profile supports | No | Maximum number of cores available to a revision replica. | | Cores | Environment | 100 | Yes | Maximum number of cores all Dedicated workload profiles in a Dedicated plan environment can accommodate. Calculated by the sum of cores available in each node of all workload profile in a Dedicated plan environment. | | Cores | General Purpose Workload Profiles | 100 | Yes | The total cores available to all general purpose (D-series) profiles within an environment. |-| Cores | Memory Optimized Workload Profiles | 100 | Yes | The total cores available to all memory optimised (E-series) profiles within an environment. | -| Cores | Compute Optimized Workload Profiles | 100 | Yes | The total cores available to all compute optimised (F-series) profiles within an environment. | +| Cores | Memory Optimized Workload Profiles | 50 | Yes | The total cores available to all memory optimised (E-series) profiles within an environment. | + For more information regarding quotas, see the [Quotas roadmap](https://github.com/microsoft/azure-container-apps/issues/503) in the Azure Container Apps GitHub repository. |
container-instances | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md | |
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
cosmos-db | How To Setup Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md | The following conditions are necessary to successfully restore a periodic backup ### How do customer-managed keys affect continuous backups? -Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must use a user-assigned managed identity in the Key Vault access policy. Azure Cosmos DB first-party identities or system-assigned managed identities aren't currently supported on accounts using continuous backups. +Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must use a system-assigned or user-assigned managed identity in the Key Vault access policy. Azure Cosmos DB first-party identities aren't currently supported on accounts using continuous backups. The following conditions are necessary to successfully perform a point-in-time restore: |
cosmos-db | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md | Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
cosmos-db | Product Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md | Updates that don't directly affect the internals of a cluster are rolled out g Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### September 2023+* Preview: [32 TiB storage per node for multi-node configurations](./resources-compute.md#multi-node-cluster) is now available in all supported regions. + * See [how to maximize IOPS on your cluster](./resources-compute.md#maximum-iops-for-your-compute--storage-configuration). * General availability: Azure Cosmos DB for PostgreSQL is now available in Australia Central, Canada East, and Qatar Central. * See [all supported regions](./resources-regions.md). Updates that change cluster internals, such as installing a [new minor PostgreSQ * See [all supported PostgreSQL versions](reference-versions.md). * See [this guidance](howto-upgrade.md) for the steps to upgrade your Azure Cosmos DB for PostgreSQL cluster to PostgreSQL 15. -- ### November 2022 * General availability: [Cross-region cluster read replicas](concepts-read-replicas.md) for improved read scalability and cross-region disaster recovery (DR). might have constrained capabilities. 
For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) +* [32 TiB storage per node in multi-node clusters](./resources-compute.md#multi-node-cluster) * [Azure Active Directory (Azure AD) authentication](./concepts-authentication.md#azure-active-directory-authentication-preview) * [Azure CLI support for Azure Cosmos DB for PostgreSQL](/cli/azure/cosmosdb/postgres) * Azure SDKs: [.NET](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDBForPostgreSql/1.0.0-beta.1), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/cosmosforpostgresql/armcosmosforpostgresql@v0.1.0), [Java](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-cosmosdbforpostgresql/1.0.0-beta.1/overview), [JavaScript](https://www.npmjs.com/package/@azure/arm-cosmosdbforpostgresql/v/1.0.0-beta.1), and [Python](https://pypi.org/project/azure-mgmt-cosmosdbforpostgresql/1.0.0b1/) |
cosmos-db | Resources Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-compute.md | +> [!IMPORTANT] +> 32 TiB storage in Azure Cosmos DB for PostgreSQL is currently in preview. +> This preview is provided without a service level agreement, and it's not recommended +> for production workloads. Certain features might not be supported or might have constrained +> capabilities. + Compute resources are provided as vCores, which represent the logical CPU of the underlying hardware. The storage size for provisioning refers to the capacity available to the coordinator and worker nodes in your cluster. The Postgres server logs. You can select the compute and storage settings independently for worker nodes and the coordinator node in a multi-node cluster. -| Resource | Worker node | Coordinator node | -|--|--|-| -| Compute, vCores | 4, 8, 16, 32, 64, 96, 104 | 4, 8, 16, 32, 64, 96 | -| Memory per vCore, GiB | 8 | 4 | -| Storage size, TiB | 0.5, 1, 2, 4, 8, 16 | 0.128, 0.25, 0.5, 1, 2, 4, 8, 16 | -| Storage type | General purpose (SSD) | General purpose (SSD) | +| Resource | Worker node | Coordinator node | +|--||| +| Compute, vCores | 4, 8, 16, 32, 64, 96, 104 | 4, 8, 16, 32, 64, 96 | +| Memory per vCore, GiB | 8 | 4 | +| Storage size, TiB | 0.5, 1, 2, 4, 8, 16, 32 (preview) | 0.128, 0.25, 0.5, 1, 2, 4, 8, 16, 32 (preview) | +| Storage type | General purpose (SSD) | General purpose (SSD) | The total amount of RAM in a single node is based on the selected number of vCores. available to each worker and coordinator node. 
| 4 | 7,500 | | 8 | 16,000 | | 16 | 18,000 |+| 32 (preview) | 20,000 | For the entire cluster, the aggregated IOPS work out to the following values: -| Worker nodes | 0.5 TiB, total IOPS | 1 TiB, total IOPS | 2 or 4 TiB, total IOPS | 8 TiB, total IOPS | 16 TiB, total IOPS | -|--||-||-|--| -| 2 | 4,600 | 10,000 | 15,000 | 32,000 | 36,000 | -| 3 | 6,900 | 15,000 | 22,500 | 48,000 | 54,000 | -| 4 | 9,200 | 20,000 | 30,000 | 64,000 | 72,000 | -| 5 | 11,500 | 25,000 | 37,500 | 80,000 | 90,000 | -| 6 | 13,800 | 30,000 | 45,000 | 96,000 | 108,000 | -| 7 | 16,100 | 35,000 | 52,500 | 112,000 | 126,000 | -| 8 | 18,400 | 40,000 | 60,000 | 128,000 | 144,000 | -| 9 | 20,700 | 45,000 | 67,500 | 144,000 | 162,000 | -| 10 | 23,000 | 50,000 | 75,000 | 160,000 | 180,000 | -| 11 | 25,300 | 55,000 | 82,500 | 176,000 | 198,000 | -| 12 | 27,600 | 60,000 | 90,000 | 192,000 | 216,000 | -| 13 | 29,900 | 65,000 | 97,500 | 208,000 | 234,000 | -| 14 | 32,200 | 70,000 | 105,000 | 224,000 | 252,000 | -| 15 | 34,500 | 75,000 | 112,500 | 240,000 | 270,000 | -| 16 | 36,800 | 80,000 | 120,000 | 256,000 | 288,000 | -| 17 | 39,100 | 85,000 | 127,500 | 272,000 | 306,000 | -| 18 | 41,400 | 90,000 | 135,000 | 288,000 | 324,000 | -| 19 | 43,700 | 95,000 | 142,500 | 304,000 | 342,000 | -| 20 | 46,000 | 100,000 | 150,000 | 320,000 | 360,000 | +| Worker nodes | 0.5 TiB, total IOPS | 1 TiB, total IOPS | 2 or 4 TiB, total IOPS | 8 TiB, total IOPS | 16 TiB, total IOPS | 32 TiB, total IOPS | +|--||-||-|--|--| +| 2 | 4,600 | 10,000 | 15,000 | 32,000 | 36,000 | 40,000 | +| 3 | 6,900 | 15,000 | 22,500 | 48,000 | 54,000 | 60,000 | +| 4 | 9,200 | 20,000 | 30,000 | 64,000 | 72,000 | 80,000 | +| 5 | 11,500 | 25,000 | 37,500 | 80,000 | 90,000 | 100,000 | +| 6 | 13,800 | 30,000 | 45,000 | 96,000 | 108,000 | 120,000 | +| 7 | 16,100 | 35,000 | 52,500 | 112,000 | 126,000 | 140,000 | +| 8 | 18,400 | 40,000 | 60,000 | 128,000 | 144,000 | 160,000 | +| 9 | 20,700 | 45,000 | 67,500 | 144,000 | 162,000 | 180,000 | +| 10 | 23,000 
| 50,000 | 75,000 | 160,000 | 180,000 | 200,000 | +| 11 | 25,300 | 55,000 | 82,500 | 176,000 | 198,000 | 220,000 | +| 12 | 27,600 | 60,000 | 90,000 | 192,000 | 216,000 | 240,000 | +| 13 | 29,900 | 65,000 | 97,500 | 208,000 | 234,000 | 260,000 | +| 14 | 32,200 | 70,000 | 105,000 | 224,000 | 252,000 | 280,000 | +| 15 | 34,500 | 75,000 | 112,500 | 240,000 | 270,000 | 300,000 | +| 16 | 36,800 | 80,000 | 120,000 | 256,000 | 288,000 | 320,000 | +| 17 | 39,100 | 85,000 | 127,500 | 272,000 | 306,000 | 340,000 | +| 18 | 41,400 | 90,000 | 135,000 | 288,000 | 324,000 | 360,000 | +| 19 | 43,700 | 95,000 | 142,500 | 304,000 | 342,000 | 380,000 | +| 20 | 46,000 | 100,000 | 150,000 | 320,000 | 360,000 | 400,000 | ## Single node cluster compute](concepts-burstable-compute.md) and regular compute. † 1024 GiB and 2048 GiB are supported with 8 vCores or greater. +## Maximum IOPS for your compute / storage configuration +Each compute configuration has an IOPS limit that depends on the number of vCores in a node. Make sure you select a compute configuration for the coordinator and worker nodes in your cluster that fully utilizes the IOPS of the selected storage. 
++**Worker nodes, per node** ++| Compute | Storage size to maximize IOPS usage, up to | IOPS with the max recommended storage size, up to | +||--|| +| 4 vCores | 1 TiB | 5,000 | +| 8 vCores | 4 TiB | 7,500 | +| 16 vCores | 32 TiB | 20,000 | +| 32 vCores | 32 TiB | 20,000 | +| 64 vCores | 32 TiB | 20,000 | +| 96 vCores | 32 TiB | 20,000 | +| 104 vCores | 32 TiB | 20,000 | ++**Coordinator and single node with regular compute** ++| Compute | Storage size to maximize IOPS usage, up to | IOPS with the max recommended storage size, up to | +||--|| +| 2 vCores | 0.5 TiB | 2,300 | +| 4 vCores | 1 TiB | 5,000 | +| 8 vCores | 4 TiB | 7,500 | +| 16 vCores | 32 TiB | 20,000 | +| 32 vCores | 32 TiB | 20,000 | +| 64 vCores | 32 TiB | 20,000 | +| 96 vCores | 32 TiB | 20,000 | ++To put it another way, if you need 8 TiB of storage per node or more, make sure you select 16 vCores or more for the node's compute configuration. That would allow you to maximize IOPS usage provided by the selected storage. + ## Next steps * Learn how to [create a cluster in the portal](quickstart-create-portal.md) |
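The aggregated IOPS values in the tables above are simply the per-node IOPS for a given storage size multiplied by the number of worker nodes. A small sketch that reproduces the published numbers (the storage-size-to-IOPS mapping below is derived from the per-node and aggregated tables in this article; 32 TiB is in preview):

```javascript
// Per-node IOPS by storage size in TiB, derived from the tables above.
const PER_NODE_IOPS = { 0.5: 2300, 1: 5000, 2: 7500, 4: 7500, 8: 16000, 16: 18000, 32: 20000 };

// Aggregated cluster IOPS = worker count x per-node IOPS for that storage size.
function clusterIops(workerNodes, storageTiB) {
  const perNode = PER_NODE_IOPS[storageTiB];
  if (perNode === undefined) throw new Error(`Unsupported storage size: ${storageTiB} TiB`);
  return workerNodes * perNode;
}
```

For example, `clusterIops(2, 0.5)` reproduces the 4,600 total IOPS shown for two worker nodes with 0.5 TiB storage each.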
cost-management-billing | Understand Rhel Reservation Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-rhel-reservation-charges.md | For example, if you buy a plan for Red Hat Linux Enterprise Server for a VM with - 1 deployed VMs with 1 to 4 vCPUs, - or 0.46 or about 46% of Red Hat Enterprise Linux costs for a VM with 5 or more vCPUs. -For more in formation to [Review SUSE VM usage before you buy](understand-suse-reservation-charges.md#review-suse-vm-usage-before-you-buy) +For more information, see [Review RedHat VM usage before you buy](understand-suse-reservation-charges.md#review-redhat-vm-usage-before-you-buy). ## Next steps |
data-factory | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md | |
data-lake-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
data-lake-store | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
databox-online | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md | Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
databox | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md | Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
ddos-protection | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md | |
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
defender-for-iot | How To Troubleshoot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-sensor.md | Use the **Cloud connectivity troubleshooting** page in your OT sensor to learn m - From the sensor's **Overview** page, select the **Troubleshoot*** link in the error at the top of the page - Select **System settings > Sensor management > Health and troubleshooting > Cloud connectivity troubleshooting** -The **Cloud connectivity troubleshooting** pane opens on the right. If the sensor is connected to the Azure portal, the pane indicates that **The sensor is connected to cloud successfully**. If the sensor isn't connected, a description of the issue and any mitigation instructions are listed instead. For example: <!--need new image--> +The **Cloud connectivity troubleshooting** pane opens on the right. If the sensor is connected to the Azure portal, the pane indicates that **The sensor is connected to cloud successfully**. If the sensor isn't connected, a description of the issue and any mitigation instructions are listed instead. For example: :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/connectivity-troubleshooting.png" alt-text="Screenshot of the Connectivity troubleshooting pane."::: |
event-grid | Azure Active Directory Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/azure-active-directory-events.md | Title: Azure Active Directory events description: This article describes Azure AD event types and provides event samples. Previously updated : 06/09/2022 Last updated : 09/19/2023 # Azure Active Directory events Last updated 06/09/2022 This article provides the properties and schema for Azure Active Directory (Azure AD) events, which are published by Microsoft Graph API. For an introduction to event schemas, see [CloudEvents schema](cloud-event-schema.md). ## Available event types-These events are triggered when a [User](/graph/api/resources/user) or [Group](/graph/api/resources/group) is created, updated or deleted in Azure AD or by operating over those resources using Microsoft Graph API. +These events are triggered when a [User](/graph/api/resources/user) or [Group](/graph/api/resources/group) is created, updated, or deleted in Azure AD or by operating over those resources using Microsoft Graph API. | Event name | Description | | - | -- | The data object has the following properties: | `@odata.id` | string | The Graph API resource identifier for which the event was raised. | | `id` | string | The resource identifier for which the event was raised. | | `organizationId` | string | The Azure AD tenant identifier. |-| `eventTime` | string | The time at which the resource state occurred. | +| `eventTime` | string | The time when the resource state changed. | | `sequenceNumber` | string | A sequence number. | | `subscriptionExpirationDateTime` | string | The time in [RFC 3339](https://tools.ietf.org/html/rfc3339) format at which the Graph API subscription expires. | | `subscriptionId` | string | The Graph API subscription identifier. 
| The data object has the following properties: * For information on how to subscribe to Microsoft Graph API to receive Azure AD events, see [subscribe to Azure Graph API events](subscribe-to-graph-api-events.md). * For information about Azure Event Grid event handlers, see [event handlers](event-handlers.md). * For more information about creating an Azure Event Grid subscription, see [create event subscription](subscribe-through-portal.md#create-event-subscriptions) and [Event Grid subscription schema](subscription-creation-schema.md).-* For information about how to configure an event subscription to select specific events to be delivered, consult [event filtering](event-filtering.md). You may also want to refer to [filter events](how-to-filter-events.md). +* For information about how to configure an event subscription to select specific events to be delivered, see [event filtering](event-filtering.md). You may also want to refer to [filter events](how-to-filter-events.md). |
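Putting the data-object properties above together, a CloudEvents-formatted Azure AD event might look like the following sketch. The envelope fields (`type`, `source`) and all identifier values are hypothetical placeholders for illustration only; confirm the exact shape against the published event samples:

```json
{
  "specversion": "1.0",
  "type": "Microsoft.Graph.UserUpdated",
  "source": "/tenants/00000000-0000-0000-0000-000000000000",
  "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
  "time": "2023-09-19T18:30:00Z",
  "data": {
    "@odata.id": "Users/11111111-1111-1111-1111-111111111111",
    "id": "11111111-1111-1111-1111-111111111111",
    "organizationId": "00000000-0000-0000-0000-000000000000",
    "eventTime": "2023-09-19T18:30:00Z",
    "sequenceNumber": "1",
    "subscriptionExpirationDateTime": "2023-09-22T18:30:00Z",
    "subscriptionId": "22222222-2222-2222-2222-222222222222"
  }
}
```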
event-grid | Event Schema Communication Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-communication-services.md | Title: Azure Communication Services as an Event Grid source - Overview description: This article describes how to use Azure Communication Services as an Event Grid event source. Previously updated : 06/11/2021 Last updated : 09/19/2023 # Event Handling in Azure Communication Services -Azure Communication Services integrates with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to deliver real-time event notifications in a reliable, scalable and secure manner. The purpose of this article is to help you configure your applications to listen to Communication Services events. For example, you may want to update a database, create a work item and deliver a push notification whenever an SMS message is received by a phone number associated with your Communication Services resource. +Azure Communication Services integrates with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to deliver real-time event notifications in a reliable, scalable, and secure manner. The purpose of this article is to help you configure your applications to listen to Communication Services events. For example, you may want to update a database, create a work item, and deliver a push notification whenever an SMS message is received by a phone number associated with your Communication Services resource. Azure Event Grid is a fully managed event routing service, which uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions](../azure-functions/functions-overview.md) and [Azure Logic Apps](../logic-apps/logic-apps-overview.md). It can deliver event alerts to non-Azure services using webhooks. For a complete list of the event handlers that Event Grid supports, see [An introduction to Azure Event Grid](overview.md). 
You can use the Azure portal or Azure CLI to subscribe to events emitted by your ## Event subjects -The `subject` field of all Communication Services events identifies the user, phone number or entity that is targeted by the event. Common prefixes are used to allow simple [Event Grid Filtering](event-filtering.md). +The `subject` field of all Communication Services events identifies the user, phone number, or entity that is targeted by the event. Common prefixes are used to allow simple [Event Grid filtering](event-filtering.md). | Subject Prefix | Communication Service Entity | | - | - | |
event-grid | Handler Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-event-hubs.md | Title: Event hub as an event handler for Azure Event Grid events description: Describes how you can use event hubs as event handlers for Azure Event Grid events. Previously updated : 09/30/2021 Last updated : 09/19/2023 # Event hub as an event handler for Azure Event Grid events See the following examples: ``` ## Delivery properties-Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set custom headers on the events that are delivered to Azure Event Hubs. +Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that the destination requires. You can set custom headers on the events that are delivered to Azure Event Hubs. If you need to publish events to a specific partition within an event hub, set the `PartitionKey` property on your event subscription to specify the partition key that identifies the target event hub partition. |
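The `PartitionKey` guidance above can be sketched as an event subscription fragment. This is a hedged example of the delivery-attribute shape; the resource IDs are placeholders and the exact schema should be confirmed against the Event Grid REST API reference for your API version:

```json
{
  "properties": {
    "destination": {
      "endpointType": "EventHub",
      "properties": {
        "resourceId": "/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.EventHub/namespaces/{ns}/eventhubs/{hub}",
        "deliveryAttributeMappings": [
          {
            "name": "PartitionKey",
            "type": "Static",
            "properties": {
              "value": "device-group-1",
              "isSecret": false
            }
          }
        ]
      }
    }
  }
}
```

A static mapping like this sends every delivered event to the same target partition; a dynamic mapping (sourced from an event field) would spread events across partitions by key instead.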
event-grid | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md | Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
event-hubs | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md | Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
expressroute | Expressroute Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md | The following table shows locations by service provider. If you want to view ava | **Claro** |Supported |Supported | Miami | | **Cloudflare** |Supported |Supported | Los Angeles | | **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** |Supported |Supported | Chicago<br/>Dallas<br/>Minneapolis<br/>Montreal<br/>Toronto<br/>Vancouver<br/>Washington DC |-| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Berlin<br/>Chicago<br/>Dublin<br/>Frankfurt<br/>Geneva<br/>Hong Kong<br/>London<br/>London2<br/>Marseille<br/>Milan<br/>Munich<br/>Newport<br/>Osaka<br/>Paris<br/>Seoul<br/>Silicon Valley<br/>Singapore2<br/>Tokyo<br/>Tokyo2<br/>Washington DC<br/>Zurich | -| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported | Chicago<br/>Silicon Valley<br/>Washington DC | -| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported | Chicago<br/>Chicago2<br/>Denver<br/>Los Angeles<br/>New York<br/>Silicon Valley<br/>Silicon Valley2<br/>Washington DC<br/>Washington DC2 | -| **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** |Supported |Supported | Dallas<br/>Phoenix<br/>Silicon Valley<br/>Washington DC | -| **Crown Castle** |Supported |Supported | New York | | **[Cirion Technologies](https://lp.ciriontechnologies.com/cloud-connect-lp-latam?c_campaign=HOTSITE&c_tactic=&c_subtactic=&utm_source=SOLUCIONES-CTA&utm_medium=Organic&utm_content=&utm_term=&utm_campaign=HOTSITE-ESP)** | Supported | Supported | Bogota<br/>Queretaro<br/>Rio De Janeiro | | **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported | Miami | | **Cloudflare** |Supported |Supported | 
Los Angeles | |
firewall | Logs And Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/logs-and-metrics.md | The following metrics are available for Azure Firewall: - **AZFW Latency Probe** - Estimates Azure Firewall average latency. - Unit: m/s + Unit: ms - This metric measures the overall or average latency of Azure Firewall. Administrators can use this metric for the following purposes: + This metric measures the overall or average latency of Azure Firewall in milliseconds. Administrators can use this metric for the following purposes: - Diagnose if Azure Firewall is the cause of latency in the network The following metrics are available for Azure Firewall: This metric doesn't measure end-to-end latency of a given network path. In other words, this latency health probe doesn't measure how much latency Azure Firewall adds. - When the latency metric isn't functioning as expected, a value of 0 appears in the metrics dashboard.- - As a reference, the average expected latency for a firewall is approximately 1 m/s. This may vary depending on deployment size and environment. + - As a reference, the average expected latency for a firewall is approximately 1 ms. This may vary depending on deployment size and environment. - The latency probe is based on Microsoft's Ping Mesh technology. So, intermittent spikes in the latency metric are to be expected. These spikes are normal and don't signal an issue with the Azure Firewall. They're part of the standard host networking setup that supports the system. As a result, if you experience consistent high latency that lasts longer than typical spikes, consider filing a Support ticket for assistance. |
frontdoor | Rules Match Conditions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rules-match-conditions.md | In this example, we match all requests where the request uses the `HTTP` protoco Identifies requests that match the specified URL. The entire URL is evaluated, including the protocol and query string, but not the fragment. You can specify multiple values to match, which will be combined using OR logic. > [!TIP]-> When you use this rule condition, be sure to include the protocol. For example, use `https://www.contoso.com` instead of just `www.contoso.com`. +> When you use this rule condition, be sure to include the protocol and a trailing forward slash `/`. For example, use `https://www.contoso.com/` instead of just `www.contoso.com`. ### Properties |
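As a sketch, a full-URL match condition using the article's example value might look like the following. The property names here (`matchVariable`, `operator`, `matchValue`, `negateCondition`) are assumptions based on the common rules engine schema, so verify them against the reference for your Front Door tier before use; note the protocol and trailing slash in the match value:

```json
{
  "matchVariable": "RequestUri",
  "operator": "Equal",
  "matchValue": [
    "https://www.contoso.com/"
  ],
  "negateCondition": false
}
```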
governance | Remediation Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/remediation-structure.md | resource hierarchy or individual resource to remediate. ## Policy definition ID -If the `policyAssignmentId` is for an initiative assignment, the **policyDefinitionReferenceId** property must be used to specify which policy definition(s) in the initiative the subject resource(s) are to be remediated. As a remediation can only remediation in a scope of one definition, -this property is a _string_. The value must match the value in the initiative definition in the -`policyDefinitions.policyDefinitionReferenceId` field. +If the `policyAssignmentId` is for an initiative assignment, the **policyDefinitionReferenceId** property must be used to specify the policy definition in the initiative that remediates the subject resource(s). Because a remediation task can remediate within the scope of only one definition, +this property is a _string_ and not an array. The value must match the value in the initiative definition's +`policyDefinitions.policyDefinitionReferenceId` field, not the policy definition's global `Id`. ## Resource count and parallel deployments |
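For example, a remediation task created against an initiative assignment carries both properties, as in this sketch (the assignment path and reference ID are hypothetical placeholders):

```json
{
  "properties": {
    "policyAssignmentId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/myInitiativeAssignment",
    "policyDefinitionReferenceId": "requiredTags"
  }
}
```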
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 09/13/2023 Last updated : 09/19/2023 The name of each built-in links to the policy definition in the Azure portal. Us [!INCLUDE [azure-policy-reference-policies-azure-stack-edge](../../../../includes/policy/reference/bycat/policies-azure-stack-edge.md)] +## Azure Update Manager ++ ## Backup [!INCLUDE [azure-policy-reference-policies-backup](../../../../includes/policy/reference/bycat/policies-backup.md)] The name of each built-in links to the policy definition in the Azure portal. Us [!INCLUDE [azure-policy-reference-policies-tags](../../../../includes/policy/reference/bycat/policies-tags.md)] -## Update Management Center -- ## Video Analyzers [!INCLUDE [azure-policy-reference-policies-video-analyzers](../../../../includes/policy/reference/bycat/policies-video-analyzers.md)] |
hdinsight | Find Host Name | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/find-host-name.md | -HDInsight cluster is created with public DNS `clustername.azurehdinsight.net`. When you SSH to individual nodes or set connection to cluster nodes with in the same custom virtual network, you need to use host name, or fully qualified domain names (FQDN) of cluster nodes. +An HDInsight cluster is created with the public DNS name `clustername.azurehdinsight.net`. When you SSH to individual nodes or set up a connection to cluster nodes within the same custom virtual network, you need to use the host names or fully qualified domain names (FQDNs) of the cluster nodes. In this article, you learn how to get the host names of cluster nodes. You can get them manually through the Ambari Web UI or automatically through the Ambari REST API. |
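As a hedged sketch of the REST approach, the standard Ambari v1 hosts endpoint can be queried through the cluster's public DNS name. The cluster name, the sample response, and the node FQDNs below are illustrative assumptions, not values from the article:

```shell
# Against a real cluster (CLUSTERNAME is a placeholder; you're prompted for
# the Ambari admin password):
#   curl -u admin "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters/CLUSTERNAME/hosts"
# The response contains one item per node. A sample (hypothetical) response
# looks like this; the grep/cut pipeline extracts each node's FQDN:
sample='{"items":[{"Hosts":{"host_name":"hn0-mycluster.ax7.internal.cloudapp.net"}},{"Hosts":{"host_name":"wn0-mycluster.ax7.internal.cloudapp.net"}}]}'
echo "$sample" | grep -o '"host_name":"[^"]*"' | cut -d'"' -f4
```

The same extraction works on the live response, since Ambari returns each host under `items[].Hosts.host_name`.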
hdinsight | Apache Hbase Migrate New Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-migrate-new-version.md | description: Learn how to migrate Apache HBase clusters in Azure HDInsight to a Previously updated : 08/26/2022 Last updated : 09/19/2023 # Migrate an Apache HBase cluster to a new version |
hdinsight | Hbase Troubleshoot Unassigned Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-unassigned-regions.md | Title: Issues with region servers in Azure HDInsight description: Issues with region servers in Azure HDInsight Previously updated : 08/30/2022 Last updated : 09/19/2023 # Issues with region servers in Azure HDInsight |
hdinsight | Kafka Troubleshoot Full Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/kafka-troubleshoot-full-disk.md | |
hdinsight | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md | Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
healthcare-apis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md | Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
iot-hub-device-update | Device Update Ubuntu Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-ubuntu-agent.md | For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/usi 1. Open the configuration file with the command below (see how to [set up the configuration file](device-update-configuration-file.md)). Set your `connectionType` as 'AIS' and `connectionData` as an empty string. Note that all values with the 'Place value here' tag must be set. See [Configuring a DU agent](./device-update-configuration-file.md#example-du-configjson-file-contents). ```bash- sudo /etc/adu/du-config.json + sudo nano /etc/adu/du-config.json ``` 1. Restart the Device Update agent. |
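The resulting `du-config.json` might look like the following sketch. Only the `connectionType` of 'AIS' and the empty `connectionData` come from the step above; the remaining fields and values are assumptions modeled on the example file the article links to, with 'Place value here' marking values you must set:

```json
{
  "schemaVersion": "1.1",
  "aduShellTrustedUsers": [ "adu", "do" ],
  "manufacturer": "Place value here",
  "model": "Place value here",
  "agents": [
    {
      "name": "main",
      "runas": "adu",
      "connectionSource": {
        "connectionType": "AIS",
        "connectionData": ""
      },
      "manufacturer": "Place value here",
      "model": "Place value here"
    }
  ]
}
```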
iot-hub | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md | Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
key-vault | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md | Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
lab-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md | Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
lighthouse | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md | Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
logic-apps | Monitor Logic Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps.md | Title: Monitor status, view history, and set up alerts -description: Troubleshoot logic apps by checking run status, reviewing trigger history, and enabling alerts in Azure Logic Apps. + Title: Monitor workflow status, view run history, and set up alerts +description: Check your workflow run status, review trigger and workflow run history, and enable alerts in Azure Logic Apps. ms.suite: integration Previously updated : 08/01/2022 Last updated : 09/29/2023 -# Monitor run status, review trigger history, and set up alerts for Azure Logic Apps +# Monitor workflow run status, review trigger and workflow run history, and set up alerts in Azure Logic Apps -> [!NOTE] -> This article applies only to Consumption logic apps. For information about reviewing run status and monitoring for Standard logic apps, -> see the following sections in [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md): -> [Review run history](create-single-tenant-workflows-azure-portal.md#review-run-history), [Review trigger history](create-single-tenant-workflows-azure-portal.md#review-trigger-history), and [Enable or open Application Insights after deployment](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights). +After you create and run a logic app workflow, you can check that workflow's run status, trigger history, workflow run history, and performance. ++This guide shows how to perform the following tasks: ++- [Review trigger history](#review-trigger-history). +- [Review workflow run history](#review-runs-history). +- [Set up alerts](#add-azure-alerts) to get notifications about failures or other possible problems. For example, you can create an alert that detects "when more than five runs fail in an hour". 
++To monitor and review the workflow run status for Standard workflows, see the following sections in [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md): -After you create and run a [Consumption logic app workflow](quickstart-create-example-consumption-workflow.md), you can check that workflow's run status, [trigger history](#review-trigger-history), [runs history](#review-runs-history), and performance. To get notifications about failures or other possible problems, set up [alerts](#add-azure-alerts). For example, you can create an alert that detects "when more than five runs fail in an hour." +- [Review trigger history](create-single-tenant-workflows-azure-portal.md#review-trigger-history) +- [Review workflow run history](create-single-tenant-workflows-azure-portal.md#review-run-history). +- [Enable or open Application Insights after deployment](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights). -For real-time event monitoring and richer debugging, set up diagnostics logging for your logic app by using [Azure Monitor logs](../azure-monitor/overview.md). This Azure service helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. You can then find and view events, such as trigger events, run events, and action events. By storing this information in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and analyze this information. You can also use this diagnostic data with other Azure services, such as Azure Storage and Azure Event Hubs. For more information, see [Monitor logic apps by using Azure Monitor](monitor-workflows-collect-diagnostic-data.md). 
+For real-time event monitoring and richer debugging, you can set up diagnostics logging for your logic app workflow by using [Azure Monitor logs](../azure-monitor/overview.md). This Azure service helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. You can then find and view events, such as trigger events, run events, and action events. By storing this information in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and analyze this information. You can also use this diagnostic data with other Azure services, such as Azure Storage and Azure Event Hubs. For more information, see [Monitor logic apps by using Azure Monitor](monitor-workflows-collect-diagnostic-data.md). > [!NOTE]-> If your logic apps run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) +> +> If your workflow runs in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) > that was created to use an [internal access endpoint](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access), > you can view and access inputs and outputs from a workflow runs history *only from inside your virtual network*. Make sure that you have network > connectivity between the private endpoints and the computer from where you want to access runs history. For example, your client computer can exist Each workflow run starts with a trigger, which either fires on a schedule or wai ### [Consumption](#tab/consumption) -1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer. +1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. 
To find your logic app, in the portal search box, enter **logic apps**, and then select **Logic apps**. - ![Screenshot showing the Azure portal main search box with "logic apps" entered and "Logic apps" selected.](./media/monitor-logic-apps/find-your-logic-app.png) + The Azure portal shows all the logic app resources in your Azure subscription. You can filter this list based on name, subscription, resource group, location, and so on. - The Azure portal shows all the logic apps in your Azure subscription. You can filter this list based on name, subscription, resource group, location, and so on. +1. Select your logic app resource. On your logic app menu, select **Overview**. On the **Overview** pane, select **Trigger history**. - ![Screenshot showing the Azure portal with all logic apps associated with selected Azure subscriptions.](./media/monitor-logic-apps/logic-apps-list-in-subscription.png) + ![Screenshot shows Overview pane for Consumption logic app workflow with selected option named Trigger history.](./media/monitor-logic-apps/overview-logic-app-trigger-history-consumption.png) -1. Select your logic app. On your logic app's menu, select **Overview**. On the Overview pane, select **Trigger history**. -- ![Screenshot showing "Overview" pane for a Consumption logic app workflow with "Trigger history" selected.](./media/monitor-logic-apps/overview-logic-app-trigger-history-consumption.png) -- Under **Trigger history**, all trigger attempts appear. Each time the trigger successfully fires, Azure Logic Apps creates an individual workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. If your workflow triggers for multiple events or items at the same time, a trigger entry appears for each item with the same date and time. + Under **Trigger history**, all trigger attempts appear. 
Each time the trigger successfully fires, Azure Logic Apps creates an individual workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. If your workflow triggers for multiple events or items at the same time, a trigger entry appears for each item with the same date and time. - ![Screenshot showing "Overview" pane for a Consumption logic app workflow with multiple trigger attempts for different items.](./media/monitor-logic-apps/logic-app-triggers-history-consumption.png) + ![Screenshot shows Overview pane with Consumption logic app workflow and multiple trigger attempts for different items.](./media/monitor-logic-apps/logic-app-triggers-history-consumption.png) The following table lists the possible trigger statuses: | Trigger status | Description | |-|-|- | **Failed** | An error occurred. To review any generated error messages for a failed trigger, select that trigger attempt and choose **Outputs**. For example, you might find inputs that aren't valid. | + | **Failed** | An error occurred. To review any generated error messages for a failed trigger, select that trigger attempt, and choose **Outputs**. For example, you might find inputs that aren't valid. | | **Skipped** | The trigger checked the endpoint but found no data that met the specified criteria. | | **Succeeded** | The trigger checked the endpoint and found available data. Usually, a **Fired** status also appears alongside this status. If not, the trigger definition might have a condition or `SplitOn` command that wasn't met. <br><br>This status can apply to a manual trigger, recurrence-based trigger, or polling trigger. A trigger can run successfully, but the run itself might still fail when the actions generate unhandled errors. |- ||| > [!TIP] > Each workflow run starts with a trigger, which either fires on a schedule or wai 1. To view information about a specific trigger attempt, select that trigger event. 
- ![Screenshot showing the Consumption workflow trigger entry selected.](./media/monitor-logic-apps/select-trigger-event-for-review.png) + ![Screenshot shows Consumption workflow trigger entry selected.](./media/monitor-logic-apps/select-trigger-event-for-review.png) If the list shows many trigger attempts, and you can't find the entry that you want, try filtering the list. If you don't find the data that you expect, try selecting **Refresh** on the toolbar. You can now review information about the selected trigger event, for example: - ![Screenshot showing the selected Consumption workflow trigger history information.](./media/monitor-logic-apps/view-specific-trigger-details.png) + ![Screenshot shows selected Consumption workflow trigger history information.](./media/monitor-logic-apps/view-specific-trigger-details.png) ### [Standard](#tab/standard) -1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer. +1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. To find your logic app, in the portal search box, enter **logic apps**, and then select **Logic apps**. - ![Screenshot showing the Azure portal search box with "logic apps" entered and "Logic apps" selected.](./media/monitor-logic-apps/find-your-logic-app.png) -- The Azure portal shows all the logic apps in your Azure subscription. You can filter this list based on name, subscription, resource group, location, and so on. + The Azure portal shows all the logic app resources in your Azure subscription. You can filter this list based on name, subscription, resource group, location, and so on. - ![Screenshot showing Azure portal with all logic apps associated with selected Azure subscriptions.](./media/monitor-logic-apps/logic-apps-list-in-subscription.png) +1. Select your logic app resource. On your logic app menu, select **Overview**. On the **Overview** pane, select **Trigger history**. -1. Select your logic app. 
On your logic app's menu, select **Overview**. On the Overview pane, select **Trigger history**. -- ![Screenshot showing Overview pane with "Trigger history" selected.](./media/monitor-logic-apps/overview-logic-app-trigger-history-standard.png) + ![Screenshot shows Overview pane for Standard logic app with selected option named Trigger history.](./media/monitor-logic-apps/overview-logic-app-trigger-history-standard.png) Under **Trigger history**, all trigger attempts appear. Each time the trigger successfully fires, Azure Logic Apps creates an individual workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. If your workflow triggers for multiple events or items at the same time, a trigger entry appears for each item with the same date and time. - ![Screenshot showing Overview pane with multiple trigger attempts for different items.](./media/monitor-logic-apps/logic-app-triggers-history-standard.png) + ![Screenshot shows Overview pane with Standard logic app workflow and multiple trigger attempts for different items.](./media/monitor-logic-apps/logic-app-triggers-history-standard.png) The following table lists the possible trigger statuses: Each workflow run starts with a trigger, which either fires on a schedule or wai | **Failed** | An error occurred. To review any generated error messages for a failed trigger, select that trigger attempt and choose **Outputs**. For example, you might find inputs that aren't valid. | | **Skipped** | The trigger checked the endpoint but found no data that met the specified criteria. | | **Succeeded** | The trigger checked the endpoint and found available data. Usually, a **Fired** status also appears alongside this status. If not, the trigger definition might have a condition or `SplitOn` command that wasn't met. <br><br>This status can apply to a manual trigger, recurrence-based trigger, or polling trigger. 
A trigger can run successfully, but the run itself might still fail when the actions generate unhandled errors. |- ||| > [!TIP] > Each workflow run starts with a trigger, which either fires on a schedule or wai 1. To view information about a specific trigger attempt, select that trigger event. - ![Screenshot showing a Standard workflow trigger entry selected.](./media/monitor-logic-apps/select-trigger-event-for-review-standard.png) + ![Screenshot shows Standard workflow trigger entry selected.](./media/monitor-logic-apps/select-trigger-event-for-review-standard.png) If the list shows many trigger attempts, and you can't find the entry that you want, try filtering the list. If you don't find the data that you expect, try selecting **Refresh** on the toolbar. 1. Check the trigger's inputs to confirm that they appear as you expect. On the **History** pane, under **Inputs link**, select the link, which shows the **Inputs** pane. - ![Screenshot showing Standard logic app workflow trigger inputs.](./media/monitor-logic-apps/review-trigger-inputs-standard.png) + ![Screenshot shows Standard workflow trigger inputs.](./media/monitor-logic-apps/review-trigger-inputs-standard.png) 1. Check the triggers outputs, if any, to confirm that they appear as you expect. On the **History** pane, under **Outputs link**, select the link, which shows the **Outputs** pane. Each workflow run starts with a trigger, which either fires on a schedule or wai For example, the RSS trigger generated an error message that states that the RSS feed wasn't found. 
- ![Screenshot showing Standard logic app workflow trigger outputs.](./media/logic-apps-diagnosing-failures/review-trigger-outputs-standard.png) + ![Screenshot shows Standard workflow trigger outputs.](./media/logic-apps-diagnosing-failures/review-trigger-outputs-standard.png) Each workflow run starts with a trigger, which either fires on a schedule or wai ## Review workflow run history -Each time the trigger successfully fires, Azure Logic Apps creates a workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. You can review what happened during each run, including the status, inputs, and outputs for each step in the workflow. +Each time a trigger successfully fires, Azure Logic Apps creates a workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. You can review what happened during each run, including the status, inputs, and outputs for each step in the workflow. ### [Consumption](#tab/consumption) -1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer. -- To find your logic app, in the main Azure search box, enter **logic apps**, and then select **Logic apps**. +1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. - ![Screenshot showing Azure portal main search box with "logic apps" entered and "Logic apps" selected.](./media/monitor-logic-apps/find-your-logic-app.png) + To find your logic app, in the Azure search box, enter **logic apps**, and then select **Logic apps**. - The Azure portal shows all the logic apps that are associated with your Azure subscriptions. You can filter this list based on name, subscription, resource group, location, and so on. + The Azure portal shows all the logic apps in your Azure subscriptions. 
You can filter this list based on name, subscription, resource group, location, and so on. - ![Screenshot showing all the logic apps in selected Azure subscriptions.](./media/monitor-logic-apps/logic-apps-list-in-subscription.png) --1. Select your logic app. On your logic app's menu, select **Overview**. On the Overview pane, select **Runs history**. +1. Select your logic app resource. On your logic app menu, select **Overview**. On the **Overview** pane, select **Runs history**. Under **Runs history**, all the past, current, and any waiting runs appear. If the trigger fires for multiple events or items at the same time, an entry appears for each item with the same date and time. - ![Screenshot showing Consumption logic app workflow "Overview" pane with "Runs history" selected.](./media/monitor-logic-apps/overview-logic-app-runs-history-consumption.png) + ![Screenshot shows Consumption workflow and Overview pane with selected option for Runs history.](./media/monitor-logic-apps/overview-logic-app-runs-history-consumption.png) The following table lists the possible run statuses: Each time the trigger successfully fires, Azure Logic Apps creates a workflow in | **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. | | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <br><br>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. 
Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. | | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |- ||| 1. To review the steps and other information for a specific run, under **Runs history**, select that run. If the list shows many runs, and you can't find the entry that you want, try filtering the list. Each time the trigger successfully fires, Azure Logic Apps creates a workflow in > If the run status doesn't appear, try refreshing the overview pane by selecting **Refresh**. > No run happens for a trigger that's skipped due to unmet criteria or finding no data. - ![Screenshot showing the Consumption logic app workflow run selected.](./media/monitor-logic-apps/select-specific-logic-app-run-consumption.png) + ![Screenshot shows Consumption workflow run selected.](./media/monitor-logic-apps/select-specific-logic-app-run-consumption.png) The **Logic app run** pane shows each step in the selected run, each step's run status, and the time taken for each step to run, for example: - ![Screenshot showing each action in the selected workflow run.](./media/monitor-logic-apps/logic-app-run-pane-consumption.png) + ![Screenshot shows each action in the selected workflow run.](./media/monitor-logic-apps/logic-app-run-pane-consumption.png) To view this information in list form, on the **Logic app run** toolbar, select **Run Details**. 
- ![Screenshot showing the "Logic app run" toolbar with "Run Details" selected.](./media/monitor-logic-apps/toolbar-select-run-details.png) + ![Screenshot shows toolbar named Logic app run with the selected option Run Details.](./media/monitor-logic-apps/toolbar-select-run-details.png) The **Run Details** pane lists each step, its status, and other information. Each time the trigger successfully fires, Azure Logic Apps creates a workflow in ### [Standard](#tab/standard) -1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer. -- To find your logic app, in the main Azure search box, enter **logic apps**, and then select **Logic apps**. -- ![Screenshot showing Azure portal search box with "logic apps" entered and "Logic apps" selected.](./media/monitor-logic-apps/find-your-logic-app.png) +1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. - The Azure portal shows all the logic apps that are associated with your Azure subscriptions. You can filter this list based on name, subscription, resource group, location, and so on. + To find your logic app, in the Azure search box, enter **logic apps**, and then select **Logic apps**. - ![Screenshot showing all logic apps in selected Azure subscriptions.](./media/monitor-logic-apps/logic-apps-list-in-subscription.png) + The Azure portal shows all the logic apps in your Azure subscriptions. You can filter this list based on name, subscription, resource group, location, and so on. -1. Select your logic app. On your logic app's menu, under **Workflows**, select **Workflows**, and then select your workflow. +1. Select your logic app resource. On your logic app menu, under **Workflows**, select **Workflows**, and then select your workflow. > [!NOTE] > > By default, stateless workflows don't store run history unless you enable this capability for debugging.
> For more information, review [Stateful versus stateless workflows](single-tenant-overview-compare.md#stateful-stateless). -1. On your workflow's menu, select **Overview**. On the Overview pane, select **Run History**. +1. On your workflow's menu, select **Overview**. On the **Overview** pane, select **Run History**. Under **Run History**, all the past, current, and any waiting runs appear. If the trigger fires for multiple events or items at the same time, an entry appears for each item with the same date and time. - ![Screenshot showing Standard logic app workflow "Overview" pane with "Run History" selected.](./media/monitor-logic-apps/overview-logic-app-runs-history-standard.png) + ![Screenshot shows Standard workflow and Overview pane with selected option for Run History.](./media/monitor-logic-apps/overview-logic-app-runs-history-standard.png) The following table lists the possible run statuses: Each time the trigger successfully fires, Azure Logic Apps creates a workflow in | **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. | | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <br><br>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. 
However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. | | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |- ||| 1. To review the steps and other information for a specific run, under **Run History**, select that run. If the list shows many runs, and you can't find the entry that you want, try filtering the list. Each time the trigger successfully fires, Azure Logic Apps creates a workflow in > If the run status doesn't appear, try refreshing the overview pane by selecting **Refresh**. > No run happens for a trigger that's skipped due to unmet criteria or finding no data. - ![Screenshot showing the Standard workflow run selected.](./media/monitor-logic-apps/select-specific-logic-app-run-standard.png) + ![Screenshot shows the Standard workflow run selected.](./media/monitor-logic-apps/select-specific-logic-app-run-standard.png) The workflow run pane shows each step in the selected run, each step's run status, and the time taken for each step to run, for example: - ![Screenshot showing each action in selected workflow run.](./media/monitor-logic-apps/logic-app-run-pane-standard.png) + ![Screenshot shows each action in selected workflow run.](./media/monitor-logic-apps/logic-app-run-pane-standard.png) 1. After all the steps in the run appear, select each step to review more information such as inputs, outputs, and any errors that happened in that step. For example, suppose you had an action that failed, and you wanted to review which inputs might have caused that step to fail. - ![Screenshot showing Standard logic app workflow with failed step inputs.](./media/monitor-logic-apps/failed-action-inputs-standard.png) + ![Screenshot shows Standard workflow with failed step inputs.](./media/monitor-logic-apps/failed-action-inputs-standard.png) The following screenshot shows the outputs from the failed step. 
- ![Screenshot showing Standard logic app workflow with failed step outputs.](./media/monitor-logic-apps/failed-action-outputs-standard.png) + ![Screenshot shows Standard logic app workflow with failed step outputs.](./media/monitor-logic-apps/failed-action-outputs-standard.png) > [!NOTE] > Each time the trigger successfully fires, Azure Logic Apps creates a workflow in ## Set up monitoring alerts -To get alerts based on specific metrics or exceeded thresholds for your logic app, set up [alerts in Azure Monitor](../azure-monitor/alerts/alerts-overview.md). For more information, review [Metrics in Azure](../azure-monitor/data-platform.md). To set up alerts without using [Azure Monitor](../azure-monitor/logs/log-query-overview.md), follow these steps. +To get alerts based on specific metrics or exceeded thresholds for your logic app, set up [alerts in Azure Monitor](../azure-monitor/alerts/alerts-overview.md). For more information, review [Metrics in Azure](../azure-monitor/data-platform.md). -1. On your logic app menu, under **Monitoring**, select **Alerts**. On the toolbar, select **Create** > **Alert rule**. +To set up alerts without using [Azure Monitor](../azure-monitor/logs/log-query-overview.md), follow these steps, which apply to both Consumption and Standard logic app resources: - ![Screenshot showing Azure portal, logic app menu with "Alerts" selected, and toolbar with "Create", "Alert rule" selected.](./media/monitor-logic-apps/add-new-alert-rule.png) +1. On your logic app menu, under **Monitoring**, select **Alerts**. On the toolbar, select **Create** > **Alert rule**. -1. On the **Select a signal** pane, under **Signal type**, select the signal for which you want to get an alert. -- > [!TIP] - > - > You can use the search box, or to sort the signals alphabetically, - > select the **Signal name** column header. +1. On the **Create an alert rule** page, from the **Signal name** list, select the signal for which you want to get an alert. 
For example, to send an alert when a trigger fails, follow these steps: - 1. In the **Signal name** column, find and select the **Triggers Failed** signal. -- ![Screenshot showing "Select a signal pane", the "Signal name" column, and "Triggers Failed" signal selected.](./media/monitor-logic-apps/find-and-select-signal.png) + 1. From the **Signal name** list, select the **Triggers Failed** signal. - 1. On the **Configure signal logic** pane, under **Alert logic**, set up your condition, and select **Done**, for example: + 1. Under **Alert logic**, set up your condition, for example: | Property | Example value | |-||- | **Operator** | **Greater than or equal to** | + | **Threshold** | **Static** | | **Aggregation type** | **Count** |- | **Threshold value** | **1** | + | **Operator** | **Greater than or equal to** | | **Unit** | **Count** |- | **Condition preview** | **Whenever the count of triggers failed is greater than or equal to 1** | - | **Aggregation granularity (Period)** | **1 minute** | - | **Frequency of evaluation** | **Every 1 Minute** | - ||| + | **Threshold value** | **1** | ++ The **Preview** section now shows the condition that you set up, for example: - For more information, review [Create, view, and manage log alerts by using Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md). + **Whenever the count Triggers Failed is greater than or equal to 1** - The following screenshot shows the finished condition: + 1. 
Under **When to evaluate**, set up the schedule for checking the condition: ++ | Property | Example value | + |-|| + | **Check every** | **1 minute** | + | **Lookback period** | **5 minutes** | - ![Screenshot showing the condition for alert.](./media/monitor-logic-apps/set-up-condition-for-alert.png) + For example, the finished condition looks similar to the following example, and the **Create an alert rule** page now shows the cost for running that alert: - The **Create an alert rule** page now shows the condition that you created and the cost for running that alert. + ![Screenshot shows the alert rule condition.](./media/monitor-logic-apps/set-up-condition-for-alert.png) - ![Screenshot showing the new alert on the "Create an alert rule" page.](./media/monitor-logic-apps/finished-alert-condition-cost.png) +1. When you're ready, select **Review + Create**. -1. If you're satisfied, select **Next: Details** to finish creating the rule. +For general information, see [Create an alert rule from a specific resource - Azure Monitor](../azure-monitor/alerts/alerts-create-new-alert-rule.md#create-or-edit-an-alert-rule-in-the-azure-portal). ## Next steps |
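The portal steps above can also be scripted. The following Azure CLI sketch creates an equivalent metric alert rule for a Consumption logic app. The resource group, workflow name, and subscription ID are placeholders, not values from this article; note that Standard logic apps expose their metrics through the underlying `Microsoft.Web/sites` resource instead of `Microsoft.Logic/workflows`.

```bash
# Hypothetical resource names -- replace them with your own values.
# Creates a metric alert that fires whenever the count of failed
# triggers is >= 1, checked every minute over a 5-minute lookback.
az monitor metrics alert create \
  --name "logic-app-triggers-failed" \
  --resource-group "my-resource-group" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Logic/workflows/my-logic-app" \
  --condition "count TriggersFailed >= 1" \
  --evaluation-frequency 1m \
  --window-size 5m \
  --description "Alert when a workflow trigger fails"
```

Add an `--action` flag pointing at an action group if you want the alert to notify someone rather than only appear in the portal.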
logic-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md | Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 ms.suite: integration |
machine-learning | Concept Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-environments.md | Actual cached images in your workspace ACR will have names like `azureml/azureml Microsoft is responsible for patching the base images for known security vulnerabilities. Updates for supported images are released every two weeks, with a commitment of no unpatched vulnerabilities older than 30 days in the latest version of the image. Patched images are released with a new immutable tag and the `:latest` tag is updated to the latest version of the patched image. -If you provide your own images, you are responsible for updating them. +You'll need to update associated Azure Machine Learning assets to use the newly patched image. For example, when working with a managed online endpoint, you'll need to redeploy your endpoint to use the patched image. ++If you provide your own images, you're responsible for updating them and updating the Azure Machine Learning assets that use them. + For more information on the base images, see the following links: * [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers) GitHub repository. * [Use a custom container to deploy a model to an online endpoint](how-to-deploy-custom-container.md)+* [Managing environments and container images](concept-vulnerability-management.md#managing-environments-and-container-images) ## Next steps |
machine-learning | How To Manage Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md | The following table shows more limits in the platform. Reach out to the Azure Ma <sup>2</sup> Jobs on a low-priority node can be preempted whenever there's a capacity constraint. We recommend that you implement checkpoints in your job. +### Azure Machine Learning shared quota +Azure Machine Learning provides a pool of shared quota that is available for different users across various regions to use concurrently. Depending upon availability, users can temporarily access quota from the shared pool, and use the quota to perform testing for a limited amount of time. The specific time duration depends on the use case. By temporarily using quota from the quota pool, you no longer need to file a support ticket for a short-term quota increase or wait for your quota request to be approved before you can proceed with your workload. ++Use of the shared quota pool is available for running Spark jobs and for testing inferencing for Llama models from the Model Catalog. You should use the shared quota only for creating temporary test endpoints, not production endpoints. For endpoints in production, you should request dedicated quota by [filing a support ticket](https://ml.azure.com/quota). Billing for shared quota is usage-based, just like billing for dedicated virtual machine families. + ### Azure Machine Learning managed online endpoints Azure Machine Learning managed online endpoints have limits described in the following table. These limits are _regional_, meaning that you can use up to these limits per each region you're using. |
machine-learning | How To Submit Spark Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md | These prerequisites cover the submission of a Spark job from Azure Machine Learn > [!NOTE]-> To learn more about resource access while using Azure Machine Learning serverless Spark compute (preview), and attached Synapse Spark pool, see [Ensuring resource access for Spark jobs](apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs). +> To learn more about resource access while using Azure Machine Learning serverless Spark compute, and attached Synapse Spark pool, see [Ensuring resource access for Spark jobs](apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs). ### Attach user assigned managed identity using CLI v2 |
machine-learning | How To Troubleshoot Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md | If you are creating or updating a Kubernetes online deployment, you can see [Com ### ERROR: ImageBuildFailure -This error is returned when the environment (docker image) is being built. You can check the build log for more information on the failure(s). The build log is located in the default storage for your Azure Machine Learning workspace. The exact location may be returned as part of the error. For example, `"The build log is available in the workspace blob store '[storage-account-name]' under the path '/azureml/ImageLogs/your-image-id/build.log'"`. In this case, "azureml" is the name of the blob container in the storage account. +This error is returned when the environment (docker image) is being built. You can check the build log for more information on the failure(s). The build log is located in the default storage for your Azure Machine Learning workspace. The exact location may be returned as part of the error. For example, `"the build log under the storage account '[storage-account-name]' in the container '[container-name]' at the path '[path-to-the-log]'"`. This is a list of common image build failure scenarios: * [Azure Container Registry (ACR) authorization failure](#container-registry-authorization-failure)+* [Image build compute not set in a private workspace with VNet](#image-build-compute-not-set-in-a-private-workspace-with-vnet) * [Generic or unknown failure](#generic-image-build-failure) We also recommend reviewing the default [probe settings](reference-yaml-deployment-managed-online.md#probesettings) in case of ImageBuild timeouts. However, you can [manually call for a synchronization of keys](/cli/azure/ml/wor Container registries that are behind a virtual network may also encounter this error if set up incorrectly. 
You must verify that you have set up the virtual network properly. +#### Image build compute not set in a private workspace with VNet ++If the error message mentions `"failed to communicate with the workspace's container registry"` and you're using virtual networks and the workspace's Azure Container Registry is private and configured with a private endpoint, you will need to [enable Azure Container Registry](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr) to allow building images in the virtual network. + #### Generic image build failure As stated above, you can check the build log for more information on the failure. |
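As a sketch of the fix described above, you can point the workspace at an image build compute with the Azure ML CLI v2. The workspace, resource group, and compute cluster names below are placeholders; the cluster must already exist inside the workspace's virtual network.

```bash
# Hypothetical names -- replace with your own workspace, resource group,
# and an existing compute cluster inside the virtual network. Environment
# images are then built on this cluster instead of directly in ACR.
az ml workspace update \
  --name "my-workspace" \
  --resource-group "my-resource-group" \
  --image-build-compute "my-cpu-cluster"
```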
machine-learning | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md | Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
machine-learning | How To Secure Prompt Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-secure-prompt-flow.md | Workspace managed virtual network is the recommended way to support network isol ## Known limitations -- Only public access enable storage account is supported. You can't use private storage account now. Find workaround here: [Why I can't create or upgrade my flow when I disable public network access of storage account?](./tools-reference/troubleshoot-guidance.md#why-i-cant-create-or-upgrade-my-flow-when-i-disable-public-network-access-of-storage-account)+- Only storage accounts with public access enabled are supported. You can't use a private storage account now. Find a workaround here: [Why can't I create or upgrade my flow when I disable public network access of storage account?](./tools-reference/troubleshoot-guidance.md#why-cant-i-create-or-upgrade-my-flow-when-i-disable-public-network-access-of-storage-account) - Workspace hub / lean workspace and AI studio don't support bringing your own virtual network. - Managed online endpoint only supports workspace managed virtual network. If you want to use your own virtual network, you may need one workspace for prompt flow authoring with your virtual network and another workspace for prompt flow deployment using a managed online endpoint with workspace managed virtual network. |
machine-learning | Troubleshoot Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md | To resolve the issue, you have two options: - Update your runtime to the latest version. - Remove the old tool and re-create a new tool. -## Why I can't create or upgrade my flow when I disable public network access of storage account? -Prompt flow relies on fileshare to store snapshot of flow. Prompt flow currently doesn't support private storage account. Here are some workarounds you can try: +## Why can't I create or upgrade my flow when I disable public network access of storage account? +Prompt flow relies on a fileshare to store a snapshot of the flow. Prompt flow doesn't currently support private storage accounts. Here are some workarounds you can try: - Make the storage account public-access enabled if there's no security concern. - If you're only using the UI to author prompt flow, you can add the following flights (flight=PromptFlowCodeFirst=false) to use our old UI. - You can use our CLI/SDK to author prompt flow; CLI/SDK authoring doesn't rely on the fileshare. See [Integrate Prompt Flow with LLM-based application DevOps ](../how-to-integrate-with-llm-app-devops.md). -## Why I can't upgrade my old flow? -Prompt flow relies on fileshare to store snapshot of flow. If fileshare have some issue, you may encounter this issue. Here are some workarounds you can try: -- If you're using private storage account, please see [Why I can't create or upgrade my flow when I disable public network access of storage account?](#why-i-cant-create-or-upgrade-my-flow-when-i-disable-public-network-access-of-storage-account)+## Why can't I upgrade my old flow? +Prompt flow relies on a fileshare to store a snapshot of the flow. If the fileshare has a problem, you may encounter this issue.
Here are some workarounds you can try: +- If you're using a private storage account, please see [Why can't I create or upgrade my flow when I disable public network access of storage account?](#why-cant-i-create-or-upgrade-my-flow-when-i-disable-public-network-access-of-storage-account) - If the storage account has public access enabled, check whether there's a datastore named `workspaceworkingdirectory` in your workspace; it should be of fileshare type. ![workspaceworkingdirectory](../media/faq/working-directory.png) - If you don't have this datastore, you need to add it to your workspace. Prompt flow relies on fileshare to store snapshot of flow. If fileshare have som - Create a datastore with the name `workspaceworkingdirectory`. See [Create datastores](../../how-to-datastore.md) - If you have a `workspaceworkingdirectory` datastore but its type is `blob` instead of `fileshare`, please create a new workspace and use a storage account that doesn't have the ADLS Gen2 hierarchical namespace enabled as the workspace default storage account. See [Create workspace](../../how-to-manage-workspace.md#create-a-workspace) - +## Flow is missing +++Prompt flow relies on a fileshare to store a snapshot of the flow. This error means the prompt flow service can operate on the prompt flow folder in the fileshare, but the prompt flow UI can't find the folder in the fileshare. There are some potential reasons: +- Prompt flow relies on a datastore named `workspaceworkingdirectory` in your workspace, which uses the fileshare `code-391ff5ac-6576-460f-ba4d-7e03433c68b6`. Make sure your datastore uses the same container. If your datastore uses a different fileshare name, you need to use a new workspace. +![name of fileshare in datastore detail page](../media/faq/file-share-name.png) ++- If your fileshare is correctly named, try a different network environment, such as a home network or company network. There is a rare case where a fileshare can't be accessed in some network environments even if it's public-access enabled.
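The `workspaceworkingdirectory` datastore described above can also be registered with the Azure ML CLI v2. This is a minimal sketch: the storage account, resource group, and workspace names are placeholders, and the fileshare name should match the `code-...` share your workspace actually uses.

```bash
# Hypothetical values -- replace account_name, the resource group, and
# the workspace name with your own. Registers a fileshare-type datastore
# named workspaceworkingdirectory from a CLI v2 YAML spec.
cat > workingdir-datastore.yml <<'EOF'
$schema: https://azuremlschemas.azureedge.net/latest/azureFile.schema.json
name: workspaceworkingdirectory
type: azure_file
account_name: mystorageaccount
file_share_name: code-391ff5ac-6576-460f-ba4d-7e03433c68b6
EOF

az ml datastore create --file workingdir-datastore.yml \
  --resource-group "my-resource-group" --workspace-name "my-workspace"
```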
+ ## Runtime related issues ### My runtime is failed with a system error **runtime not ready** when using a custom environment |
mariadb | Concept Reserved Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concept-reserved-pricing.md | Last updated 06/24/2022 # Prepay for Azure Database for MariaDB compute resources with reserved capacity + Azure Database for MariaDB now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MariaDB reserved capacity, you make an upfront commitment on MariaDB servers for a one- or three-year period to get a significant discount on the compute costs. To purchase Azure Database for MariaDB reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br> You do not need to assign the reservation to specific Azure Database for MariaDB servers. Already running Azure Database for MariaDB servers, or newly deployed ones, automatically get the benefit of reserved pricing. By purchasing a reservation, you are pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for MariaDB compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation does not cover software, networking, or storage charges associated with the MariaDB database server. At the end of the reservation term, the billing benefit expires, and the Azure Database for MariaDB server is billed at the pay-as-you-go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for MariaDB reserved capacity offering](https://azure.microsoft.com/pricing/details/mariadb/). </br> |
mariadb | Concepts Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-audit-logs.md | Last updated 06/24/2022 # Audit Logs in Azure Database for MariaDB + In Azure Database for MariaDB, the audit log is available to users. The audit log can be used to track database-level activity and is commonly used for compliance. ## Configure audit logging |
mariadb | Concepts Azure Advisor Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-azure-advisor-recommendations.md | |
mariadb | Concepts Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-backup.md | Last updated 06/24/2022 # Backup and restore in Azure Database for MariaDB + Azure Database for MariaDB automatically creates server backups and stores them in user configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point-in-time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion. ## Backups |
mariadb | Concepts Business Continuity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-business-continuity.md | Last updated 06/24/2022 # Overview of business continuity with Azure Database for MariaDB + This article describes the capabilities that Azure Database for MariaDB provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance. ## Features that you can use to provide business continuity |
mariadb | Concepts Certificate Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-certificate-rotation.md | Last updated 06/24/2022 # Understanding the changes in the Root CA change for Azure Database for MariaDB + As part of standard maintenance and security best practices, Azure Database for MariaDB will complete the root certificate change starting in March 2023. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server. > [!NOTE] |
mariadb | Concepts Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-compatibility.md | Last updated 06/24/2022 # MariaDB drivers and management tools compatible with Azure Database for MariaDB + This article describes the drivers and management tools that are compatible with Azure Database for MariaDB. ## MariaDB Drivers |
mariadb | Concepts Connectivity Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md | Last updated 06/24/2022 # Connectivity architecture in Azure Database for MariaDB + This article explains the Azure Database for MariaDB connectivity architecture and how the traffic is directed to your Azure Database for MariaDB instance from clients both within and outside Azure. ## Connectivity architecture |
mariadb | Concepts Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity.md | Last updated 06/24/2022 # Handling of transient connectivity errors for Azure Database for MariaDB + This article describes how to handle transient errors connecting to Azure Database for MariaDB. ## Transient errors |
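Applications typically handle transient errors by retrying with exponential backoff. A minimal shell sketch follows; the function, its parameters, and the commented connectivity check are illustrative, not taken from the article:

```shell
#!/usr/bin/env bash
# Retry a command with exponential backoff -- a minimal sketch for
# recovering from transient connection errors.
retry_with_backoff() {
  local max_attempts=5 delay=1 attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "Giving up after $attempt attempts." >&2
      return 1
    fi
    echo "Attempt $attempt failed; retrying in ${delay}s..." >&2
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}

# Usage (placeholder server and user):
# retry_with_backoff mysql -h <server>.mariadb.database.azure.com -u <user> -p -e "SELECT 1;"
```

Capping the number of attempts keeps a persistent outage from turning into an infinite retry loop.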
mariadb | Concepts Data Access Security Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-data-access-security-private-link.md | Last updated 06/24/2022 # Private Link for Azure Database for MariaDB + Private Link allows you to create private endpoints for Azure Database for MariaDB, bringing the service inside your private Virtual Network (VNet). The private endpoint exposes a private IP address that you can use to connect to your Azure Database for MariaDB database server just like any other resource in the VNet. For a list of PaaS services that support Private Link functionality, review the Private Link [documentation](../private-link/index.yml). A private endpoint is a private IP address within a specific [VNet](../virtual-network/virtual-networks-overview.md) and subnet. |
mariadb | Concepts Data Access Security Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-data-access-security-vnet.md | Last updated 06/24/2022 # Use Virtual Network service endpoints and rules for Azure Database for MariaDB + *Virtual network rules* are one firewall security feature that controls whether your Azure Database for MariaDB server accepts communications that are sent from particular subnets in virtual networks. This article explains why the virtual network rule feature is sometimes your best option for securely allowing communication to your Azure Database for MariaDB server. To create a virtual network rule, there must first be a [virtual network][vm-virtual-network-overview] (VNet) and a [virtual network service endpoint][vm-virtual-network-service-endpoints-overview-649d] for the rule to reference. The following picture illustrates how a Virtual Network service endpoint works with Azure Database for MariaDB: |
mariadb | Concepts Data In Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-data-in-replication.md | Last updated 06/24/2022 # Replicate data into Azure Database for MariaDB + Data-in Replication allows you to synchronize data from a MariaDB server running on-premises, in virtual machines, or database services hosted by other cloud providers into the Azure Database for MariaDB service. Data-in Replication is based on the binary log (binlog) file position-based replication native to MariaDB. To learn more about binlog replication, see the [binlog replication overview](https://mariadb.com/kb/en/library/replication-overview/). ## When to use Data-in Replication |
mariadb | Concepts Firewall Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-firewall-rules.md | Last updated 06/24/2022 # Azure Database for MariaDB server firewall rules + Firewalls prevent all access to your database server until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request. To configure a firewall, create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level. |
mariadb | Concepts High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-high-availability.md | Last updated 06/24/2022 # High availability in Azure Database for MariaDB + The Azure Database for MariaDB service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/MariaDB) uptime. Azure Database for MariaDB provides high availability during planned events, such as user-initiated compute scaling operations, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for MariaDB can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service. Azure Database for MariaDB is suitable for running mission-critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components. |
mariadb | Concepts Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-limits.md | Last updated 06/24/2022 # Limitations in Azure Database for MariaDB + The following sections describe capacity, storage engine support, privilege support, data manipulation statement support, and functional limits in the database service. ## Server parameters |
mariadb | Concepts Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-monitoring.md | Last updated 06/24/2022 # Monitoring in Azure Database for MariaDB + Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for MariaDB provides various metrics that give insight into the behavior of your server. ## Metrics |
mariadb | Concepts Planned Maintenance Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-planned-maintenance-notification.md | Last updated 06/24/2022 # Planned maintenance notification in Azure Database for MariaDB + Learn how to prepare for planned maintenance events on your Azure Database for MariaDB server. ## What is planned maintenance? |
mariadb | Concepts Pricing Tiers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-pricing-tiers.md | Last updated 06/24/2022 # Azure Database for MariaDB pricing tiers + You can create an Azure Database for MariaDB server in one of three different pricing tiers: Basic, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the MariaDB server level. A server can have one or many databases. | Resource | **Basic** | **General Purpose** | **Memory Optimized** | |
mariadb | Concepts Query Performance Insight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-query-performance-insight.md | Last updated 06/24/2022 # Query Performance Insight in Azure Database for MariaDB + **Applies to:** Azure Database for MariaDB 10.2 Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them. |
mariadb | Concepts Query Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-query-store.md | |
mariadb | Concepts Read Replicas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-read-replicas.md | |
mariadb | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-security.md | Last updated 06/24/2022 # Security in Azure Database for MariaDB + There are multiple layers of security that are available to protect the data on your Azure Database for MariaDB server. This article outlines those security options. ## Information protection and encryption |
mariadb | Concepts Server Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-server-logs.md | Last updated 06/24/2022 # Slow query logs in Azure Database for MariaDB + In Azure Database for MariaDB, the slow query log is available to users. Access to the transaction log is not supported. The slow query log can be used to identify performance bottlenecks for troubleshooting. For more information about the slow query log, see the MariaDB documentation for [slow query log](https://mariadb.com/kb/en/library/slow-query-log-overview/). |
mariadb | Concepts Server Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-server-parameters.md | Last updated 06/24/2022 # Server parameters in Azure Database for MariaDB + This article provides considerations and guidelines for configuring server parameters in Azure Database for MariaDB. ## What are server parameters? |
mariadb | Concepts Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-servers.md | Last updated 06/24/2022 # Server concepts in Azure Database for MariaDB + This article provides considerations and guidelines for working with Azure Database for MariaDB servers. ## What is an Azure Database for MariaDB server? |
mariadb | Concepts Ssl Connection Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-ssl-connection-security.md | |
mariadb | Concepts Supported Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-supported-versions.md | Last updated 06/24/2022 # Supported Azure Database for MariaDB server versions + Azure Database for MariaDB has been developed from the open-source [MariaDB Server](https://downloads.mariadb.org/), using the InnoDB engine. MariaDB uses the X.Y.Z naming scheme. X is the major version, Y is the minor version, and Z is the patch version. |
mariadb | Connect Workbench | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/connect-workbench.md | Last updated 06/24/2022 # Quickstart: Azure Database for MariaDB: Use MySQL Workbench to connect and query data + This quickstart demonstrates how to connect to an Azure Database for MariaDB instance by using MySQL Workbench. ## Prerequisites |
mariadb | Howto Alert Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-alert-metric.md | Last updated 06/24/2022 # Use the Azure portal to set up alerts on metrics for Azure Database for MariaDB + This article shows you how to set up Azure Database for MariaDB alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services. The alert triggers when the value of a specified metric crosses a threshold you assign. The alert triggers both when the condition is first met, and then afterwards when that condition is no longer being met. |
mariadb | Howto Auto Grow Storage Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-cli.md | Last updated 06/24/2022 # Auto-grow Azure Database for MariaDB storage using the Azure CLI + This article describes how you can configure Azure Database for MariaDB server storage to grow without impacting the workload. A server that [reaches the storage limit](concepts-pricing-tiers.md#reaching-the-storage-limit) is set to read-only. If storage auto grow is enabled, then for servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage falls below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space falls below 10 GB. Maximum storage limits as specified [here](concepts-pricing-tiers.md#storage) apply. |
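The CLI step described above can be sketched as follows; a minimal example, assuming `az mariadb server update` accepts the `--auto-grow` flag, with placeholder resource names:

```shell
# Enable storage auto-grow on an existing server (names are placeholders).
az mariadb server update \
  --resource-group mydemoresourcegroup \
  --name mydemoserver \
  --auto-grow Enabled
```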
mariadb | Howto Auto Grow Storage Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-portal.md | Last updated 06/24/2022 # Auto grow storage in Azure Database for MariaDB using the Azure portal + This article describes how you can configure Azure Database for MariaDB server storage to grow without impacting the workload. When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage falls below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space falls below 10 GB. Maximum storage limits as specified [here](concepts-pricing-tiers.md#storage) apply. |
mariadb | Howto Auto Grow Storage Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-powershell.md | |
mariadb | Howto Configure Audit Logs Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-audit-logs-cli.md | |
mariadb | Howto Configure Audit Logs Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-audit-logs-portal.md | Last updated 06/24/2022 # Configure and access audit logs in the Azure portal + You can configure the [Azure Database for MariaDB audit logs](concepts-audit-logs.md) and diagnostic settings from the Azure portal. ## Prerequisites |
mariadb | Howto Configure Privatelink Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-privatelink-cli.md | Last updated 06/24/2022 # Create and manage Private Link for Azure Database for MariaDB using CLI + A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure CLI to create a VM in an Azure Virtual Network and an Azure Database for MariaDB server with an Azure private endpoint. > [!NOTE] |
mariadb | Howto Configure Privatelink Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-privatelink-portal.md | Last updated 06/24/2022 # Create and manage Private Link for Azure Database for MariaDB using Portal + A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure portal to create a VM in an Azure Virtual Network and an Azure Database for MariaDB server with an Azure private endpoint. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. |
mariadb | Howto Configure Server Logs Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-logs-cli.md | Last updated 06/24/2022 # Configure and access Azure Database for MariaDB slow query logs by using Azure CLI + You can download the Azure Database for MariaDB slow query logs by using Azure CLI, the Azure command-line utility. ## Prerequisites |
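The list-and-download flow can be sketched as follows, assuming the `az mariadb server-logs` command group and placeholder names (the log file name shown is hypothetical):

```shell
# List the available slow query log files for a server (placeholder names).
az mariadb server-logs list \
  --resource-group mydemoresourcegroup \
  --server-name mydemoserver

# Download a specific log file by name (the file name is hypothetical).
az mariadb server-logs download \
  --resource-group mydemoresourcegroup \
  --server-name mydemoserver \
  --name mysql-slow-mydemoserver-2023091512.log
```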
mariadb | Howto Configure Server Logs Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-logs-portal.md | Last updated 06/24/2022 # Configure and access Azure Database for MariaDB slow query logs from the Azure portal + You can configure, list, and download the [Azure Database for MariaDB slow query logs](concepts-server-logs.md) from the Azure portal. ## Prerequisites |
mariadb | Howto Configure Server Parameters Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-parameters-cli.md | Last updated 06/24/2022 # Configure server parameters in Azure Database for MariaDB using the Azure CLI + You can list, show, and update configuration parameters for an Azure Database for MariaDB server by using Azure CLI, the Azure command-line utility. A subset of engine configurations is exposed at the server level and can be modified. >[!Note] |
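A minimal sketch of the list/show/set workflow, using placeholder resource names and the `slow_query_log` parameter as an example:

```shell
# List all configurable server parameters (placeholder names).
az mariadb server configuration list \
  --resource-group mydemoresourcegroup --server-name mydemoserver

# Inspect one parameter's current value and allowed values.
az mariadb server configuration show \
  --resource-group mydemoresourcegroup --server-name mydemoserver \
  --name slow_query_log

# Update the parameter.
az mariadb server configuration set \
  --resource-group mydemoresourcegroup --server-name mydemoserver \
  --name slow_query_log --value ON
```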
mariadb | Howto Configure Server Parameters Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-parameters-using-powershell.md | |
mariadb | Howto Configure Ssl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-ssl.md | |
mariadb | Howto Connection String Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-connection-string-powershell.md | Last updated 06/24/2022 # How to generate an Azure Database for MariaDB connection string with PowerShell + This article demonstrates how to generate a connection string for an Azure Database for MariaDB server. You can use a connection string to connect to an Azure Database for MariaDB from many different applications. |
mariadb | Howto Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-connection-string.md | |
mariadb | Howto Create Manage Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-create-manage-server-portal.md | Last updated 06/24/2022 # Manage an Azure Database for MariaDB server using the Azure portal + This article shows you how to manage your Azure Database for MariaDB servers. Management tasks include compute and storage scaling, admin password reset, and viewing server details. ## Sign in |
mariadb | Howto Create Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-create-users.md | Last updated 06/24/2022 # Create users in Azure Database for MariaDB + This article describes how you can create users in Azure Database for MariaDB. When you first created your Azure Database for MariaDB, you provided a server admin login user name and password. For more information, you can follow the [Quickstart](quickstart-create-mariadb-server-database-using-azure-portal.md). You can locate your server admin login user name from the Azure portal. |
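Creating a user with the server admin login can be sketched as follows; the server, admin login, database, user name, and password are all placeholders:

```shell
# Connect with the admin login and create a new database user (placeholders).
mysql -h mydemoserver.mariadb.database.azure.com -u myadmin@mydemoserver -p <<'SQL'
CREATE DATABASE IF NOT EXISTS testdb;
CREATE USER 'db_user'@'%' IDENTIFIED BY '<secure password>';
GRANT ALL PRIVILEGES ON testdb.* TO 'db_user'@'%';
FLUSH PRIVILEGES;
SQL
```

Granting privileges only on `testdb.*` rather than `*.*` keeps the new user scoped to a single database.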
mariadb | Howto Data In Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-data-in-replication.md | Last updated 04/19/2023 # Configure Data-in Replication in Azure Database for MariaDB + This article describes how to set up [Data-in Replication](concepts-data-in-replication.md) in Azure Database for MariaDB by configuring the source and replica servers. This article assumes that you have some prior experience with MariaDB servers and databases. To create a replica in the Azure Database for MariaDB service, [Data-in Replication](concepts-data-in-replication.md) synchronizes data from a source MariaDB server on-premises, in virtual machines (VMs), or in cloud database services. Data-in Replication is based on the binary log (binlog) file position-based replication native to MariaDB. To learn more about binlog replication, see the [binlog replication overview](https://mariadb.com/kb/en/library/replication-overview/). |
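On the replica side, the link is configured with the service's replication stored procedures. A minimal sketch, assuming the `mysql.az_replication_*` procedures documented for the service; all host names, credentials, and binlog coordinates are placeholders (the binlog file and position come from `SHOW MASTER STATUS` on the source):

```shell
# Point the Azure Database for MariaDB replica at the source server
# (placeholder values throughout).
mysql -h mydemoserver.mariadb.database.azure.com -u myadmin@mydemoserver -p <<'SQL'
CALL mysql.az_replication_change_master(
  'source.example.com', 'repl_user', '<password>', 3306,
  'mariadb-bin.000002', 120, '');
CALL mysql.az_replication_start;
SHOW SLAVE STATUS;
SQL
```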
mariadb | Howto Deny Public Network Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-deny-public-network-access.md | Last updated 06/24/2022 # Deny Public Network Access in Azure Database for MariaDB using Azure portal + This article describes how you can configure an Azure Database for MariaDB server to deny all public network access and allow only connections through private endpoints, further enhancing network security. ## Prerequisites |
mariadb | Howto Manage Firewall Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-firewall-cli.md | Last updated 06/24/2022 # Create and manage Azure Database for MariaDB firewall rules by using the Azure CLI + Server-level firewall rules can be used to manage access to an Azure Database for MariaDB Server from a specific IP address or a range of IP addresses. Using convenient Azure CLI commands, you can create, update, delete, list, and show firewall rules to manage your server. For an overview of Azure Database for MariaDB firewalls, see [Azure Database for MariaDB server firewall rules](./concepts-firewall-rules.md). Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure CLI](howto-manage-vnet-cli.md). |
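The create/list/delete cycle can be sketched as follows; the resource names and IP address are placeholders:

```shell
# Allow a single client IP through the server-level firewall (placeholders).
az mariadb server firewall-rule create \
  --resource-group mydemoresourcegroup --server-name mydemoserver \
  --name AllowMyIP --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5

# List all rules on the server.
az mariadb server firewall-rule list \
  --resource-group mydemoresourcegroup --server-name mydemoserver

# Remove the rule when it is no longer needed.
az mariadb server firewall-rule delete \
  --resource-group mydemoresourcegroup --server-name mydemoserver \
  --name AllowMyIP
```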
mariadb | Howto Manage Firewall Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-firewall-portal.md | Last updated 06/24/2022 # Create and manage Azure Database for MariaDB firewall rules by using the Azure portal + Server-level firewall rules can be used to manage access to an Azure Database for MariaDB Server from a specified IP address or a range of IP addresses. Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](howto-manage-vnet-portal.md). |
mariadb | Howto Manage Vnet Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-vnet-cli.md | Last updated 06/24/2022 # Create and manage Azure Database for MariaDB VNet service endpoints using Azure CLI + Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MariaDB server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for MariaDB VNet service endpoints, including limitations, see [Azure Database for MariaDB Server VNet service endpoints](concepts-data-access-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MariaDB. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] |
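The two steps, enabling the service endpoint on a subnet and then creating the VNet rule, can be sketched as follows; resource names are placeholders, and the sketch assumes the `Microsoft.Sql` service endpoint used by the single-server database services:

```shell
# Enable the service endpoint on the subnet (placeholder names).
az network vnet subnet update \
  --resource-group mydemoresourcegroup --vnet-name myVNet --name mySubnet \
  --service-endpoints Microsoft.Sql

# Create a VNet rule on the server referencing that subnet.
az mariadb server vnet-rule create \
  --resource-group mydemoresourcegroup --server-name mydemoserver \
  --name myVNetRule \
  --vnet-name myVNet --subnet mySubnet
```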
mariadb | Howto Manage Vnet Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-vnet-portal.md | Last updated 06/24/2022 # Create and manage Azure Database for MariaDB VNet service endpoints and VNet rules by using the Azure portal + Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MariaDB server. For an overview of Azure Database for MariaDB VNet service endpoints, including limitations, see [Azure Database for MariaDB Server VNet service endpoints](concepts-data-access-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MariaDB. > [!NOTE] |
mariadb | Howto Migrate Dump Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-migrate-dump-restore.md | Last updated 04/19/2023 # Migrate your MariaDB database to an Azure database for MariaDB by using dump and restore + This article explains two common ways to back up and restore databases in your Azure database for MariaDB: - Dump and restore by using a command-line tool (using mysqldump). |
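The dump-and-restore flow with mysqldump can be sketched as follows; host names, user names, and the database are placeholders:

```shell
# Dump a database from the source server to a local file (placeholders).
mysqldump -h source.example.com -u sourceuser -p testdb > testdb_backup.sql

# Restore the dump into the Azure Database for MariaDB server.
mysql -h mydemoserver.mariadb.database.azure.com -u myadmin@mydemoserver -p testdb < testdb_backup.sql
```

The target database (`testdb` here) must exist on the Azure server before the restore step.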
mariadb | Howto Move Regions Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-move-regions-portal.md | Last updated 06/24/2022 # Move an Azure Database for MariaDB server to another region by using the Azure portal + There are various scenarios for moving an existing Azure Database for MariaDB server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning. You can use an Azure Database for MariaDB [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic. |
mariadb | Howto Read Replicas Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-cli.md | Last updated 06/24/2022 # How to create and manage read replicas in Azure Database for MariaDB using the Azure CLI and REST API + In this article, you will learn how to create and manage read replicas in the Azure Database for MariaDB service using the Azure CLI and REST API. ## Azure CLI |
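Creating and listing replicas with the CLI can be sketched as follows (placeholder names):

```shell
# Create a read replica of an existing server (placeholder names).
az mariadb server replica create \
  --resource-group mydemoresourcegroup \
  --name mydemoserver-replica \
  --source-server mydemoserver

# List the replicas of the source server.
az mariadb server replica list \
  --resource-group mydemoresourcegroup --server-name mydemoserver
```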
mariadb | Howto Read Replicas Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-portal.md | Last updated 06/24/2022 # How to create and manage read replicas in Azure Database for MariaDB using the Azure portal + In this article, you will learn how to create and manage read replicas in the Azure Database for MariaDB service using the Azure portal. ## Prerequisites |
mariadb | Howto Read Replicas Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-powershell.md | |
mariadb | Howto Redirection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-redirection.md | Last updated 04/19/2023 # Connect to Azure Database for MariaDB with redirection + This topic explains how to connect an application to your Azure Database for MariaDB server by using redirection mode. Redirection aims to reduce network latency between client applications and MariaDB servers by allowing applications to connect directly to backend server nodes. ## Before you begin |
mariadb | Howto Restart Server Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-cli.md | Last updated 06/24/2022 # Restart Azure Database for MariaDB server using the Azure CLI + This topic describes how you can restart an Azure Database for MariaDB server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores. |
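A minimal sketch of the restart command (placeholder names):

```shell
# Restart the server; expect a short outage while the operation runs.
az mariadb server restart \
  --resource-group mydemoresourcegroup \
  --name mydemoserver
```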
mariadb | Howto Restart Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-portal.md | Last updated 06/24/2022 # Restart Azure Database for MariaDB server using Azure portal + This topic describes how you can restart an Azure Database for MariaDB server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores. |
mariadb | Howto Restart Server Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-powershell.md | |
mariadb | Howto Restore Dropped Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-dropped-server.md | Last updated 06/24/2022 # Restore a deleted Azure Database for MariaDB server + When a server is deleted, the database server backup can be retained for up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the recommended steps below to recover a deleted MariaDB server resource within five days of server deletion. The recommended steps work only if the backup for the server is still available and has not been deleted from the system. ## Prerequisites |
mariadb | Howto Restore Server Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-cli.md | Last updated 06/24/2022 # How to back up and restore a server in Azure Database for MariaDB using the Azure CLI + Azure Database for MariaDB servers are backed up periodically to enable restore features. Using this feature, you can restore the server and all its databases to an earlier point in time, on a new server. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] |
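A point-in-time restore to a new server can be sketched as follows; the resource names are placeholders and the timestamp is illustrative (UTC):

```shell
# Restore the source server to a new server from a point in time (placeholders).
az mariadb server restore \
  --resource-group mydemoresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2023-09-15T13:10:00Z"
```

The restore always creates a new server; the original server is left unchanged.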
mariadb | Howto Restore Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-portal.md | Last updated 06/24/2022 # How to back up and restore a server in Azure Database for MariaDB using the Azure portal + ## Backup happens automatically Azure Database for MariaDB servers are backed up periodically to enable restore features. Using this feature, you can restore the server and all its databases to an earlier point in time, on a new server. |
mariadb | Howto Restore Server Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-powershell.md | |
mariadb | Howto Server Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-server-parameters.md | Last updated 06/24/2022 # Configure server parameters in Azure Database for MariaDB using the Azure portal + Azure Database for MariaDB supports configuration of some server parameters. This article describes how to configure these parameters by using the Azure portal. Not all server parameters can be adjusted. >[!Note] |
mariadb | Howto Tls Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-tls-configurations.md | Last updated 06/24/2022 # Configuring TLS settings in Azure Database for MariaDB using Azure portal + This article describes how you can configure an Azure Database for MariaDB server to enforce a minimum TLS version for connections and deny all connections that use a lower TLS version, thereby enhancing network security. You can enforce the TLS version for connections to your Azure Database for MariaDB by setting the minimum TLS version for your database server. For example, setting the minimum TLS version to TLS 1.0 means your server will allow connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting it to 1.2 means that you only allow connections from clients using TLS 1.2+, and all connections using TLS 1.0 and TLS 1.1 will be rejected. |
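Although this article uses the portal, the same setting can be sketched from the Azure CLI, assuming `az mariadb server update` supports the `--minimal-tls-version` flag (resource names are placeholders):

```shell
# Require TLS 1.2 or later for all client connections (names are placeholders).
az mariadb server update \
  --resource-group mydemoresourcegroup \
  --name mydemoserver \
  --minimal-tls-version TLS1_2
```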
mariadb | Howto Troubleshoot Common Connection Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-troubleshoot-common-connection-issues.md | Last updated 06/24/2022 # Troubleshoot connection issues to Azure Database for MariaDB + Connection problems may be caused by a variety of things, including: * Firewall settings |
mariadb | Howto Troubleshoot Query Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-troubleshoot-query-performance.md | Last updated 06/24/2022 # How to use EXPLAIN to profile query performance in Azure Database for MariaDB + **EXPLAIN** is a handy tool for optimizing queries. The EXPLAIN statement can be used to get information about how SQL statements are executed. The following output shows an example of the execution of an EXPLAIN statement. ```sql |
mariadb | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/overview.md | Last updated 06/24/2022 # What is Azure Database for MariaDB? + Azure Database for MariaDB is a relational database service in the Microsoft cloud. Azure Database for MariaDB is based on the [MariaDB community edition](https://mariadb.org/download/) (available under the GPLv2 license) database engine, versions 10.2 and 10.3. Azure Database for MariaDB delivers: |
mariadb | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md | |
mariadb | Quickstart Create Mariadb Server Database Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-arm-template.md | Last updated 06/24/2022 # Quickstart: Use an ARM template to create an Azure Database for MariaDB server + Azure Database for MariaDB is a managed service that you use to run, manage, and scale highly available MariaDB databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for MariaDB server in the Azure portal, PowerShell, or Azure CLI. [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] |
mariadb | Quickstart Create Mariadb Server Database Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-bicep.md | Last updated 06/24/2022 # Quickstart: Use Bicep to create an Azure Database for MariaDB server + Azure Database for MariaDB is a managed service that you use to run, manage, and scale highly available MariaDB databases in the cloud. In this quickstart, you use Bicep to create an Azure Database for MariaDB server in PowerShell or Azure CLI. [!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)] |
mariadb | Quickstart Create Mariadb Server Database Using Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-cli.md | |
mariadb | Quickstart Create Mariadb Server Database Using Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-portal.md | |
mariadb | Quickstart Create Mariadb Server Database Using Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-powershell.md | |
mariadb | Reference Stored Procedures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/reference-stored-procedures.md | Last updated 06/24/2022 # Azure Database for MariaDB management stored procedures + Stored procedures are available on Azure Database for MariaDB servers to help manage your MariaDB server. This includes managing your server's connections, queries, and setting up Data-in Replication. ## Data-in Replication stored procedures |
mariadb | Sample Scripts Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/sample-scripts-azure-cli.md | Keywords: azure cli samples, azure cli code samples, azure cli script samples # Azure CLI samples for Azure Database for MariaDB + You can configure Azure Database for MariaDB by using the <a href="/cli/azure">Azure CLI</a>. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] |
mariadb | Sample Change Server Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-change-server-configuration.md | Last updated 01/26/2022 # List and update configurations of an Azure Database for MariaDB server using Azure CLI + This sample CLI script lists all available configuration parameters, as well as their allowable values, for an Azure Database for MariaDB server, and sets *innodb_lock_wait_timeout* to a value other than the default. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] |
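The core of such a script can be sketched as follows (placeholder server and group names; the full sample in the linked article also creates the server):

```azurecli
# List all configurable server parameters with their current and allowed values.
az mariadb server configuration list \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --output table

# Set innodb_lock_wait_timeout to a non-default value.
az mariadb server configuration set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name innodb_lock_wait_timeout \
    --value 120
```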
mariadb | Sample Create Server And Firewall Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-and-firewall-rule.md | Last updated 01/26/2022 # Create a MariaDB server and configure a firewall rule using the Azure CLI + This sample CLI script creates an Azure Database for MariaDB server and configures a server-level firewall rule. Once the script runs successfully, the MariaDB server is accessible by all Azure services and the configured IP address. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] |
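The two key commands in such a script look roughly like this (server name, location, credentials, and SKU are placeholders; the real sample may differ in detail):

```azurecli
# Create the Azure Database for MariaDB server.
az mariadb server create \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --location westus \
    --admin-user myadmin \
    --admin-password 'ChangeMe#123' \
    --sku-name GP_Gen5_2

# Allow a single client IP address through the server-level firewall.
az mariadb server firewall-rule create \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name AllowMyIP \
    --start-ip-address 203.0.113.5 \
    --end-ip-address 203.0.113.5
```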
mariadb | Sample Create Server With Vnet Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-with-vnet-rule.md | Last updated 01/26/2022 # Create a MariaDB server and configure a vNet rule using the Azure CLI + This sample CLI script creates an Azure Database for MariaDB server and configures a vNet rule. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] |
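A sketch of the vNet rule step, assuming a virtual network and subnet with the `Microsoft.Sql` service endpoint already exist (all names are placeholders):

```azurecli
# Bind the MariaDB server to a subnet via a virtual network rule.
az mariadb server vnet-rule create \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name myvnetrule \
    --vnet-name myvnet \
    --subnet mysubnet
```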
mariadb | Sample Point In Time Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-point-in-time-restore.md | Last updated 02/11/2022 # Restore an Azure Database for MariaDB server using Azure CLI + This sample CLI script restores a single Azure Database for MariaDB server to a previous point in time. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] |
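The restore itself is a single CLI call that creates a new server from the source server's backups; a sketch with placeholder names and a UTC timestamp:

```azurecli
# Restore the source server to a new server at a previous point in time (UTC).
az mariadb server restore \
    --resource-group myresourcegroup \
    --name mydemoserver-restored \
    --source-server mydemoserver \
    --restore-point-in-time "2022-02-11T13:10:00Z"
```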
mariadb | Sample Scale Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-scale-server.md | Last updated 01/26/2022 # Monitor and scale an Azure Database for MariaDB server using Azure CLI + This sample CLI script scales compute and storage for a single Azure Database for MariaDB server after querying the metrics. Compute can scale up or down. Storage can only scale up. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] |
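The monitor-then-scale pattern can be sketched as follows (the subscription ID, names, and SKU are placeholders; scaling is done with `az mariadb server update`):

```azurecli
# Query recent CPU metrics before deciding whether to scale.
az monitor metrics list \
    --resource "/subscriptions/<subId>/resourceGroups/myresourcegroup/providers/Microsoft.DBforMariaDB/servers/mydemoserver" \
    --metric cpu_percent \
    --interval PT1M

# Scale compute up to 4 vCores; note that storage can only scale up, not down.
az mariadb server update \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --sku-name GP_Gen5_4
```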
mariadb | Sample Server Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-server-logs.md | Last updated 01/26/2022 # Enable and download server slow query logs of an Azure Database for MariaDB server using Azure CLI + This sample CLI script enables and downloads the slow query logs of a single Azure Database for MariaDB server. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] |
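Enabling and downloading the slow query log can be sketched as follows (the server name is a placeholder and the log file name is illustrative; actual file names come from the `server-logs list` output):

```azurecli
# Turn on the slow query log via a server parameter.
az mariadb server configuration set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name slow_query_log \
    --value ON

# List available log files, then download one by name.
az mariadb server-logs list \
    --resource-group myresourcegroup \
    --server-name mydemoserver
az mariadb server-logs download \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name mysql-slow-mydemoserver-2022012612.log
```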
mariadb | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md | |
mariadb | Select Right Deployment Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/select-right-deployment-type.md | Last updated 06/24/2022 # Choose the right MariaDB Server option in Azure + With Azure, your MariaDB server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has multiple deployment options, and there are service tiers within each deployment option. When you choose between IaaS and PaaS, you must decide if you want to manage your database, apply patches, and make backups, or if you want to delegate these operations to Azure. When making your decision, consider the following two options: |
mariadb | Tutorial Design Database Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-cli.md | |
mariadb | Tutorial Design Database Using Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-using-portal.md | |
mariadb | Tutorial Design Database Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-using-powershell.md | |
mariadb | Whats Happening To Mariadb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/whats-happening-to-mariadb.md | + + Title: What's happening to Azure Database for MariaDB? +description: The Azure Database for MariaDB service is being deprecated. +++ Last updated : 09/19/2023++++++# What's happening to Azure Database for MariaDB? +++Azure Database for MariaDB is on the retirement path, and **Azure Database for MariaDB is scheduled for retirement by September 19, 2025**. ++As part of this retirement, creating new MariaDB server instances from the Azure portal is no longer supported beginning **December 19, 2023**. If you still need to create MariaDB instances to meet business continuity needs, you can use the [Azure CLI](/azure/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli) until **March 19, 2024**. ++We're investing in our flagship offering, Azure Database for MySQL - Flexible Server, which is better suited for mission-critical workloads. Azure Database for MySQL - Flexible Server has better features, performance, an improved architecture, and more controls to manage costs across all service tiers compared to Azure Database for MariaDB. We encourage you to migrate to Azure Database for MySQL - Flexible Server before retirement to experience its new capabilities. ++Azure Database for MySQL - Flexible Server is a fully managed, production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. For more information about Flexible Server, visit [Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/overview). 
++### Migrate from Azure Database for MariaDB to Azure Database for MySQL - Flexible Server ++Learn how to [migrate from Azure Database for MariaDB to Azure Database for MySQL - Flexible Server.](https://aka.ms/AzureMariaDBtoAzureMySQL) ++### Frequently Asked Questions (FAQs) ++**Q. Why is Azure Database for MariaDB being retired?** ++A. Azure Database for MariaDB became Generally Available (GA) in 2018. However, given customer feedback and new advancements in the computation, availability, scalability, and performance capabilities in the Azure database landscape, the MariaDB offering needs to be retired and upgraded with a new architecture, Azure Database for MySQL - Flexible Server, to bring you the best of Azure's open-source database platform. ++**Q. Why am I being asked to migrate to Azure Database for MySQL - Flexible Server?** ++A. There's high application compatibility between Azure Database for MariaDB and Azure Database for MySQL, as MariaDB was forked from MySQL. [Azure Database for MySQL - Flexible Server](https://azure.microsoft.com/pricing/details/mysql/flexible-server/#overview) is the best platform for running all your MySQL workloads on Azure. Azure Database for MySQL - Flexible Server is both economical and provides better performance across all service tiers, with more ways to control your costs for cheaper and faster disaster recovery: ++- More ways to optimize costs, including support for burstable tier compute options. ++- Improved performance for business-critical production workloads that require low latency, high concurrency, fast failover, and high scalability. ++- Improved uptime by configuring a hot standby on the same or a different zone and a one-hour time window for planned server maintenance. ++**Q. How soon must I migrate my MariaDB servers to a flexible server?** ++A. 
Azure Database for MariaDB is scheduled for retirement by **September 19, 2025**, so we strongly recommend migrating to Azure Database for MySQL - Flexible Server at your earliest opportunity to ensure ample time to run through the migration lifecycle, apply the benefits offered by Flexible Server, and ensure the continuity of your business. ++**Q. What happens to my existing Azure Database for MariaDB instances?** ++A. Your existing Azure Database for MariaDB workloads will continue to function as before and **will be officially supported until the sunset date**. However, no further updates will be released for Azure Database for MariaDB, and we strongly advise you to start migrating to Azure Database for MySQL - Flexible Server at the earliest opportunity. ++**Q. Can I choose to continue running Azure Database for MariaDB beyond the sunset date?** ++A. Unfortunately, we don't plan to support Azure Database for MariaDB beyond the sunset date of September 19, 2025. Hence, we advise that you start planning your migration as soon as possible. ++**Q. After the Azure Database for MariaDB retirement announcement, what if I still need to create a new MariaDB server to meet my business needs?** ++A. As part of this retirement, we'll no longer support creating new MariaDB instances from the Azure portal beginning **December 19, 2023**. If you still need to create MariaDB instances to meet business continuity needs, you can use the [Azure CLI](/azure/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli) until **March 19, 2024**. ++**Q. How does the Azure Database for MySQL flexible server's 99.99% availability SLA differ from MariaDB?** ++A. Azure Database for MySQL - Flexible Server zone-redundant deployment provides 99.99% availability with zonal-level resiliency, whereas MariaDB provides resiliency in a single availability zone. 
Flexible Server's High Availability (HA) architecture deploys a warm standby with redundant compute and storage (with each site's data stored in 3x copies) as compared to MariaDB's HA architecture, which doesn't have a passive hot standby to help recover from zonal failures. The flexible server's HA architecture enables reduced downtime during unplanned outages and planned maintenance. ++**Q. What migration options help me migrate to a flexible server?** ++A. Learn how to [migrate from Azure Database for MariaDB to Azure Database for MySQL - Flexible Server.](https://aka.ms/AzureMariaDBtoAzureMySQL) ++**Q. I have further questions on retirement. How can I get assistance with it?** ++A. If you have questions, get answers from community experts in [Microsoft Q&A.](/answers/tags/56/azure-database-mariadb) If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest): ++- For _Issue type_, select **Technical**. +- For _Subscription_, select your subscription. +- For _Service_, select **My services**. +- For _Service type_, select **Azure Database for MariaDB**. +- For _Resource_, select your resource. +- For _Problem type_, select **Migration**. +- For _Problem subtype_, select **Migrating from Azure for MariaDB to Azure for MySQL Flexible Server**. ++For further questions, reach out to [AskAzureDBforMariaDB@service.microsoft.com](mailto:AskAzureDBforMariaDB@service.microsoft.com) ++### Next steps ++- [Migrate to Azure Database for MySQL - Flexible Server](https://aka.ms/AzureMariaDBtoAzureMySQL) +- [What is Flexible Server](/azure/mysql/flexible-server/overview) |
migrate | How To Upgrade Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-upgrade-windows.md | -> [!NOTE] -> This feature is currently available only for [VMware agentless migration](tutorial-migrate-vmware.md). - ## Prerequisites - Ensure you have an existing Migrate project or [create](create-manage-projects.md) a project. -- Ensure you have discovered the servers according to [Discover servers in VMware environment](tutorial-discover-vmware.md) and replicated the servers as described in [Migrate VMware VMs](tutorial-migrate-vmware.md#replicate-vms). +- Ensure you have discovered the servers in your [VMware](tutorial-discover-vmware.md), [Hyper-V](tutorial-discover-hyper-v.md), or [physical server](tutorial-discover-physical.md) environment and replicated them as described in [Migrate VMware VMs](tutorial-migrate-vmware.md#replicate-vms), [Migrate Hyper-V VMs](tutorial-migrate-hyper-v.md#migrate-vms), or [Migrate Physical servers](tutorial-migrate-physical-virtual-machines.md#migrate-vms), based on your environment. - Verify the operating system disk has enough [free space](/windows-server/get-started/hardware-requirements#storage-controller-and-disk-space-requirements) to perform the in-place upgrade. The minimum disk space requirement is 32 GB. - The upgrade feature only works for Windows Server Standard and Datacenter editions. - The upgrade feature does not work for non-en-US language servers. |
migrate | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md | Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
migrate | Tutorial Migrate Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md | Before you begin this tutorial, you should: 1. Go to the already created project or [create a new project.](./create-manage-projects.md) 1. Verify permissions for your Azure account - Your Azure account needs permissions to create a VM, write to an Azure managed disk, and manage failover operations for the Recovery Services Vault associated with your Azure Migrate project. +> [!NOTE] +> If you're planning to upgrade your Windows operating system, Azure Migrate may download Windows SetupDiag to gather error details if the upgrade fails. Ensure the VM created in Azure after the migration has access to [SetupDiag](https://go.microsoft.com/fwlink/?linkid=870142). Without access to SetupDiag, you may not get detailed OS upgrade failure error codes, but the upgrade can still proceed. ++ ## Download the provider For migrating Hyper-V VMs, the Migration and modernization tool installs software providers (Microsoft Azure Site Recovery provider and Microsoft Azure Recovery Service agent) on Hyper-V Hosts or cluster nodes. Note that the [Azure Migrate appliance](migrate-appliance.md) isn't used for Hyper-V migration. Do a test migration as follows: ![Screenshot of Test migration screen.](./media/tutorial-migrate-hyper-v/test-migrate.png) 1. In **Test Migration**, select the Azure virtual network in which the Azure VM will be located after the migration. We recommend you use a non-production virtual network.+1. You have an option to upgrade the Windows Server OS during test migration. For Hyper-V VMs, automatic detection of OS is not yet supported. To upgrade, select the **Check for upgrade** option. In the pane that appears, select the current OS version and the target version that you want to upgrade to. If the target version is available, it is processed accordingly. [Learn more](how-to-upgrade-windows.md). 1. 
The **Test migration** job starts. Monitor the job in the portal notifications. 1. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has a suffix **-Test**. 1. After the test is done, right-click the Azure VM in **Replications**, and select **Clean up test migration**. After you've verified that the test migration works as expected, you can migrate 1. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **Yes** > **OK**. - By default Azure Migrate shuts down the on-premises VM, and runs an on-demand replication to synchronize any VM changes that occurred since the last replication occurred. This ensures no data loss. - If you don't want to shut down the VM, select **No**.+1. You have an option to upgrade the Windows Server OS during migration. For Hyper-V VMs, automatic detection of OS is not yet supported. To upgrade, select the **Check for upgrade** option. In the pane that appears, select the current OS version and the target version that you want to upgrade to. If the target version is available, it is processed accordingly. [Learn more](how-to-upgrade-windows.md). 1. A migration job starts for the VM. Track the job in Azure notifications. 1. After the job finishes, you can view and manage the VM from the **Virtual Machines** page. |
migrate | Tutorial Migrate Physical Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md | Last updated 07/26/2023 -# Migrate machines as physical servers to Azure +# Migrate machines as physical servers to Azure This article shows you how to migrate machines as physical servers to Azure, using the Migration and modernization tool. Migrating machines by treating them as physical servers is useful in a number of scenarios: Before you begin this tutorial, you should: - [Review](./agent-based-migration-architecture.md) the migration architecture. - [Review](../site-recovery/migrate-tutorial-windows-server-2008.md#limitations-and-known-issues) the limitations related to migrating Windows Server 2008 servers to Azure. +> [!NOTE] +> If you're planning to upgrade your Windows operating system, Azure Migrate may download Windows SetupDiag to gather error details if the upgrade fails. Ensure the VM created in Azure after the migration has access to [SetupDiag](https://go.microsoft.com/fwlink/?linkid=870142). Without access to SetupDiag, you may not get detailed OS upgrade failure error codes, but the upgrade can still proceed. + ## Prepare Azure Prepare Azure for migration with the Migration and modernization tool. Do a test migration as follows: :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/test-migrate-inline.png" alt-text="Screenshot showing the result after clicking test migration." lightbox="./media/tutorial-migrate-physical-virtual-machines/test-migrate-expanded.png"::: 3. In **Test Migration**, select the Azure VNet in which the Azure VM will be located after the migration. We recommend you use a non-production VNet.+1. You have an option to upgrade the Windows Server OS during test migration. To upgrade, select the **Upgrade available** option. 
In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. [Learn more](how-to-upgrade-windows.md). 4. The **Test migration** job starts. Monitor the job in the portal notifications. 5. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has a suffix **-Test**. 6. After the test is done, right-click the Azure VM in **Replicating machines**, and click **Clean up test migration**. After you've verified that the test migration works as expected, you can migrate > [!NOTE] > For minimal data loss, the recommendation is to bring the application down manually as part of the migration window (don't let the applications accept any connections) and then initiate the migration. The server needs to be kept running, so remaining changes can be synchronized before the migration is completed. +1. You have an option to upgrade the Windows Server OS during migration. To upgrade, select the **Upgrade available** option. In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. [Learn more](how-to-upgrade-windows.md). 4. A migration job starts for the VM. Track the job in Azure notifications. 5. After the job finishes, you can view and manage the VM from the **Virtual Machines** page. |
mysql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md | |
nat-gateway | Tutorial Hub Spoke Nat Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-hub-spoke-nat-firewall.md | Azure Firewall provides [2,496 SNAT ports per public IP address](../firewall/int NAT gateway can be integrated with Azure Firewall by configuring NAT gateway directly to the Azure Firewall subnet in order to provide a more scalable method of outbound connectivity. For production deployments, a hub and spoke network is recommended, where the firewall is in its own virtual network. The workload servers are peered virtual networks in the same region as the hub virtual network where the firewall resides. In this architectural setup, NAT gateway can provide outbound connectivity from the hub virtual network for all spoke virtual networks peered. + In this tutorial, you learn how to: > [!div class="checklist"] |
nat-gateway | Tutorial Nat Gateway Load Balancer Internal Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md | SNAT is enabled for an internal backend pool via another public load balancer, n The NAT gateway integration replaces the need for the deployment of a public load balancer, network routing, or a public IP defined on a virtual machine in the backend pool. + In this tutorial, you learn how to: > [!div class="checklist"] |
nat-gateway | Tutorial Nat Gateway Load Balancer Public Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal.md | By default, an Azure Standard Load Balancer is secure. Outbound connectivity is The NAT gateway integration replaces the need for outbound rules for backend pool outbound SNAT. + In this tutorial, you learn how to: > [!div class="checklist"] |
networking | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md | Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
notification-hubs | Notification Hubs High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-high-availability.md | Title: Azure Notification Hubs high availability and cross-region disaster recovery (preview) + Title: Azure Notification Hubs high availability and cross-region disaster recovery description: Learn about high availability and cross-region disaster recovery options in Azure Notification Hubs. Last updated 09/11/2023 -# High availability for Azure Notification Hubs (preview) +# High availability for Azure Notification Hubs [Azure Notification Hubs][] provides an easy-to-use and scaled-out push engine that enables you to send notifications to any platform (iOS, Android, Windows, etc.) from any back-end (cloud or on-premises). This article describes the configuration options to achieve the availability characteristics required by your solution. For more information about our SLA, see the [Notification Hubs SLA][]. |
operator-nexus | Howto Baremetal Run Data Extract | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-data-extract.md | The current list of supported commands is Command Name: `mde-agent-information`\ Arguments: None +- Collect Dell Hardware Rollup Status\ + Command Name: `hardware-rollup-status`\ + Arguments: None + The command syntax is: ```azurecli-interactive az networkcloud baremetalmachine run-data-extract --name "bareMetalMachineName" --limit-time-seconds 600 ``` +This example executes the `hardware-rollup-status` command without arguments. ++```azurecli +az networkcloud baremetalmachine run-data-extract --name "bareMetalMachineName" \ + --resource-group "resourceGroupName" \ + --subscription "subscription" \ + --commands '[{"command":"hardware-rollup-status"}]' \ + --limit-time-seconds 600 +``` + The operation runs asynchronously and returns an HTTP status code of 202. See the **Viewing the output** section for details on how to track command completion and view the output file. ## Viewing the output Writing to /hostfs/tmp/runcommand Script execution result can be found in storage account: https://cmzhnh6bdsfsdwpbst.blob.core.windows.net/bmm-run-command-output/f5962f18-2228-450b-8cf7-cb8344fdss63b0-action-bmmdataextcmd.tar.gz?se=2023-07-26T19%3A07%3A22Z&sig=X9K3VoNWRFP78OKqFjvYoxubp65BbNTq%2BGnlHclI9Og%3D&sp=r&spr=https&sr=b&st=2023-07-26T15%3A07%3A22Z&sv=2019-12-12 ```++Data is collected with the `hardware-rollup-status` command and formatted as JSON to `/hostfs/tmp/runcommand/rollupStatus.json`. The JSON file is found +in the data extract zip file located in the storage account. 
++```azurecli +====Action Command Output==== +Executing hardware-rollup-status command +Getting rollup status logs for b37dev03a1c002 +Writing to /hostfs/tmp/runcommand +++================================ +Script execution result can be found in storage account: +https://cmkfjft8twwpst.blob.core.windows.net/bmm-run-command-output/20b217b5-ea38-4394-9db1-21a0d392eff0-action-bmmdataextcmd.tar.gz?se=2023-09-19T18%3A47%3A17Z&sig=ZJcsNoBzvOkUNL0IQ3XGtbJSaZxYqmtd%3D&sp=r&spr=https&sr=b&st=2023-09-19T14%3A47%3A17Z&sv=2019-12-12 +``` |
postgresql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md | |
reliability | Migrate Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-vm.md | -Virtual Machine (VM) and Virtual Machine Scale Sets are zonal services, which means that VM resources can be deployed by using one of the following methods: +Virtual Machine (VM) and Virtual Machine Scale Sets are availability zone enabled services, which means that VM resources can be deployed by using one of the following methods: -- VM resources are deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements.+- **Zonal**: VM resources are deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements. -- VM resources are replicated to one or more zones within the region to improve the resiliency of the application and data in a High Availability (HA) architecture.+- **Zone-redundant**: VM resources are replicated to one or more zones within the region to improve the resiliency of the application and data in a High Availability (HA) architecture. -When you migrate resources to availability zone support, we recommend that you select multiple zones for your new VMs and Virtual Machine Scale Sets, to ensure high-availability of your compute resources. +To ensure high-availability of your compute resources, we recommend that you select multiple zones for your new VMs and Virtual Machine Scale Sets when you migrate to availability zones. ++For more information on availability zone support for VM services, see [Reliability in Virtual Machines](./reliability-virtual-machines.md). For availability zone support for Virtual Machine scale sets, see [Reliability in Virtual Machine Scale Sets](./reliability-virtual-machine-scale-sets.md). 
## Prerequisites The following requirements should be part of a disaster recovery strategy that h ## Next Steps -Learn more about: +- [Azure services and regions that support availability zones](availability-zones-service-support.md) +- [Reliability in Virtual Machines](./reliability-virtual-machines.md) +- [Reliability in Virtual Machine Scale Sets](./reliability-virtual-machine-scale-sets.md) -> [!div class="nextstepaction"] -> [Azure services and regions that support availability zones](availability-zones-service-support.md) |
reliability | Reliability Guidance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md | Azure reliability guidance contains the following: [Azure SQL](/azure/azure-sql/database/high-availability-sla?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Storage: Blob Storage](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Storage Mover](reliability-azure-storage-mover.md)|-[Azure Virtual Machine Scale Sets](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Virtual Machine Scale Sets](reliability-virtual-machine-scale-sets.md)| [Azure Virtual Machines](reliability-virtual-machines.md)| [Azure Virtual Machines Image Builder](reliability-image-builder.md)| [Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |
reliability | Reliability Virtual Machine Scale Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machine-scale-sets.md | + + Title: Reliability in Azure Virtual Machine Scale Sets +description: Learn about reliability in Azure Virtual Machine Scale Sets. +++++ Last updated : 06/12/2023+++# Reliability in Virtual Machine Scale Sets ++This article contains [specific reliability recommendations](#reliability-recommendations) and information on [availability zones support](#availability-zone-support) for Virtual Machine Scale Sets. ++>[!NOTE] +>Virtual Machine Scale Sets can only be deployed into one region. If you want to deploy VMs across multiple regions, see [Virtual Machines-Disaster recovery: cross-region failover](./reliability-virtual-machines.md#disaster-recovery-and-business-continuity). ++For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). +++## Reliability recommendations ++This section contains recommendations for achieving resiliency and availability for your Azure Virtual Machine Scale Sets. 
+++### Reliability recommendations summary ++| Category | Priority |Recommendation | +||--|| +| [**High Availability**](#high-availability) |:::image type="icon" source="../reliability/media/icon-recommendation-high.svg":::| [Enable automatic repair policy](#-enable-automatic-repair-policy) | +| |:::image type="icon" source="../reliability/media/icon-recommendation-high.svg":::| [Deploy Virtual Machine Scale Sets across availability zones with Virtual Machine Scale Sets Flex](#-deploy-virtual-machine-scale-sets-across-availability-zones-with-virtual-machine-scale-sets-flex) | +| [**Scalability**](#scalability) |:::image type="icon" source="../reliability/media/icon-recommendation-medium.svg":::| [Deploy VMs with flexible orchestration mode](#-deploy-vms-with-flexible-orchestration-mode) | +| |:::image type="icon" source="../reliability/media/icon-recommendation-high.svg":::| [Configure Virtual Machine Scale Sets Autoscale to Automatic](#-configure-virtual-machine-scale-sets-autoscale-to-automatic) | +| |:::image type="icon" source="../reliability/media/icon-recommendation-low.svg":::| [Set Virtual Machine Scale Sets custom scale-in policies to default](#-set-virtual-machine-scale-sets-custom-scale-in-policies-to-default) | +| [**Disaster Recovery**](#disaster-recovery) |:::image type="icon" source="../reliability/media/icon-recommendation-low.svg":::| [Enable Protection Policy for all Virtual Machine Scale Set VMs](#-enable-protection-policy-for-all-virtual-machine-scale-set-vms) | +| [**Monitoring**](#monitoring) |:::image type="icon" source="../reliability/media/icon-recommendation-medium.svg":::| [Enable Virtual Machine Scale Sets application health monitoring](#-enable-virtual-machine-scale-sets-application-health-monitoring) | +| [**System Efficiency**](#system-efficiency) |:::image type="icon" source="../reliability/media/icon-recommendation-medium.svg":::| [Configure Allocation Policy Spreading algorithm to max 
spreading](#-configure-allocation-policy-spreading-algorithm-to-max-spreading) | +| [**Automation**](#automation) |:::image type="icon" source="../reliability/media/icon-recommendation-low.svg":::| [Set patch orchestration options to Azure-orchestrated](#-set-patch-orchestration-options-to-azure-orchestrated) | +++### High availability ++#### :::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: **Enable automatic repair policy** ++To achieve high availability for applications, [enable automatic instance repairs](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md#requirements-for-using-automatic-instance-repairs) to maintain a set of healthy VMs. When the [Application Health extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md) or [Load Balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) find that an instance is unhealthy, automatic instance repair deletes the unhealthy instance and creates a new one to replace it. ++A grace period can be set using the `automaticRepairsPolicy.gracePeriod` property. The grace period, specified in minutes and in ISO 8601 format, can range from 10 to 90 minutes, and has a default value of 30 minutes. +++# [Azure Resource Graph](#tab/graph-4) +++- +++#### :::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: **Deploy Virtual Machine Scale Sets across availability zones with Virtual Machine Scale Sets Flex** ++When you create your Virtual Machine Scale Sets, use availability zones to protect your applications and data in the unlikely event of a datacenter failure. For more information, see [Availability zone support](#availability-zone-support). 
++# [Azure Resource Graph](#tab/graph-4) +++- ++### Scalability ++#### :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: **Deploy VMs with flexible orchestration mode** ++All VMs, including single instance VMs, should be deployed into a scale set using [flexible orchestration mode](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) to future-proof your application for scaling and availability. Flexible orchestration offers high availability guarantees (up to 1000 VMs) by spreading VMs across fault domains in a region or within an availability zone. ++For more information on how to use scale sets appropriately, see [When to use Virtual Machine Scale Sets instead of VMs](../virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md#when-to-use-scale-sets-instead-of-virtual-machines). ++# [Azure Resource Graph](#tab/graph-1) +++- ++#### :::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: **Configure Virtual Machine Scale Sets Autoscale to Automatic** ++[Autoscale is a built-in feature of Azure Monitor](../azure-monitor/autoscale/autoscale-overview.md) that helps improve the performance and cost-effectiveness of your resources by adding and removing scale set VMs based on demand. In addition, you can choose to scale your resources manually to a specific instance count or in accordance with metrics thresholds. You can also schedule instance counts that scale during designated time windows. ++To learn how to enable automatic OS image upgrades, see [Azure Virtual Machine Scale Set automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md). 
++# [Azure Resource Graph](#tab/graph-2) +++- +++#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **Set Virtual Machine Scale Sets custom scale-in policies to default** +++The [Virtual Machine Scale Sets custom scale-in policy feature](../virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md) gives you a way to configure the order in which virtual machines are scaled-in. There are three scale-in policy configurations: ++- [Default](../virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md?WT.mc_id=Portal-Microsoft_Azure_Monitoring#default-scale-in-policy) +- [NewestVM](../virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md?WT.mc_id=Portal-Microsoft_Azure_Monitoring#newestvm-scale-in-policy) +- [OldestVM](../virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md?WT.mc_id=Portal-Microsoft_Azure_Monitoring#oldestvm-scale-in-policy) ++A Virtual Machine Scale Set deployment can be scaled-out or scaled-in based on an array of metrics, including platform and user-defined custom metrics. While a scale-out creates new virtual machines based on the scale set model, a scale-in affects running virtual machines that may have different configurations and/or functions as the scale set workload evolves. ++You don't need to specify a scale-in policy if you only want the default ordering to be followed, as the default scale-in policy provides the best algorithm and flexibility for most scenarios. The default ordering is as follows: ++1. Balance virtual machines across availability zones (if the scale set is deployed with availability zone support). +1. Balance virtual machines across fault domains (best effort). +1. Delete the virtual machine with the highest instance ID. ++Only use the *NewestVM* and *OldestVM* policies when your workload requires that the oldest or newest VMs be deleted after balancing across availability zones. 
++>[!NOTE] +>Balancing across availability zones or fault domains doesn't move VMs across availability zones or fault domains. The balancing is achieved through the deletion of virtual machines from the unbalanced availability zones or fault domains until the distribution of virtual machines becomes balanced. +++# [Azure Resource Graph](#tab/graph-3) +++- +++++### Disaster recovery ++#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **Enable Protection Policy for all Virtual Machine Scale Set VMs** ++Use [Virtual Machine Scale Sets Protection Policy](../virtual-machine-scale-sets/virtual-machine-scale-sets-instance-protection.md) if you want specific VMs to be treated differently from the rest of the scale set instances. ++As your application processes traffic, there can be situations where you want specific VMs to be treated differently from the rest of the scale set instances. For example, certain VMs in the scale set could be performing long-running operations, and you don't want these VMs to be scaled-in until the operations complete. You might also have specialized a few VMs in the scale set to perform different tasks than other members of the scale set, and require that these special VMs not be modified with the other VMs in the scale set. Instance protection provides the extra controls to enable these and other scenarios for your application. ++# [Azure Resource Graph](#tab/graph-5) +++- +### Monitoring ++#### :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: **Enable Virtual Machine Scale Sets application health monitoring** ++Monitoring your application health is an important signal for managing and upgrading your deployment. 
Azure Virtual Machine Scale Sets provides support for rolling upgrades, including: ++- [Automatic OS-Image Upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) +- [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md), which relies on health monitoring of individual VMs to upgrade your deployment. +- [Load Balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) *or* the [Application Health extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md), either of which monitors the application health of each VM in your scale set and [performs instance repairs using Automatic Instance Repairs](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md). +++# [Azure Resource Graph](#tab/graph-6) +++- +++### System Efficiency ++#### :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: **Configure Allocation Policy Spreading algorithm to max spreading** ++With max spreading, the scale set spreads your VMs across as many fault domains as possible within each zone. This spreading could be across greater or fewer than five fault domains per zone. With static fixed spreading, the scale set spreads your VMs across exactly five fault domains per zone. If the scale set can't find five distinct fault domains per zone to satisfy the allocation request, the request fails. ++For more information, see [Spreading options](#spreading-options). ++# [Azure Resource Graph](#tab/graph-6) +++- ++### Automation ++#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **Set patch orchestration options to Azure-orchestrated** ++Enable automatic VM guest patching for your Azure VMs. Automatic VM guest patching helps ease update management by safely and automatically patching VMs to maintain security compliance, while limiting the blast radius of VMs. 
++# [Azure Resource Graph](#tab/graph-6) +++- ++## Availability zone support +++With [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/flexible-virtual-machine-scale-sets.md), you can create and manage a group of load balanced VMs. The number of VMs can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update many VMs. There's no cost for the scale set itself. You only pay for each VM instance that you create. ++Virtual Machine Scale Sets supports both zonal and zone-redundant deployments within a region: ++- **Zonal deployment.** When you create a scale set in a single zone, you control which zone all the VMs of that set run in. The scale set is managed and autoscales only within that zone. ++- **Zone-redundant deployment.** A zone-redundant scale set lets you create a single scale set that spans multiple zones. By default, as VMs are created, they're evenly balanced across zones. +++### Prerequisites ++1. To use availability zones, your scale set must be created in a [supported Azure region](./availability-zones-service-support.md). ++1. All VMs, even single-instance VMs, should be deployed into a scale set using [flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) mode to future-proof your application for scaling and availability. +++### SLA ++Because availability zones are physically separate and provide distinct power sources, networking, and cooling, service-level agreements (SLAs) are increased. For more information, see the [SLA for Microsoft Online Services](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services). 
++#### Create a Virtual Machine Scale Set with availability zones enabled ++You can create a scale set that uses availability zones with one of the following methods: +++# [Azure portal](#tab/portal) ++The process to create a scale set that uses a zonal deployment is the same as detailed in the [getting started article](../virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-portal.md). When you select a supported Azure region, you can create a scale set in one or more available zones, as shown in the following example: ++![Create a scale set in a single availability zone](../virtual-machine-scale-sets/media/virtual-machine-scale-sets-use-availability-zones/vmss-az-portal.png) ++The scale set and supporting resources, such as the Azure load balancer and public IP address, are created in the single zone that you specify. ++# [Azure CLI](#tab/cli) +++### Zonal scale set ++The following example creates a single-zone scale set named *myScaleSet* in zone *1*: ++```azurecli ++az vmss create \ + --resource-group myResourceGroup \ + --name myScaleSet \ + --orchestration-mode flexible \ + --image <SKU Image> \ + --upgrade-policy-mode automatic \ + --admin-username azureuser \ + --generate-ssh-keys \ + --zones 1 +``` ++For a complete example of a single-zone scale set and network resources, see [our sample CLI script](../virtual-machine-scale-sets/scripts/cli-sample-single-availability-zone-scale-set.md#sample-script). ++### Zone-redundant scale set ++To create a zone-redundant scale set, you use a *Standard* SKU public IP address and load balancer. For enhanced redundancy, the *Standard* SKU creates zone-redundant network resources. For more information, see [Azure Load Balancer Standard overview](../load-balancer/load-balancer-overview.md) and [Standard Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md). ++To create a zone-redundant scale set, specify multiple zones with the `--zones` parameter. 
The following example creates a zone-redundant scale set named *myScaleSet* across zones *1,2,3*: ++```azurecli +az vmss create \ + --resource-group myResourceGroup \ + --name myScaleSet \ + --orchestration-mode flexible \ + --image <SKU Image> \ + --upgrade-policy-mode automatic \ + --admin-username azureuser \ + --generate-ssh-keys \ + --zones 1 2 3 +``` +++It may take a few minutes to create and configure all the scale set resources and VMs in the zone(s) that you specify. For a complete example of a zone-redundant scale set and network resources, see [our sample CLI script](../virtual-machine-scale-sets/scripts/cli-sample-zone-redundant-scale-set.md#sample-script). ++# [Azure PowerShell](#tab/powershell) +++### Zonal scale set ++The following example creates a single-zone scale set named *myScaleSet* in *East US 2* zone *1*. The Azure network resources for virtual network, public IP address, and load balancer are automatically created. When prompted, provide your own desired administrative credentials for the VMs in the scale set: ++```powershell +New-AzVmss ` + -ResourceGroupName "myResourceGroup" ` + -Location "EastUS2" ` + -VMScaleSetName "myScaleSet" ` + -OrchestrationMode "Flexible" ` + -VirtualNetworkName "myVnet" ` + -SubnetName "mySubnet" ` + -PublicIpAddressName "myPublicIPAddress" ` + -LoadBalancerName "myLoadBalancer" ` + -UpgradePolicy "Automatic" ` + -Zone "1" +``` ++### Zone-redundant scale set ++To create a zone-redundant scale set, specify multiple zones with the `-Zone` parameter. The following example creates a zone-redundant scale set named *myScaleSet* across *East US 2* zones *1, 2, 3*. The zone-redundant Azure network resources for virtual network, public IP address, and load balancer are automatically created. 
When prompted, provide your own desired administrative credentials for the VMs in the scale set: ++```powershell +New-AzVmss ` + -ResourceGroupName "myResourceGroup" ` + -Location "EastUS2" ` + -OrchestrationMode "Flexible" ` + -VMScaleSetName "myScaleSet" ` + -VirtualNetworkName "myVnet" ` + -SubnetName "mySubnet" ` + -PublicIpAddressName "myPublicIPAddress" ` + -LoadBalancerName "myLoadBalancer" ` + -UpgradePolicy "Automatic" ` + -Zone "1", "2", "3" +``` ++# [Azure Resource Manager templates](#tab/resource) ++The process to create a scale set that uses an availability zone is the same as detailed in the getting started article for [Linux](../virtual-machine-scale-sets/quick-create-template-linux.md) or [Windows](../virtual-machine-scale-sets/quick-create-template-windows.md). To use availability zones, you must create your scale set in a supported Azure region. Add the `zones` property to the *Microsoft.Compute/virtualMachineScaleSets* resource type in your template and specify which zone to use (such as zone *1*, *2*, or *3*). +++### Single-zone scale set ++The following example creates a Linux single-zone scale set named *myScaleSet* in *East US 2* zone *1*: ++```json +{ + "type": "Microsoft.Compute/virtualMachineScaleSets", + "name": "myScaleSet", + "location": "East US 2", + "apiVersion": "2017-12-01", + "zones": ["1"], + "sku": { + "name": "Standard_A1", + "capacity": "2" + }, + "properties": { + "upgradePolicy": { + "mode": "Automatic" + }, + "virtualMachineProfile": { + "storageProfile": { + "osDisk": { + "caching": "ReadWrite", + "createOption": "FromImage" + }, + "imageReference": { + "publisher": "myPublisher", + "offer": "myOffer", + "sku": "mySKU", + "version": "latest" + } + }, + "osProfile": { + "computerNamePrefix": "myvmss", + "adminUsername": "azureuser", + "adminPassword": "P@ssw0rd!" 
+ } + } + } +} +``` ++For a complete example of a single-zone scale set and network resources, see [our sample Resource Manager template](https://github.com/Azure/vm-scale-sets/blob/master/z_deprecated/preview/zones/singlezone.json). ++### Zone-redundant scale set ++To create a zone-redundant scale set, specify multiple values in the `zones` property for the *Microsoft.Compute/virtualMachineScaleSets* resource type. The following example creates a zone-redundant scale set named *myScaleSet* across *East US 2* zones *1,2,3*: ++```json +{ + "type": "Microsoft.Compute/virtualMachineScaleSets", + "name": "myScaleSet", + "location": "East US 2", + "apiVersion": "2017-12-01", + "zones": [ + "1", + "2", + "3" + ] +} +``` +If you create a public IP address or a load balancer, specify the *"sku": { "name": "Standard" }* property to create zone-redundant network resources. You also need to create a Network Security Group and rules to permit any traffic. For more information, see [Azure Load Balancer Standard overview](../load-balancer/load-balancer-overview.md) and [Standard Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md). ++For a complete example of a zone-redundant scale set and network resources, see [our sample Resource Manager template](https://github.com/Azure/vm-scale-sets/blob/master/z_deprecated/preview/zones/multizone.json). +++- ++### Zonal failover support ++Virtual Machine Scale Sets are created with five fault domains by default in Azure regions with no zones. For regions that support availability zone deployment of Virtual Machine Scale Sets, when this option is selected, the default fault domain count is 1 for each of the zones. In this case, *FD=1* implies that the VM instances belonging to the scale set are spread across many racks on a best effort basis. 
For more information, see [Choosing the right number of fault domains for Virtual Machine Scale Set](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains). ++### Low-latency design ++It's recommended that you configure Virtual Machine Scale Sets with zone-redundancy. However, if your application has strict low latency requirements, you may need to implement a zonal deployment for your scale set VMs. With a zonal scale set deployment, it's recommended that you create multiple scale set instances across more than one zone. For example, you can create one scale set instance that's pinned to zone 1 and one instance pinned to zone 2 or 3. You also need to use a load balancer or other application logic to direct traffic to the appropriate scale set during a zone outage. ++>[!Important] +>If you opt out of zone-aware deployment, you forego protection from isolation of underlying faults. Opting out from availability zone configuration forces reliance on resources that don't obey zone placement and separation (including underlying dependencies of these resources). These resources shouldn't be expected to survive zone-down scenarios. Solutions that leverage such resources should define a disaster recovery strategy and configure a recovery of the solution in another region. ++### Safe deployment techniques ++To have more control over where you deploy your VMs, you should deploy zonal, instead of regional, scale set VMs. However, zonal VMs only provide zone isolation and not zone redundancy. To achieve full zone redundancy with zonal VMs, there should be two or more VMs across different zones. ++It's also recommended that you use the max spreading deployment option for your zone-redundant VMs. For more information, see the [spreading options](#spreading-options). 
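The combination recommended above can be sketched with the Azure CLI. This sketch reuses the placeholder resource names and `<SKU Image>` convention from the creation examples in this article; treat the exact pairing of flags as an assumption to verify against the `az vmss create` reference:

```azurecli
# Zone-redundant (zones 1-3) with max spreading (platformFaultDomainCount = 1)
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --orchestration-mode flexible \
  --image <SKU Image> \
  --zones 1 2 3 \
  --platform-fault-domain-count 1
```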
+++#### Spreading options ++When you deploy a scale set into one or more availability zones, you have the following spreading options (as of API version *2017-12-01*): ++- **Max spreading (platformFaultDomainCount = 1)**. Max spreading is the recommended deployment option, as it provides the best spreading in most cases. If you want to spread replicas across distinct hardware isolation units, it's recommended that you spread across availability zones and utilize max spreading within each zone. + + With max spreading, the scale set spreads your VMs across as many fault domains as possible within each zone. This spreading could be across greater or fewer than five fault domains per zone. + + > [!NOTE] + > With max spreading, regardless of how many fault domains the VMs are spread across, you can only see one fault domain in both the scale set VM instance view and the instance metadata. The spreading within each zone is implicit. ++- **Static fixed spreading (platformFaultDomainCount = 5)**. With static fixed spreading, the scale set spreads your VMs exactly across five fault domains per zone. If the scale set can't find five distinct fault domains per zone to satisfy the allocation request, the request fails. ++- **Spreading aligned with managed disks fault domains (platformFaultDomainCount = 2 or 3)**. You can consider aligning the number of scale set fault domains with the number of managed disks fault domains. This alignment can help prevent loss of quorum if an entire managed disks fault domain goes down. The fault domain count can be set to less than or equal to the number of managed disks fault domains available in each of the regions. To learn about the number of Managed Disks fault domains by region, see [insert doc here](link here). ++#### Zone balancing ++For scale sets deployed across multiple zones (zone-redundant), you can choose either *best effort zone balance* or *strict zone balance*. 
A scale set is considered "balanced" if each zone has the same number of VMs (plus or minus one VM) as all other zones in the scale set. For example: ++| Scale Set | VMs in Zone 1 | VMs in Zone 2 | VMs in Zone 3 | Zone Balancing | +| - | - | - | - | -- | +| Balanced scale set | 2 | 3 | 3 | This scale set is considered balanced. There's only one zone with a different VM count and it's only 1 less than the other zones. | +| Unbalanced scale set | 1 | 3 | 3 | This scale set is considered unbalanced. Zone 1 has 2 fewer VMs than zones 2 and 3. | ++It's possible that VMs in the scale set are successfully created, but extensions on those VMs fail to deploy. The VMs with extension failures are still counted when determining if a scale set is balanced. For instance, a scale set with *3 VMs* in **zone 1**, *3 VMs* in **zone 2**, and *3 VMs* in **zone 3** is considered balanced even if all extensions failed in zone 1 and all extensions succeeded in zones 2 and 3. ++With best-effort zone balance, the scale set attempts to scale in and out while maintaining balance. However, if for some reason the balancing isn't possible (for example, if one zone goes down, the scale set can't create a new VM in that zone), the scale set allows temporary imbalance to successfully scale in or out. On subsequent scale-out attempts, the scale set adds VMs to zones that need more VMs for the scale set to be balanced. Similarly, on subsequent scale-in attempts, the scale set removes VMs from zones that need fewer VMs for the scale set to be balanced. With strict zone balance, the scale set fails any attempts to scale in or out if doing so would cause imbalance. ++To use best-effort zone balance, set `zoneBalance` to *false*. This setting is the default in API version *2017-12-01*. To use strict zone balance, set `zoneBalance` to *true*. 
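The balance rule above can be illustrated with a small helper script. This is a hypothetical illustration only (`is_balanced` is not an Azure CLI command): a scale set counts as balanced when the largest and smallest per-zone VM counts differ by at most one.

```bash
# Illustrative check of the zone balancing rule: a scale set is balanced
# when the max and min per-zone VM counts differ by at most 1.
is_balanced() {
  min=$(printf '%s\n' "$@" | sort -n | head -n 1)
  max=$(printf '%s\n' "$@" | sort -n | tail -n 1)
  [ $((max - min)) -le 1 ]
}

is_balanced 2 3 3 && echo "balanced"     # counts 2,3,3 differ by 1: balanced
is_balanced 1 3 3 || echo "unbalanced"   # zone 1 trails by 2: unbalanced
```

This mirrors the table above: counts of 2, 3, 3 are balanced, while 1, 3, 3 are not.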
+++### Migrate to availability zone support ++To learn how to redeploy a regional scale set to availability zone support, see [Migrate Virtual Machines and Virtual Machine Scale Sets to availability zone support](./migrate-vm.md). +++## Additional guidance +++### Placement groups ++> [!IMPORTANT] +> Placement groups only apply to Virtual Machine Scale Sets running in Uniform orchestration mode. ++When you deploy a Virtual Machine Scale Set, you have the option to deploy with a single or multiple [placement groups](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md) per availability zone. For regional scale sets, the choice is to have a single placement group in the region or to have multiple placement groups in the region. If the scale set property `singlePlacementGroup` is set to *false*, the scale set can be composed of multiple placement groups and has a range of 0-1000 VMs. When set to the default value of *true*, the scale set is composed of a single placement group and has a range of 0-100 VMs. For most workloads, we recommend multiple placement groups, which allows for greater scale. In API version *2017-12-01*, scale sets default to multiple placement groups for single-zone and cross-zone scale sets, but they default to single placement group for regional scale sets. ++## Next steps +> [!div class="nextstepaction"] +> [Reliability in Azure](/azure/reliability/availability-zones-overview) ++> [!div class="nextstepaction"] +> [Deploy applications on Virtual Machine Scale Sets](../virtual-machine-scale-sets/tutorial-install-apps-cli.md) ++> [!div class="nextstepaction"] +> [Use autoscale with Virtual Machine Scale Sets](../virtual-machine-scale-sets/tutorial-autoscale-cli.md). |
reliability | Reliability Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md | Last updated 07/18/2023 # Reliability in Virtual Machines -This article contains [specific reliability recommendations for Virtual Machines](#reliability-recommendations), as well as detailed information on VM regional resiliency with [availability zones](#availability-zone-support) and [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). +This article contains [specific reliability recommendations for Virtual Machines](#reliability-recommendations), as well as detailed information on VM regional resiliency with [availability zones](#availability-zone-support) and [disaster recovery and business continuity](#disaster-recovery-and-business-continuity). For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). ## Reliability recommendations+This section contains recommendations for achieving resiliency and availability for your Azure Virtual Machines. 
[!INCLUDE [Reliability recommendations](includes/reliability-recommendations-include.md)] For an architectural overview of reliability in Azure, see [Azure reliability](/ | Category | Priority |Recommendation | ||--||-| [**High Availability**](#high-availability) |:::image type="icon" source="media/icon-recommendation-high.svg":::| [VM-1: Run production workloads on two or more VMs using Azure Virtual Machine Scale Sets(VMSS) Flex](#-vm-1-run-production-workloads-on-two-or-more-vms-using-vmss-flex) | -||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[VM-2: Deploy VMs across availability zones or use VMSS Flex with zones](#-vm-2-deploy-vms-across-availability-zones-or-use-vmss-flex-with-zones) | -||:::image type="icon" source="media/icon-recommendation-high.svg":::|[VM-3: Migrate VMs using availability sets to VMSS Flex](#-vm-3-migrate-vms-using-availability-sets-to-vmss-flex) | -||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[VM-5: Use managed disks for VM disks](#-vm-5-use-managed-disks-for-vm-disks)| -|[**Disaster Recovery**](#disaster-recovery)| :::image type="icon" source="media/icon-recommendation-medium.svg"::: |[ VM-4: Replicate VMs using Azure Site Recovery](#-vm-4-replicate-vms-using-azure-site-recovery) | -||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-7: Backup data on your VMs with Azure Backup service](#-vm-7-backup-data-on-your-vms-with-azure-backup-service) | -|[**Performance**](#performance) |:::image type="icon" source="media/icon-recommendation-low.svg"::: | [VM-6: Host application and database data on a data disk](#-vm-6-host-application-and-database-data-on-a-data-disk)| -||:::image type="icon" source="media/icon-recommendation-high.svg"::: | [VM-8: Production VMs should be using SSD disks](#-vm-8-production-vms-should-be-using-ssd-disks)| -||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-10: Enable Accelerated Networking 
(AccelNet)](#-vm-10-enable-accelerated-networking-accelnet) | -||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-11: Accelerated Networking is enabled, make sure you update the GuestOS NIC driver every 6 months](#-vm-11-when-accelnet-is-enabled-you-must-manually-update-the-guestos-nic-driver) | -|[**Management**](#management)|:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-9: Watch for VMs in Stopped state](#-vm-9-review-vms-in-stopped-state) | -||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[VM-22: Use maintenance configurations for the VM](#-vm-22-use-maintenance-configurations-for-the-vm) | -|[**Security**](#security)|:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-12: VMs should not have a Public IP directly associated](#-vm-12-vms-should-not-have-a-public-ip-directly-associated) | -||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-13: Virtual Network Interfaces have an NSG associated](#-vm-13-vm-network-interfaces-have-a-network-security-group-nsg-associated) | -||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-14: IP Forwarding should only be enabled for Network Virtual Appliances](#-vm-14-ip-forwarding-should-only-be-enabled-for-network-virtual-appliances) | -||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-17: Network access to the VM disk should be set to "Disable public access and enable private access"](#-vm-17-network-access-to-the-vm-disk-should-be-set-to-disable-public-access-and-enable-private-access) | -||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-19: Enable disk encryption and data at rest encryption by default](#-vm-19-enable-disk-encryption-and-data-at-rest-encryption-by-default) | -|[**Networking**](#networking) | :::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-15: Customer DNS Servers should be configured in the 
Virtual Network level](#-vm-15-dns-servers-should-be-configured-in-the-virtual-network-level) | -|[**Storage**](#storage) |:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-16: Shared disks should only be enabled in clustered servers](#-vm-16-shared-disks-should-only-be-enabled-in-clustered-servers) | -|[**Compliance**](#compliance)| :::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-18: Ensure that your VMs are compliant with Azure Policies](#-vm-18-ensure-that-your-vms-are-compliant-with-azure-policies) | -|[**Monitoring**](#monitoring)| :::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-20: Enable VM Insights](#-vm-20-enable-vm-insights) | -||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-21: Configure diagnostic settings for all Azure resources](#-vm-21-configure-diagnostic-settings-for-all-azure-resources) | +| [**High Availability**](#high-availability) |:::image type="icon" source="media/icon-recommendation-high.svg":::| [Run production workloads on two or more VMs using Azure Virtual Machine Scale Sets Flex](#-run-production-workloads-on-two-or-more-vms-using-virtual-machine-scale-sets-flex) | +||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[Deploy VMs across availability zones or use Virtual Machine Scale Sets Flex with zones](#-deploy-vms-across-availability-zones-or-use-virtual-machine-scale-sets-flex-with-zones) | +||:::image type="icon" source="media/icon-recommendation-high.svg":::|[Migrate VMs using availability sets to Virtual Machine Scale Sets Flex](#-migrate-vms-using-availability-sets-to-virtual-machine-scale-sets-flex) | +||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[Use managed disks for VM disks](#-use-managed-disks-for-vm-disks)| +|[**Disaster Recovery**](#disaster-recovery)| :::image type="icon" source="media/icon-recommendation-medium.svg"::: |[Replicate VMs using Azure Site 
Recovery](#-replicate-vms-using-azure-site-recovery) | +||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[Back up data on your VMs with Azure Backup service](#-back-up-data-on-your-vms-with-azure-backup-service) | +|[**Performance**](#performance) |:::image type="icon" source="media/icon-recommendation-low.svg"::: | [Host application and database data on a data disk](#-host-application-and-database-data-on-a-data-disk)| +||:::image type="icon" source="media/icon-recommendation-high.svg"::: | [Production VMs should be using SSD disks](#-production-vms-should-be-using-ssd-disks)| +||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[Enable Accelerated Networking (AccelNet)](#-enable-accelerated-networking-accelnet) | +||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[When AccelNet is enabled, you must manually update the GuestOS NIC driver](#-when-accelnet-is-enabled-you-must-manually-update-the-guestos-nic-driver) | +|[**Management**](#management)|:::image type="icon" source="media/icon-recommendation-low.svg"::: |[Review VMs in stopped state](#-review-vms-in-stopped-state) | +||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[Use maintenance configurations for the VM](#-use-maintenance-configurations-for-the-vm) | +|[**Security**](#security)|:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VMs shouldn't have a Public IP directly associated](#-vms-shouldnt-have-a-public-ip-directly-associated) | +||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM network interfaces have a Network Security Group (NSG) associated](#-vm-network-interfaces-have-a-network-security-group-nsg-associated) | +||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[IP Forwarding should only be enabled for Network Virtual Appliances](#-ip-forwarding-should-only-be-enabled-for-network-virtual-appliances) | +||:::image type="icon"
source="media/icon-recommendation-low.svg"::: |[Network access to the VM disk should be set to "Disable public access and enable private access"](#-network-access-to-the-vm-disk-should-be-set-to-disable-public-access-and-enable-private-access) | +||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[Enable disk encryption and data at rest encryption by default](#-enable-disk-encryption-and-data-at-rest-encryption-by-default) | +|[**Networking**](#networking) | :::image type="icon" source="media/icon-recommendation-low.svg"::: |[Customer DNS Servers should be configured in the Virtual Network level](#-dns-servers-should-be-configured-in-the-virtual-network-level) | +|[**Storage**](#storage) |:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[Shared disks should only be enabled in clustered servers](#-shared-disks-should-only-be-enabled-in-clustered-servers) | +|[**Compliance**](#compliance)| :::image type="icon" source="media/icon-recommendation-low.svg"::: |[Ensure that your VMs are compliant with Azure Policies](#-ensure-that-your-vms-are-compliant-with-azure-policies) | +|[**Monitoring**](#monitoring)| :::image type="icon" source="media/icon-recommendation-low.svg"::: |[Enable VM Insights](#-enable-vm-insights) | +||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[Configure diagnostic settings for all Azure resources](#-configure-diagnostic-settings-for-all-azure-resources) | ### High availability -#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-1: Run production workloads on two or more VMs using VMSS Flex** +#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Run production workloads on two or more VMs using Virtual Machine Scale Sets Flex** -To safeguard application workloads from downtime due to the temporary unavailability of a disk or VM, it's recommended that you run production workloads on two or more VMs using VMSS Flex. 
+To safeguard application workloads from downtime due to the temporary unavailability of a disk or VM, it's recommended that you run production workloads on two or more VMs using Virtual Machine Scale Sets Flex. -To achieve this you can use: +To run production workloads, you can use: - [Azure Virtual Machine Scale Sets](/azure/virtual-machine-scale-sets/overview) to create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.+ - **Availability zones**. For more information on availability zones and VMs, see [Availability zone support](#availability-zone-support). To achieve this you can use: - -#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-2: Deploy VMs across availability zones or use VMSS Flex with zones** +#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Deploy VMs across availability zones or use Virtual Machine Scale Sets Flex with zones** When you create your VMs, use availability zones to protect your applications and data against unlikely datacenter failure. For more information about availability zones for VMs, see [Availability zone support](#availability-zone-support) in this document. For information on how to enable availability zones support when you create your VM, see [create availability zone support](#create-a-resource-with-availability-zone-enabled). -For information on how to migrate your existing VMs to availability zone support, see [Availability zone support redeployment and migration](#availability-zone-redeployment-and-migration). +For information on how to migrate your existing VMs to availability zone support, see [migrate to availability zone support](#migrate-to-availability-zone-support).
# [Azure Resource Graph](#tab/graph) For information on how to migrate your existing VMs to availability zone support - -#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-3: Migrate VMs using availability sets to VMSS Flex** -Availability sets will be retired in the near future. Modernize your workloads by migrating them from VMs to VMSS Flex. +#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Migrate VMs using availability sets to Virtual Machine Scale Sets Flex** +Availability sets will be retired soon. Modernize your workloads by migrating them from VMs to Virtual Machine Scale Sets Flex. -With VMSS Flex, you can deploy your VMs in one of two ways: +With Virtual Machine Scale Sets Flex, you can deploy your VMs in one of two ways: - Across zones - In the same zone, but across fault domains (FDs) and update domains (UD) automatically. -In an N-tier application, it's recommended that you place each application tier into its own VMSS Flex. +In an N-tier application, it's recommended that you place each application tier into its own Virtual Machine Scale Sets Flex. # [Azure Resource Graph](#tab/graph) In an N-tier application, it's recommended that you place each application tier - -#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-5: Use managed disks for VM disks** +#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Use managed disks for VM disks** To provide better reliability for VMs in an availability set, use managed disks. Managed disks are sufficiently isolated from each other to avoid single points of failure. Also, managed disks aren't subject to the IOPS limits of VHDs created in a storage account. To provide better reliability for VMs in an availability set, use managed disks.
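As a concrete sketch of the high-availability guidance above, the following Azure CLI command creates a Flexible-orchestration scale set spread across three availability zones. The resource group, scale set name, and image alias are placeholders, and the command assumes a signed-in Azure CLI session in a region with three zones:

```bash
# Placeholder names: myResourceGroup, myScaleSet. Requires a signed-in Azure CLI
# session and a region that supports three availability zones.
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image Ubuntu2204 \
  --orchestration-mode Flexible \
  --zones 1 2 3 \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys
```

With Flexible orchestration, the instances are spread across the listed zones, so a single-zone outage leaves the remaining instances running.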
### Disaster recovery -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-4: Replicate VMs using Azure Site Recovery** -When you replicate Azure VMs using Site Recovery, all the VM disks are continuously replicated to the target region asynchronously. The recovery points are created every few minutes. This gives you a Recovery Point Objective (RPO) in the order of minutes. You can conduct disaster recovery drills as many times as you want, without affecting the production application or the ongoing replication. +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **Replicate VMs using Azure Site Recovery** ++When you replicate Azure VMs using Site Recovery, all VM disks are continuously replicated to the target region asynchronously. The recovery points are created every few minutes, which gives you a Recovery Point Objective (RPO) in the order of minutes. You can conduct disaster recovery drills as many times as you want, without affecting the production application or the ongoing replication. To learn how to run a disaster recovery drill, see [Run a test failover](/azure/site-recovery/site-recovery-test-failover-to-azure). To learn how to run a disaster recovery drill, see [Run a test failover](/azure/ -#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-7: Backup data on your VMs with Azure Backup service** +#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **Back up data on your VMs with Azure Backup service** The Azure Backup service provides simple, secure, and cost-effective solutions to back up your data and recover it from the Microsoft Azure cloud. For more information, see [What is the Azure Backup Service](/azure/backup/backup-overview). 
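The backup recommendation above can also be scripted. This hedged sketch assumes an existing Recovery Services vault in the same region as the VM and uses the vault's default policy; all resource names are placeholders:

```bash
# Placeholder names: myResourceGroup, myRecoveryVault, myVM. Assumes the vault
# already exists and contains a policy named DefaultPolicy.
az backup protection enable-for-vm \
  --resource-group myResourceGroup \
  --vault-name myRecoveryVault \
  --vm myVM \
  --policy-name DefaultPolicy
```

Once protection is enabled, backups run on the schedule defined by the policy.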
The Azure Backup service provides simple, secure, and cost-effective solutions t ### Performance -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-6: Host application and database data on a data disk** +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **Host application and database data on a data disk** -A data disk is a managed disk that's attached to a VM. Use the data disk to store application data, or other data you need to keep. Data disks are registered as SCSI drives and are labeled with a letter that you choose. Hosting your data on a data disk makes it easy to backup or restore your data. You can also migrate the disk without having to move the entire VM and Operating System. Also, you'll be able to select a different disk SKU, with different type, size, and performance that meet your requirements. For more information on data disks, see [Data Disks](/azure/virtual-machines/managed-disks-overview#data-disk). +A data disk is a managed disk that's attached to a VM. Use the data disk to store application data, or other data you need to keep. Data disks are registered as SCSI drives and are labeled with a letter that you choose. Hosting your data on a data disk makes it easy to back up or restore your data. You can also migrate the disk without having to move the entire VM and Operating System. Also, you can select a different disk SKU, with different type, size, and performance that meet your requirements. For more information on data disks, see [Data Disks](/azure/virtual-machines/managed-disks-overview#data-disk). # [Azure Resource Graph](#tab/graph) A data disk is a managed disk that's attached to a VM.
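As a rough sketch of the data-disk recommendation above, the following Azure CLI command creates and attaches a new data disk in one step; all names and the size are placeholders:

```bash
# Creates and attaches a new 128 GiB Premium SSD data disk to an existing VM.
# myResourceGroup, myVM, and myDataDisk are placeholder names.
az vm disk attach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name myDataDisk \
  --new \
  --size-gb 128 \
  --sku Premium_LRS
```

After the disk is attached, you still partition and format it inside the guest OS before use.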
Use the data disk to st -#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-8: Production VMs should be using SSD disks** +#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Production VMs should be using SSD disks** Premium SSD disks offer high-performance, low-latency disk support for I/O-intensive applications and production workloads. Standard SSD Disks are a cost-effective storage option optimized for workloads that need consistent performance at lower IOPS levels. -It is recommended that you: +It's recommended that you: - Use Standard HDD disks for Dev/Test scenarios and less critical workloads at lowest cost. - Use Premium SSD disks instead of Standard HDD disks with your premium-capable VMs. For any Single Instance VM using premium storage for all Operating System Disks and Data Disks, Azure guarantees VM connectivity of at least 99.9%. For more information on Azure managed disks and disks types, see [Azure managed -### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-10: Enable Accelerated Networking (AccelNet)** +### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **Enable Accelerated Networking (AccelNet)** AccelNet enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types. 
For more information on Accelerated Networking, see [Accelerated Networking](/az -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-11: When AccelNet is enabled, you must manually update the GuestOS NIC driver** +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **When AccelNet is enabled, you must manually update the GuestOS NIC driver** -When AccelNet is enabled, the default Azure Virtual Network interface in the GuestOS is replaced for a Mellanox interface. As a result, the GuestOS NIC driver is provided from Mellanox, a 3rd party vendor. Although Marketplace images maintained by Microsoft are offered with the latest version of Mellanox drivers, once the VM is deployed, you'll need to manually update GuestOS NIC driver every six months. +When AccelNet is enabled, the default Azure Virtual Network interface in the GuestOS is replaced with a Mellanox interface. As a result, the GuestOS NIC driver is provided by Mellanox, a third-party vendor. Although Marketplace images maintained by Microsoft are offered with the latest version of Mellanox drivers, once the VM is deployed, you need to manually update the GuestOS NIC driver every six months. # [Azure Resource Graph](#tab/graph) When AccelNet is enabled, the default Azure Virtual Network interface in the Gue ### Management -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-9: Review VMs in stopped state** +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **Review VMs in stopped state** VM instances go through different states, including provisioning and power states. If a VM is in a stopped state, the VM may be facing an issue or is no longer necessary and could be removed to help reduce costs.
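Alongside the Azure Resource Graph query, a quick way to review stopped VMs is the Azure CLI; this sketch filters the detailed VM list by power state:

```bash
# Lists VMs with their power state (-d adds instance details) and filters for
# stopped or deallocated instances across the current subscription.
az vm list -d \
  --query "[?powerState=='VM stopped' || powerState=='VM deallocated'].{name:name, group:resourceGroup, state:powerState}" \
  --output table
```

Deallocated VMs no longer incur compute charges, but stopped (not deallocated) VMs still do, so both states are worth reviewing.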
# [Azure Resource Graph](#tab/graph) VM instances go through different states, including provisioning and power state -#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-22: Use maintenance configurations for the VM** +#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Use maintenance configurations for the VM** To ensure that VM updates/interruptions are done in a planned time frame, use maintenance configuration settings to schedule and manage updates. For more information on managing VM updates with maintenance configurations, see [Managing VM updates with Maintenance Configurations](../virtual-machines/maintenance-configurations.md). To ensure that VM updates/interruptions are done in a planned time frame, use ma ### Security -#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-12: VMs should not have a Public IP directly associated** +#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VMs shouldn't have a Public IP directly associated** -If a VM requires outbound internet connectivity, it's recommended that you use NAT Gateway or Azure Firewall. NAT Gateway or Azure Firewall help to increase security and resiliency of the service, since both services have much higher availability and [Source Network Address Translation (SNAT)](/azure/load-balancer/load-balancer-outbound-connections) ports. For inbound internet connectivity, it's recommended that you use a load balancing solution such as Azure Load Balancer and Application Gateway. +If a VM requires outbound internet connectivity, it's recommended that you use NAT Gateway or Azure Firewall. NAT Gateway or Azure Firewall help to increase security and resiliency of the service, since both services have higher availability and [Source Network Address Translation (SNAT)](/azure/load-balancer/load-balancer-outbound-connections) ports. 
For inbound internet connectivity, it's recommended that you use a load balancing solution such as Azure Load Balancer and Application Gateway. # [Azure Resource Graph](#tab/graph) If a VM requires outbound internet connectivity, it's recommended that you use N -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-13: VM network interfaces have a Network Security Group (NSG) associated** +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM network interfaces have a Network Security Group (NSG) associated** -It's recommended that you associate a NSG to a subnet, or a network interface, but not both. Since rules in a NSG associated to a subnet can conflict with rules in a NSG associated to a network interface, you can have unexpected communication problems that require troubleshooting. For more information, see [Intra-Subnet traffic](/azure/virtual-network/network-security-group-how-it-works#intra-subnet-traffic). +It's recommended that you associate an NSG to a subnet, or a network interface, but not both. Since rules in an NSG associated to a subnet can conflict with rules in an NSG associated to a network interface, you can have unexpected communication problems that require troubleshooting. For more information, see [Intra-Subnet traffic](/azure/virtual-network/network-security-group-how-it-works#intra-subnet-traffic).
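As a sketch of the subnet-level association described above (all names are placeholders, and the NSG is assumed to already exist), the Azure CLI can attach an NSG to a subnet rather than to individual NICs:

```bash
# Associates an existing NSG with a subnet instead of with individual NICs.
# myResourceGroup, myVNet, mySubnet, and myNsg are placeholder names.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --network-security-group myNsg
```

Associating the NSG at the subnet level keeps one set of rules for every NIC in the subnet, avoiding the NIC/subnet rule conflicts the recommendation warns about.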
# [Azure Resource Graph](#tab/graph) It's recommended that you associate a NSG to a subnet, or a network interface, b -#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-14: IP forwarding should only be enabled for network virtual appliances** +#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **IP forwarding should only be enabled for network virtual appliances** IP forwarding enables the virtual machine network interface to: To learn how to enable or disable IP forwarding, see [Enable or disable IP forwa -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-17: Network access to the VM disk should be set to "Disable public access and enable private access"** +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **Network access to the VM disk should be set to "Disable public access and enable private access"** It's recommended that you set VM disk network access to "Disable public access and enable private access" and create a private endpoint. To learn how to create a private endpoint, see [Create a private endpoint](/azure/virtual-machines/disks-enable-private-links-for-import-export-portal#create-a-private-endpoint). It's recommended that you set VM disk network access to "Disable public access -#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-19: Enable disk encryption and data at rest encryption by default** +#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **Enable disk encryption and data at rest encryption by default** There are several types of encryption available for your managed disks, including Azure Disk Encryption (ADE), Server-Side Encryption (SSE) and encryption at host.
For more information about managed disk encryption options, see [Overview of man ### Networking -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-15: DNS Servers should be configured in the Virtual Network level** +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **DNS Servers should be configured in the Virtual Network level** Configure the DNS Server in the Virtual Network to avoid name resolution inconsistency across the environment. For more information on name resolution for resources in Azure virtual networks, see [Name resolution for VMs and cloud services](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances?tabs=redhat). Configure the DNS Server in the Virtual Network to avoid name resolution inconsi ### Storage -#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-16: Shared disks should only be enabled in clustered servers** +#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **Shared disks should only be enabled in clustered servers** -Azure shared disks is a feature for Azure managed disks that enables you to attach a managed disk to multiple VMs simultaneously. Attaching a managed disk to multiple VMs allows you to either deploy new or migrate existing clustered applications to Azure and should only be used in those situations where the disk will be assigned to more than one VM member of a cluster. +*Azure shared disks* is a feature of *Azure managed disks* that enables you to attach a managed disk to multiple VMs simultaneously. When you attach a managed disk to multiple VMs, you can either deploy new or migrate existing clustered applications to Azure. Shared disks should only be used in those situations where the disk is assigned to more than one VM member of a cluster. 
To learn more about how to enable shared disks for managed disks, see [Enable shared disk](/azure/virtual-machines/disks-shared-enable?tabs=azure-portal). To learn more about how to enable shared disks for managed disks, see [Enable sh ### Compliance -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-18: Ensure that your VMs are compliant with Azure Policies** +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **Ensure that your VMs are compliant with Azure Policies** It's important to keep your virtual machine (VM) secure for the applications that you run. Securing your VMs can include one or more Azure services and features that cover secure access to your VMs and secure storage of your data. For more information on how to keep your VM and applications secure, see [Azure Policy Regulatory Compliance controls for Azure Virtual Machines](/azure/virtual-machines/security-controls-policy). It's important to keep your virtual machine (VM) secure for the applications t ### Monitoring -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-20: Enable VM Insights** +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **Enable VM Insights** Enable [VM Insights](/azure/azure-monitor/vm/vminsights-overview) to get more visibility into the health and performance of your virtual machine. VM Insights gives you information on the performance and health of your VMs and virtual machine scale sets, by monitoring their running processes and dependencies on other resources. VM Insights can help deliver predictable performance and availability of vital applications by identifying performance bottlenecks and network issues. Insights can also help you understand whether an issue is related to other dependencies.
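One way to start onboarding a Linux VM to VM Insights from the CLI is to install the Azure Monitor agent extension; this is a hedged sketch with placeholder names, and VM Insights additionally needs a data collection rule, which the portal onboarding experience creates for you:

```bash
# Installs the Azure Monitor agent on a Linux VM. myResourceGroup and myVM are
# placeholder names; a data collection rule must also be associated separately.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name AzureMonitorLinuxAgent \
  --publisher Microsoft.Azure.Monitor \
  --enable-auto-upgrade true
```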
Enable [VM Insights](/azure/azure-monitor/vm/vminsights-overview) to get more vi -#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-21: Configure diagnostic settings for all Azure resources** +#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **Configure diagnostic settings for all Azure resources** Platform metrics are sent automatically to Azure Monitor Metrics by default and without configuration. Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on and are one of the following types: For more information, see [Diagnostic settings in Azure Monitor](/azure/azure-monito [!INCLUDE [Availability zone description](includes/reliability-availability-zone-description-include.md)] -Virtual machines support availability zones with three availability zones per supported Azure region and are also zone-redundant and zonal. For more information, see [availability zones support](availability-zones-service-support.md). The customer will be responsible for configuring and migrating their virtual machines for availability. Refer to the following readiness options below for availability zone enablement: +Virtual machines support availability zones with three availability zones per supported Azure region and are also zone-redundant and zonal. For more information, see [availability zones support](availability-zones-service-support.md). The customer is responsible for configuring and migrating their virtual machines for availability. ++To learn more about availability zone readiness options, see: - See [availability options for VMs](../virtual-machines/availability.md) - Review [availability zone service and region support](availability-zones-service-support.md) Get started by creating a virtual machine (VM) with availability zone enabled fr ### Zonal failover support -Customers can set up virtual machines to failover to another zone using the Site Recovery service.
For more information, see [Site Recovery](../site-recovery/site-recovery-overview.md). +You can set up virtual machines to fail over to another zone using the Site Recovery service. For more information, see [Site Recovery](../site-recovery/site-recovery-overview.md). ### Fault tolerance -Virtual machines can failover to another server in a cluster, with the VM's operating system restarting on the new server. Customers should refer to the failover process for disaster recovery, gathering virtual machines in recovery planning, and running disaster recovery drills to ensure their fault tolerance solution is successful. +Virtual machines can fail over to another server in a cluster, with the VM's operating system restarting on the new server. You should refer to the failover process for disaster recovery, gathering virtual machines in recovery planning, and running disaster recovery drills to ensure their fault tolerance solution is successful. For more information, see the [site recovery processes](../site-recovery/site-recovery-failover.md#before-you-start). ### Zone down experience -During a zone-wide outage, you should expect a brief degradation of performance until the virtual machine service self-healing re-balances underlying capacity to adjust to healthy zones. This isn't dependent on zone restoration; it's expected that the Microsoft-managed service self-healing state will compensate for a lost zone, leveraging capacity from other zones. +During a zone-wide outage, you should expect a brief degradation of performance until the virtual machine service self-healing rebalances underlying capacity to adjust to healthy zones. Self-healing isn't dependent on zone restoration; it's expected that the Microsoft-managed service self-healing state compensates for a lost zone, using capacity from other zones. -Customers should also prepare for the possibility that there's an outage of an entire region. 
If there's a service disruption for an entire region, the locally redundant copies of your data would temporarily be unavailable. If geo-replication is enabled, three additional copies of your Azure Storage blobs and tables are stored in a different region. In the event of a complete regional outage or a disaster in which the primary region isn't recoverable, Azure remaps all of the DNS entries to the geo-replicated region. +You should also prepare for the possibility that there's an outage of an entire region. If there's a service disruption for an entire region, the locally redundant copies of your data would temporarily be unavailable. If geo-replication is enabled, three other copies of your Azure Storage blobs and tables are stored in a different region. When there's a complete regional outage or a disaster in which the primary region isn't recoverable, Azure remaps all of the DNS entries to the geo-replicated region. #### Zone outage preparation and recovery -The following guidance is provided for Azure virtual machines in the case of a service disruption of the entire region where your Azure virtual machine application is deployed: +The following guidance is provided for Azure virtual machines during a service disruption of the entire region where your Azure virtual machine application is deployed: - Configure [Azure Site Recovery](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance#option-1-initiate-a-failover-by-using-azure-site-recovery) for your VMs - Check the [Azure Service Health Dashboard](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance#option-2-wait-for-recovery) status if Azure Site Recovery hasn't been configured - Review how the [Azure Backup service](../backup/backup-azure-vms-introduction.md) works for VMs - See the [support matrix](../backup/backup-support-matrix-iaas.md) for Azure VM backups-- Determine which [VM restore option and scenario](../backup/about-azure-vm-restore.md) will work best for your 
environment+- Determine which [VM restore option and scenario](../backup/about-azure-vm-restore.md) works best for your environment ### Low-latency design Cross Region (secondary region), Cross Subscription (preview), and Cross Zonal ( ### Safe deployment techniques -When you opt for availability zones isolation, you should utilize safe deployment techniques for application code, as well as application upgrades. In addition to configuring Azure Site Recovery, below are recommended safe deployment techniques for VMs: +When you opt for availability zones isolation, you should utilize safe deployment techniques for application code and application upgrades. In addition to [configuring Azure Site Recovery](#zone-outage-preparation-and-recovery), implement any one of the following safe deployment techniques for VMs: - [Virtual Machine Scale Sets](/azure/virtual-machines/flexible-virtual-machine-scale-sets) - [Azure Load Balancer](../load-balancer/load-balancer-overview.md) - [Azure Storage Redundancy](../storage/common/storage-redundancy.md) +As Microsoft periodically performs planned maintenance updates, there may be rare instances when these updates require a reboot of your virtual machine to apply the required updates to the underlying infrastructure. To learn more, see [availability considerations](../virtual-machines/maintenance-and-updates.md#availability-considerations-during-scheduled-maintenance) during scheduled maintenance. - As Microsoft periodically performs planned maintenance updates, there may be rare instances when these updates require a reboot of your virtual machine to apply the required updates to the underlying infrastructure. To learn more, see [availability considerations](../virtual-machines/maintenance-and-updates.md#availability-considerations-during-scheduled-maintenance) during scheduled maintenance. 
--Follow the health signals below for monitoring before upgrading your next set of nodes in another zone: +Before you upgrade your next set of nodes in another zone, you should perform the following tasks: -- Check the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) for the virtual machines service status for your expected regions-- Ensure that [replication](../site-recovery/azure-to-azure-quickstart.md) is enabled on your VMs+- Check the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) for the virtual machines service status for your expected regions. +- Ensure that [replication](../site-recovery/azure-to-azure-quickstart.md) is enabled on your VMs. -### Availability zone redeployment and migration +### Migrate to availability zone support -For migrating existing virtual machine resources to a zone redundant configuration, refer to the below resources: +To learn how to migrate a VM to availability zone support, see [Migrate Virtual Machines and Virtual Machine Scale Sets to availability zone support](./migrate-vm.md). -- Move a VM to another subscription or resource group- - [CLI](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-cli) - - [PowerShell](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-powershell) -- [Azure Resource Mover](/azure/resource-mover/tutorial-move-region-virtual-machines)-- [Move Azure VMs to availability zones](../site-recovery/move-azure-vms-avset-azone.md)-- [Move region maintenance configuration resources](../virtual-machines/move-region-maintenance-configuration-resources.md) -## Disaster recovery: cross-region failover +## Disaster recovery and business continuity In the case of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. 
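The health checks described above can be scripted. The following is a minimal sketch, assuming a hypothetical resource group name; by default it only prints the `az vm list` command (`DRY_RUN=1`) so the sketch has no subscription dependency, and you can set `DRY_RUN=0` to query Azure for real:

```shell
# Hypothetical resource group; DRY_RUN=1 only prints the list command so
# the sketch runs without a subscription. Set DRY_RUN=0 to query Azure.
DRY_RUN=1
rg="myResourceGroup"
list_cmd="az vm list -g ${rg} --query [].name -o tsv"

if [ "${DRY_RUN}" = "1" ]; then
  echo "${list_cmd}"
else
  for vm in $(${list_cmd}); do
    # Print each VM's power state before upgrading the next zone's nodes.
    az vm get-instance-view -g "${rg}" -n "${vm}" \
      --query "instanceView.statuses[?starts_with(code,'PowerState/')].displayStatus" -o tsv
  done
fi
```

The JMESPath query filters the instance view down to the VM power state, which is a quick signal that capacity in the current zone is serving your VMs before you proceed.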
For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). -You can use Cross Region restore to restore Azure VMs via paired regions. With Cross Region restore, you can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region. For more details on Cross Region restore, refer to the Cross Region table row entry in our [restore options](../backup/backup-azure-arm-restore-vms.md#restore-options). +You can use Cross Region restore to restore Azure VMs via paired regions. With Cross Region restore, you can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region. For more information on Cross Region restore, refer to the Cross Region table row entry in our [restore options](../backup/backup-azure-arm-restore-vms.md#restore-options). -### Cross-region disaster recovery in multi-region geography +### Multi-region geography disaster recovery -In the case of a region-wide service disruption, Microsoft works diligently to restore the virtual machine service. However, you will still have to rely on other application-specific backup strategies to achieve the highest level of availability. For more information, see the section on [Data strategies for disaster recovery](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan). +In the case of a region-wide service disruption, Microsoft works diligently to restore the virtual machine service. However, you still must rely on other application-specific backup strategies to achieve the highest level of availability. For more information, see the section on [Data strategies for disaster recovery](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan). 
#### Outage detection, notification, and management When setting up disaster recovery for virtual machines, understand what [Azure S - [ARM template](../site-recovery/quickstart-create-vault-template.md) - Enable disaster recovery for [Linux virtual machines](../virtual-machines/linux/tutorial-disaster-recovery.md) - Enable disaster recovery for [Windows virtual machines](../virtual-machines/windows/tutorial-disaster-recovery.md)-- Failover virtual machines to [another region](../site-recovery/azure-to-azure-tutorial-failover-failback.md)-- Failover virtual machines to the [primary region](../site-recovery/azure-to-azure-tutorial-failback.md#fail-back-to-the-primary-region)+- Fail over virtual machines to [another region](../site-recovery/azure-to-azure-tutorial-failover-failback.md) +- Fail over virtual machines to the [primary region](../site-recovery/azure-to-azure-tutorial-failback.md#fail-back-to-the-primary-region) ### Single-region geography disaster recovery -With disaster recovery set up, Azure VMs continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there. +With disaster recovery set up, Azure VMs continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there. -When you replicate Azure VMs using [Site Recovery](../site-recovery/site-recovery-overview.md), all the VM disks are continuously replicated to the target region asynchronously. The recovery points are created every few minutes. This gives you a Recovery Point Objective (RPO) in the order of minutes. You can conduct disaster recovery drills as many times as you want, without affecting the production application or the ongoing replication. For more information, see [Run a disaster recovery drill to Azure](../site-recovery/tutorial-dr-drill-azure.md). 
+When you replicate Azure VMs using [Site Recovery](../site-recovery/site-recovery-overview.md), all the VM disks are continuously replicated to the target region asynchronously. The recovery points are created every few minutes, which grants you a Recovery Point Objective (RPO) in the order of minutes. You can conduct disaster recovery drills as many times as you want, without affecting the production application or the ongoing replication. For more information, see [Run a disaster recovery drill to Azure](../site-recovery/tutorial-dr-drill-azure.md). For more information, see [Azure VMs architectural components](../site-recovery/azure-to-azure-architecture.md#architectural-components) and [region pairing](../virtual-machines/regions.md#region-pairs). ### Capacity and proactive disaster recovery resiliency -Microsoft and its customers operate under the Shared Responsibility Model. This means that for customer-enabled DR (customer-responsible services), the customer must address DR for any service they deploy and control. To ensure that recovery is proactive, customers should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't pre-allocated. +Microsoft and its customers operate under the [Shared Responsibility Model](./overview.md#shared-responsibility). Shared responsibility means that for customer-enabled DR (customer-responsible services), you must address DR for any service you deploy and control. To ensure that recovery is proactive, you should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't preallocated. -For deploying virtual machines, customers can use [flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) mode on Virtual Machine Scale Sets. All VM sizes can be used with flexible orchestration mode. 
Flexible orchestration mode also offers high availability guarantees (up to 1000 VMs) by spreading VMs across fault domains in a region or within an Availability Zone. +For deploying virtual machines, you can use [flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) mode on Virtual Machine Scale Sets. All VM sizes can be used with flexible orchestration mode. Flexible orchestration mode also offers high availability guarantees (up to 1000 VMs) by spreading VMs across fault domains either within a region or within an availability zone. ## Additional guidance |
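As a sketch of what such a deployment might look like with the Azure CLI, the following composes a `az vmss create` command for a flexible-orchestration scale set spread across three zones. All resource names are hypothetical, and the command is printed rather than executed so the sketch has no subscription dependency; pass the string to a shell (or drop the `echo`) to create the scale set for real:

```shell
# Hypothetical names; the command is composed and printed, not executed.
rg="myResourceGroup"
vmss="myFlexScaleSet"
cmd="az vmss create --resource-group ${rg} --name ${vmss} --image Ubuntu2204 --orchestration-mode Flexible --zones 1 2 3 --instance-count 3"
echo "${cmd}"
```

This assumes the target region supports three availability zones and that the `Ubuntu2204` image alias is available in your subscription.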
role-based-access-control | Custom Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles.md | The following table describes what the custom role properties mean. | `Description`</br>`description` | Yes | String | The description of the custom role. Can include letters, numbers, spaces, and special characters. Maximum number of characters is 2048. | | `Actions`</br>`actions` | Yes | String[] | An array of strings that specifies the control plane actions that the role allows to be performed. For more information, see [Actions](role-definitions.md#actions). | | `NotActions`</br>`notActions` | No | String[] | An array of strings that specifies the control plane actions that are excluded from the allowed `Actions`. For more information, see [NotActions](role-definitions.md#notactions). |-| `DataActions`</br>`dataActions` | No | String[] | An array of strings that specifies the data plane actions that the role allows to be performed to your data within that object. If you create a custom role with `DataActions`, that role can't be assigned at the management group scope. For more information, see [DataActions](role-definitions.md#dataactions). | +| `DataActions`</br>`dataActions` | No | String[] | An array of strings that specifies the data plane actions that the role allows to be performed to your data within that object. If you create a custom role with `DataActions`, that role can't be assigned at management group scope. For more information, see [DataActions](role-definitions.md#dataactions). | | `NotDataActions`</br>`notDataActions` | No | String[] | An array of strings that specifies the data plane actions that are excluded from the allowed `DataActions`. For more information, see [NotDataActions](role-definitions.md#notdataactions). | | `AssignableScopes`</br>`assignableScopes` | Yes | String[] | An array of strings that specifies the scopes that the custom role is available for assignment. 
Maximum number of `AssignableScopes` is 2,000. For more information, see [AssignableScopes](role-definitions.md#assignablescopes). | The following list describes the limits for custom roles. - Custom roles with `DataActions` can't be assigned at the management group scope. - Azure Resource Manager doesn't validate the management group's existence in the role definition's `AssignableScopes`. +> [!IMPORTANT] +> Custom roles with `DataActions` and a management group in `AssignableScopes` are currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++- You can create a custom role with `DataActions` and one management group in `AssignableScopes`. You can't assign the custom role at the management group scope itself; however, you can assign the custom role at the scope of the subscriptions within the management group. This approach can be helpful if you need a single custom role with `DataActions` that's assigned in multiple subscriptions, instead of creating a separate custom role for each subscription. This preview isn't available in Azure Government or Microsoft Azure operated by 21Vianet. + For more information about custom roles and management groups, see [What are Azure management groups?](../governance/management-groups/overview.md#azure-custom-role-definition-and-assignment). ## Input and output formats |
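Putting these properties together, the following is a sketch of a custom role definition that pairs `DataActions` with a management group in `AssignableScopes`. The role name, description, and management group ID are hypothetical; the data action shown is the blob read action used by built-in storage data roles:

```json
{
  "Name": "Blob Data Reader (example)",
  "IsCustom": true,
  "Description": "Example custom role that grants a data plane read action.",
  "Actions": [],
  "NotActions": [],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/providers/Microsoft.Management/managementGroups/myMgmtGroup"
  ]
}
```

Per the preview limits above, you can't assign this role at the management group scope itself; assign it at the scope of subscriptions within `myMgmtGroup`.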
role-based-access-control | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md | Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
route-server | Route Server Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md | No. By default, Azure Route Server doesn't propagate routes it receives from an ### When the same route is learned over ExpressRoute, VPN or SDWAN, which network is preferred? -ExpressRoute is preferred over VPN or SDWAN. +By default, the route that's learned over ExpressRoute is preferred over the ones learned over VPN or SDWAN. You can configure routing preference to influence Route Server route selection. For more information, see [Routing preference (preview)](hub-routing-preference.md). ### What are the requirements for an Azure VPN gateway to work with Azure Route Server? |
sap | Install Workloadzone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/install-workloadzone.md | -online version: https://github.com/Azure/sap-hana +online version: https://github.com/Azure/SAP-automation schema: 2.0.0 Previously updated : 10/21/2021 Last updated : 09/19/2023 install_workloadzone.sh [ -p or --parameterfile ] <String> ``` ## Description-The `install_workloadzone.sh` command deploys a new SAP workload zone. The workload zone contains the shared resources for all VMs. +The `install_workloadzone.sh` script deploys a new SAP workload zone. The workload zone contains the shared resources for all SAP VMs. ## Examples This example deploys the workload zone, as defined by the parameter files. The p ```bash cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE -export subscriptionId=<subscriptionID> -export appId=<appID> -export spnSecret="<password>" -export tenantId=<tenantID> -export keyvault=<keyvaultName> -export storageAccount=<storageaccountName> +export subscriptionId=<subscriptionID> +export appId=<appID> +export spnSecret="<password>" +export tenantId=<tenantID> +export keyvault=<keyvaultName> +export storageAccount=<storageaccountName> export statefileSubscription=<statefile_subscription> +export DEPLOYMENT_REPO_PATH=~/Azure_SAP_Automated_Deployment/sap-automation + ${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \ --parameter_file DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars \ --keyvault $keyvault \ |
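The parameter file name in this example follows the framework's `<ENVIRONMENT>-<REGION>-<VNET>-INFRASTRUCTURE` naming convention. A small sketch of how that name is composed from its codes (the codes match the example above):

```shell
# Compose the workload zone parameter file name from its codes.
env_code="DEV"
region_code="WEEU"
vnet_code="SAP01"

parameter_file="${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
echo "${parameter_file}"
```

Keeping the codes in variables like this makes it straightforward to reuse the same command for other environments or regions.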
sap | Prepare Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/prepare-region.md | -online version: https://github.com/Azure/sap-hana +online version: https://github.com/Azure/SAP-automation schema: 2.0.0 Previously updated : 10/21/2021 Last updated : 09/19/2023 Title: Prepare region+ Title: Deploy Control Plane description: Deploys the control plane (deployer, SAP library) using a shell script. -# prepare_region.sh +# deploy_controlplane.sh ## Synopsis-The `prepare_region.sh` script deploys the control plane, including the deployer VM, Azure Key Vault, and the SAP library. +The `deploy_controlplane.sh` script deploys the control plane, including the deployer VMs, Azure Key Vault, and the SAP library. -The deployer VM has installations of Ansible and Terraform. This VM deploys the SAP artifacts. +The deployer VM has installations of Ansible and Terraform. This VM is used to deploy the SAP systems. ## Syntax ```bash-prepare_region.sh [ --deployer_parameter_file ] <String> [ --library_parameter_file ] <String> ++deploy_controlplane.sh [ --deployer_parameter_file ] <String> [ --library_parameter_file ] <String> [[ --subscription] <String>] [[ --spn_id ] <String>] [[ --spn_secret ] <String>] [[ --tenant_id ] <String>] [[ --storageaccountname] <String>] [ --force ] [ --auto-approve ] ``` Deploys the control plane, which includes the deployer VM and the SAP library. F This example deploys the control plane, as defined by the parameter files. The process prompts you for the SPN details. 
```bash-${DEPLOYMENT_REPO_PATH}/deploy/scripts/prepare_region.sh \ - --deployer_parameter_file DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE/MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars \ - --library_parameter_file LIBRARY/MGMT-WEEU-SAP_LIBRARY/MGMT-WEEU-SAP_LIBRARY.tfvars +export ARM_SUBSCRIPTION_ID="<subscriptionId>" +export ARM_CLIENT_ID="<appId>" +export ARM_CLIENT_SECRET="<password>" +export ARM_TENANT_ID="<tenantId>" +export env_code="MGMT" +export region_code="WEEU" +export vnet_code="DEP01" +export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" +export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" ++az logout +az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" ++sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ + --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \ + --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" ``` ### Example 2 ${DEPLOYMENT_REPO_PATH}/deploy/scripts/prepare_region.sh This example deploys the control plane, as defined by the parameter files. The process adds the deployment credentials to the deployment's key vault. 
```bash-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES ++export ARM_SUBSCRIPTION_ID="<subscriptionId>" +export ARM_CLIENT_ID="<appId>" +export ARM_CLIENT_SECRET="<password>" +export ARM_TENANT_ID="<tenantId>" +export env_code="MGMT" +export region_code="WEEU" +export vnet_code="DEP01" ++export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" +export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" az logout-az login --export subscriptionId=<subscriptionID> -export appId=<appID> -export spnSecret="<password>" -export tenantId=<tenantID> --${DEPLOYMENT_REPO_PATH}/deploy/scripts/prepare_region.sh \ - --deployer_parameter_file DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE/MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars \ - --library_parameter_file LIBRARY/MGMT-WEEU-SAP_LIBRARY/MGMT-WEEU-SAP_LIBRARY.tfvars \ - --subscription $subscriptionId \ - --spn_id $appId \ - --spn_secret $spnSecret \ - --tenant_id $tenantId +az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" +++cd ~/Azure_SAP_Automated_Deployment/WORKSPACES +++sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ + --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \ + --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" \ + --subscription "${ARM_SUBSCRIPTION_ID}" \ + --spn_id "${ARM_CLIENT_ID}" \ + --spn_secret "${ARM_CLIENT_SECRET}" \ + --tenant_id "${ARM_TENANT_ID}" ``` ## Parameters |
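The parameter file paths passed to `deploy_controlplane.sh` follow a fixed layout under the configuration repository. A sketch that composes both paths from the codes used in the examples above (`CONFIG_REPO_PATH` is shortened here so the sketch is self-contained):

```shell
# Shortened for the sketch; normally "${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES".
CONFIG_REPO_PATH="WORKSPACES"
env_code="MGMT"
region_code="WEEU"
vnet_code="DEP01"

# DEPLOYER folder names carry the vnet code; LIBRARY folder names don't.
deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"

echo "${deployer_parameter_file}"
echo "${library_parameter_file}"
```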
sap | Remove Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/remove-region.md | -online version: https://github.com/Azure/sap-hana +online version: https://github.com/Azure/SAP-automation schema: 2.0.0 Previously updated : 12/10/2021 Last updated : 09/19/2023 Title: Remove_region.sh+ Title: remove_controlplane.sh description: Removes the SAP Control Plane (Deployer, Library) using a shell script. -# Remove_region.sh +# remove_controlplane.sh ## Synopsis -Removes the control plane, including the deployer VM and the SAP library. It is important to remove the terraform deployed artifacts using Terraform to ensure that the removals are done correctly. +Removes the control plane, including the deployer VM and the SAP library. It's important to remove the terraform deployed artifacts using Terraform to ensure that the removals are done correctly. ## Syntax ```bash -remove_region.sh [-d or --deployer_parameter_file ] <String> [-l or --library_parameter_file ] <String> +remove_controlplane.sh [-d or --deployer_parameter_file ] <String> [-l or --library_parameter_file ] <String> ``` ## Description Removes the SAP control plane, including the deployer VM and the SAP library. 
### Example 1 ```bash-${DEPLOYMENT_REPO_PATH}/deploy/scripts/remove_region.sh \ - --deployer_parameter_file DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE/MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars \ - --library_parameter_file LIBRARY/MGMT-WEEU-SAP_LIBRARY/MGMT-WEEU-SAP_LIBRARY.tfvars +export ARM_SUBSCRIPTION_ID="<subscriptionId>" +export ARM_CLIENT_ID="<appId>" +export ARM_CLIENT_SECRET="<password>" +export ARM_TENANT_ID="<tenantId>" +export env_code="MGMT" +export region_code="WEEU" +export vnet_code="DEP01" +export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" +export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" ++az logout +az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" ++sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/remove_controlplane.sh \ + --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \ + --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" + ``` ### Example 2 ```bash-${DEPLOYMENT_REPO_PATH}/deploy/scripts/remove_region.sh \ - --deployer_parameter_file DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE/MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars \ - --library_parameter_file LIBRARY/MGMT-WEEU-SAP_LIBRARY/MGMT-WEEU-SAP_LIBRARY.tfvars \ - --subscription xxxxxxxxxxx - --storage_account mgmtweeutfstate### +export ARM_SUBSCRIPTION_ID="<subscriptionId>" +export ARM_CLIENT_ID="<appId>" +export ARM_CLIENT_SECRET="<password>" +export ARM_TENANT_ID="<tenantId>" +export env_code="MGMT" +export region_code="WEEU" +export vnet_code="DEP01" +export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" +export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" ++az logout +az login --service-principal -u "${ARM_CLIENT_ID}" 
-p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" ++sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/remove_controlplane.sh \ + --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \ + --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" \ + --subscription xxxxxxxxxxx \ + --storage_account mgmtweeutfstate### ``` ## Parameters |
sap | Reference Bash | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/reference-bash.md | You can deploy all [SAP Deployment Automation Framework](deployment-framework.md ## Control plane operations -You can deploy or update the control plane by using the [prepare_region](bash/prepare-region.md) shell script. +You can deploy or update the control plane by using the [deploy_controlplane](bash/prepare-region.md) shell script. -Remove the control plane by using the [remove_region](bash/remove-region.md) shell script. +Remove the control plane by using the [remove_controlplane](bash/remove-region.md) shell script. You can bootstrap the deployer in the control plane by using the [install_deployer](bash/install-deployer.md) shell script. |
sap | Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md | A valid SAP user account (SAP-User or S-User account) with software download pri 1. Create the deployment folder and clone the repository. ```cloudshell-interactive- mkdir -p ~/Azure_SAP_Automated_Deployment + mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_ - cd ~/Azure_SAP_Automated_Deployment + git clone https://github.com/Azure/sap-automation-bootstrap.git config - git clone https://github.com/Azure/sap-automation.git + git clone https://github.com/Azure/sap-automation.git sap-automation - git clone https://github.com/Azure/sap-automation-samples.git + git clone https://github.com/Azure/sap-automation-samples.git samples ++ cp -Rp samples/Terraform/WORKSPACES ~/Azure_SAP_Automated_Deployment/WORKSPACES + ``` 1. Optionally, validate the versions of Terraform and the Azure CLI available on your instance of Cloud Shell. A valid SAP user account (SAP-User or S-User account) with software download pri To run the automation framework, update to the following versions: - `az` version 2.4.0 or higher.- - `terraform` version 1.2.8 or higher. [Upgrade by using the Terraform instructions](https://www.terraform.io/upgrade-guides/0-12.html), as necessary. + - `terraform` version 1.5 or higher. [Upgrade by using the Terraform instructions](https://www.terraform.io/upgrade-guides/0-12.html), as necessary. ## Create a service principal If you don't assign the User Access Administrator role to the service principal, 1. Open Visual Studio Code from Cloud Shell. ```cloudshell-interactive- cd ~/Azure_SAP_Automated_Deployment/sap-automation-samples/Terraform + cd ~/Azure_SAP_Automated_Deployment/WORKSPACES code . ``` The sample SAP library configuration file `MGMT-NOEU-SAP_LIBRARY.tfvars` is in t 1. Create the deployer and the SAP library. Add the service principal details to the deployment key vault. 
```bash- cd ~/Azure_SAP_Automated_Deployment/WORKSPACES export subscriptionId="<subscriptionId>" export spn_id="<appId>" export spn_secret="<password>" export tenant_id="<tenantId>" export env_code="MGMT"+ export vnet_code="DEP00" export region_code="<region_code>" export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"- export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation-samples/Terraform/WORKSPACES" - export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" + export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" export ARM_SUBSCRIPTION_ID="${subscriptionId}" - ${DEPLOYMENT_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ - --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-DEP00-INFRASTRUCTURE/${env_code}-${region_code}-DEP00-INFRASTRUCTURE.tfvars \ - --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars \ - --subscription "${subscriptionId}" \ - --spn_id "${spn_id}" \ - --spn_secret "${spn_secret}" \ - --tenant_id "${tenant_id}" \ + cd $CONFIG_REPO_PATH ++ ${DEPLOYMENT_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ + --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-DEP00-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars \ + --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars \ + --subscription "${subscriptionId}" \ + --spn_id "${spn_id}" \ + --spn_secret "${spn_secret}" \ + --tenant_id "${tenant_id}" \ --auto-approve ``` To connect to the deployer: To configure the deployer, run the following script: ```bash-mkdir -p ~/Azure_SAP_Automated_Deployment -cd ~/Azure_SAP_Automated_Deployment +mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_ -git clone https://github.com/Azure/sap-automation.git +git clone https://github.com/Azure/sap-automation.git sap-automation ++git clone 
https://github.com/Azure/sap-automation-samples.git samples cd sap-automation/deploy/scripts For this example configuration, the resource group is `MGMT-NOEU-DEP00-INFRASTRU - Select **Library resource group** > **State storage account** > **Containers** > `tfstate`. Copy the name of the deployer state file. - Following from the preceding example, the name of the blob is `MGMT-NOEU-DEP00-INFRASTRUCTURE.terraform.tfstate`. -1. If necessary, register the SPN. +1. If necessary, register the Service Principal. - The first time an environment is instantiated, an SPN must be registered. In this tutorial, the control plane is in the `MGMT` environment and the workload zone is in `DEV`. Therefore, an SPN must be registered for `DEV` at this time. + The first time an environment is instantiated, a Service Principal must be registered. In this tutorial, the control plane is in the `MGMT` environment and the workload zone is in `DEV`. Therefore, a Service Principal must be registered for the `DEV` environment. ```bash- export subscriptionId="<subscriptionId>" - export spn_id="<appID>" - export spn_secret="<password>" - export tenant_id="<tenant>" - export key_vault="<vaultID>" - export env_code="DEV" - export region_code="<region_code>" + export subscriptionId="<subscriptionId>" + export spn_id="<appID>" + export spn_secret="<password>" + export tenant_id="<tenant>" + export key_vault="<vaultID>" + export env_code="DEV" + export region_code="<region_code>" + export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" + export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" - ${DEPLOYMENT_REPO_PATH}/deploy/scripts/set_secrets.sh \ + ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/set_secrets.sh \ --environment "${env_code}" \ --region "${region_code}" \ --vault "${key_vault}" \ For this example configuration, the resource group is `MGMT-NOEU-DEP00-INFRASTRU 1. Connect to your deployer VM for the following steps. 
A copy of the repo is now there. -1. Go to the `sap-automation` folder and optionally refresh the repository. -- ```bash - cd ~/Azure_SAP_Automated_Deployment/sap-automation/ -- git pull - ``` --1. Go into the `WORKSPACES/LANDSCAPE` folder and copy the sample configuration files to use from the repository. -- ```bash - cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE -- cp -Rp ../../sap-automation/training-materials/WORKSPACES/LANDSCAPE/DEV-[REGION]-SAP01-INFRASTRUCTURE . - ``` - ## Deploy the workload zone Use the [install_workloadzone](bash/install-workloadzone.md) script to deploy the SAP workload zone. Use the [install_workloadzone](bash/install-workloadzone.md) script to deploy th export sap_env_code="DEV" export region_code="<region_code>" export key_vault="<vaultID>"-+ + export deployer_vnet_code="DEP01" + export vnet_code="SAP02" + + export ARM_SUBSCRIPTION_ID="<subscriptionId>" + export ARM_CLIENT_ID="<appId>" + export ARM_CLIENT_SECRET="<password>" + export ARM_TENANT_ID="<tenantId>" + cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-SAP01-INFRASTRUCTURE-- ${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \ - --parameterfile ./${sap_env_code}-${region_code}-SAP01-INFRASTRUCTURE.tfvars \ - --deployer_environment "${deployer_env_code}" \ - --deployer_tfstate_key "${deployer_env_code}-${region_code}-DEP00-INFRASTRUCTURE.terraform.tfstate" \ - --keyvault "${key_vault}" \ - --storageaccountname "${tfstate_storage_account}" \ - --auto-approve + + export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" + export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" + + az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" + + cd "${CONFIG_REPO_PATH}/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE" + parameterFile="${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" + 
deployerState="${deployer_env_code}-${region_code}-${deployer_vnet_code}-INFRASTRUCTURE.terraform.tfstate" + + $SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \ + --parameterfile "${parameterFile}" \ + --deployer_environment "${deployer_env_code}" \ + --deployer_tfstate_key "${deployerState}" \ + --keyvault "${key_vault}" \ + --storageaccountname "${tfstate_storage_account}" \ + --subscription "${ARM_SUBSCRIPTION_ID}" \ + --spn_id "${ARM_CLIENT_ID}" \ + --spn_secret "${ARM_CLIENT_SECRET}" \ + --tenant_id "${ARM_TENANT_ID}" ``` The workload zone deployment should start automatically. Connect to your deployer VM for the following steps. A copy of the repo is now t Go into the `WORKSPACES/SYSTEM` folder and copy the sample configuration files to use from the repository. -```bash -cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM --cp -Rp ../../sap-automation/training-materials/WORKSPACES/SYSTEM/DEV-[REGION]-SAP01-X00 . -``` - ## Deploy the SAP system infrastructure After the workload zone is finished, you can deploy the SAP system infrastructure resources. The SAP system creates your VMs and supporting components for your SAP application. Use the [installer.sh](bash/installer.md) script to deploy the SAP system. Deploy the SAP system. 
```bash -export sap_env_code="DEV" -export region_code="<region_code>" +export sap_env_code="DEV" +export region_code="<region_code>" +export vnet_code="SAP01" -cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-SAP01-X00 +export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" +export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" -${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \ - --parameterfile "${sap_env_code}-${region_code}-SAP01-X00.tfvars" \ - --type sap_system \ - --auto-approve +cd ${CONFIG_REPO_PATH}/SYSTEM/${sap_env_code}-${region_code}-${vnet_code}-X00 ++${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \ + --parameterfile "${sap_env_code}-${region_code}-${vnet_code}-X00.tfvars" \ + --type sap_system ``` -The deployment command for the `northeurope` example will look like: +The deployment command for the `northeurope` example looks like: ```bash cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-NOEU-SAP01-X00 Go to the system deployment folder. cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-NOEU-SAP01-X00/ ``` -Make sure you have the following files in the current folders: `sap-parameters.yaml` and `SID_host.yaml`. +Make sure you have the following files in the current folders: `sap-parameters.yaml` and `X00_host.yaml`. For a standalone SAP S/4HANA system, there are eight playbooks to run in sequence. One way you can run the playbooks is to use the **Configuration** menu. This playbook does the SAP OS configuration setup on all the machines. The steps This playbook downloads the SAP software to the SCS virtual machine. -### Playbook: HANA DB install --This playbook installs the HANA database instances. - ### Playbook: SCS Install This playbook installs SAP central services. For highly available configurations, the playbook also installs the SAP ERS instance and configures Pacemaker. 
+### Playbook: HANA DB install ++This playbook installs the HANA database instances. + ### Playbook: DB load This playbook invokes the database load task from the primary application server. Go to the `DEV-NOEU-SAP01-X00` subfolder inside the `SYSTEM` folder. Then, run t ```bash export sap_env_code="DEV" export region_code="NOEU"+export vnet_code="SAP01" -cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-SAP01-X00 +cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${vnet_code}-X00 ${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \- --parameterfile "${sap_env_code}-${region_code}-SAP01-X00.tfvars" \ + --parameterfile "${sap_env_code}-${region_code}-${vnet_code}-X00.tfvars" \ --type sap_system ``` Go to the `DEV-XXXX-SAP01-INFRASTRUCTURE` subfolder inside the `LANDSCAPE` folde export sap_env_code="DEV" export region_code="NOEU"+export vnet_code="SAP01" -cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-SAP01-INFRASTRUCTURE +cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE -${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \ - --parameterfile ${sap_env_code}-${region_code}-SAP01-INFRASTRUCTURE.tfvars \ +${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \ + --parameterfile ${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars \ --type sap_landscape ``` Run the following command: ```bash export region_code="NOEU"+export env_code="MGMT" +export vnet_code="DEP00" -${DEPLOYMENT_REPO_PATH}/deploy/scripts/remove_controlplane.sh \ - --deployer_parameter_file DEPLOYER/MGMT-${region_code}-DEP00-INFRASTRUCTURE/MGMT-${region_code}-DEP00-INFRASTRUCTURE.tfvars \ - --library_parameter_file LIBRARY/MGMT-${region_code}-SAP_LIBRARY/MGMT-${region_code}-SAP_LIBRARY.tfvars +cd ~/Azure_SAP_Automated_Deployment/WORKSPACES +${DEPLOYMENT_REPO_PATH}/deploy/scripts/remove_controlplane.sh \ + 
--deployer_parameter_file DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars \ + --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars ``` Verify that all resources are cleaned up. |
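The resource, folder, and state-file names in the commands above all follow the same `${env_code}-${region_code}-${vnet_code}` convention. As a quick illustration (using the tutorial's example codes `MGMT`, `NOEU`, and `DEP00` — a sketch only, not part of the deployment scripts), the deployer parameter file path and Terraform state blob name can be composed like this:

```shell
# Illustration of the naming convention used throughout this tutorial.
# The env/region/vnet codes are the tutorial's example values.
env_code="MGMT"
region_code="NOEU"
vnet_code="DEP00"

# Deployer parameter file, relative to the WORKSPACES folder
deployer_tfvars="DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"

# Terraform state blob for the deployer, stored in the tfstate container
deployer_state="${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.terraform.tfstate"

echo "${deployer_tfvars}"
echo "${deployer_state}"
```

The second echoed name matches the deployer state blob copied from the `tfstate` container earlier in the tutorial (`MGMT-NOEU-DEP00-INFRASTRUCTURE.terraform.tfstate`).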
sap | Get Sap Installation Media | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md | The following components are necessary for the SAP installation. - `jq` version 1.6 - `ansible` version 2.11.12 - `netaddr` version 0.8.0-- The SAP Bill of Materials (BOM), as generated by Azure Center for SAP solutions. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`, `S42022SPS00_V0001ms.yaml`) and dependent BOMs (`HANA_2_00_059_v0004ms.yaml`, `HANA_2_00_064_v0001ms.yaml`, `SUM20SP15_latest.yaml`, `SWPM20SP13_latest.yaml`). They provide the following information:+- The SAP Bill of Materials (BOM), as generated by Azure Center for SAP solutions. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`, `S42022SPS00_v0001ms.yaml`) and dependent BOMs (`HANA_2_00_059_v0004ms.yaml`, `HANA_2_00_064_v0001ms.yaml`, `SUM20SP15_latest.yaml`, `SWPM20SP13_latest.yaml`). They provide the following information: - The full name of the SAP package (`name`) - The package name with its file extension as downloaded (`archive`) - The checksum of the package as specified by SAP (`checksum`) |
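Each BOM entry carries the fields listed above (`name`, `archive`, `checksum`). The following sketch shows how those fields line up in the YAML, using a made-up entry — the package name, archive, and checksum here are invented for illustration, not taken from a real BOM file:

```shell
# Hypothetical BOM entry for illustration; real BOM files are generated
# by Azure Center for SAP solutions and list one such entry per package.
bom_entry='- name: "EXAMPLE_SAP_PACKAGE"
  archive: "EXAMPLE_SAP_PACKAGE.SAR"
  checksum: "0123456789abcdef"'

# Pull out the package name field from the entry
pkg_name=$(printf '%s\n' "$bom_entry" | sed -n 's/.*name: "\([^"]*\)".*/\1/p' | head -n1)
echo "$pkg_name"
```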
sap | High Availability Guide Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse.md | The following items are prefixed with either **[A]** - applicable to all nodes, 1. **[A]** Configure SWAP file + Create a swap file as defined in [Create a SWAP file for an Azure Linux VM](https://learn.microsoft.com/troubleshoot/azure/virtual-machines/create-swap-file-linux-vm) ```bash- sudo vi /etc/waagent.conf - - # Check if property ResourceDisk.Format is already set to y and if not, set it - ResourceDisk.Format=y - - # Set the property ResourceDisk.EnableSwap to y - # Create and use swapfile on resource disk. - ResourceDisk.EnableSwap=y - - # Set the size of the SWAP file with property ResourceDisk.SwapSizeMB - # The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon - # Size of the swapfile. - ResourceDisk.SwapSizeMB=2000 + #!/bin/sh ++ # Percent of space on the ephemeral disk to dedicate to swap. Here 30% is being used. Modify as appropriate. + PCT=0.3 ++ # Location of swap file. Modify as appropriate based on location of ephemeral disk. + LOCATION=/mnt ++ if [ ! 
-f ${LOCATION}/swapfile ] + then ++ # Get size of the ephemeral disk and multiply it by the percent of space to allocate + size=$(/bin/df -m --output=target,avail | /usr/bin/awk -v percent="$PCT" -v pattern=${LOCATION} '$0 ~ pattern {SIZE=int($2*percent);print SIZE}') + echo "$size MB of space allocated to swap file" ++ # Create an empty file first and set correct permissions + /bin/dd if=/dev/zero of=${LOCATION}/swapfile bs=1M count=$size + /bin/chmod 0600 ${LOCATION}/swapfile ++ # Make the file available to use as swap + /sbin/mkswap ${LOCATION}/swapfile + fi ++ # Enable swap + /sbin/swapon ${LOCATION}/swapfile + /sbin/swapon -a ++ # Display current swap status + /sbin/swapon -s ``` - Restart the Agent to activate the change + Make the file executable. ```bash- sudo service waagent restart + chmod +x /var/lib/cloud/scripts/per-boot/swap.sh ```+ Stop and start the VM. Stopping and starting the VM is only necessary the first time after you create the SWAP file. ### Installing SAP NetWeaver ASCS/ERS |
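The size calculation in the swap script above can be checked in isolation. This sketch reproduces the 30% computation with a fixed stand-in value instead of the live `df` output (the 20000 MB figure is invented for illustration):

```shell
# Reproduce the swap-size math from swap.sh with a fixed value.
PCT=0.3
avail_mb=20000  # stand-in for the ephemeral disk's available space in MB

# Same integer truncation as the awk expression in the script
size=$(awk -v percent="$PCT" -v avail="$avail_mb" 'BEGIN { print int(avail * percent) }')
echo "$size MB of space allocated to swap file"
```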
search | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md | Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
sentinel | Data Transformation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md | Only the following tables are currently supported for custom log ingestion: - [**ASimNetworkSessionLogs**](/azure/azure-monitor/reference/tables/asimnetworksessionlogs) - [**ASimWebSessionLogs**](/azure/azure-monitor/reference/tables/asimwebsessionlogs) -## Known issues +## Limitations Ingestion-time data transformation currently has the following known issues for Microsoft Sentinel data connectors: - Data transformations using *workspace transformation DCRs* are supported only per table, and not per connector. - There can only be one workspace transformation DCR for an entire workspace. Within that DCR, each table can use a separate input stream with its own transformation. However, if you have two different MMA-based data connectors sending data to the *Syslog* table, they will both have to use the same input stream configuration in the DCR. + There can only be one workspace transformation DCR for an entire workspace. Within that DCR, each table can use a separate input stream with its own transformation. However, if you have two different MMA-based data connectors sending data to the *Syslog* table, they will both have to use the same input stream configuration in the DCR. Splitting data to multiple destinations (Log Analytics workspaces) with a workspace transformation DCR is not possible. - The following configurations are supported only via API: Ingestion-time data transformation currently has the following known issues for - KQL syntax: Not all operators are supported. For more information, see [**KQL limitations** and **Supported KQL features**](../azure-monitor/essentials/data-collection-transformations-structure.md#kql-limitations) in the Azure Monitor documentation. +- You can only send logs from one specific data source to one workspace. 
To send data from a single data source to multiple workspaces (destinations) with a standard DCR, please create one DCR per workspace. + ## Next steps [Get started configuring ingestion-time data transformation in Microsoft Sentinel](configure-data-transformation.md). |
service-bus-messaging | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md | Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
service-connector | How To Integrate Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md | +zone_pivot_group_filename: service-connector/zone-pivot-groups.json +zone_pivot_groups: howto-authtype # Integrate Azure Blob Storage with Service Connector -This page shows the supported authentication types and client types of Azure Blob Storage using Service Connector. You might still be able to connect to Azure Blob Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md). +This page shows the supported authentication types, client types, and sample code for Azure Blob Storage using Service Connector. It also shows the default environment variable names and values (or Spring Boot configuration) you get when you create the service connection, along with detailed steps and sample code for connecting to Blob Storage. You can learn more about the [Service Connector environment variable naming convention](concept-service-connector-internals.md). 
## Supported compute service Supported authentication and clients for App Service, Container Apps and Azure S | Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -## Default environment variable names or application properties +## Default environment variable names or application properties and sample codes -Use the connection details below to connect compute services to Blob Storage. For each example below, replace the placeholder texts -`<account name>`, `<account-key>`, `<client-ID>`, `<client-secret>`, `<tenant-ID>`, and `<storage-account-name>` with your own account name, account key, client ID, client secret, tenant ID and storage account name. 
--### Secret / connection string --#### .NET, Java, Node.JS, Python -| Default environment variable name | Description | Example value | -||--|| -| AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` | --#### Java - SpringBoot --| Application properties | Description | Example value | -|--|--|| -| azure.storage.account-name | Your Blob storage-account-name | `<storage-account-name>` | -| azure.storage.account-key | Your Blob Storage account key | `<account-key>` | -| azure.storage.blob-endpoint | Your Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | +Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Blob Storage. Choose the authentication type at the beginning of this article. ### System-assigned managed identity+For the default environment variables and sample code of other authentication types, choose them at the beginning of this article. | Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | ++#### Sample code ++Follow these steps and sample code to connect to Azure Blob Storage with a system-assigned managed identity. +++ ### User-assigned managed identity +For the default environment variables and sample code of other authentication types, choose them at the beginning of this article. 
+ | Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | | AZURE_STORAGEBLOB_CLIENTID | Your client ID | `<client-ID>` | +#### Sample code ++Follow these steps and sample code to connect to Azure Blob Storage with a user-assigned managed identity. +++++### Connection string ++For the default environment variables and sample code of other authentication types, choose them at the beginning of this article. ++#### SpringBoot client type ++| Application properties | Description | Example value | +|--|--|| +| azure.storage.account-name | Your Blob Storage account name | `<storage-account-name>` | +| azure.storage.account-key | Your Blob Storage account key | `<account-key>` | +| azure.storage.blob-endpoint | Your Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | +++#### Other client types +| Default environment variable name | Description | Example value | +||--|| +| AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` | +++#### Sample code ++Follow these steps and sample code to connect to Azure Blob Storage with a connection string. +++ ### Service principal +For the default environment variables and sample code of other authentication types, choose them at the beginning of this article. + | Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | Use the connection details below to connect compute services to Blob Storage. 
Fo | AZURE_STORAGEBLOB_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_STORAGEBLOB_TENANTID | Your tenant ID | `<tenant-ID>` | +#### Sample code ++Follow these steps and sample code to connect to Azure Blob Storage with a service principal. + ## Next steps -Follow the tutorials listed below to learn more about Service Connector. +Follow the tutorials to learn more about Service Connector. > [!div class="nextstepaction"] > [Learn about Service Connector concepts](./concept-service-connector-internals.md) |
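The `AZURE_STORAGEBLOB_CONNECTIONSTRING` value shown in the connection string table is assembled from the account name and account key placeholders. A minimal sketch with fake stand-in values (nothing here is a real credential):

```shell
# Fake values standing in for the <account name> / <account-key> placeholders.
account_name="mystorageacct"
account_key="ZmFrZS1rZXk="

# Compose the connection string in the format Service Connector injects
AZURE_STORAGEBLOB_CONNECTIONSTRING="DefaultEndpointsProtocol=https;AccountName=${account_name};AccountKey=${account_key};EndpointSuffix=core.windows.net"
echo "$AZURE_STORAGEBLOB_CONNECTIONSTRING"
```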
service-fabric | How To Deploy Custom Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-deploy-custom-image.md | -# Deploy a custom Windows virtual machine scale set image on new node types within a Service Fabric Managed Cluster (preview) +# Deploy a custom Windows or Azure Marketplace virtual machine scale set image on new node types within a Service Fabric Managed Cluster (preview) -Custom windows images are like marketplace images, but you create them yourself for each new node type within a cluster. Custom images can be used to bootstrap configurations such as preloading applications, application configurations, and other OS configurations. Once you create a custom windows image, you can then deploy to one or more new node types within a Service Fabric Managed Cluster. +Custom windows images are like marketplace images, but you create them yourself for each new node type within a cluster. Custom images can be used to bootstrap configurations such as preloading applications, application configurations, and other OS configurations. Once you create a custom windows image, you can then deploy to one or more new node types within a Service Fabric Managed Cluster. Customers can also use a marketplace image. Azure Marketplace images are clones of entire computer systems including operating system, application, and state information. Service Fabric managed clusters allow you to utilize these images for your node types within your Service Fabric managed cluster. ## Before you begin Ensure that you've [created a custom image](../virtual-machines/linux/tutorial-custom-images.md). Custom image is enabled with Service Fabric Managed Cluster (SFMC) API version 2022-08-01-preview and forward. To use custom images, you must grant SFMC First Party Azure Active Directory (Azure AD) App read access to the virtual machine (VM) Managed Image or Shared Gallery image so that SFMC has permission to read and create VM with the image. 
+If you have chosen to use an Azure Marketplace image, you need to [find and use the appropriate marketplace purchase plan information](../virtual-machines/windows/cli-ps-findimage.md). You can then specify a marketplace image and plan information when you create a VM. You can also browse available images and offers using the [Azure Marketplace](https://azuremarketplace.microsoft.com) or the [Azure CLI](../virtual-machines/linux/cli-ps-findimage.md). + Check [Add a managed identity to a Service Fabric Managed Cluster node type](how-to-managed-identity-managed-cluster-virtual-machine-scale-sets.md#prerequisites) as reference on how to obtain information about SFMC First Party Azure AD App and grant it access to the resources. Reader access is sufficient. `Role definition name: Reader` New-AzRoleAssignment -PrincipalId "<SFMC SPID>" -RoleDefinitionName "Reader" -Sc ## Use the ARM template -When you create a new node type, you will need to modify your ARM template with the new property: VmImageResourceId: <Image name>. The following is an example: +When you create a new node type, you'll need to modify your ARM template with the new property: VmImageResourceId: <Image name>. The following is an example: ```JSON { The vmImageResourceId will be passed along to the virtual machine scale set as a - Shared Gallery Image (Microsoft.Compute/galleries/images) - Shared Gallery Image Version (Microsoft.Compute/galleries/images/versions) +Service Fabric managed clusters also support marketplace images that can be used on your virtual machine. Customers who would like to use a specific image from the marketplace can use the following configuration. ++ ```JSON + { + "name": "SF", + "apiVersion": "2023-08-01-preview", + "properties": { + "isPrimary" : true, + "vmSize": "Standard_D2", + "vmImagePlan": { + "name": "< image >", + "publisher": "<publisher name>", + "product": "<product name>" + }, + "vmInstanceCount": 5, + "dataDiskSizeGB": 100 + } + } + ``` ## Auto OS upgrade |
service-fabric | Managed Cluster Service Fabric Explorer Blocking Operation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/managed-cluster-service-fabric-explorer-blocking-operation.md | + + Title: Service Fabric Explorer blocking operations +description: Learn about the blocking operations in place to mitigate cluster desynchronization issues. +++++ Last updated : 09/15/2022+++# Service Fabric Explorer blocking operations ++When you create a Service Fabric managed cluster along with applications and services through ARM, portal, or Az cmdlets, ARM manages the cluster. Accordingly, these resources should have all their management operations performed at ARM level. Commands run directly against the cluster bypass ARM, whether they're made through a Service Fabric Explorer command or an SF cmdlet. Bypassing ARM can cause synchronization issues, as ARM isn't alerted to any changes that result from the operations. When the cluster is out of sync with its definition in ARM, there's a risk of degraded ability to manage the cluster safely and reliably. ++To help prevent synchronization issues, Service Fabric Explorer now blocks the management of ARM managed resources. ++## Service Fabric Explorer interface ++* Applications that ARM manages are now labeled in the list of applications. +* Application type versions that ARM manages are now labeled in the list of application type versions. +* Services that ARM manages are now labeled in the list. A banner is now shown if the service is managed in ARM. The following screen capture shows an ARM-managed service in Service Fabric explorer. ++## Best practices ++### Application type versions ++* To unprovision application type versions, use the Az PowerShell cmdlet [Remove-AzResource](/powershell/module/az.resources/remove-azresource). +* Use ARM templates or the [AzSF PowerShell cmdlet](/powershell/module/az.servicefabric/new-azservicefabricmanagedclusterapplication) to create applications. 
++### Applications ++* Applications must be deleted through ARM or via the command line with [az resource](/cli/azure/resource#az-resource-delete). +* Use ARM templates or the [AzSF PowerShell cmdlet](/powershell/module/az.servicefabric/new-azservicefabricmanagedclusterapplication) to create applications. ++### Services ++* Scale actions must be done via ARM. +* Deletions must be done via the [Remove-AzResource cmdlet](/powershell/module/az.resources/remove-azresource). +* Use the [AzSF PowerShell cmdlet](/powershell/module/az.servicefabric/new-azservicefabricservice) to create services. ++## Next steps ++* Learn about [Service Fabric Explorer to visualize your cluster](service-fabric-visualizing-your-cluster.md). |
service-fabric | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md | |
service-fabric | Service Fabric Cross Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cross-availability-zones.md | You don't need to configure the `FaultDomain` and `UpgradeDomain` overrides. ``` >[!NOTE]-> > * Service Fabric clusters should have at least one primary node type. The durability level of primary node types should be Silver or higher. > * An Availability Zone spanning virtual machine scale set should be configured with at least three Availability Zones, no matter the durability level.-> * An Availability Zone spanning virtual machine scale set with Silver or higher durability should have at least 15 VMs. +> * An Availability Zone spanning virtual machine scale set with Silver or higher durability should have at least 15 VMs ([5 per region](service-fabric-cluster-capacity.md#durability-characteristics-of-the-cluster)). > * An Availability Zone spanning virtual machine scale set with Bronze durability should have at least six VMs. ### Enable support for multiple zones in the Service Fabric node type |
site-recovery | Site Recovery Runbook Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md | When a script runs, it injects a recovery plan context to the runbook. The conte The following example shows a context variable: +```yaml +{"RecoveryPlanName":"hrweb-recovery", + ```json { "RecoveryPlanName":"hrweb-recovery",+ "FailoverType":"Test", "FailoverDirection":"PrimaryToSecondary", "GroupId":"1", The following example shows a context variable: If you want to access all VMs in VMMap in a loop, you can use the following code: -``` ++```powershell $VMinfo = $RecoveryPlanContext.VmMap | Get-Member | Where-Object MemberType -EQ NoteProperty | select -ExpandProperty Name $vmMap = $RecoveryPlanContext.VmMap foreach($VMID in $VMinfo) $vmMap = $RecoveryPlanContext.VmMap } ``` - Aman Sharma's blog over at [Harvesting Clouds](http://harvestingclouds.com) has a useful example of a [recovery plan context script](http://harvestingclouds.com/post/script-sample-azure-automation-runbook-for-asr-recovery-plan/). -- ## Before you start - If you're new to Azure Automation, you can [sign up](https://azure.microsoft.com/services/automation/) and [download sample scripts](https://azure.microsoft.com/documentation/scripts/). For more information, see [Automation runbooks - known issues and limitations](../automation/automation-runbook-types.md#powershell-runbooks). Aman Sharma's blog over at [Harvesting Clouds](http://harvestingclouds.com) has All modules should be of compatible versions. The simplest way is to always use the latest versions of all modules. -- ## Customize the recovery plan 1. In the vault, select **Recovery Plans (Site Recovery)** Aman Sharma's blog over at [Harvesting Clouds](http://harvestingclouds.com) has 3. In **Insert action**, verify that **Script** is selected, and specify a name for the script (**Hello World**). 4. Specify an automation account and select a runbook. 
To save the script, select **OK**. The script is added to **Group 1: Post-steps**. - ## Reuse a runbook script You can use a single runbook script in multiple recovery plans, by using external variables. In this example, a script takes the input of a Network Security Group (NSG) and 2. Create a variable to store the resource group name for the NSG resource. Add a prefix to the variable name with the name of the recovery plan. - ![Create an NSG resource group name](media/site-recovery-runbook-automation-new/var2.png) + ![Create an NSG resource group name](media/site-recovery-runbook-automation-new/var2.png) ++3. In the script, use this reference code to get the variable values: + ![Create an NSG resource group name](media/site-recovery-runbook-automation-new/var2.png) 3. In the script, use this reference code to get the variable values: In this example, a script takes the input of a Network Security Group (NSG) and } ``` - For each recovery plan, create independent variables so that you can reuse the script. Add a prefix by using the recovery plan name. For a complete, end-to-end script for this scenario, review [this script](https://gallery.technet.microsoft.com/Add-Public-IP-and-NSG-to-a6bb8fee). - ### Use a complex variable to store more information In some scenarios you might not be able to create separate variables for each recovery plan. Consider a scenario in which you want a single script to assign a public IP address on specific VMs. In another scenario, you might want to apply different NSGs on different VMs (not on all VMs). Note that: To deploy sample scripts to your Automation account, select the **Deploy to Azur This video provides another example. 
It demonstrates how to recover a two-tier WordPress application to Azure: - ## Next steps - Learn about an [Azure Automation Run As account](../automation/manage-runas-account.md) - Review [Azure Automation sample scripts](https://gallery.technet.microsoft.com/scriptcenter/site/search?f%5B0%5D.Type=User&f%5B0%5D.Value=SC%20Automation%20Product%20Team&f%5B0%5D.Text=SC%20Automation%20Product%20Team).++- Also review [A few tasks you might want to run during an Azure Site Recovery DR](https://github.com/WernerRall147/RallTheory/tree/main/AzureSiteRecoveryDRRunbooks) - [Learn more](site-recovery-failover.md) about running failovers.++ |
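Inside a runbook, the recovery plan context arrives as the PowerShell object `$RecoveryPlanContext`, as shown earlier in this article. Outside a runbook, the same JSON can be inspected with plain shell tools — the sketch below uses a trimmed-down copy of the example context (only the top-level fields, no `VmMap`) purely for illustration:

```shell
# Trimmed copy of the recovery plan context JSON shown earlier in this article.
context='{"RecoveryPlanName":"hrweb-recovery","FailoverType":"Test","FailoverDirection":"PrimaryToSecondary","GroupId":"1"}'

# Extract the FailoverType field (here "Test", for a test failover)
failover_type=$(printf '%s' "$context" | sed -n 's/.*"FailoverType":"\([^"]*\)".*/\1/p')
echo "FailoverType=$failover_type"
```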
site-recovery | Vmware Azure Prepare Failback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-prepare-failback.md | Title: Prepare VMware VMs for reprotection and failback with Azure Site Recovery description: Prepare for fail back of VMware VMs after failover with Azure Site Recovery Previously updated : 12/02/2021 Last updated : 09/18/2023 -Before you continue, get a quick overview with this video about how to fail back from Azure to an on-premises site.<br /><br /> --## Reprotection/failback components +## Reprotection or failback components You need a number of components and settings in place before you can reprotect and fail back from Azure. A number of ports must be open for reprotection/failback. The following graphic ## Deploy a separate master target server -1. Note the master target server [requirements and limitations](#reprotectionfailback-components). +1. Note the master target server [requirements and limitations](#reprotection-or-failback-components). 2. Create a [Windows](site-recovery-plan-capacity-vmware.md#deploy-additional-master-target-servers) or [Linux](vmware-azure-install-linux-master-target.md) master target server, to match the operating system of the VMs you want to reprotect and fail back. 3. Make sure you don't use Storage vMotion for the master target server, or failback can fail. The VM machine can't start because the disks aren't available to it. - To prevent this, exclude the master target server from your vMotion list. A number of ports must be open for reprotection/failback. The following graphic ## Next steps -[Reprotect](vmware-azure-reprotect.md) a VM. +Learn how to [reprotect](vmware-azure-reprotect.md) a VM. |
spring-apps | Access App Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/access-app-virtual-network.md | ms.devlang: azurecli This article explains how to access an endpoint for your application in a private network. -When you assign an endpoint on an application in an Azure Spring Apps service instance deployed in your virtual network, the endpoint is a private fully qualified domain name (FQDN). The domain is only accessible in the private network. Apps and services use the application endpoint. They include the *Test Endpoint* described in [View apps and deployments](./how-to-staging-environment.md#view-apps-and-deployments). *Log streaming*, described in [Stream Azure Spring Apps app logs in real-time](./how-to-log-streaming.md), also works only within the private network. +When you assign an endpoint on an application in an Azure Spring Apps service instance deployed in your virtual network, the endpoint uses a private fully qualified domain name (FQDN). The domain is only accessible in the private network. Apps and services use the application endpoint. They include the *Test Endpoint* described in [View apps and deployments](./how-to-staging-environment.md#view-apps-and-deployments). *Log streaming*, described in [Stream Azure Spring Apps app logs in real-time](./how-to-log-streaming.md), also works only within the private network. ## Find the IP for your application -#### [Portal](#tab/azure-portal) +#### [Azure portal](#tab/azure-portal) -1. Select the virtual network resource you created as explained in [Deploy Azure Spring Apps in your Azure virtual network (VNet injection)](./how-to-deploy-in-azure-virtual-network.md). +1. Go to the Azure Spring Apps service **Networking** page. -2. In the **Connected devices** search box, enter *kubernetes-internal*. +1. Select the **Vnet injection** tab. -3. 
In the filtered result, find the **Device** connected to the **Service Runtime Subnet** of the service instance, and copy its **IP Address**. In this sample, the IP Address is *10.1.0.7*. +1. In the **General info** section, find **Endpoint** and copy the **IP Address** value. The example in the following screenshot uses the IP address `10.0.1.6`: - > [!WARNING] - > Be sure that the IP Address belongs to **Service Runtime subnet** instead of **Spring Boot microservice apps subnet**. Subnet specifications are provided when you deploy an Azure Spring Apps instance. For more information, see the [Deploy an Azure Spring Apps instance](./how-to-deploy-in-azure-virtual-network.md#deploy-an-azure-spring-apps-instance) section of [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md). + :::image type="content" source="media/spring-cloud-access-app-vnet/find-ip-address.png" alt-text="Screenshot of the Azure portal that shows the Vnet injection Endpoint information." lightbox="media/spring-cloud-access-app-vnet/find-ip-address.png"::: - :::image type="content" source="media/spring-cloud-access-app-vnet/create-dns-record.png" alt-text="Screenshot of the Azure portal showing the Connected devices page for a virtual network, filtered for kubernetes-internal devices, with the IP Address for the service runtime subnet highlighted." lightbox="media/spring-cloud-access-app-vnet/create-dns-record.png"::: --#### [CLI](#tab/azure-CLI) +#### [Azure CLI](#tab/azure-CLI) Find the IP Address for your Spring Cloud services. Customize the value of your Azure Spring Apps instance name based on your real environment. If you have your own DNS solution for your virtual network, like Active Director The following procedure creates a private DNS zone for an application in the private network. -#### [Portal](#tab/azure-portal) +#### [Azure portal](#tab/azure-portal) 1. Open the Azure portal. 
From the top search box, search for **Private DNS zones**, and select **Private DNS zones** from the results. The following procedure creates a private DNS zone for an application in the pri 5. Select **Create**. -#### [CLI](#tab/azure-CLI) +#### [Azure CLI](#tab/azure-CLI) 1. Define variables for your subscription, resource group, and Azure Spring Apps instance. Customize the values based on your real environment. It may take a few minutes to create the zone. To link the private DNS zone to the virtual network, you need to create a virtual network link. -#### [Portal](#tab/azure-portal) +#### [Azure portal](#tab/azure-portal) 1. Select the private DNS zone resource you created previously: *private.azuremicroservices.io* To link the private DNS zone to the virtual network, you need to create a virtua 6. Select **OK**. -#### [CLI](#tab/azure-CLI) +#### [Azure CLI](#tab/azure-CLI) Link the private DNS zone you created to the virtual network holding your Azure Spring Apps service. az network private-dns link vnet create \ To use the private DNS zone to translate/resolve DNS, you must create an "A" type record in the zone. -#### [Portal](#tab/azure-portal) +#### [Azure portal](#tab/azure-portal) 1. Select the private DNS zone resource you created previously: *private.azuremicroservices.io*. To use the private DNS zone to translate/resolve DNS, you must create an "A" typ ![Add private DNS zone record](media/spring-cloud-access-app-vnet/private-dns-zone-add-record.png) -#### [CLI](#tab/azure-CLI) +#### [Azure CLI](#tab/azure-CLI) Use the [IP address](#find-the-ip-for-your-application) to create the A record in your DNS zone. az network private-dns record-set a add-record \ After following the procedure in [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md), you can assign a private FQDN for your application. -#### [Portal](#tab/azure-portal) +#### [Azure portal](#tab/azure-portal) 1. 
Select the Azure Spring Apps service instance deployed in your virtual network, and open the **Apps** tab in the menu on the left. After following the procedure in [Deploy Azure Spring Apps in a virtual network] 4. The assigned private FQDN (labeled **URL**) is now available. It can only be accessed within the private network, but not on the Internet. -#### [CLI](#tab/azure-CLI) +#### [Azure CLI](#tab/azure-CLI) Update your app to assign an endpoint to it. Customize the value of your app name based on your real environment. |
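The private DNS zone work above boils down to making names under `private.azuremicroservices.io` resolvable inside the virtual network. As a quick sanity check before creating the A record, a small sketch (the helper name is hypothetical, not part of any Azure SDK) can confirm that an app's private FQDN actually falls under the zone you created:

```python
# Hypothetical helper: verify that a private FQDN is covered by the
# private DNS zone created above, so the A record will resolve it.
ZONE = "private.azuremicroservices.io"

def covered_by_zone(fqdn: str, zone: str = ZONE) -> bool:
    """True if a record in `zone` (including a wildcard) can answer for `fqdn`."""
    return fqdn == zone or fqdn.endswith("." + zone)

# A vnet-injected app endpoint uses the private zone; the public suffix doesn't.
print(covered_by_zone("myapp.private.azuremicroservices.io"))   # True
print(covered_by_zone("myapp.azuremicroservices.io"))           # False
```

The app and instance names here are placeholders; substitute the FQDN shown as **URL** on the app's page in your own instance.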
spring-apps | How To Deploy In Azure Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-in-azure-virtual-network.md | This section shows you how to grant Azure Spring Apps the [Owner](../role-based-acce > [!NOTE] > The minimal required permissions are [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) and [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor). You can grant role assignments to both of them if you can't grant `Owner` permission.+> +> If you're using your own route table or a user-defined route feature, you also need to grant Azure Spring Apps the same role assignments to your route tables. For more information, see the [Bring your own route table](#bring-your-own-route-table) section and [Control egress traffic for an Azure Spring Apps instance](how-to-create-user-defined-route-instance.md). ### [Azure portal](#tab/azure-portal) The route tables to which your custom vnet is associated must meet the following * You can't update the associated route table resource after cluster creation. While you can't update the route table resource, you can modify custom rules on the route table. * You can't reuse a route table with multiple instances due to potential conflicting routing rules. +## Use custom DNS servers ++Azure Spring Apps supports using custom DNS servers in your virtual network. ++If you don't specify custom DNS servers in your DNS Server Virtual Network setting, Azure Spring Apps uses Azure DNS to resolve IP addresses by default. If your virtual network is configured with custom DNS settings, add Azure DNS IP `168.63.129.16` as the upstream DNS server in the custom DNS server. Azure DNS can resolve IP addresses for all the public FQDNs mentioned in [Customer responsibilities running Azure Spring Apps in a virtual network](vnet-customer-responsibilities.md). 
It can also resolve IP addresses for `*.svc.private.azuremicroservices.io` in your virtual network. ++If your custom DNS server can't add Azure DNS IP `168.63.129.16` as the upstream DNS server, use the following steps: ++* Ensure that your custom DNS server can resolve IP addresses for all the public FQDNs. For more information, see [Customer responsibilities running Azure Spring Apps in a virtual network](vnet-customer-responsibilities.md). +* Add the DNS record `*.svc.private.azuremicroservices.io` to the IP of your application. For more information, see the [Find the IP for your application](access-app-virtual-network.md#find-the-ip-for-your-application) section of [Access an app in Azure Spring Apps in a virtual network](access-app-virtual-network.md). + ## Next steps * [Troubleshooting Azure Spring Apps in VNET](troubleshooting-vnet.md) |
spring-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md | Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
spring-apps | Troubleshooting Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshooting-vnet.md | To set up the Azure Spring Apps service instance by using the Resource Manager t | `Resources created by Azure Spring Apps were disallowed by policy.` | Network resources are created when deploying Azure Spring Apps in your own virtual network. Be sure to check whether you have [Azure Policy](../governance/policy/overview.md) defined to block that creation. The error message lists the resources that weren't created. | | `Required traffic is not allowlisted.` | Be sure to check [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md) to ensure that the required traffic is allowlisted. | -## My application can't be registered +## My application can't be registered or it can't get settings from the config server ++The applications running inside the Azure Spring Apps user cluster need to access the Eureka Server and the Config Server in the system runtime cluster via the `<service-instance-name>.svc.private.azuremicroservices.io` domain. This problem occurs if your virtual network is configured with custom DNS settings, because the private DNS zone used by Azure Spring Apps is then ineffective. Add the Azure DNS IP `168.63.129.16` as the upstream DNS server in the custom DNS server. +If your custom DNS server can't add the Azure DNS IP `168.63.129.16` as the upstream DNS server, then add the DNS record `*.svc.private.azuremicroservices.io` to the IP of your application. For more information, see the [Find the IP for your application](access-app-virtual-network.md#find-the-ip-for-your-application) section of [Access an app in Azure Spring Apps in a virtual network](access-app-virtual-network.md). 
+ ## I can't access my application's endpoint or test endpoint in a virtual network If your virtual network is configured with custom DNS settings, be sure to add Azure DNS IP `168.63.129.16` as the upstream DNS server in the custom DNS server, if you haven't already. Then, proceed with the following instructions. -If your virtual network is not configured with custom DNS settings, or if your virtual network is configured with custom DNS settings and you've already added Azure DNS IP `168.63.129.16` as the upstream DNS server in the custom DNS server, then complete the following steps: +If your virtual network isn't configured with custom DNS settings, or if your virtual network is configured with custom DNS settings and you've already added Azure DNS IP `168.63.129.16` as the upstream DNS server in the custom DNS server, then complete the following steps: 1. Create a new private DNS zone `private.azuremicroservices.io`. 1. Link the private DNS zone to the virtual network. If your virtual network is not configured with custom DNS settings, or if your v For more information, see [Access your application in a private network](./access-app-virtual-network.md) +## I can't access my application's public endpoint from a public network ++Azure Spring Apps supports exposing applications to the internet by using public endpoints. For more information, see [Expose applications on Azure Spring Apps to the internet from a public network](how-to-access-app-from-internet-virtual-network.md). ++If you're using a user-defined route feature, some features aren't supported because of asymmetric routing. For unsupported features, see the following list: ++- Use the public network to access the application through the public endpoint. +- Use the public network to access the log stream. +- Use the public network to access the app console. ++For more information, see [Control egress traffic for an Azure Spring Apps instance](how-to-create-user-defined-route-instance.md). 
++Similar limitations also apply to Azure Spring Apps when egress traffic is routed to a firewall. The problem occurs because both situations introduce asymmetric routing into the cluster. Packets arrive on the endpoint's public IP address but return to the firewall via the private IP address, so the firewall blocks such traffic. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). ++If you're routing egress traffic to a firewall but also need to expose the application to the internet, use the TLS termination at Application Gateway feature to expose applications to the internet. For more information, see [Expose applications to the internet with TLS Termination at Application Gateway](expose-apps-gateway-tls-termination.md). + ## Other issues - [Access your application in a private network](access-app-virtual-network.md) |
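The upstream-DNS guidance above can be pictured as a two-step lookup: the custom DNS server answers for the zones it owns and forwards everything else, including `*.svc.private.azuremicroservices.io`, to Azure DNS at `168.63.129.16`. The following Python sketch only simulates that decision; it is not a DNS implementation, and the zone names are placeholders:

```python
AZURE_DNS_IP = "168.63.129.16"  # Azure DNS, added as the upstream server

def route_query(name: str, local_zones: set[str]) -> str:
    """Decide where a custom DNS server sends a query: answer locally when
    the name falls in a zone it owns, otherwise forward to the upstream."""
    for zone in local_zones:
        if name == zone or name.endswith("." + zone):
            return "local"
    return f"forward to {AZURE_DNS_IP}"

zones = {"corp.contoso.com"}
print(route_query("db.corp.contoso.com", zones))  # local
print(route_query("myservice.svc.private.azuremicroservices.io", zones))
# forward to 168.63.129.16
```

If the service-runtime names end up answered locally instead (for example, by a catch-all zone), registration and config-server lookups fail in exactly the way this section describes.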
spring-apps | Vnet Customer Responsibilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md | The following list shows the resource requirements for Azure Spring Apps service | \*.azurecr.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling the *Azure Container Registry* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling the *Azure Storage* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling the *Azure Event Hubs* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |-| global.prod.microsoftmetrics.com:443 and \*.livediagnostics.monitor.azure.com:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureMonitor:443 | TCP:443 | Azure Monitor. | Allows outbound calls to Azure Monitor. | +| \*.prod.microsoftmetrics.com:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureMonitor:443 | TCP:443 | Azure Monitor. | Allows outbound calls to Azure Monitor. 
| ## Azure Global required FQDN / application rules Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the | \*.azurecr.cn:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling the *Azure Container Registry* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.core.chinacloudapi.cn:443 and \*.core.chinacloudapi.cn:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling the *Azure Storage* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.servicebus.chinacloudapi.cn:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling the *Azure Event Hubs* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |-| global.prod.microsoftmetrics.com:443 and \*.livediagnostics.monitor.azure.com:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureMonitor:443 | TCP:443 | Azure Monitor. | Allows outbound calls to Azure Monitor. | +| \*.prod.microsoftmetrics.com:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureMonitor:443 | TCP:443 | Azure Monitor. | Allows outbound calls to Azure Monitor. | ## Microsoft Azure operated by 21Vianet required FQDN / application rules Azure Firewall provides the FQDN tag `AzureKubernetesService` to simplify the fo | <i>*.live.ruxit.com</i> | TCP:443 | Required network of Dynatrace APM agents. 
| | <i>*.saas.appdynamics.com</i> | TCP:443/80 | Required network of AppDynamics APM agents, also see [SaaS Domains and IP Ranges](https://docs.appdynamics.com/display/PAA/SaaS+Domains+and+IP+Ranges). | +## Azure Spring Apps optional FQDN for Application Insights ++You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or the Application Insights Agent to send data to the portal. For more information, see the [outgoing ports](../azure-monitor/app/ip-addresses.md#outgoing-ports) section of [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md). + ## Next steps - [Access your application in a private network](access-app-virtual-network.md) |
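To act on the FQDN requirements above with Azure Firewall, the allowances are typically expressed as application rules. The fragment below is a sketch of one such rule for the Azure Monitor endpoints; the collection name, priority, and source address range are placeholders to adapt to your deployment:

```json
{
  "name": "azure-spring-apps-monitor",
  "properties": {
    "priority": 200,
    "action": { "type": "Allow" },
    "rules": [
      {
        "name": "allow-azure-monitor",
        "sourceAddresses": [ "10.1.0.0/24" ],
        "protocols": [ { "protocolType": "Https", "port": 443 } ],
        "targetFqdns": [ "*.prod.microsoftmetrics.com" ]
      }
    ]
  }
}
```

The same shape applies to the other FQDNs in the tables above; service-tag entries can instead be expressed as network rules.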
storage | Blob Storage Monitoring Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-storage-monitoring-scenarios.md | For the "when" portion of your audit, the `TimeGenerated` field shows when the l For the "what" portion of your audit, the `Uri` field shows which item was modified or read. For the "how" portion of your audit, the `OperationName` field shows which operation was executed.-+> [!TIP] +> For example, if you suspect that a blob or container has been deleted by mistake, then add a `where` clause that returns only log entries where the `OperationName` is set to either [Delete blob](/rest/api/storageservices/delete-blob) or [Delete Container](/rest/api/storageservices/delete-container). For the "who" portion of your audit, `AuthenticationType` shows which type of authentication was used to make a request. This field can show any of the types of authentication that Azure Storage supports, including the use of an account key, a SAS token, or Azure Active Directory (Azure AD) authentication. +If the request is authorized by using Azure AD, you can use the `RequesterObjectId` field to identify the "who". Shared Key and SAS authentication provide no means of auditing individual identities. In those cases, the `callerIPAddress` and `userAgentHeader` fields might help you to identify the source of the operation. If a SAS token was used to authorize an operation, you can identify that token, and if you've mapped tokens to token recipients at your end, you can identify which user, organization, or application has performed the operation. See [Identifying the SAS token used to authorize a request](#identifying-the-sas-token-used-to-authorize-a-request). + #### Identifying the security principal used to authorize a request If a request was authenticated by using Azure AD, the `RequesterObjectId` field provides the most reliable way to identify the security principal. 
You can find the friendly name of that security principal by taking the value of the `RequesterObjectId` field, and searching for the security principal in Azure AD page of the Azure portal. The following screenshot shows a search result in Azure AD. |
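Pulling the "when", "what", "how", and "who" fields together, a short sketch shows the shape of such an audit filter. The entries below are invented sample data, not real log output; the field and operation names follow the log schema discussed above:

```python
# Hypothetical flattened log entries for illustration only.
logs = [
    {"TimeGenerated": "2023-09-19T10:01:00Z", "OperationName": "DeleteBlob",
     "Uri": "https://contoso.blob.core.windows.net/docs/report.pdf",
     "AuthenticationType": "OAuth", "RequesterObjectId": "aaaa-bbbb"},
    {"TimeGenerated": "2023-09-19T10:02:00Z", "OperationName": "GetBlob",
     "Uri": "https://contoso.blob.core.windows.net/docs/report.pdf",
     "AuthenticationType": "SAS", "RequesterObjectId": ""},
]

# "How": keep only delete operations, as the tip above suggests.
deletes = [e for e in logs if e["OperationName"] in ("DeleteBlob", "DeleteContainer")]

# "Who": Azure AD requests carry a RequesterObjectId; SAS and Shared Key don't.
who = [e["RequesterObjectId"] or "<unattributed (SAS/Shared Key)>" for e in deletes]
print(who)  # ['aaaa-bbbb']
```

In practice you would express the same filter as a `where` clause in a log query rather than in client code.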
storage | Data Lake Storage Migrate Gen1 To Gen2 Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md | For more information, see [Manage Azure Data Lake Analytics using the Azure port File or directory names with only spaces or tabs, ending with a `.`, containing a `:`, or with multiple consecutive forward slashes (`//`) aren't compatible with Gen2. You need to rename these files or directories before you migrate. +For better performance, consider delaying the migration for at least seven days from the time of the last delete operation. In a Gen1 account, deleted files become _soft_ deleted files, and the Garbage Collector doesn't remove them permanently for approximately seven days. All files, including soft-deleted files, are processed during migration, so waiting until the Garbage Collector has permanently removed deleted files can shorten the migration. + ## Step 5: Perform the migration Before you begin, review the two migration options below, and decide whether to only copy data from Gen1 to Gen2 (recommended) or perform a complete migration. When you copy the data over to your Gen2-enabled account, two factors that can a WebHDFS File System APIs of Gen1 will be supported on Gen2 but with certain deviations, and only limited functionality is supported via the compatibility layer. Customers should plan to leverage Gen2-specific APIs for better performance and features. +#### What happens to my Gen1 account after the retirement date? ++The account becomes inaccessible. 
You won't be able to: ++- Manage the account ++- Access data in the account ++- Receive service updates to Gen1 or Gen1 APIs, SDKs, or client tools ++- Access Gen1 customer support online, by phone or by email ++See [Action required: Switch to Azure Data Lake Storage Gen2 by 29 February 2024](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/). + ## Next steps - Learn about migration in general. For more information, see [Migrate Azure Data Lake Storage from Gen1 to Gen2](data-lake-storage-migrate-gen1-to-gen2.md). |
storage | Upgrade To Data Lake Storage Gen2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2.md | -This article helps you to enable a hierarchical namespace and unlock capabilities such as file and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2. The most popular capabilities include: +This article helps you to enable a hierarchical namespace and unlock capabilities such as file- and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2. The most popular capabilities include: - Higher throughput, input/output operations per second (IOPS), and storage capacity limits. This article helps you to enable a hierarchical namespace and unlock capabilitie - Efficient query engine that transfers only the data required to perform a given operation. -- Security at the container, directory, and file-level.+- Security at the container, directory, and file level. -To learn more about them, see [Introduction to Azure Data Lake storage Gen2](data-lake-storage-introduction.md). +To learn more about them, see [Introduction to Azure Data Lake Storage Gen2](data-lake-storage-introduction.md). This article helps you evaluate the impact on workloads, applications, costs, service integrations, tools, features, and documentation. Make sure to review these impacts carefully. When you are ready to upgrade an account, see this step-by-step guide: [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). This article helps you evaluate the impact on workloads, applications, costs, se ## Impact on availability -Make sure to plan for some downtime in your account while the upgrade process completes. 
Write operations are disabled while your account is being upgraded. Read operations aren't disabled, but we strongly recommend that you suspend read operations as those operations might destabilize the upgrade process. +Make sure to plan for some downtime in your account while the upgrade process completes. Write operations are disabled while your account is being upgraded. Read operations aren't disabled, but we strongly recommend that you suspend read operations, as those operations might destabilize the upgrade process. ## Impact on workloads and applications Blob APIs work with accounts that have a hierarchical namespace, so most applica For a complete list of issues and workarounds, see [Known issues with Blob Storage APIs](data-lake-storage-known-issues.md#blob-storage-apis). -Any Hadoop workloads that use Windows Azure Storage Blob driver or [WASB](https://hadoop.apache.org/docs/current/hadoop-azure/https://docsupdatetracker.net/index.html) driver, must be modified to use the [Azure Blob File System (ABFS)](https://hadoop.apache.org/docs/stable/hadoop-azure/abfs.html) driver. Unlike the WASB driver that makes requests to the **Blob service** endpoint, the ABFS driver will make requests to the **Data Lake Storage** endpoint of your account. +Any Hadoop workloads that use the [Windows Azure Storage Blob (WASB)](https://hadoop.apache.org/docs/current/hadoop-azure/index.html) driver must be modified to use the [Azure Blob File System (ABFS)](https://hadoop.apache.org/docs/stable/hadoop-azure/abfs.html) driver. Unlike the WASB driver that makes requests to the **Blob service** endpoint, the ABFS driver will make requests to the **Data Lake Storage** endpoint of your account. ### Data Lake Storage endpoint Your upgraded account will have a Data Lake storage endpoint. You can find the U You don't have to modify your existing applications and workloads to use that endpoint. 
[Multiprotocol access in Data Lake Storage](data-lake-storage-multi-protocol-access.md) makes it possible for you to use either the Blob service endpoint or the Data Lake storage endpoint to interact with your data. -Azure services and tools (such as AzCopy) might use the Data Lake storage endpoint to interact with the data in your storage account. Also, you'll need to use this new endpoint for any operations that you perform by using the [Data Lake Storage Gen2 SDKs](data-lake-storage-directory-file-acl-dotnet.md), [PowerShell commands](data-lake-storage-directory-file-acl-powershell.md), or [Azure CLI commands](data-lake-storage-directory-file-acl-cli.md). +Azure services and tools (such as AzCopy) might use the Data Lake storage endpoint to interact with the data in your storage account. Also, you'll need to use this new endpoint for any operations that you perform by using the Data Lake Storage Gen2 [SDKs](data-lake-storage-directory-file-acl-dotnet.md), [PowerShell commands](data-lake-storage-directory-file-acl-powershell.md), or [Azure CLI commands](data-lake-storage-directory-file-acl-cli.md). ### Directories Your new account has a hierarchical namespace. That means that directories are n ### Blob metadata -Before migration, blob metadata is associated with the blob name along with it's entire virtual path. After migration, the metadata is associated only with the blob. The virtual path to the blob becomes a collection of directories. Metadata of a blob is not applied to any of those directories. +Before migration, blob metadata is associated with the blob name along with its entire virtual path. After migration, the metadata is associated only with the blob. The virtual path to the blob becomes a collection of directories. Metadata of a blob is not applied to any of those directories. 
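The "virtual path becomes a collection of directories" behavior can be made concrete with a small sketch (a hypothetical helper, for illustration only): after the upgrade, each segment of a blob's former virtual path exists as a real directory, and the blob's metadata stays on the blob alone:

```python
def implied_directories(blob_path: str) -> list[str]:
    """Directories that a hierarchical namespace materializes for a blob path."""
    segments = blob_path.split("/")[:-1]          # drop the blob name itself
    return ["/".join(segments[:i + 1]) for i in range(len(segments))]

# Metadata set on "year/month/log.txt" applies only to the blob; the two
# directories below carry no blob metadata after migration.
print(implied_directories("year/month/log.txt"))  # ['year', 'year/month']
```

In a flat-namespace account, by contrast, `year/month/` is just part of the blob's name and no directory objects exist.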
### Put operations -When you upload a blob, and the path that you specify includes a directory that doesn't exist, the operation creates that directory, and then adds a blob to it. This behavior is logical in the context of a hierarchical folder structure. In a Blob storage account that does not have a hierarchical namespace, the operation doesn't create a directory. Instead, the directory name is added to the blob's namespace. +When you upload a blob, and the path that you specify includes a directory that doesn't exist, the operation creates that directory, and then adds the blob to it. This behavior is logical in the context of a hierarchical folder structure. In a Blob storage account that does not have a hierarchical namespace, the operation doesn't create a directory. Instead, the directory name is added to the blob's name. ### List operations There is no cost to perform the upgrade. After you upgrade, the cost to store yo You can also use the **Storage Accounts** option in the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the impact of costs after an upgrade. -Aside from pricing changes, consider the costs savings associated with Data Lake Storage Gen2 capabilities. Overall total of cost of ownership typically declines because of higher throughput and optimized operations. Higher throughput enables you to transfer more data in less time. A hierarchical namespace improves the efficiency of operations. +Aside from pricing changes, consider the cost savings associated with Data Lake Storage Gen2 capabilities. Overall total cost of ownership typically declines because of higher throughput and optimized operations. Higher throughput enables you to transfer more data in less time. A hierarchical namespace improves the efficiency of operations. 
## Impact on service integrations -While most Azure service integrations will continue to work after you've enable these capabilities, some of them remain in preview or not yet supported. See [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md) to understand the current support for Azure service integrations with Data Lake Storage Gen2. +While most Azure service integrations will continue to work after you've enabled these capabilities, some of them remain in preview or not yet supported. See [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md) to understand the current support for Azure service integrations with Data Lake Storage Gen2. -## Impact on tools, features and documentation +## Impact on tools, features, and documentation -After you upgrade, the way that interact with some features will change. This section describes those changes. +After you upgrade, the way that you interact with some features will change. This section describes those changes. -### Blob Storage feature support +### Blob Storage feature support -While most of Blob storage features will continue to work after you've enable these capabilities, some of them remain in preview or not yet supported. +While most of the Blob storage features will continue to work after you've enabled these capabilities, some of them remain in preview or are not yet supported. See [Blob Storage features available in Azure Data Lake Storage Gen2](./storage-feature-support-in-storage-accounts.md) to understand the current support for Blob storage features with Data Lake Storage Gen2. You don't have to use this new version. However, any operations that are applied ### Azure Lifecycle management -Policies that move or delete all of the blobs in a directory won't delete the directory that contains those blobs until the next day. 
That's because the directory can't be deleted until all of the blobs that are located in that directory are first removed. The next day, the directory will be removed. +Policies that move or delete all of the blobs in a directory won't delete the directory itself until all of the blobs within it are removed. The directory is removed the next day. ### Event Grid If your applications use the Event Grid, you might have to modify those applicat ### Storage Explorer -The following buttons don't yet appear in the Ribbon of Azure Storage Explorer. +The following buttons don't yet appear in the Ribbon of Azure Storage Explorer: |Button|Reason| |--|--| |
storage | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md | Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
storage | Storage Use Azurite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md | Title: Use Azurite emulator for local Azure Storage development description: The Azurite open-source emulator provides a free local environment for testing your Azure storage applications. Previously updated : 07/11/2023 Last updated : 09/19/2023 When `--cert` is provided for a PFX file, you must provide a corresponding `--pw azurite --cert path/server.pfx --pwd pfxpassword ``` +#### HTTPS setup + For detailed information on creating PEM and PFX files, see [HTTPS Setup](https://github.com/Azure/Azurite/blob/master/README.md#https-setup). ### OAuth configuration azurite --oauth basic --cert path/server.pem --key path/key.pem > [!NOTE] > OAuth requires an HTTPS endpoint. Make sure HTTPS is enabled by providing `--cert` switch along with the `--oauth` switch. -Azurite supports basic authentication by specifying the `basic` parameter to the `--oauth` switch. Azurite performs basic authentication, like validating the incoming bearer token, checking the issuer, audience, and expiry. Azurite doesn't check the token signature or permissions. To learn more about authorization, see [Authorization for tools and SDKs](#authorization-for-tools-and-sdks). +Azurite supports basic authentication by specifying the `basic` parameter to the `--oauth` switch. Azurite performs basic authentication, like validating the incoming bearer token, checking the issuer, audience, and expiry. Azurite doesn't check the token signature or permissions. To learn more about authorization, see [Authorization for tools and SDKs](#connect-to-azurite-with-sdks-and-tools). 
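The "basic" validation described above — checking issuer, audience, and expiry while skipping the token signature — can be sketched in a few lines. This is a conceptual illustration only, not Azurite's actual code, and the claim values shown are assumptions:

```python
import base64, json, time

def basic_token_checks(token, expected_issuer_prefix, expected_audiences):
    # Decode the JWT payload WITHOUT verifying the signature, mirroring
    # the "basic" validation described above (illustrative only).
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return (
        claims["iss"].startswith(expected_issuer_prefix)
        and claims["aud"] in expected_audiences
        and claims["exp"] > time.time()  # not yet expired
    )

def make_unsigned_token(claims):
    # Build a structurally valid JWT with a dummy signature, for the demo.
    enc = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj).encode()).rstrip(b"=").decode()
    return f'{enc({"alg": "none"})}.{enc(claims)}.dummy-signature'

token = make_unsigned_token({
    "iss": "https://sts.windows.net/tenant-id/",
    "aud": "https://storage.azure.com",
    "exp": time.time() + 3600,
})
ok = basic_token_checks(token, "https://sts.windows.net/",
                        {"https://storage.azure.com"})
```

Because the signature and permissions are never checked, any well-formed token with plausible claims passes — which is exactly why this mode is suitable only for local development.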
### Skip API Version Check azurite --skipApiVersionCheck azurite --disableProductStyleUrl ``` -## Authorization for tools and SDKs +## Connect to Azurite with SDKs and tools -Connect to Azurite from Azure Storage SDKs or tools, like [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/), by using any authentication strategy. Authentication is required. Azurite supports authorization with OAuth, Shared Key, and shared access signatures (SAS). Azurite also supports anonymous access to public containers. +You can connect to Azurite from Azure Storage SDKs, or tools like [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). Authentication is required, and Azurite supports authorization with OAuth, Shared Key, and shared access signatures (SAS). Azurite also supports anonymous access to public containers. -If you're using the Azure SDKs, start Azurite with the `--oauth basic and --cert --key/--pwd` options. To learn more about using Azurite with the Azure SDKs, see [Azure SDKs](#azure-sdks). +If you're using the Azure SDKs, start Azurite with the `--oauth basic` and `--cert --key/--pwd` options. To learn more about using Azurite with the Azure SDKs, see [Azure SDKs](#azure-sdks). ### Well-known storage account and key For more information, see [Configure Azure Storage connection strings](storage-c ### Azure SDKs -To use Azurite with the [Azure SDKs](https://aka.ms/azsdk), use OAuth and HTTPS options: +To use Azurite with the [Azure SDKs](https://aka.ms/azsdk), use OAuth with HTTPS options: ```console azurite --oauth basic --cert certname.pem --key certname-key.pem ``` +To learn more about configuring OAuth for Azurite, see [OAuth configuration](#oauth-configuration). To learn about certificate configuration and HTTPS setup, see [Certificate configuration (HTTPS)](#certificate-configuration-https). 
+ #### Azure Blob Storage To interact with Blob Storage resources, you can instantiate a `BlobContainerClient`, `BlobServiceClient`, or `BlobClient`. |
storage | Clone Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/clone-volume.md | + + Title: Clone persistent volumes in Azure Container Storage Preview +description: Clone persistent volumes in Azure Container Storage Preview. You can only clone volumes of the same size that are in the same storage pool. +++ Last updated : 09/18/2023++++# Clone persistent volumes in Azure Container Storage Preview +You can clone persistent volumes in [Azure Container Storage](container-storage-introduction.md). A cloned volume is a duplicate of an existing persistent volume. The clone is the same size as the original volume and must be in the same storage pool. ++## Prerequisites ++- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. +- You'll need an Azure Kubernetes Service (AKS) cluster with a node pool of at least three virtual machines (VMs) for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs). +- This article assumes you've already installed Azure Container Storage on your AKS cluster, and that you've created a storage pool and persistent volume claim (PVC) using either [Azure Disks](use-container-storage-with-managed-disks.md) or [ephemeral disk (local storage)](use-container-storage-with-local-disk.md). Azure Elastic SAN Preview doesn't support cloning volumes. ++## Clone a volume ++Follow the instructions below to clone a persistent volume. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-clonevolume.yaml`. ++1. Paste in the following code and save the file. 
A built-in storage class supports volume cloning, so for **dataSource** be sure to reference a PVC previously created by the Azure Container Storage storage class. For example, if you created the PVC for Azure Disks, it might be called `azurediskpvc`. For **storage**, specify the size of the original PVC. ++ ```yml + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-acstor-cloning + spec: + accessModes: + - ReadWriteOnce + storageClassName: acstor-azuredisk + resources: + requests: + storage: 100Gi + dataSource: + kind: PersistentVolumeClaim + name: azurediskpvc + ``` ++1. Apply the YAML manifest file to clone the PVC. + + ```azurecli-interactive + kubectl apply -f acstor-clonevolume.yaml + ``` ++ You should see output similar to: + + ```output + persistentvolumeclaim/pvc-acstor-cloning created + ``` ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`. ++1. Paste in the following code and save the file. For **claimName**, be sure to reference the cloned PVC. ++ ```yml + kind: Pod + apiVersion: v1 + metadata: + name: fiopod2 + spec: + nodeSelector: + acstor.azure.com/io-engine: acstor + volumes: + - name: azurediskpv + persistentVolumeClaim: + claimName: pvc-acstor-cloning + containers: + - name: fio + image: nixery.dev/shell/fio + args: + - sleep + - "1000000" + volumeMounts: + - mountPath: "/volume" + name: azurediskpv + ``` ++1. Apply the YAML manifest file to deploy the new pod. + + ```azurecli-interactive + kubectl apply -f acstor-pod.yaml + ``` + + You should see output similar to the following: + + ```output + pod/fiopod2 created + ``` ++1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod: ++ ```azurecli-interactive + kubectl describe pod fiopod2 + kubectl describe pvc pvc-acstor-cloning + ``` + ++## See also ++- [What is Azure Container Storage?](container-storage-introduction.md) |
storage | Container Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md | description: An overview of Azure Container Storage Preview, a service built nat Previously updated : 09/07/2023 Last updated : 09/18/2023 Azure Container Storage offers persistent volume support with ReadWriteOnce acce Based on feedback from customers, we've included the following capabilities in the Azure Container Storage Preview update: -- Scale up by resizing volumes backed by Azure Disks and NVMe storage pools without downtime-- Clone persistent volumes within a storage pool+- Scale up by [resizing volumes](resize-volume.md) backed by Azure Disks and NVMe storage pools without downtime +- [Clone persistent volumes](clone-volume.md) within a storage pool For more information on these features, email the Azure Container Storage team at azcontainerstorage@microsoft.com. |
storage | Install Container Storage Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md | description: Learn how to install Azure Container Storage Preview for use with A Previously updated : 09/07/2023 Last updated : 09/19/2023 Azure Container Service is a separate service from AKS, so you'll need to grant # [Azure CLI](#tab/cli) -Run the following commands to assign Contributor role to AKS managed identity. Remember to replace `<resource-group>` and `<cluster-name>` with your own values. +Run the following commands to assign Contributor role to AKS managed identity. Remember to replace `<resource-group>`, `<cluster-name>`, and `<azure-subscription-id>` with your own values. You can also narrow the scope to your resource group, for example `/subscriptions/<azure-subscription-id>/resourceGroups/<resource-group>`. ```azurecli-interactive export AKS_MI_OBJECT_ID=$(az aks show --name <cluster-name> --resource-group <resource-group> --query "identityProfile.kubeletidentity.objectId" -o tsv)-export AKS_NODE_RG=$(az aks show --name <cluster-name> --resource-group <resource-group> --query "nodeResourceGroup" -o tsv) -az role assignment create --assignee $AKS_MI_OBJECT_ID --role "Contributor" --resource-group "$AKS_NODE_RG" +az role assignment create --assignee $AKS_MI_OBJECT_ID --role "Contributor" --scope "/subscriptions/<azure-subscription-id>" ``` |
storage | Resize Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/resize-volume.md | + + Title: Resize persistent volumes in Azure Container Storage Preview without downtime +description: Resize persistent volumes in Azure Container Storage Preview without downtime. Scale up by expanding volumes backed by Azure Disk and NVMe storage pools. +++ Last updated : 09/13/2023++++# Resize persistent volumes in Azure Container Storage Preview +You can expand persistent volumes in [Azure Container Storage](container-storage-introduction.md) to scale up quickly and without downtime. ++Shrinking persistent volumes isn't currently supported. You can't expand a volume beyond the size limits of your storage pool. ++## Prerequisites ++- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. +- You'll need an Azure Kubernetes Service (AKS) cluster with a node pool of at least three virtual machines (VMs) for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs). +- This article assumes you've already installed Azure Container Storage on your AKS cluster, and that you've created a storage pool and persistent volume claim (PVC) using either [Azure Disks](use-container-storage-with-managed-disks.md) or [ephemeral disk (local storage)](use-container-storage-with-local-disk.md). Azure Elastic SAN Preview doesn't support resizing volumes. ++## Expand a volume ++Follow these instructions to resize a persistent volume. A built-in storage class supports volume expansion, so be sure to reference a PVC previously created by an Azure Container Storage storage class. For example, if you created the PVC for Azure Disks, it might be called `azurediskpvc`. ++1. 
Run the following command to expand the PVC by increasing the `spec.resources.requests.storage` field. Replace `<pvc-name>` with the name of your PVC. Replace `<size-in-Gi>` with the new size, for example 100Gi. + + ```azurecli-interactive + kubectl patch pvc <pvc-name> --type merge --patch '{"spec": {"resources": {"requests": {"storage": "<size-in-Gi>"}}}}' + ``` + +1. Check the PVC to make sure the volume is expanded: + + ```azurecli-interactive + kubectl describe pvc <pvc-name> + ``` + +The output should reflect the new size. ++## See also ++- [What is Azure Container Storage?](container-storage-introduction.md) |
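The `kubectl patch --type merge` command above applies a JSON merge patch (RFC 7386) to the PVC object. A minimal Python sketch of that merge semantics (an illustration of the patch format, not kubectl's implementation) shows why only the `storage` request changes while the rest of the spec is preserved:

```python
import copy

def json_merge_patch(target, patch):
    # Simplified JSON merge patch (RFC 7386): nested dicts merge
    # recursively, null (None) deletes a key, everything else replaces.
    result = copy.deepcopy(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = json_merge_patch(result[key], value)
        else:
            result[key] = value
    return result

pvc = {"spec": {"accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "50Gi"}}}}
patched = json_merge_patch(
    pvc, {"spec": {"resources": {"requests": {"storage": "100Gi"}}}})
```

After the merge, `patched` has the new `storage` request while `accessModes` is untouched, which is exactly the effect of the `kubectl patch` command on the live PVC.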
storage | Use Container Storage With Managed Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-managed-disks.md | description: Configure Azure Container Storage Preview for use with Azure manage Previously updated : 09/07/2023 Last updated : 09/15/2023 First, create a storage pool, which is a logical grouping of storage for your Ku 1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`. -1. Paste in the following code and save the file. The storage pool **name** value can be whatever you want. +1. Paste in the following code and save the file. The storage pool **name** value can be whatever you want. For **skuName**, specify the level of performance and redundancy. Acceptable values are Premium_LRS, Standard_LRS, StandardSSD_LRS, UltraSSD_LRS, Premium_ZRS, PremiumV2_LRS, and StandardSSD_ZRS. For **storage**, specify the amount of storage capacity for the pool in Gi or Ti. ```yml- apiVersion: containerstorage.azure.com/v1alpha1 + apiVersion: containerstorage.azure.com/v1beta1 kind: StoragePool metadata: name: azuredisk namespace: acstor spec: poolType:- azureDisk: {} + azureDisk: + skuName: Premium_LRS resources:- requests: {"storage": 1Ti} + requests: + storage: 1Ti ``` 1. Apply the YAML manifest file to create the storage pool. |
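If you generate these manifests programmatically, a small guard over the `skuName` values listed above can catch typos before you apply the file. This helper is a hypothetical convenience, not part of Azure Container Storage:

```python
# Acceptable skuName values as listed in the article above.
ALLOWED_SKUS = {
    "Premium_LRS", "Standard_LRS", "StandardSSD_LRS", "UltraSSD_LRS",
    "Premium_ZRS", "PremiumV2_LRS", "StandardSSD_ZRS",
}

def validate_sku(sku_name):
    # Hypothetical pre-flight check; kubectl/the controller do their own
    # validation when the manifest is applied.
    if sku_name not in ALLOWED_SKUS:
        raise ValueError(f"unsupported skuName: {sku_name!r}")
    return sku_name
```

For example, `validate_sku("Premium_LRS")` passes, while a geo-redundant SKU such as `Premium_GRS` (not in the list above) raises a `ValueError`.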
stream-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
synapse-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
synapse-analytics | Sql Data Warehouse Tables Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity.md | In this article, you'll find recommendations and examples for using the IDENTITY A surrogate key on a table is a column with a unique identifier for each row. The key is not generated from the table data. Data modelers like to create surrogate keys on their tables when they design data warehouse models. You can use the IDENTITY property to achieve this goal simply and effectively without affecting load performance. > [!NOTE]-> In Azure Synapse Analytics, the IDENTITY value increases on its own in each distribution and does not overlap with IDENTITY values in other distributions. The IDENTITY value in Synapse is not guaranteed to be unique if the user explicitly inserts a duplicate value with "SET IDENTITY_INSERT ON" or reseeds IDENTITY. For details, see [CREATE TABLE (Transact-SQL) IDENTITY (Property)](/sql/t-sql/statements/create-table-transact-sql-identity-property?view=azure-sqldw-latest&preserve-view=true). +> In Azure Synapse Analytics: +> - The IDENTITY value increases on its own in each distribution and does not overlap with IDENTITY values in other distributions. The IDENTITY value in Synapse is not guaranteed to be unique if the user explicitly inserts a duplicate value with "SET IDENTITY_INSERT ON" or reseeds IDENTITY. For details, see [CREATE TABLE (Transact-SQL) IDENTITY (Property)](/sql/t-sql/statements/create-table-transact-sql-identity-property?view=azure-sqldw-latest&preserve-view=true). +> - UPDATE on distribution column does not guarantee IDENTITY value to be unique. Use [DBCC CHECKIDENT (Transact-SQL)](/sql/t-sql/database-console-commands/dbcc-checkident-transact-sql?view=azure-sqldw-latest) after UPDATE on distribution column to verify uniqueness. ## Creating a table with an IDENTITY column |
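One way to picture non-overlapping, per-distribution IDENTITY values is a stride over the number line. The scheme below is a conceptual sketch for illustration only — it is not Synapse's documented allocation algorithm:

```python
NUM_DISTRIBUTIONS = 60  # a dedicated SQL pool spreads table rows across 60 distributions

def sketch_identity(distribution, nth_row, seed=1, increment=1):
    # Hypothetical stride scheme: each distribution draws values from its
    # own arithmetic progression, so values never collide across
    # distributions but are NOT contiguous within any one of them.
    return seed + increment * (distribution + nth_row * NUM_DISTRIBUTIONS)

# Distribution 3 hands out 4, 64, 124, ... -- unique table-wide, with gaps.
first_three = [sketch_identity(3, n) for n in range(3)]
```

Across all 60 distributions the generated values are pairwise distinct, which mirrors the note's guarantee: uniqueness without contiguity (unless IDENTITY_INSERT or reseeding intervenes).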
update-center | Assessment Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/assessment-options.md | Title: Assessment options in Update Manager (preview). -description: The article describes the assessment options available in Update Manager (preview). + Title: Assessment options in Update Manager. +description: The article describes the assessment options available in Update Manager. Last updated 05/23/2023 -# Assessment options in Update Manager (preview) +# Assessment options in Update Manager **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -This article provides an overview of the assessment options available by Update Manager (preview). +This article provides an overview of the assessment options available in Update Manager. -Update Manager (preview) provides you the flexibility to assess the status of available updates and manage the process of installing required updates for your machines. +Update Manager provides you with the flexibility to assess the status of available updates and manage the process of installing required updates for your machines. ## Periodic assessment - Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager (preview). We recommend that you enable this property on your machines as it allows Update Manager (preview) to fetch latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-a-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md). + Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager. 
We recommend that you enable this property on your machines as it allows Update Manager to fetch latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-a-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md). :::image type="content" source="media/updates-maintenance/periodic-assessment-inline.png" alt-text="Screenshot showing periodic assessment option." lightbox="media/updates-maintenance/periodic-assessment-expanded.png"::: ## Check for updates now/On-demand assessment -Update Manager (preview) allows you to check for latest updates on your machines at any time, on-demand. You can view the latest update status and act accordingly. Go to **Updates** blade on any VM and select **Check for updates** or select multiple machines from Update Manager (preview) and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md). +Update Manager allows you to check for latest updates on your machines at any time, on-demand. You can view the latest update status and act accordingly. Go to **Updates** blade on any VM and select **Check for updates** or select multiple machines from Update Manager and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md). ## Update assessment scan You can initiate a software updates compliance scan on a machine to get a current list of operating system updates available. In the **Scheduling** section, you can either **create a maintenance configurati ## Next steps -* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To view update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). 
* To troubleshoot issues, see [Troubleshoot Azure Update Manager (preview)](troubleshoot.md). |
update-center | Configure Wu Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/configure-wu-agent.md | Title: Configure Windows Update settings in Azure Update Manager (preview) -description: This article tells how to configure Windows update settings to work with Azure Update Manager (preview). + Title: Configure Windows Update settings in Azure Update Manager +description: This article tells how to configure Windows update settings to work with Azure Update Manager. Last updated 05/02/2023 -# Configure Windows update settings for Azure Update Manager (preview) +# Configure Windows update settings for Azure Update Manager -Azure Update Manager (preview) relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. Many of these settings can be managed by: +Azure Update Manager relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. Many of these settings can be managed by: - Local Group Policy Editor - Group Policy - PowerShell - Directly editing the Registry -The Update Manager (preview) respects many of the settings specified to control the Windows Update client. If you use settings to enable non-Windows updates, the Update Manager (preview) will also manage those updates. If you want to enable downloading of updates before an update deployment occurs, update deployment can be faster, more efficient, and less likely to exceed the maintenance window. +The Update Manager respects many of the settings specified to control the Windows Update client. 
If you use settings to enable non-Windows updates, the Update Manager will also manage those updates. If you want to enable downloading of updates before an update deployment occurs, update deployment can be faster, more efficient, and less likely to exceed the maintenance window. For additional recommendations on setting up WSUS in your Azure subscription and to keep your Windows virtual machines secure and up to date, review [Plan your deployment for updating Windows virtual machines in Azure using WSUS](/azure/architecture/example-scenario/wsus). ## Pre-download updates -To configure the automatic downloading of updates without automatically installing them, you can use Group Policy to [configure the Automatic Updates setting](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates) to 3. This setting enables downloads of the required updates in the background, and notifies you that the updates are ready to install. In this way, Update Manager (preview) remains in control of schedules, but allows downloading of updates outside the maintenance window. This behavior prevents `Maintenance window exceeded` errors in Update Manager (preview) +To configure the automatic downloading of updates without automatically installing them, you can use Group Policy to [configure the Automatic Updates setting](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates) to 3. This setting enables downloads of the required updates in the background, and notifies you that the updates are ready to install. In this way, Update Manager remains in control of schedules, but allows downloading of updates outside the maintenance window. 
This behavior prevents `Maintenance window exceeded` errors in Update Manager. You can enable this setting in PowerShell: By default, the Windows Update client is configured to provide updates only for Use one of the following options to perform the settings change at scale: -- For Servers configured to patch on a schedule from Update Manager (preview) (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than server 2016, Run the following PowerShell script on the server you want to change.+- For servers configured to patch on a schedule from Update Manager (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than Server 2016, run the following PowerShell script on the server you want to change. ```powershell $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager") Use one of the following options to perform the settings change at scale: $ServiceManager.AddService2($ServiceId,7,"") ``` -- For servers running Server 2016 or later which are not using Update Manager (preview) scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](/troubleshoot/windows-client/group-policy/create-and-manage-central-store).+- For servers running Server 2016 or later which are not using Update Manager scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](/troubleshoot/windows-client/group-policy/create-and-manage-central-store). 
## Configure a Windows server for Microsoft updates If your machine is patched using Automation Update management, and has Automatic ## Make WSUS configuration settings -Update Manager (preview) supports WSUS settings. You can specify sources for scanning and downloading updates using instructions in [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings#specify-intranet-microsoft-update-service-location). By default, the Windows Update client is configured to download updates from Windows Update. When you specify a WSUS server as a source for your machines, the update deployment fails, if the updates aren't approved in WSUS. +Update Manager supports WSUS settings. You can specify sources for scanning and downloading updates using instructions in [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings#specify-intranet-microsoft-update-service-location). By default, the Windows Update client is configured to download updates from Windows Update. When you specify a WSUS server as a source for your machines, the update deployment fails if the updates aren't approved in WSUS. To restrict machines to the internal update service, see [do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#do-not-connect-to-any-windows-update-internet-locations). |
update-center | Deploy Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/deploy-updates.md | Title: Deploy updates and track results in Azure Update Manager (preview) -description: This article details how to use Azure Update Manager (preview) in the Azure portal to deploy updates and view results for supported machines. + Title: Deploy updates and track results in Azure Update Manager +description: This article details how to use Azure Update Manager in the Azure portal to deploy updates and view results for supported machines. Last updated 08/08/2023 -# Deploy updates now and track results with Azure Update Manager (preview) +# Deploy updates now and track results with Azure Update Manager **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -This article describes how to perform an on-demand update on a single virtual machine (VM) or multiple VMs by using Azure Update Manager (preview). +This article describes how to perform an on-demand update on a single virtual machine (VM) or multiple VMs by using Azure Update Manager. See the following sections for more information: See the following sections for more information: ## Supported regions -Update Manager (preview) is available in all [Azure public regions](support-matrix.md#supported-regions). +Update Manager is available in all [Azure public regions](support-matrix.md#supported-regions). ## Configure reboot settings The registry keys listed in [Configure automatic updates by editing the registry ## Install updates on a single VM -You can install updates from **Overview** or **Machines** on the **Update Manager (preview)** page or from the selected VM. +You can install updates from **Overview** or **Machines** on the **Update Manager** page or from the selected VM. 
# [From Overview pane](#tab/install-single-overview) To install one-time updates on a single VM: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. On **Update Manager (preview)** > **Overview**, select your subscription and select **One-time update** to install updates. +1. On **Update Manager** > **Overview**, select your subscription and select **One-time update** to install updates. :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Screenshot that shows an example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png"::: To install one-time updates on a single VM: If your deployment is meant to apply only for a select set of updates, it's necessary to clear all the preselected update classifications when you configure the **Inclusion/exclusion** updates described in the following steps. This action ensures only the updates you've specified to include in this deployment are installed on the target machine. > [!NOTE]- > - **Selected Updates** shows a preview of OS updates that you can install based on the last OS update assessment information available. If the OS update assessment information in Update Manager (preview) is obsolete, the actual updates installed would vary. Especially if you've chosen to install a specific update category, where the OS updates applicable might vary as new packages or KB IDs might be available for the category. - > - Update Manager (preview) doesn't support driver updates. + > - **Selected Updates** shows a preview of OS updates that you can install based on the last OS update assessment information available. If the OS update assessment information in Update Manager is obsolete, the actual updates installed might vary, especially if you've chosen to install a specific update category, where the applicable OS updates might vary as new packages or KB IDs become available for the category. 
+ > - Update Manager doesn't support driver updates. - Select **Include update classification**. Select the appropriate classifications that must be installed on your machines. :::image type="content" source="./media/deploy-updates/include-update-classification-inline.png" alt-text="Screenshot that shows update classification." lightbox="./media/deploy-updates/include-update-classification-expanded.png"::: - - Select **Include KB ID/package** to include in the updates. Enter a comma separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, use `3103696` or `3134815`. For Windows, you can refer to the [MSRC webpage](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base release. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, use `kernel*`, `glibc`, or `libc=1.0.1`. Based on the options specified, Update Manager (preview) shows a preview of OS updates under the **Selected Updates** section. + - Select **Include KB ID/package** to include in the updates. Enter a comma separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, use `3103696` or `3134815`. For Windows, you can refer to the [MSRC webpage](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base release. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, use `kernel*`, `glibc`, or `libc=1.0.1`. Based on the options specified, Update Manager shows a preview of OS updates under the **Selected Updates** section. - To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend selecting this option because updates that aren't displayed here might be installed, as newer updates might be available. 
- To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date**. Select the date and select **Add** > **Next**. To install one-time updates on a single VM: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. On **Update Manager (preview)** > **Machine**, select your subscription, select your machine, and select **One-time update** to install updates. +1. On **Update Manager** > **Machine**, select your subscription, select your machine, and select **One-time update** to install updates. 1. Select **Install now** to proceed with installing updates. You can schedule updates. 1. Sign in to the [Azure portal](https://portal.azure.com). -1. On **Update Manager (preview)** > **Overview**, select your subscription and select **One-time update** > **Install now** to install updates. +1. On **Update Manager** > **Overview**, select your subscription and select **One-time update** > **Install now** to install updates. :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Screenshot that shows installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png"::: After your scheduled deployment starts, you can see its status on the **History* :::image type="content" source="./media/deploy-updates/updates-history-inline.png" alt-text="Screenshot that shows update history." lightbox="./media/deploy-updates/updates-history-expanded.png"::: -**Windows update history** currently doesn't show the updates that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update Manager (preview)** > **Manage** > **History**. +**Windows update history** currently doesn't show the updates that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update Manager** > **Manage** > **History**. 
A list of the deployments created are shown in the update deployment grid and include relevant information about the deployment. Every update deployment has a unique GUID, which is represented as **Operation ID**. It's listed along with **Status**, **Updates Installed**, and **Time** details. You can filter the results listed in the grid. Select any one of the update deployments from the list to open the **Update depl ## Next steps -* To view update assessment and deployment logs generated by Update Manager (preview), see [Query logs](query-logs.md). -* To troubleshoot issues, see [Troubleshoot issues with Azure Update Manager (preview)](troubleshoot.md). +* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot issues with Azure Update Manager](troubleshoot.md). |
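The one-time update choices described in this row (update classifications, KB IDs or packages to include or exclude, and a maximum patch publish date) correspond to the `installPatches` REST operation on a VM. A minimal sketch using `az rest` — `<subId>`, `<rg>`, and `<vmName>` are placeholders, and the `api-version` shown is an assumption that may differ from the one available in your environment:

```bash
# Sketch: trigger a one-time update install on an Azure VM.
# <subId>, <rg>, <vmName> are placeholders; the api-version is assumed.
# The body fields mirror the portal options above: classifications,
# KB include/exclude lists, and the maximum patch publish date.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vmName>/installPatches?api-version=2023-07-01" \
  --body '{
    "maximumDuration": "PT2H",
    "rebootSetting": "IfRequired",
    "windowsParameters": {
      "classificationsToInclude": ["Critical", "Security"],
      "kbNumbersToInclude": ["3103696", "3134815"],
      "kbNumbersToExclude": [],
      "maxPatchPublishDate": "2023-09-01T00:00:00Z"
    }
  }'
```

For Linux machines, a `linuxParameters` object with `packageNameMasksToInclude`/`packageNameMasksToExclude` plays the same role as the Windows KB lists.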
update-center | Dynamic Scope Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/dynamic-scope-overview.md | Title: An overview of dynamic scoping (preview) -description: This article provides information about dynamic scoping (preview), its purpose and advantages. + Title: An overview of Dynamic Scoping +description: This article provides information about Dynamic Scoping, its purpose and advantages. Last updated 07/05/2023 -# About Dynamic Scoping (preview) +# About Dynamic Scoping **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers. -Dynamic scoping (preview) is an advanced capability of schedule patching that allows users to: +Dynamic Scoping is an advanced capability of schedule patching that allows users to: - Group machines based on criteria such as subscription, resource group, location, resource type, OS Type, and Tags. This becomes the definition of the scope. - Associate the scope to a schedule/maintenance configuration to apply updates at scale as per a pre-defined scope. The criteria will be evaluated at the scheduled run time, which will be the fina ## Permissions -For dynamic scoping (preview) and configuration assignment, ensure that you have the following permissions: +For Dynamic Scoping and configuration assignment, ensure that you have the following permissions: - Write permissions to create or modify a schedule. - Read permissions to assign or read a schedule. |
update-center | Guidance Migration Automation Update Management Azure Update Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/guidance-migration-automation-update-management-azure-update-manager.md | + + Title: Guidance to move virtual machines from Automation Update Management to Azure Update Manager +description: Guidance overview on migration from Automation Update Management to Azure Update Manager +++ Last updated : 09/14/2023++++# Guidance to move virtual machines from Automation Update Management to Azure Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article provides guidance to move virtual machines from Automation Update Management to Azure Update Manager. ++Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multicloud environments. It's an evolution of the [Azure Automation Update management solution](../automation/update-management/overview.md), with new features and functionality for assessment and deployment of software updates on a single machine or on multiple machines at scale. ++Azure Update Manager doesn't require either the Azure Monitor agent (AMA) or the Log Analytics agent (MMA) to manage software update workflows, because it relies on the Microsoft Azure VM Agent for Azure VMs and the Azure connected machine agent for Arc-enabled servers. When you perform an update operation on a machine for the first time, an extension is pushed to the machine, and it interacts with the agents to assess missing updates and install updates. +++> [!NOTE] +> - If you are using the Azure Automation Update Management solution, we recommend that you don't remove MMA agents from the machines without completing the migration to Azure Update Manager for the machine's patch management needs. 
If you remove the MMA agent from the machine without moving to Azure Update Manager, it would break the patching workflows for that machine. +> +> - All capabilities of Azure Automation Update Management will be available on Azure Update Manager before the deprecation date. ++## Guidance to move virtual machines from Automation Update Management to Azure Update Manager ++Guidance to move various capabilities is provided in the table below: ++**S.No** | **Capability** | **Automation Update Management** | **Azure Update Manager** | **Steps using Azure portal** | **Steps using API/script** | + | | | | | | +1 | Patch management for Off-Azure machines. | Could run with or without Arc connectivity. | Azure Arc is a prerequisite for non-Azure machines. | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md) </br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md#azure-powershell) <br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | +2 | Enable periodic assessment to check for latest updates automatically every few hours. | Machines automatically receive the latest updates every 12 hours for Windows and every 3 hours for Linux. | Periodic assessment is an update setting on your machine. If it's turned on, the Update Manager fetches updates every 24 hours for the machine and shows the latest update status. | 1. [Single machine](manage-update-settings.md#configure-settings-on-a-single-vm) </br> 2. 
[At scale](manage-update-settings.md#configure-settings-at-scale) </br> 3. [At scale using policy](periodic-assessment-at-scale.md) | 1. [For Azure VM](../virtual-machines/automatic-vm-guest-patching.md#azure-powershell-when-updating-a-windows-vm) </br> 2.[For Arc-enabled VM](/powershell/module/az.connectedmachine/update-azconnectedmachine?view=azps-10.2.0) | +3 | Static Update deployment schedules (Static list of machines for update deployment). | Automation Update management had its own schedules. | Azure Update Manager creates a [maintenance configuration](../virtual-machines/maintenance-configurations.md) object for a schedule. So, you need to create this object, copying all schedule settings from Automation Update Management to Azure Update Manager schedule. | 1. [Single VM](scheduled-patching.md#schedule-recurring-updates-on-single-vm) </br> 2. [At scale](scheduled-patching.md#schedule-recurring-updates-at-scale) </br> 3. [At scale using policy](scheduled-patching.md#onboarding-to-schedule-using-policy) | [Create a static scope](manage-vms-programmatically.md) | +4 | Dynamic Update deployment schedules (Defining scope of machines using resource group, tags, etc. which is evaluated dynamically at runtime).| Same as static update schedules. | Same as static update schedules. | [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope-preview) | [Create a dynamic scope]( tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) | +5 | Deboard from Azure Automation Update management. | After you complete the steps 1, 2, and 3, you need to clean up Azure Update management objects. | | 1. [Remove machines from solution](../automation/update-management/remove-feature.md#remove-management-of-vms) </br> 2. [Remove Update Management solution](../automation/update-management/remove-feature.md#remove-updatemanagement-solution) </br> 3. 
[Unlink workspace from Automation account](../automation/update-management/remove-feature.md#unlink-workspace-from-automation-account) </br> 4. [Cleanup Automation account](../automation/update-management/remove-feature.md#cleanup-automation-account) | NA | +6 | Reporting | Custom update reports using Log Analytics queries. | Update data is stored in Azure Resource Graph (ARG). Customers can query ARG data to build custom dashboards, workbooks, and so on. | The old Automation Update Management data stored in Log Analytics can be accessed, but there's no provision to move data to ARG. You can write ARG queries to access data that will be stored to ARG after virtual machines are patched via Azure Update Manager. With ARG queries, you can build dashboards and workbooks by using the following instructions: </br> 1. [Log structure of Azure Resource Graph updates data](query-logs.md) </br> 2. [Sample ARG queries](sample-query-logs.md) </br> 3. [Create workbooks](manage-workbooks.md) | NA | +7 | Customize workflows using pre- and post-scripts. | Available as Automation runbooks. | We recommend that you use Automation runbooks once they are available. | | | +8 | Create alerts based on updates data for your environment. | Alerts can be set up on updates data stored in Log Analytics. | We recommend that you use alerts once they are available. | | | +++ +## Next steps +- [An overview of Azure Update Manager](overview.md) +- [Check update compliance](view-updates.md) +- [Deploy updates now (on-demand) for single machine](deploy-updates.md) +- [Schedule recurring updates](scheduled-patching.md) |
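Row 6 above describes ARG-based reporting. As a sketch of what such a query can look like — this assumes the `patchassessmentresources` Resource Graph table and the property names shown, which may differ in your environment:

```kusto
// Sketch: pending-update counts by classification for each machine.
// Assumes the patchassessmentresources ARG table; the properties referenced
// (osType, availablePatchCountByClassification, lastModifiedDateTime) are
// assumptions — verify them against the published log structure.
patchassessmentresources
| where type !has "softwarepatches"
| extend osType = tostring(properties.osType),
         criticalCount = tolong(properties.availablePatchCountByClassification.critical),
         securityCount = tolong(properties.availablePatchCountByClassification.security)
| project id, osType, criticalCount, securityCount,
          lastAssessedTime = todatetime(properties.lastModifiedDateTime)
```

A query like this can back a workbook tile or an Azure Monitor alert, covering rows 6 and 8 of the table.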
update-center | Guidance Migration Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/guidance-migration-azure.md | Deploy software updates (install patches) | Provides three modes of deploying up ## Manage software updates using Azure Update Manager -1. Sign in to the [Azure portal](https://portal.azure.com) and search for Azure Update Manager (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and search for Azure Update Manager. :::image type="content" source="./media/guidance-migration-azure/update-manager-service-selection-inline.png" alt-text="Screenshot of selecting the Azure Update Manager from Azure portal." lightbox="./media/guidance-migration-azure/update-manager-service-selection-expanded.png"::: -1. In the **Azure Update Manager (Preview)** home page, under **Manage** > **Machines**, select your subscription to view all your machines. +1. In the **Azure Update Manager** home page, under **Manage** > **Machines**, select your subscription to view all your machines. 1. Filter as per the available options to know the status of your specific machines. :::image type="content" source="./media/guidance-migration-azure/filter-machine-status-inline.png" alt-text="Screenshot of selecting the filters in Azure Update Manager to view the machines." lightbox="./media/guidance-migration-azure/filter-machine-status-expanded.png"::: |
update-center | Manage Arc Enabled Servers Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-arc-enabled-servers-programmatically.md | Title: Programmatically manage updates for Azure Arc-enabled servers in Azure Update Manager (preview) -description: This article tells how to use Azure Update Manager (preview) using REST API with Azure Arc-enabled servers. + Title: Programmatically manage updates for Azure Arc-enabled servers in Azure Update Manager +description: This article tells how to use Azure Update Manager using REST API with Azure Arc-enabled servers. -This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with Azure Update Manager (preview) in Azure. If you're new to Azure Update Manager (preview) and you want to learn more, see [overview of Update Manager (preview)](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md). +This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with Azure Update Manager in Azure. If you're new to Azure Update Manager and you want to learn more, see [overview of Update Manager](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md). -Update Manager (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure) for access programmatically. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure). +Update Manager in Azure enables you to use the [Azure REST API](/rest/api/azure) for access programmatically. 
Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure). -Support for Azure REST API to manage Azure Arc-enabled servers is available through the Update Manager (preview) virtual machine extension. +Support for Azure REST API to manage Azure Arc-enabled servers is available through the Update Manager virtual machine extension. ## Update assessment DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configur ## Next steps -* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). +* To view update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager. |
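As a minimal sketch of the REST flow this row describes, an on-demand assessment on an Arc-enabled server can be triggered through `az rest` — `<subId>`, `<rg>`, and `<machineName>` are placeholders, and the `api-version` is an assumption:

```bash
# Sketch: trigger an on-demand patch assessment on an Azure Arc-enabled server.
# <subId>, <rg>, <machineName> are placeholders; the api-version is assumed.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.HybridCompute/machines/<machineName>/assessPatches?api-version=2023-06-20-preview"
```

The operation is asynchronous; the response includes a URL you can poll for the assessment result.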
update-center | Manage Dynamic Scoping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-dynamic-scoping.md | Title: Manage various operations of dynamic scoping (preview). -description: This article describes how to manage dynamic scoping (preview) operations + Title: Manage various operations of Dynamic Scoping. +description: This article describes how to manage Dynamic Scoping operations This article describes how to view, add, edit and delete a dynamic scope (preview). ## Add a Dynamic scope (preview) To add a Dynamic scope to an existing configuration, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager. 1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. 1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to add a Dynamic scope. 1. In the given maintenance configuration page > select **Dynamic scopes** > **Add a dynamic scope**. To add a Dynamic scope to an existing configuration, follow these steps: To view the list of Dynamic scopes (preview) associated to a given maintenance configuration, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Update Manager (preview)**. +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Update Manager**. 1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. 1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to view the Dynamic scope. 1. In the given maintenance configuration page, select **Dynamic scopes** to view all the Dynamic scopes that are associated with the maintenance configuration. ## Edit a Dynamic scope (preview) -1. 
Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager. 1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. 1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to edit an existing Dynamic scope. 1. In the given maintenance configuration page > select **Dynamic scopes** and select the scope you want to edit. Under **Actions** column, select the edit icon. To view the list of Dynamic scopes (preview) associated to a given maintenance c ## Delete a Dynamic scope (preview) -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager. 1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. 1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to edit an existing Dynamic scope. 1. In the given maintenance configuration page > select **Dynamic scopes** and select the scope you want to delete. Select **Remove dynamic scope** and then select **Ok**. ## View patch history of a Dynamic scope (preview) -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager. 1. Select **History** > **Browse maintenance configurations** > **Maintenance configurations** to view the patch history of a dynamic scope. |
update-center | Manage Multiple Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-multiple-machines.md | Title: Manage multiple machines in Azure Update Manager (preview) -description: This article explains how to use Azure Update Manager (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal. + Title: Manage multiple machines in Azure Update Manager +description: This article explains how to use Azure Update Manager in Azure to manage multiple supported machines and view their compliance state in the Azure portal. Last updated 05/02/2023 -# Manage multiple machines with Azure Update Manager (preview) +# Manage multiple machines with Azure Update Manager **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. > [!IMPORTANT] > For a seamless scheduled patching experience, we recommend that for all Azure virtual machines (VMs), you update the patch orchestration to **Customer Managed Schedules (Preview)**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. For more information, see [Configure schedule patching on Azure VMs to ensure business continuity](prerequsite-for-schedule-patching.md). -This article describes the various features that Azure Update Manager (preview) offers to manage the system updates on your machines. By using the Update Manager (preview), you can: +This article describes the various features that Azure Update Manager offers to manage the system updates on your machines. By using the Update Manager, you can: - Quickly assess the status of available operating system updates. - Deploy updates. 
This article describes the various features that Azure Update Manager (preview) Instead of performing these actions from a selected Azure VM or Azure Arc-enabled server, you can manage all your machines in the Azure subscription. -## View Update Manager (preview) status +## View Update Manager status 1. Sign in to the [Azure portal](https://portal.azure.com). -1. To view update assessment across all machines, including Azure Arc-enabled servers, go to **Update Manager (preview)**. +1. To view update assessment across all machines, including Azure Arc-enabled servers, go to **Update Manager**. :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot that shows the Update Manager Overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png"::: Instead of performing these actions from a selected Azure VM or Azure Arc-enable - **Filters**: Use filters to focus on a subset of your resources. The selectors above the tiles return **Subscription**, **Resource group**, **Resource type** (Azure VMs and Azure Arc-enabled servers), **Location**, and **OS** type (Windows or Linux) based on the Azure role-based access rights you've been granted. You can combine filters to scope to a specific resource. - **Update status of machines**: Shows the update status information for assessed machines that had applicable or needed updates. You can filter the results based on classification types. By default, all [classifications](../automation/update-management/overview.md#update-classifications) are selected. According to the classification selection, the tile is updated. - The graph provides a snapshot for all your machines in your subscription, regardless of whether you've used Update Manager (preview) for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days. 
+ The graph provides a snapshot for all your machines in your subscription, regardless of whether you've used Update Manager for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days. From the assessment data available, machines are classified into the following categories: Instead of performing these actions from a selected Azure VM or Azure Arc-enable ## Summary of machine status -Update Manager (preview) in Azure enables you to browse information about your Azure VMs and Azure Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). +Update Manager in Azure enables you to browse information about your Azure VMs and Azure Arc-enabled servers across your Azure subscriptions relevant to Update Manager. This section shows how you can filter information to understand the update status of your machine resources. For multiple machines, you can see how to begin an update assessment, begin an update deployment, and manage their update settings. - On the **Update Manager (preview)** page, select **Machines** from the left menu. + On the **Update Manager** page, select **Machines** from the left menu. - :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot that shows the Update Manager (preview) Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png"::: + :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot that shows the Update Manager Machines page in the Azure portal." 
lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png"::: The table lists all the machines in the specified subscription, and for each machine it helps you understand the following details that show up based on the latest assessment: For machines that haven't had a compliance assessment scan for the first time, y :::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-complete-banner-inline.png" alt-text="Screenshot that shows an assessment banner on the Manage Machines page." lightbox="./media/manage-multiple-machines/update-center-assess-now-complete-banner-expanded.png"::: -Select a machine from the list to open Update Manager (preview) scoped to that machine. Here, you can view its detailed assessment status and update history, configure its patch orchestration options, and begin an update deployment. +Select a machine from the list to open Update Manager scoped to that machine. Here, you can view its detailed assessment status and update history, configure its patch orchestration options, and begin an update deployment. ### Deploy the updates You can create a recurring update deployment for your machines. Select your mach ## Update deployment history -Update Manager (preview) enables you to browse information about your Azure VMs and Azure Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). You can filter information to understand the update assessment and deployment history for multiple machines. On the **Update Manager (preview)** page, select **History** from the left menu. +Update Manager enables you to browse information about your Azure VMs and Azure Arc-enabled servers across your Azure subscriptions relevant to Update Manager. You can filter information to understand the update assessment and deployment history for multiple machines. On the **Update Manager** page, select **History** from the left menu. 
## Update deployment history by machines When you select any one maintenance run ID record, you can view an expanded stat The update assessment and deployment data are available for querying in Azure Resource Graph. You can apply this data to scenarios that include security compliance, security operations, and troubleshooting. Select **Go to resource graph** to go to the Azure Resource Graph Explorer. It enables running Resource Graph queries directly in the Azure portal. Resource Graph supports the Azure CLI, Azure PowerShell, Azure SDK for Python, and more. For more information, see [First query with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md). -When the Resource Graph Explorer opens, it's automatically populated with the same query used to generate the results presented in the table on the **History** page in Update Manager (preview). Ensure that you review [Overview of query logs in Azure Update Manager (preview)](query-logs.md) to learn about the log records and their properties, and the sample queries included. +When the Resource Graph Explorer opens, it's automatically populated with the same query used to generate the results presented in the table on the **History** page in Update Manager. Ensure that you review [Overview of query logs in Azure Update Manager](query-logs.md) to learn about the log records and their properties, and the sample queries included. ## Next steps * To set up and manage recurring deployment schedules, see [Schedule recurring updates](scheduled-patching.md).-* To view update assessment and deployment logs generated by Update Manager (preview), see [Query logs](query-logs.md). +* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). |
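The deployment history shown on the **History** page comes from Azure Resource Graph, so it can also be queried directly. A sketch — this assumes the `patchinstallationresources` table and these property names, which may differ in your environment:

```kusto
// Sketch: recent update installation runs and their outcomes per machine.
// Assumes the patchinstallationresources ARG table; the properties referenced
// (status, installedPatchCount, failedPatchCount, startDateTime) are
// assumptions — verify them against the published log structure.
patchinstallationresources
| where type !has "softwarepatches"
| project id, status = tostring(properties.status),
          installedPatchCount = tolong(properties.installedPatchCount),
          failedPatchCount = tolong(properties.failedPatchCount),
          startTime = todatetime(properties.startDateTime)
| order by startTime desc
```

This is the same data surfaced by the **Go to resource graph** link described above, so results should match the **History** grid.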
update-center | Manage Update Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md | Title: Manage update configuration settings in Azure Update Manager (preview) -description: This article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview). + Title: Manage update configuration settings in Azure Update Manager +description: This article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager. -This article describes how to configure update settings from Azure Update Manager (preview) to control the update settings on your Azure virtual machines (VMs) and Azure Arc-enabled servers for one or more machines. +This article describes how to configure update settings from Azure Update Manager to control the update settings on your Azure virtual machines (VMs) and Azure Arc-enabled servers for one or more machines. ## Configure settings on a single VM To configure update settings on your machines on a single VM: -You can schedule updates from **Overview** or **Machines** on the **Update Manager (preview)** page or from the selected VM. +You can schedule updates from **Overview** or **Machines** on the **Update Manager** page or from the selected VM. # [From Overview pane](#tab/manage-single-overview) You can schedule updates from **Overview** or **Machines** on the **Update Manag The following update settings are available for configuration for the selected machines: - **Periodic assessment**: The periodic assessment is set to run every 24 hours. You can either enable or disable this setting.- - **Hotpatch**: You can enable [hotpatching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition VMs. Hotpatching is a new way to install updates on supported Windows Server Azure Edition VMs that doesn't require a reboot after installation. 
You can use Update Manager (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable, or reset this setting. + - **Hotpatch**: You can enable [hotpatching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition VMs. Hotpatching is a new way to install updates on supported Windows Server Azure Edition VMs that doesn't require a reboot after installation. You can use Update Manager to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable, or reset this setting. - **Patch orchestration** option provides: - **Customer Managed Schedules (preview)**: Enables schedule patching on your existing VMs. The new patch orchestration option enables the two VM properties, `Patch mode = Azure-orchestrated` and `BypassPlatformSafetyChecksOnUserSchedule = TRUE`, on your behalf after receiving your consent. A notification appears to confirm that the update settings are successfully chan ## Next steps * [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Azure Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.-* To view update assessment and deployment logs generated by Update Manager (preview), see [Query logs](query-logs.md). -* To troubleshoot issues, see [Troubleshoot issues with Update Manager (preview)](troubleshoot.md). +* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot issues with Update Manager](troubleshoot.md). |
update-center | Manage Updates Customized Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-updates-customized-images.md | Title: Overview of customized images in Azure Update Manager (preview) + Title: Overview of customized images in Azure Update Manager description: This article describes customized image support, how to register and validate customized images for public preview, and limitations. This article describes customized image support, how to enable a subscription, a ## Asynchronous check to validate customized image support -If you're using Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update Manager (preview) operations such as **Check for updates**, **One-time update**, **Schedule updates**, or **Periodic assessment** to validate if the VMs are supported for guest patching. If the VMs are supported, you can begin patching. +If you're using Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update Manager operations such as **Check for updates**, **One-time update**, **Schedule updates**, or **Periodic assessment** to validate if the VMs are supported for guest patching. If the VMs are supported, you can begin patching. With marketplace images, support is validated even before Update Manager operation is triggered. Here, there are no preexisting validations in place and the Update Manager operations are triggered. Only their success or failure determines support. |
update-center | Manage Vms Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-vms-programmatically.md | Title: Programmatically manage updates for Azure VMs -description: This article tells how to use Azure Update Manager (preview) in Azure using REST API with Azure virtual machines. +description: This article describes how to use the REST API to manage updates on Azure virtual machines with Azure Update Manager. -This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with Azure Update Manager (preview) in Azure. If you're new to Update Manager (preview) and you want to learn more, see [overview of Azure Update Manager (preview)](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md). +This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with Azure Update Manager. If you're new to Update Manager and you want to learn more, see the [overview of Azure Update Manager](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md). -Azure Update Manager (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure/) for access programmatically. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure/) and [Azure CLI](/cli/azure/). 
-Support for Azure REST API to manage Azure VMs is available through the Update Manager (preview) virtual machine extension. +Support for Azure REST API to manage Azure VMs is available through the Update Manager virtual machine extension. ## Update assessment DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configur ## Next steps -* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Update Manager (preview). +* To view update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Update Manager. |
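The assessment trigger that this article covers can be sketched from the command line with `az rest`, which signs the request with your current Azure CLI credentials. This is a hedged illustration rather than the article's exact command: the angle-bracket placeholders must be replaced with your own IDs, and the `api-version` shown is an assumption that you should confirm against the current Microsoft.Compute REST reference.

```bash
# Trigger an on-demand patch assessment on an Azure VM. This is an
# asynchronous operation; poll the Azure-AsyncOperation URL returned in
# the response headers to track completion.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/<vmName>/assessPatches?api-version=2023-03-01"
```

A corresponding `installPatches` action exists for deployment; the same placeholder and api-version caveats apply.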
update-center | Manage Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-workbooks.md | Title: Create reports using workbooks in Azure Update Manager (preview). + Title: Create reports using workbooks in Azure Update Manager. description: This article describes how to create and manage workbooks for VM insights. Last updated 05/23/2023 -# Create reports in Azure Update Manager (preview) +# Create reports in Azure Update Manager **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. This article describes how to create a workbook and how to edit a workbook to cr ## Create a workbook -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). -1. Under **Monitoring**, select **Workbooks** to view the Update Manager (preview)| Workbooks|Gallery. +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager. +1. Under **Monitoring**, select **Workbooks** to view the Update Manager| Workbooks|Gallery. 1. Select **Quick start** tile > **Empty** or alternatively, you can select **+New** to create a workbook. 1. Select **+Add** to select any [elements](../azure-monitor/visualize/workbooks-create-workbook.md#create-a-new-azure-workbook) to add to the workbook. This article describes how to create a workbook and how to edit a workbook to cr 1. Select **Done Editing**. ## Edit a workbook-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). -1. Under **Monitoring**, select **Workbooks** to view the Update Manager (preview)| Workbooks|Gallery. -1. Select **Update Manager** tile > **Overview** to view the Update Manager (preview)|Workbooks|Overview page. +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager. +1. Under **Monitoring**, select **Workbooks** to view the Update Manager| Workbooks|Gallery. +1. Select **Update Manager** tile > **Overview** to view the Update Manager|Workbooks|Overview page. 1. Select your subscription, and select **Edit** to enable the edit mode for all four options. - Machines overall status & configuration |
update-center | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md | Title: Azure Update Manager (preview) overview -description: The article tells what Azure Update Manager (preview) in Azure is and the system updates for your Windows and Linux machines in Azure, on-premises, and other cloud environments. + Title: Azure Update Manager overview +description: This article describes Azure Update Manager and how it manages system updates for your Windows and Linux machines in Azure, on-premises, and in other cloud environments. Last updated 07/05/2023 -# About Azure Update Manager (preview) +# About Azure Update Manager > [!Important]-> - [Automation Update management](../automation/update-management/overview.md) relies on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. Update manager (preview) is the v2 version of Automation Update management and the future of Update management in Azure. Azure Update Manager (preview) is a native service in Azure and does not rely on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../azure-monitor/agents/agents-overview.md). -> - Guidance for migrating from Automation Update management to Update manager (preview) will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrate to Azure Monitoring agent until migration guidance is provided for Update management or else Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to Update Manager (preview). 
+> - [Automation Update management](../automation/update-management/overview.md) relies on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. Update Manager is the v2 version of Automation Update management and the future of Update management in Azure. Azure Update Manager is a native service in Azure and does not rely on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../azure-monitor/agents/agents-overview.md). +> - Guidance for migrating from Automation Update management to Update Manager will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrating to the Azure Monitor agent until migration guidance is provided for Update management, or else Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to Update Manager. -Update Manager (preview) is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. In addition, you can use the Update Manager (preview) to make real-time updates or schedule them within a defined maintenance window. +Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. In addition, you can use Update Manager to apply updates in real time or schedule them within a defined maintenance window. 
-You can use the Update Manager (preview) in Azure to: +You can use the Update Manager in Azure to: - Oversee update compliance for your entire fleet of machines in Azure, on-premises, and other cloud environments. - Instantly deploy critical updates to help secure your machines. You can use the Update Manager (preview) in Azure to: We also offer other capabilities to help you manage updates for your Azure Virtual Machines (VM) that you should consider as part of your overall update management strategy. Review the Azure VM [Update options](../virtual-machines/updates-maintenance-overview.md) to learn more about the options available. -Before you enable your machines for Update Manager (preview), make sure that you understand the information in the following sections. +Before you enable your machines for Update Manager, make sure that you understand the information in the following sections. > [!IMPORTANT]-> - Update Manager (preview) doesn't store any customer data. -> - Update Manager (preview) can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA). +> - Update Manager doesn't store any customer data. +> - Update Manager can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA). 
> - While Update Manager is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Key benefits -Update Manager (preview) has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update Manager (preview) offers many new features and provides enhanced functionality over the original version available with Azure Automation and some of those benefits are listed below: +Update Manager has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update Manager offers many new features and provides enhanced functionality over the original version available with Azure Automation; some of those benefits are listed below: - Provides native experience with zero on-boarding. - Built as native functionality on Azure Compute and Azure Arc for Servers platform for ease of use. Update Manager (preview) has been redesigned and doesn't depend on Azure Automat - Helps secure machines with new ways of patching such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hotpatching](../automanage/automanage-hotpatch.md) or custom maintenance schedules. - Sync patch cycles in relation to Patch Tuesday, the unofficial term for Microsoft's scheduled security fix release on every second Tuesday of each month. -The following diagram illustrates how Update Manager (preview) assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux. 
+The following diagram illustrates how Update Manager assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux. ![Update Manager workflow](./media/overview/update-management-center-overview.png) -To support management of your Azure VM or non-Azure machine, Update Manager (preview) relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any Update manager (preview) operations such as **check for updates**, **install one time update**, **periodic assessment** on your machine. The extension supports deployment to Azure VMs or Arc-enabled servers using the extension framework. The Update Manager (preview) extension is installed and managed using the following: +To support management of your Azure VM or non-Azure machine, Update Manager relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any Update Manager operations such as **check for updates**, **install one time update**, **periodic assessment** on your machine. The extension supports deployment to Azure VMs or Arc-enabled servers using the extension framework. The Update Manager extension is installed and managed using the following: - [Azure virtual machine Windows agent](../virtual-machines/extensions/agent-windows.md) or [Azure virtual machine Linux agent](../virtual-machines/extensions/agent-linux.md) for Azure VMs. - [Azure arc-enabled servers agent](../azure-arc/servers/agent-overview.md) for non-Azure Linux and Windows machines or physical servers. 
- The extension agent installation and configuration are managed by the Update Manager (preview). There's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The Update Manager (preview) extension runs code locally on the machine to interact with the operating system, and it includes: + The extension agent installation and configuration are managed by the Update Manager. There's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The Update Manager extension runs code locally on the machine to interact with the operating system, and it includes: - Retrieving the assessment information about the status of system updates specified by the Windows Update client or Linux package manager. - Initiating the download and installation of approved updates with the Windows Update client or Linux package manager. -All assessment information and update installation results are reported to Update Manager (preview) from the extension and is available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results. +All assessment information and update installation results are reported to Update Manager from the extension and are available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results. -The machines assigned to Update Manager (preview) report how up to date they're based on what source they're configured to synchronize with. 
[Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines can be configured to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or Microsoft Update which is by default, and Linux machines can be configured to report to a local or public YUM or APT package repository. If the Windows Update Agent is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft update, the results in Update Manager (preview) might differ from what Microsoft update shows. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository. +The machines assigned to Update Manager report how up to date they are, based on the source they're configured to synchronize with. [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines can be configured to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or Microsoft Update, which is the default, and Linux machines can be configured to report to a local or public YUM or APT package repository. If the Windows Update Agent is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft Update, the results in Update Manager might differ from what Microsoft Update shows. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository. >[!NOTE]-> You can manage your Azure VMs or Arc-enabled servers directly, or at-scale with Update Manager (preview). +> You can manage your Azure VMs or Arc-enabled servers directly, or at-scale with Update Manager. 
## Prerequisites-Along with the prerequisites listed below, see [support matrix](support-matrix.md) for Update Manager (preview). +Along with the prerequisites listed below, see [support matrix](support-matrix.md) for Update Manager. ### Role Arc enabled server | [Azure Connected Machine Resource Administrator](../azure-a ### Permissions -You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using the Update Manager (preview). +You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using the Update Manager. **Actions** |**Permission** |**Scope** | | | | You need the following permissions to create and manage update deployments. The For more information, see the [list of supported operating systems and VM images](support-matrix.md#supported-operating-systems). > [!NOTE]-> Currently, Update Manager (preview) has the following limitations regarding the operating system support: +> Currently, Update Manager has the following limitations regarding the operating system support: > - Marketplace images other than the [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images) are currently not supported.-> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager (preview). +> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager. 
> -> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) till the support is available in Update Manager (preview). [Learn more](support-matrix.md#supported-operating-systems). +> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) till the support is available in Update Manager. [Learn more](support-matrix.md#supported-operating-systems). ## VM Extensions To view the available extensions for a VM in the Azure portal, follow these step ### Network planning -To prepare your network to support Update Manager (preview), you may need to configure some infrastructure components. +To prepare your network to support Update Manager, you may need to configure some infrastructure components. For Windows machines, you must allow traffic to any endpoints required by Windows Update agent. You can find an updated list of required endpoints in [Issues related to HTTP/Proxy](/windows/deployment/update/windows-update-troubleshooting#issues-related-to-httpproxy). If you have a local [WSUS](/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) (WSUS) deployment, you must also allow traffic to the server specified in your [WSUS key](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry). |
update-center | Periodic Assessment At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/periodic-assessment-at-scale.md | Title: Enable periodic assessment using policy -description: This article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview). +description: This article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager. -This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, Update Manager (preview) fetches updates on your machine once every 24 hours. +This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, Update Manager fetches updates on your machine once every 24 hours. ## Enable Periodic assessment for your Azure machines using Policy You can monitor compliance of resources under **Compliance** and remediation sta ## Next steps * [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.-* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). 
+* To view update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager. |
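Alongside the policy compliance view described above, you can audit which Azure VMs already report periodic assessment by querying their patch settings in Azure Resource Graph. This is a minimal sketch, assuming an authenticated Azure CLI session with the `resource-graph` extension installed; the Windows property path is shown, and Linux VMs carry the equivalent setting under `linuxConfiguration.patchSettings`, so adapt the query for mixed fleets.

```bash
# VMs whose assessmentMode is 'AutomaticByPlatform' have periodic
# assessment enabled (Windows property path shown).
az graph query -q "
resources
| where type =~ 'microsoft.compute/virtualmachines'
| extend assessmentMode = tostring(properties.osProfile.windowsConfiguration.patchSettings.assessmentMode)
| project name, resourceGroup, assessmentMode
" --output table
```

Machines that come back empty or `ImageDefault` are candidates for remediation through the policy assignment.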
update-center | Prerequsite For Schedule Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/prerequsite-for-schedule-patching.md | Title: Configure schedule patching on Azure VMs to ensure business continuity in Azure Update Manager (preview). -description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager (preview). + Title: Configure schedule patching on Azure VMs to ensure business continuity in Azure Update Manager. +description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager. Last updated 05/09/2023 Additionally, in some instances, when you remove the schedule from a VM, there i To identify the list of VMs with the associated schedules for which you have to enable new VM property, follow these steps: -1. Go to **Update Manager (preview)** home page and select **Machines** tab. +1. Go to **Update Manager** home page and select **Machines** tab. 1. In **Patch orchestration** filter, select **Azure Managed - Safe Deployment**. 1. Use the **Select all** option to select the machines and then select **Export to CSV**. 1. Open the CSV file and in the column **Associated schedules**, select the rows that have an entry. You can update the patch orchestration option for existing VMs that either alrea To update the patch mode, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).-1. Go to **Update Manager (preview)**, select **Update Settings**. +1. Go to **Update Manager**, select **Update Settings**. 1. In **Change update settings**, select **+Add machine**. 1. In **Select resources**, select your VMs and then select **Add**. 1. In **Change update settings**, under **Patch orchestration**, select *Customer Managed Schedules* and then select **Save**. To update the patch mode, follow these steps: To update the patch mode, follow these steps: 1. 
Sign in to the [Azure portal](https://portal.azure.com).-1. Go to **Update Manager (preview)**, select **Update Settings**. +1. Go to **Update Manager**, select **Update Settings**. 1. In **Change update settings**, select **+Add machine**. 1. In **Select resources**, select your VMs and then select **Add**. 1. In **Change update settings**, under **Patch orchestration**, select ***Azure Managed - Safe Deployment*** and then select **Save**. Scenario 8 | No | False | No | Neither the autopatch nor the schedule patch will ## Next steps -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager. |
update-center | Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/query-logs.md | Title: Query logs and results from Update Manager (preview) -description: The article provides details on how you can review logs and search results from update manager (preview) in Azure using Azure Resource Graph + Title: Query logs and results from Update Manager +description: The article provides details on how you can review logs and search results from Update Manager in Azure using Azure Resource Graph Last updated 04/21/2022 -# Overview of query logs in Azure Update Manager (preview) +# Overview of query logs in Azure Update Manager -Logs created from operations like update assessments and installations are stored by Update Manager (preview) in an [Azure Resource Graph](../governance/resource-graph/overview.md). The Azure Resource Graph is a service in Azure designed to be the store for Azure service details without any cost or deployment requirements. Update Manager (preview) uses the Azure Resource Graph to store its results, and you can view the update history of the last 30 days from the resources. +Logs created from operations like update assessments and installations are stored by Update Manager in an [Azure Resource Graph](../governance/resource-graph/overview.md). The Azure Resource Graph is a service in Azure designed to be the store for Azure service details without any cost or deployment requirements. Update Manager uses the Azure Resource Graph to store its results, and you can view the update history of the last 30 days from the resources. Azure Resource Graph's query language is based on the [Kusto query language](../governance/resource-graph/concepts/query-language.md) used by Azure Data Explorer. 
-The article describes the structure of the logs from Update Manager (preview) and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs. +This article describes the structure of the logs from Update Manager and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs. ## Log structure -Update Manager (preview) sends the results of all its operation into Azure Resource Graph as logs, which are available for 30 days. Listed below are the structure of logs being sent to Azure Resource Graph. +Update Manager sends the results of all its operations to Azure Resource Graph as logs, which are available for 30 days. Listed below is the structure of the logs sent to Azure Resource Graph. ### Patch assessment results If the `PROPERTIES` property for the resource type is `patchassessmentresults/so |`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of when an update was provided by the OS vendor, then the value is null.| |`classifications` |Category to which the specific update belongs as per the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of category, then the value is `Others` (for Linux) or `Updates` (for Windows Server). | |`rebootRequired` |Value indicates if the specific update requires the OS to reboot to complete the installation. Information is generated by the machine's OS update service or package manager. 
If your OS package manager or update service doesn't require a reboot, then the value is `false`.|-|`rebootBehavior` |Behavior set in the OS update installation runs job when configuring the update deployment if Update Manager (preview) can reboot the target machine. | +|`rebootBehavior` |Behavior set in the OS update installation job when configuring the update deployment, indicating whether Update Manager can reboot the target machine. | |`patchName` |Name or label for the specific update generated by the machine's OS package manager or update service.| |`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service.| |`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by the Linux package manager. For example, `1.0.1.el7.3`.| If the `PROPERTIES` property for the resource type is `patchinstallationresults/ |`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of when an update was provided by the OS vendor, then the value is null. | |`classifications` |Category that the specific update belongs to, as per the OS vendor. As provided by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of category, then the value of the field will be `Others` (for Linux) and `Updates` (for Windows Server). | |`rebootRequired` |Flag to specify if the specific update requires the OS to reboot to complete installation. As provided by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide information regarding the need for an OS reboot, then the value of the field will be set to `false`. 
|-|`rebootBehavior` |Behavior set in the OS update installation runs job by user, regarding allowing Update Manager (preview) to reboot the OS. | +|`rebootBehavior` |Behavior set in the OS update installation job by the user, regarding allowing Update Manager to reboot the OS. | |`patchName` |Name or label for the specific update as provided by the machine's OS package manager or update service. | |`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service. | |`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by the Linux package manager. For example, `1.0.1.el7.3`. | If the `PROPERTIES` property for the resource type is `configurationassignments` ## Next steps - For details of sample queries, see [Sample query logs](sample-query-logs.md).-- To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Update Manager (preview).+- To troubleshoot issues with Update Manager, see [Troubleshoot](troubleshoot.md). |
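The log structure described in the query-logs article above can be explored in Azure Resource Graph Explorer. The sketch below is illustrative only: the `maintenanceresources` table appears in the related sample-query-logs article, but the `type` filter value and projected columns are assumptions inferred from the property descriptions above, not verified field names.

```kusto
// Illustrative sketch: list recent Update Manager installation results
// stored in Azure Resource Graph. The "type" filter is an assumption
// based on the patchinstallationresults structure described above.
maintenanceresources
| where type has "patchinstallationresults"
| project name, properties
| limit 10
```

A query like this can be pasted into Azure Resource Graph Explorer in the portal; for verified queries, defer to the sample-query-logs article.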
update-center | Quickstart On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/quickstart-on-demand.md | Title: Quickstart - deploy updates using Update Manager in the Azure portal -description: This quickstart helps you to deploy updates immediately and view results for supported machines in Azure Update Manager (preview) using the Azure portal. +description: This quickstart helps you to deploy updates immediately and view results for supported machines in Azure Update Manager using the Azure portal. Last updated 04/21/2022 -Using the Update Manager (preview) you can update automatically at scale with the help of built-in policies and schedule updates on a recurring basis or you can also take control by checking and installing updates manually. +Using Update Manager, you can update automatically at scale with the help of built-in policies, schedule updates on a recurring basis, or take control by checking and installing updates manually. This quickstart shows you how to perform a manual assessment and apply updates on selected Azure virtual machines or Arc-enabled servers on-premises or in cloud environments. This quickstart details you how to perform manual assessment and apply updates o ## Check updates -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager. 1. Select **Getting started**, **On-demand assessment and updates**, select **Check for updates**. For the assessed machines that are reporting updates, you can configure [hotpatc To configure the settings on your machines, follow these steps: -1. In **Update Manager (preview)|Getting started**, in **On-demand assessment and updates**, select **Update settings**. +1. In **Update Manager|Getting started**, in **On-demand assessment and updates**, select **Update settings**. 
On the **Change update settings** page, **Properties** is selected by default. 1. Select from the list of update settings to apply them to the selected machines. To configure the settings on your machines, follow these steps: As per the last assessment performed on the selected machines, you can now select resources and machines to install the updates. -1. In the **Update Manager (preview)|Getting started** page, in **On-demand assessment and updates**, select **Install updates by machines**. +1. In the **Update Manager|Getting started** page, in **On-demand assessment and updates**, select **Install updates by machines**. 1. In the **Install one-time updates** page, select one or more machines from the list in the **Machines** tab and click **Next**. |
update-center | Sample Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/sample-query-logs.md | Title: Sample query logs and results from Azure Update Manager (preview) -description: The article provides details of sample query logs from Azure Update Manager (preview) in Azure using Azure Resource Graph + Title: Sample query logs and results from Azure Update Manager +description: The article provides details of sample query logs from Azure Update Manager in Azure using Azure Resource Graph maintenanceresources ``` ## Next steps-- Review logs and search results from Update Manager (preview) in Azure using [Azure Resource Graph](query-logs.md).-- Troubleshoot issues in Update Manager (preview), see the [Troubleshoot](troubleshoot.md).+- Review logs and search results from Update Manager in Azure using [Azure Resource Graph](query-logs.md). +- To troubleshoot issues in Update Manager, see [Troubleshoot](troubleshoot.md). |
update-center | Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md | Title: Scheduling recurring updates in Azure Update Manager (preview) -description: The article details how to use Azure Update Manager (preview) in Azure to set update schedules that install recurring updates on your machines. + Title: Scheduling recurring updates in Azure Update Manager +description: The article details how to use Azure Update Manager in Azure to set update schedules that install recurring updates on your machines. Last updated 05/30/2023 -You can use Update Manager (preview) in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. This schedule will then automatically install the updates as per the created schedule for single VM and at scale. +You can use Update Manager in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly, or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. This schedule will then automatically install the updates as per the created schedule, for a single VM or at scale. -Update Manager (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control). +Update Manager uses the maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control). ## Prerequisites for scheduled patching -1. 
See [Prerequisites for Update Manager (preview)](./overview.md#prerequisites) +1. See [Prerequisites for Update Manager](./overview.md#prerequisites) 1. Patch orchestration of the Azure machines should be set to **Customer Managed Schedules (Preview)**. For more information, see [how to enable schedule patching on existing VMs](prerequsite-for-schedule-patching.md#enable-schedule-patching-on-azure-vms). For Azure Arc-enabled machines, it isn't a requirement. > [!Note] The following are the recommended limits for the mentioned indicators: ## Schedule recurring updates on single VM >[!NOTE]-> You can schedule updates from the Overview or Machines blade in Update Manager (preview) page or from the selected VM. +> You can schedule updates from the Overview or Machines blade on the Update Manager page, or from the selected VM. # [From Overview blade](#tab/schedule-updates-single-overview) To schedule recurring updates on a single VM, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Update Manager (preview)**, **Overview**, select your **Subscription**, and select **Schedule updates**. +1. In **Update Manager**, **Overview**, select your **Subscription**, and select **Schedule updates**. 1. In **Create new maintenance configuration**, you can create a schedule for a single VM. To schedule recurring updates on a single VM, follow these steps: 1. In the **Updates** page, specify the updates to include in the deployment, such as update classification(s) or KB ID/packages that must be installed when you trigger your schedule. > [!Note] - > Update Manager (preview) doesn't support driver updates. + > Update Manager doesn't support driver updates. 1. In the **Tags** page, assign tags to maintenance configurations. To schedule recurring updates on a single VM, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. 
In **Update Manager (preview)**, **Machines**, select your **Subscription**, select your machine and select **Schedule updates**. +1. In **Update Manager**, **Machines**, select your **Subscription**, select your machine, and select **Schedule updates**. 1. In **Create new maintenance configuration**, you can create a schedule for a single VM, and assign the machine and tags. Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule. To schedule recurring updates at scale, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Update Manager (preview)**, **Overview**, select your **Subscription** and select **Schedule updates**. +1. In **Update Manager**, **Overview**, select your **Subscription** and select **Schedule updates**. 1. In the **Create new maintenance configuration** page, you can create a schedule for multiple machines. To schedule recurring updates at scale, follow these steps: 1. In the **Updates** page, specify the updates to include in the deployment, such as update classification(s) or KB ID/packages that must be installed when you trigger your schedule. > [!Note] - > Update Manager (preview) doesn't support driver updates. + > Update Manager doesn't support driver updates. 1. In the **Tags** page, assign tags to maintenance configurations. To schedule recurring updates at scale, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Update Manager (preview)**, **Machines**, select your **Subscription**, select your machines and select **Schedule updates**. +1. In **Update Manager**, **Machines**, select your **Subscription**, select your machines, and select **Schedule updates**. In **Create new maintenance configuration**, you can create a schedule for multiple machines. 
Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule. You can create a new Guest OS update maintenance configuration or modify an exis ## Onboarding to Schedule using Policy -The update Manager (preview) allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. The grouping using policy, keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags or regions to define the scope and use this feature for the built-in policies which you can customize as per your use-case. +Update Manager allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. Grouping by using policy keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags, or regions to define the scope, and use this feature for the built-in policies, which you can customize as per your use case. > [!NOTE] > This policy also ensures that the patch orchestration property for Azure machines is set to **Customer Managed Schedules (Preview)** as it is a prerequisite for scheduled patching. You can check the deployment status and history of your maintenance configuratio ## Next steps -* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). +* To view update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). +* To troubleshoot issues with Update Manager, see [Troubleshoot](troubleshoot.md). |
update-center | Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md | Title: Azure Update Manager (preview) support matrix + Title: Azure Update Manager support matrix description: Provides a summary of supported regions and operating system settings. -# Support matrix for Azure Update Manager (preview) +# Support matrix for Azure Update Manager -This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Update Manager (preview) including the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure VMs or machines managed by Arc-enabled servers. +This article details the supported Windows and Linux operating systems and the system requirements for machines or servers managed by Update Manager, including the supported regions and the specific versions of the Windows Server and Linux operating systems running on Azure VMs or machines managed by Arc-enabled servers. ## Update sources supported -**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, based on the WSUS's last synchronization with Microsoft update, the results in the Update Manager (preview) might differ to what the Microsoft update shows. You can specify sources for scanning and downloading updates using [specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). 
To restrict machines to the internal update service, see [Do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations) +**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, then based on the WSUS's last synchronization with Microsoft Update, the results in Update Manager might differ from what Microsoft Update shows. You can specify sources for scanning and downloading updates using [specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). To restrict machines to the internal update service, see [Do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations) -**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in Update Manager (preview) depend on where the machines are configured to report. +**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in Update Manager depend on where the machines are configured to report. ## Types of updates supported ### Operating system updates-Update Manager (preview) supports operating system updates for both Windows and Linux. +Update Manager supports operating system updates for both Windows and Linux. 
> [!NOTE]-> Update Manager (preview) doesn't support driver Updates. +> Update Manager doesn't support driver updates. ### First party updates on Windows By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products, including security patches for Microsoft SQL Server and other Microsoft software. Use one of the following options to perform the settings change at scale: ## Supported regions -Update Manager (preview) will scale to all regions for both Azure VMs and Azure Arc-enabled servers. Listed below are the Azure public cloud where you can use Update Manager (preview). +Update Manager will scale to all regions for both Azure VMs and Azure Arc-enabled servers. Listed below are the Azure public clouds where you can use Update Manager. # [Azure virtual machine](#tab/azurevm) -Update Manager (preview) is available in all Azure public regions where compute virtual machines are available. +Update Manager is available in all Azure public regions where compute virtual machines are available. # [Azure Arc-enabled servers](#tab/azurearc)-Update Manager (preview) is supported in the following regions currently. It implies that VMs must be in below regions: +Update Manager is supported in the following regions currently. 
This implies that VMs must be in one of the regions below: **Geography** | **Supported Regions** | Africa | South Africa North Asia Pacific | East Asia </br> South East Asia-Australia | Australia East +Australia | Australia East </br> Australia Southeast Brazil | Brazil South Canada | Canada Central </br> Canada East Europe | North Europe </br> West Europe France | France Central India | Central India Japan | Japan East Korea | Korea Central+Norway | Norway East Sweden | Sweden Central Switzerland | Switzerland North United Kingdom | UK South </br> UK West United States | Central US </br> East US </br> East US 2</br> North Central US < > [!NOTE] > - All operating systems are assumed to be x64. x86 isn't supported for any operating system.-> - Update Manager (preview) doesn't support CIS hardened images. +> - Update Manager doesn't support CIS hardened images. # [Azure VMs](#tab/azurevm-os) > [!NOTE] > Currently, Update Manager has the following limitation regarding operating system support: -> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager (preview). +> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager. >-> For the above limitation, we recommend that you use [Automation Update management](../automation/update-management/overview.md) till the support is available in Update Manager (preview). +> For the above limitation, we recommend that you use [Automation Update management](../automation/update-management/overview.md) until support is available in Update Manager. 
### Marketplace/PIR images The following table lists the operating systems for marketplace images that aren ### Custom images -We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. Table below lists the operating systems that we support for generalized images. Refer to [custom images (preview)](manage-updates-customized-images.md) for instructions on how to start using Update Manager (preview) to manage updates on custom images. +We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. The table below lists the operating systems that we support for generalized images. Refer to [custom images (preview)](manage-updates-customized-images.md) for instructions on how to start using Update Manager to manage updates on custom images. |**Windows Operating System**| |-- | The table lists the operating systems supported on [Azure Arc-enabled servers](. |**Operating System**| |-|+ | Amazon Linux 2023 | | Windows Server 2012 R2 and higher (including Server Core) | | Windows Server 2008 R2 SP1 with PowerShell enabled and .NET Framework 4.0+ | | Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS | The following table lists the operating systems that aren't supported: | Azure Kubernetes Nodes| We recommend the patching described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](/azure/aks/node-updates-kured).| -As the Update Manager (preview) depends on your machine's OS package manager or update service, ensure that the Linux package manager, or Windows Update client are enabled and can connect with an update source or repository. +As Update Manager depends on your machine's OS package manager or update service, ensure that the Linux package manager or the Windows Update client is enabled and can connect with an update source or repository. 
If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md). ## Next steps |
update-center | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md | Title: Troubleshoot known issues with Azure Update Manager (preview) -description: The article provides details on the known issues and troubleshooting any problems with Azure Update Manager (preview). + Title: Troubleshoot known issues with Azure Update Manager +description: The article provides details on the known issues and troubleshooting any problems with Azure Update Manager. Last updated 05/30/2023 -# Troubleshoot issues with Azure Update Manager (preview) +# Troubleshoot issues with Azure Update Manager -This article describes the errors that might occur when you deploy or use Update Manager (preview), how to resolve them and the known issues and limitations of scheduled patching. +This article describes the errors that might occur when you deploy or use Update Manager, how to resolve them, and the known issues and limitations of scheduled patching. ## General troubleshooting Setting a longer time range for maximum duration when triggering an [on-demand u ## Next steps -* To learn more about Azure Update Manager (preview), see the [Overview](overview.md). -* To view logged results from all your machines, see [Querying logs and results from update Manager (preview)](query-logs.md). +* To learn more about Azure Update Manager, see the [Overview](overview.md). +* To view logged results from all your machines, see [Querying logs and results from Update Manager](query-logs.md). |
update-center | Tutorial Dynamic Grouping For Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/tutorial-dynamic-grouping-for-scheduled-patching.md | Title: Schedule updates on Dynamic scoping (preview). + Title: Schedule updates on Dynamic Scoping. description: In this tutorial, you learn how to group machines and dynamically apply updates at scale. Last updated 07/05/2023 To create a dynamic scope, follow these steps: #### [Azure portal](#tab/az-portal) -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager. 1. Select **Overview** > **Schedule updates** > **Create a maintenance configuration**. 1. In the **Create a maintenance configuration** page, enter the details in the **Basics** tab and select **Maintenance scope** as *Guest* (Azure VM, Arc-enabled VMs/servers). 1. Select **Dynamic Scopes** and follow the steps to [Add Dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope-preview). |
update-center | Update Manager Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/update-manager-faq.md | + + Title: Azure Update Manager FAQ +description: This article gives answers to frequently asked questions about Azure Update Manager ++ Last updated : 09/14/2023+++#Customer intent: As an implementer, I want answers to various questions. +++# Azure Update Manager frequently asked questions ++This FAQ is a list of commonly asked questions about Azure Update Manager. If you have any other questions about its capabilities, go to the discussion forum and post your questions. When a question is frequently asked, we add it to this article so that it's found quickly and easily. ++## What are the benefits of using Azure Update Manager over Automation Update Management? ++Azure Update Manager offers several benefits over the Automation Update Management solution. [Learn more](overview.md#key-benefits). +Following are a few benefits: +- Native experience with zero onboarding, and no dependency on other services like Automation and Log Analytics. +- On-demand operations to enable you to take immediate actions like Patch Now and Assess Now. +- Enhanced flexibility with options like [Automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hotpatching](/windows-server/get-started/hotpatch), or custom maintenance schedules. +- Granular access control at a VM level. +- Support for Azure Policy. +++## The LA agent (also known as MMA) is retiring and will be replaced with AMA. Can customers continue to use Automation Update Management with AMA? ++Azure Update Manager doesn't rely on MMA or AMA. Customers need to move from Automation Update Management to Azure Update Manager because the LA agent is retiring. However, note that customers shouldn't remove the MMA agent from machines using Automation Update Management before migrating to Azure Update Manager, or else the Automation Update Management solution won't work. 
+ ++## Will I be charged if I migrate to Azure Update Manager? +Azure Update Manager is free of charge for Azure machines. Azure Arc-enabled machines are charged up to $5/server/month, prorated at a daily level ($0.167/server/day). Example: if your Arc machines are turned off (not connected to Azure) for 20 days out of 30 days in a month, then you pay only for the 10 days when periodic assessment runs on your machine. So, you will pay approximately $0.167 * 10 = $1.67/server/month for those Arc machines. ++## How is the Azure Update Manager price calculated for Arc-enabled machines? +Azure Update Manager is free for machines hosted on Azure or Azure Stack HCI. For Arc-enabled servers, it's chargeable up to $5/server/month. It's charged at a daily prorated value of $0.167/server/day. This means that your Arc-enabled machine would only be charged for the days when it's considered managed by Azure Update Manager. ++> [!NOTE] +> A machine is considered managed by Update Management in a day if the following two conditions are met: +> +> 1. If the machine has **Connected** status for Arc at the time of operation (patched on demand or through a scheduled job/assessed on demand or through periodic assessment) or for a specific time of the day (in case it is associated with a schedule, even if no operations are performed on the day). +> +> 1. **A patch now or assess now operation is triggered for the machine in the day** or **the machine is assessed for pending patches through periodic assessment on the day**, or **the machine is associated with an active schedule on the day either statically or dynamically**. ++Following are the cases when Arc-enabled servers wouldn't be charged by Azure Update Manager: ++- As additional value added to the Arc ESUs, patch management using Azure Update Manager for machines enabled for extended support via Arc would be provided at no extra charge. 
+- Arc-enabled machines present in subscriptions enabled for Microsoft Defender for Servers Plan 2 would be provided at no additional charge. For all other Microsoft Defender for Cloud plans, Arc-enabled machines would be charged by Update Manager. ++## If I migrate to AMA while I'm still using Automation Update Management, will my solution break? ++Yes, MMA is a prerequisite for Automation Update Management to work. The ideal thing to do would be to migrate to the new Azure Update Manager and then make the move from MMA to AMA. The new Update Manager doesn't rely on MMA or AMA. ++## How does the new Azure Update Manager work on machines? ++Whenever you trigger any Azure Update Manager operation on your machine, it pushes an extension to your machine that interacts with the VM agent (for Azure machines) or the Arc agent (for Arc-enabled machines) to fetch and install updates. ++## Can I configure my machines to fetch updates from WSUS (Windows) and a private repository (Linux)? ++By default, Azure Update Manager relies on the Windows Update (WU) client running on your machine to fetch updates. You can configure the WU client to fetch updates from the Windows Update/Microsoft Update repository. Updates for Microsoft first-party products are published on the Microsoft Update repository. For more information, see how to [enable updates for other Microsoft products](configure-wu-agent.md#enable-updates-for-other-microsoft-products). ++Similarly for Linux, you can fetch updates by pointing your machine to a public repository, or clone a private repository that regularly pulls updates from the upstream. In a nutshell, Azure Update Manager honors machine settings and installs updates accordingly. ++## Where is update data stored in Azure Update Manager? ++All Azure Update Manager data is stored in Azure Resource Graph (ARG), which is free of cost. This is unlike Automation Update Management, which stored data in Log Analytics, where customers had to pay for the update data stored. 
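The daily proration described in the pricing answers above boils down to simple arithmetic. A minimal sketch follows: the $0.167/server/day rate and the $5/server/month figure come from this FAQ, while the function name, the cap behavior, and the rounding are illustrative assumptions, not an official billing formula.

```python
# Sketch of the Arc-enabled server proration described in this FAQ.
# The daily rate and monthly figure come from the FAQ text; treating
# $5/month as a cap and rounding to cents are illustrative assumptions.

DAILY_RATE = 0.167   # USD per server per day (from the FAQ)
MONTHLY_CAP = 5.00   # USD per server per month (from the FAQ)

def monthly_charge(days_managed: int) -> float:
    """Approximate monthly charge for one Arc-enabled server.

    days_managed: number of days in the month the machine was
    'managed' (Connected, and assessed/patched/scheduled that day).
    """
    return round(min(days_managed * DAILY_RATE, MONTHLY_CAP), 2)

# The FAQ's example: managed 10 days out of 30 -> about $1.67.
print(monthly_charge(10))
```

This mirrors the FAQ's worked example ($0.167 * 10 = $1.67/server/month); for machines managed every day of the month, the charge stays at the $5/server/month figure.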
++## Are all the operating systems supported in Automation Update Management supported by Azure Update Manager? ++We have tried our best to maintain operating system support parity. For details, see [Azure Update Manager OS support](support-matrix.md). ++## Will I lose my Automation Update Management update-related data if I migrate to Azure Update Manager? ++We won't migrate update-related data to Azure Resource Graph. However, you can refer to your historical data in the Log Analytics workspace that you were using in Automation Update Management. ++## Is the new Azure Update Manager dependent on Azure Automation and Log Analytics? ++No, it's a native capability on a virtual machine. ++## Do I need AMA for the new Azure Update Manager? ++No, it's a native capability on a virtual machine and doesn't rely on MMA or AMA. ++## If I have been using the pre- and post-script or alerting capability in Automation Update Management, will I be provided with migration guidance? ++Yes, when these features become available in Azure Update Manager, we'll publish migration guidance for them as well. ++## I have some reports/dashboards built for Automation Update Management. How do I migrate those? ++You can build dashboards/reports on Azure Resource Graph (ARG) data. For more information, see [how to query ARG data](query-logs.md) and [sample queries](sample-query-logs.md). You can also build workbooks on ARG data. We have a few built-in workbooks that you can modify as per your use case, or you can create a new one. For more information, see [how to create reports using workbooks](manage-workbooks.md). ++## I have been using saved searches in Automation Update Management for schedules. How do I migrate to Azure Update Manager? ++You can resolve machines manually for those saved searches, Arc-enable them, and then use the dynamic scoping feature to define the same scope of machines. 
[Learn more](manage-dynamic-scoping.md) ++## I'm a Defender for Servers customer and use the update recommendations powered by Azure Update Manager, namely *Periodic assessment should be enabled on your machines* and *System updates should be installed on your machines*. Will I be charged for Azure Update Manager? ++If you have purchased Defender for Servers Plan 2, then you won't have to pay to remediate the unhealthy resources for the above two recommendations. But if you're using any other Defender for Servers plan for your Arc machines, then those machines are charged by Azure Update Manager at the daily prorated $0.167/server. ++## I have been using Automation Update Management for free on Arc machines. Will I have to pay to use Azure Update Manager on those machines? ++We'll provide Azure Update Manager for free for one year (starting from when Azure Update Manager goes GA) to all subscriptions that were using Automation Update Management on Arc-enabled machines for free. After this period, machines are charged. ++## Does Azure Update Manager support integration with Azure Lighthouse? ++Azure Update Manager doesn't officially support Azure Lighthouse integration. However, you can check whether the integration works in your dev environment. ++## I have been using Automation Update Management for client operating systems like Windows 10 and 11. Will I be able to migrate to Azure Update Manager? ++Automation Update Management never officially supported client devices ([learn more](../automation/update-management/operating-system-requirements.md#unsupported-operating-systems)). We maintain the same stance for the new Azure Update Manager. Intune is the suggested solution from Microsoft for client devices. ++## I'm using Automation Update Management on sovereign clouds; will I get region support in the new Azure Update Manager? ++Yes, support is made available for the sovereign clouds supported in Automation Update Management. 
++## Is the new Azure Update Manager compatible with SCCM? ++Unlike Automation Update Management, Azure Update Manager isn't compatible with SCCM. ++## I have machines across multiple subscriptions in Automation Update Management. Is this scenario supported in Azure Update Manager? ++Yes, Azure Update Manager supports multi-subscription scenarios. ++## Are there programmatic ways of onboarding Azure Update Manager? ++Yes, Azure Update Manager supports REST API, CLI, and PowerShell for [Azure machines](manage-vms-programmatically.md) and [Arc-enabled machines](manage-arc-enabled-servers-programmatically.md). ++## Is Arc connectivity a prerequisite for using Azure Update Manager on hybrid machines? ++Yes, Arc connectivity is a prerequisite for using Azure Update Manager on hybrid machines. ++## Does Azure Update Manager support Azure Policy? ++Yes, unlike Automation Update Management, the new Azure Update Manager supports update features via policies. For more information, see [how to enable periodic assessment at scale using Policy](periodic-assessment-at-scale.md) and [how to enable schedules on your machines at scale using Policy](scheduled-patching.md#onboarding-to-schedule-using-policy). + + +## Next steps ++- [An overview of Azure Update Manager](overview.md) +- [What's new in Azure Update Manager](whats-new.md) |
update-center | Updates Maintenance Schedules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md | Title: Updates and maintenance in Azure Update Manager (preview). -description: The article describes the updates and maintenance options available in Azure Update Manager (preview). + Title: Updates and maintenance in Azure Update Manager. +description: The article describes the updates and maintenance options available in Azure Update Manager. Last updated 05/23/2023 -# Update options in Azure Update Manager (preview) +# Update options in Azure Update Manager **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -This article provides an overview of the various update and maintenance options available by Update Manager (preview). +This article provides an overview of the various update and maintenance options available by Update Manager. -Update Manager (preview) provides you the flexibility to take an immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext) and so on. +Update Manager provides you the flexibility to take an immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext) and so on. ## Update Now/One-time update -Update Manager (preview) allows you to secure your machines immediately by installing updates on demand. 
To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-a-single-vm). +Update Manager allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-a-single-vm). ## Scheduled patching You can create a schedule on a daily, weekly or hourly cadence as per your requirement, specify the machines that must be updated as part of the schedule, and the updates that you must install. The schedule will then automatically install the updates as per the specifications. -Update Manager (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control). +Update Manager uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control). Start using [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules. > [!NOTE] This mode of patching allows operating system to automatically install updates a Hotpatching allows you to install updates on supported Windows Server Azure Edition virtual machines without requiring a reboot after installation. It reduces the number of reboots required on your mission critical application workloads running on Windows Server. For more information, see [Hotpatch for new virtual machines](../automanage/automanage-hotpatch.md) -Hotpatching property is available as a setting in Update Manager (preview) which you can enable by using Update settings flow. 
Refer to detailed instructions [here](manage-update-settings.md#configure-settings-on-a-single-vm) +Hotpatching property is available as a setting in Update Manager which you can enable by using Update settings flow. Refer to detailed instructions [here](manage-update-settings.md#configure-settings-on-a-single-vm) :::image type="content" source="media/updates-maintenance/hot-patch-inline.png" alt-text="Screenshot that shows the hotpatch option." lightbox="media/updates-maintenance/hot-patch-expanded.png"::: ## Next steps -* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). +* To view update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager. |
update-center | View Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/view-updates.md | Title: Check update compliance in Azure Update Manager (preview) -description: The article details how to use Azure Update Manager (preview) in the Azure portal to assess update compliance for supported machines. + Title: Check update compliance in Azure Update Manager +description: The article details how to use Azure Update Manager in the Azure portal to assess update compliance for supported machines. Last updated 05/31/2023 -# Check update compliance with Azure Update Manager (preview) +# Check update compliance with Azure Update Manager **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -This article details how to check the status of available updates on a single VM or multiple VMs using Update Manager (preview). +This article details how to check the status of available updates on a single VM or multiple VMs using Update Manager. ## Check updates on single VM >[!NOTE]-> You can check the updates from the Overview or Machines blade in Update Manager (preview) page or from the selected VM. +> You can check the updates from the Overview or Machines blade in Update Manager page or from the selected VM. # [From Overview blade](#tab/singlevm-overview) 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In Update Manager (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. +1. In Update Manager, **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. 1. In **Select resources and check for updates**, choose the machine for which you want to check the updates and select **Check for updates**. This article details how to check the status of available updates on a single VM 1. 
Sign in to the [Azure portal](https://portal.azure.com). -1. In Update Manager (preview), **Machines**, select your **Subscription** to view all your machines. +1. In Update Manager, **Machines**, select your **Subscription** to view all your machines. 1. Select your machine from the checkbox and select **Check for updates**, **Assess now** or alternatively, you can select your machine, in **Updates Preview**, select **Assess updates**, and in **Trigger assess now**, select **OK**. To check the updates on your machines at scale, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In Update Manager (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. +1. In Update Manager, **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. 1. In **Select resources and check for updates**, choose your machines for which you want to check the updates and select **Check for updates**. To check the updates on your machines at scale, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In Update Manager (preview), **Machines**, select your **Subscription** to view all your machines. +1. In Update Manager, **Machines**, select your **Subscription** to view all your machines. 1. Select the **Select all** to choose all your machines and select **Check for updates**. 1. Select **Assess now** to perform the assessment. - A notification appears when the operation is initiated and completed. After a successful scan, the **Update Manager (preview) | Machines** page is refreshed to display the updates. + A notification appears when the operation is initiated and completed. After a successful scan, the **Update Manager | Machines** page is refreshed to display the updates. 
> [!NOTE]-> In update Manager (preview), you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates including the security and critical updates. On Windows, the software update scan is performed by the Windows Update Agent. On Linux, the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL Definitions for that platform, which is retrieved from a local or remote repository. +> In Update Manager, you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates including the security and critical updates. On Windows, the software update scan is performed by the Windows Update Agent. On Linux, the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL Definitions for that platform, which is retrieved from a local or remote repository. ## Next steps * Learn about deploying updates on your machines to maintain security compliance by reading [deploy updates](deploy-updates.md).-* To view the update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Azure Update Manager (preview). +* To view the update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Azure Update Manager. |
update-center | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-new.md | Title: What's new in Azure Update Manager (preview) -description: Learn about what's new and recent updates in the Azure Update Manager (preview) service. + Title: What's new in Azure Update Manager +description: Learn about what's new and recent updates in the Azure Update Manager service. -# What's new in Azure Update Manager (Preview) +# What's new in Azure Update Manager -[Azure Update Manager (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Update Manager (preview). +[Azure Update Manager](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Update Manager. ## August 2023 ### New region support -Azure Update Manager (preview) is now available in Canada East and Sweden Central regions for Arc-enabled servers. [Learn more](support-matrix.md#supported-regions). +Azure Update Manager is now available in Canada East and Sweden Central regions for Arc-enabled servers. [Learn more](support-matrix.md#supported-regions). ### SQL Server patching (preview) Dynamic scope (preview) is an advanced capability of schedule patching. You can ### Customized image support -Update Manager (preview) now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images.See the [list of supported operating systems](support-matrix.md#supported-operating-systems). 
+Update Manager now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images. See the [list of supported operating systems](support-matrix.md#supported-operating-systems). ### Multi-subscription support -The limit on the number of subscriptions that you can manage to use the Update Manager (preview) portal has now been removed. You can now manage all your subscriptions using the update Manager (preview) portal. +The limit on the number of subscriptions that you can manage to use the Update Manager portal has now been removed. You can now manage all your subscriptions using the Update Manager portal. ## April 2023 A new patch orchestration - **Customer Managed Schedules (Preview)** is introduc ### New region support -Update Manager (preview) now supports new five regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions). +Update Manager now supports five new regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions). ## October 2022 |
update-center | Whats Upcoming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-upcoming.md | Title: What's upcoming in Azure Update Manager (preview) -description: Learn about what's upcoming and updates in the Update manager (preview) service. + Title: What's upcoming in Azure Update Manager +description: Learn about what's upcoming and updates in the Update Manager service. -# What are the upcoming features in Azure Update Manager (preview) +# What are the upcoming features in Azure Update Manager -The primary [what's New in Azure Update Manager (preview)](whats-new.md) contains updates of feature releases and this article lists all the upcoming features. +The primary [What's new in Azure Update Manager](whats-new.md) article contains updates of feature releases, and this article lists all the upcoming features. ## Expanded support for Operating system and VM images |
update-center | Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/workbooks.md | -Workbooks help you to create visual reports that help in data analysis. This article describes the various features that Workbooks offer in Update Manager (preview). +Workbooks help you create visual reports that aid in data analysis. This article describes the various features that Workbooks offer in Update Manager. ## Key benefits - Provides a canvas for data analysis and creation of visual reports |
virtual-desktop | Create Custom Image Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-custom-image-templates.md | Title: Use Custom image templates to create custom images (preview) - Azure Virtual Desktop -description: Learn how to use Custom image templates to create custom images when deploying session hosts in Azure Virtual Desktop. + Title: Use custom image templates to create custom images - Azure Virtual Desktop +description: Learn how to use custom image templates to create custom images when deploying session hosts in Azure Virtual Desktop. Previously updated : 04/05/2023 Last updated : 09/08/2023 -# Use Custom image templates to create custom images in Azure Virtual Desktop (preview) --> [!IMPORTANT] -> Custom image templates in Azure Virtual Desktop is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +# Use custom image templates to create custom images in Azure Virtual Desktop Custom image templates in Azure Virtual Desktop enable you to easily create a custom image that you can use when deploying session host virtual machines (VMs). Using custom images helps you to standardize the configuration of your session host VMs for your organization. Custom image templates are built on [Azure Image Builder](../virtual-machines/image-builder-overview.md) and tailored for Azure Virtual Desktop. Before you can create a custom image template, you need to meet the following pr - A resource group to store custom image templates, and images. If you specify your own resource group for Azure Image Builder to use, then it needs to be empty before the image build starts. 
-- A [user-assigned Managed Identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). We recommend you create one specifically to use with custom image templates.+- A [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). We recommend you create one specifically to use with custom image templates. - [Create a custom role](../role-based-access-control/custom-roles.md) in Azure role-based access control (RBAC) with the following permissions as *actions*: Before you can create a custom image template, you need to meet the following pr "Microsoft.Compute/images/delete" ``` -- [Assign the custom role to the Managed Identity](../role-based-access-control/role-assignments-portal-managed-identity.md#user-assigned-managed-identity). This should be scoped appropriately for your deployment, ideally to the resource group you use store custom image templates.+- [Assign the custom role to the managed identity](../role-based-access-control/role-assignments-portal-managed-identity.md#user-assigned-managed-identity). This should be scoped appropriately for your deployment, ideally to the resource group you use to store custom image templates. - *Optional*: If you want to distribute your image to Azure Compute Gallery, [create an Azure Compute Gallery](../virtual-machines/create-gallery.md), then [create a VM image definition](../virtual-machines/image-version.md). When you create a VM image definition in the gallery you need to specify the *generation* of the image you intend to create, either *generation 1* or *generation 2*. The generation of the image you want to use as the source image needs to match the generation specified in the VM image definition. Don't create a *VM image version* at this stage. This will be done by Azure Virtual Desktop. 
To create a custom image using the Azure portal: | Subscription | Select the subscription you want to use from the list. | | Resource group | Select an existing resource group. | | Location | Select a region from the list where the custom image template will be created. |- | Managed Identity | Select the Managed Identity to use for creating the custom image template. | + | Managed identity | Select the managed identity to use for creating the custom image template. | Once you've completed this tab, select **Next**. To create a custom image using the Azure portal: | Parameter | Value/Description | |--|--|- | Resource group | Select an existing resource group from the list for the managed image.<br /><br />If you choose a different resource group to the one you selected on the **Basics** tab, you'll also need to add the same role assignment for the Managed Identity. | + | Resource group | Select an existing resource group from the list for the managed image.<br /><br />If you choose a different resource group to the one you selected on the **Basics** tab, you'll also need to add the same role assignment for the managed identity. | | Image name | Select an existing managed image from the list or select **Create a managed image**. | | Location | Select the Azure region from the list for the managed image. | | Run output name | Enter a run output name for the image. This is a free text field. | To create a custom image using the Azure portal: |--|--| | Build timeout (minutes) | Enter the [maximum duration to wait](../virtual-machines/linux/image-builder-json.md#properties-buildtimeoutinminutes) while building the image template (includes all customizations, validations, and distributions). | | Build VM size | Select a size for the temporary VM created and used to build the template. You need to select a [VM size that matches the generation](../virtual-machines/generation-2.md) of your source image. 
|- | OS disk size GB) | Select the resource group you assigned the Managed Identity to.<br /><br />Alternatively, if you assigned the Managed Identity to the subscription, you can create a new resource group here. | + | OS disk size (GB) | Select the resource group you assigned the managed identity to.<br /><br />Alternatively, if you assigned the managed identity to the subscription, you can create a new resource group here. | | Staging group | Enter a name for a new resource group you want Azure Image Builder to use to create the Azure resources it needs to create the image. If you leave this blank Azure Image Builder creates its own default resource group. |- | Virtual network | Select an existing virtual network for the VM used to build the template. This is optional. If you don't select an existing virtual network, a temporary one is created, along with a public IP address for the temporary VM. | + | Build VM managed identity | Select a user-assigned managed identity if you want the build VM to authenticate with other Azure services. For more information, see [User-assigned identity for the Image Builder Build VM](../virtual-machines/linux/image-builder-json.md#user-assigned-identity-for-the-image-builder-build-vm). | + | Virtual network | Select an existing virtual network for the VM used to build the template. If you don't select an existing virtual network, a temporary one is created, along with a public IP address for the temporary VM. | | Subnet | If you selected an existing virtual network, select a subnet from the list. | Once you've completed this tab, select **Next**. To create a custom image using the Azure portal: 1. Select **+Add built-in script**. - 1. Select the scripts you want to use from the list, and complete any required information. + 1. Select the scripts you want to use from the list, and complete any required information. Built-in scripts include restarts where needed. 1. Select **Save**. |
virtual-desktop | Custom Image Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/custom-image-templates.md | Title: Custom image templates (preview) - Azure Virtual Desktop -description: Learn about Custom image templates in Azure Virtual Desktop, where you can create custom images that you can use when deploying session host virtual machines. + Title: Custom image templates - Azure Virtual Desktop +description: Learn about custom image templates in Azure Virtual Desktop, where you can create custom images that you can use when deploying session host virtual machines. Previously updated : 04/05/2023 Last updated : 09/08/2023 -# Custom image templates in Azure Virtual Desktop (preview) --> [!IMPORTANT] -> Custom image templates in Azure Virtual Desktop is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +# Custom image templates in Azure Virtual Desktop Custom image templates in Azure Virtual Desktop enable you to easily create a custom image that you can use when deploying session host virtual machines (VMs). Using custom images helps you to standardize the configuration of your session host VMs for your organization. Custom image templates are built on [Azure Image Builder](../virtual-machines/image-builder-overview.md) and tailored for Azure Virtual Desktop. The source image must be [supported for Azure Virtual Desktop](prerequisites.md# - An existing managed image. - An existing custom image template. -We've added several built-in scripts available for you to use that configures some of the most popular features and settings when using Azure Virtual Desktop. 
You can also add your own custom scripts to the template, as long as they're hosted at a publicly available location, such as GitHub or a web service. You need to specify a duration for the build, so make sure you allow enough time for your scripts to complete. Here are some examples of the built-in scripts you can add to a custom image template: +We've added several built-in scripts available for you to use that configure some of the most popular features and settings when using Azure Virtual Desktop. You can also add your own custom scripts to the template, as long as they're hosted at a publicly available location, such as GitHub or a web service. You need to specify a duration for the build, so make sure you allow enough time for your scripts to complete. Built-in scripts include restarts where needed. ++Here are some examples of the built-in scripts you can add to a custom image template: - Install language packs. - Set the default language of the operating system. We've added several built-in scripts available for you to use that configures so - Enable FSLogix with Kerberos. - Enable [RDP Shortpath for managed networks](rdp-shortpath.md?tabs=managed-networks). - Enable [screen capture protection](screen-capture-protection.md).-- Configure Teams optimizations +- Configure [Teams optimizations](teams-on-avd.md). - Configure session timeouts. - Add or remove Microsoft Office applications.-- Apply Windows Updates+- Apply Windows Updates. -When the custom image is being created and distributed, Azure Image Builder uses a user-assigned Managed Identity. Azure Image Builder uses this Managed Identity to create several resources in your subscription, such as a resource group, a VM used to build the image, Key Vault, and a storage account. The VM needs internet access to download the built-in scripts or your own scripts that you added. 
The built-in scripts are stored in the *RDS-templates* GitHub repository at [https://github.com/Azure/RDS-Templates](https://github.com/Azure/RDS-Templates). +When the custom image is being created and distributed, Azure Image Builder uses a user-assigned managed identity. Azure Image Builder uses this managed identity to create several resources in your subscription, such as a resource group, a VM used to build the image, Key Vault, and a storage account. The VM needs internet access to download the built-in scripts or your own scripts that you added. The built-in scripts are stored in the *RDS-templates* GitHub repository at [https://github.com/Azure/RDS-Templates](https://github.com/Azure/RDS-Templates). You can choose whether you want the VM to connect to an existing virtual network and subnet, which will enable the VM to have access to other resources you may have available to that virtual network. If you don't specify an existing virtual network, a temporary virtual network, subnet, and public IP address are created for use by the VM. For more information on networking options, see [Azure VM Image Builder networking options](../virtual-machines/linux/image-builder-networking.md). |
virtual-desktop | Troubleshoot Custom Image Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-custom-image-templates.md | Title: Troubleshoot Custom image templates (preview) - Azure Virtual Desktop -description: Troubleshoot Custom image templates in Azure Virtual Desktop. + Title: Troubleshoot custom image templates - Azure Virtual Desktop +description: Troubleshoot custom image templates in Azure Virtual Desktop. Previously updated : 04/05/2023 Last updated : 09/08/2023 -# Troubleshoot Custom image templates in Azure Virtual Desktop (preview) --> [!IMPORTANT] -> Custom image templates in Azure Virtual Desktop is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +# Troubleshoot custom image templates in Azure Virtual Desktop Custom image templates in Azure Virtual Desktop enable you to easily create a custom image that you can use when deploying session host virtual machines (VMs). This article helps troubleshoot some issues you could run into. The generation for the source image is shown when you select the image you want ## PrivateLinkService Network Policy is not disabled for the given subnet If you receive the error message starting with **PrivateLinkService Network Policy is not disabled for the given subnet**, you need to disable the *private service policy* on the subnet. For more information, see [Disable private service policy on the subnet](../virtual-machines/windows/image-builder-vnet.md#disable-private-service-policy-on-the-subnet).++## Removing the Microsoft Store app ++Removing or uninstalling the Microsoft Store app is not supported. Learn how to [Configure access to the Microsoft Store](/windows/configuration/stop-employees-from-using-microsoft-store). 
++## Issues installing or enabling additional languages on Windows 10 images ++Additional languages can be added by custom image templates, which uses the [Install-Language PowerShell cmdlet](/powershell/module/languagepackmanagement/install-language). If you have issues installing or enabling additional languages on Windows 10 Enterprise and Windows 10 Enterprise multi-session images, ensure that: ++- You haven't disabled installing language packs by group policy on your image. The policy setting can be found at the following locations: ++ - **Computer Configuration** > **Administrative Templates** > **Control Panel** > **Regional and Language Options** > **Restrict Language Pack and Language Feature Installation** ++ - **User Configuration** > **Administrative Templates** > **Control Panel** > **Regional and Language Options** > **Restrict Language Pack and Language Feature Installation** ++- Your session hosts can connect to Windows Update to download languages and latest cumulative updates. ++## Is Trusted Launch or are Confidential VMs supported? ++As custom image templates is based on Azure Image Builder, support for Trusted Launch or Confidential VMs is inherited. For more information, see [Confidential VM and Trusted Launch support](../virtual-machines/image-builder-overview.md#confidential-vm-and-trusted-launch-support). |
virtual-machine-scale-sets | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md | |
virtual-machines | Features Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-windows.md | Extension packages are downloaded from the Azure Storage extension repository. E If you use a [supported version of the Azure VM Agent](/troubleshoot/azure/virtual-machines/support-extensions-agent-version), you don't need to allow access to Azure Storage in the VM region. You can use the VM Agent to redirect the communication to the Azure fabric controller for agent communications (via the `HostGAPlugin` feature through the privileged channel on private IP address [168.63.129.16](/azure/virtual-network/what-is-ip-address-168-63-129-16)). If you're on an unsupported version of the VM Agent, you need to allow outbound access to Azure Storage in that region from the VM. > [!IMPORTANT]-> If you block access to IP address 168.63.129.16 by using the guest firewall or via a proxy, extensions fail. Failure occurs even if you use a supported version of the VM Agent or you configure outbound access. Ports 80, 443, and 32526 are required. +> If you block access to IP address 168.63.129.16 by using the guest firewall or via a proxy, extensions fail. Failure occurs even if you use a supported version of the VM Agent or you configure outbound access. Ports 80 and 32526 are required. Agents can only be used to download extension packages and report status. For example, if an extension installation needs to download a script from GitHub (Custom Script Extension) or requires access to Azure Storage (Azure Backup), then you need to open other firewall or network security group (NSG) ports. Different extensions have different requirements because they're applications in their own right. For extensions that require access to Azure Storage or Azure Active Directory, you can allow access by using Azure NSG [service tags](/azure/virtual-network/network-security-groups-overview#service-tags). |
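The note above says that blocking IP address 168.63.129.16 causes extensions to fail and that ports 80 and 32526 must stay open. As a quick sanity check from inside a VM, a minimal Python sketch can probe TCP reachability; the IP and ports come from the article, while the helper name and structure are our own illustration, not part of the VM Agent tooling:

```python
import socket

WIRESERVER_IP = "168.63.129.16"   # Azure platform IP used by the VM Agent
REQUIRED_PORTS = (80, 32526)      # ports the article says must not be blocked

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On an Azure VM you could run:
# for port in REQUIRED_PORTS:
#     print(port, tcp_reachable(WIRESERVER_IP, port))
```

Note this only checks TCP connectivity; a proxy or guest firewall that intercepts the traffic (which the article warns also breaks extensions) may still pass this probe.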
virtual-machines | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md | Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
virtual-machines | Share Gallery Community | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-community.md | There are three main ways to share images in an Azure Compute Gallery, depending ## Limitations for images shared to the community There are some limitations for sharing your gallery to the community:-- You can't convert an existing private gallery to Community gallery.+- You can't convert an existing private gallery (RBAC-enabled gallery) to Community gallery. - You can't use a third-party image from Marketplace and publish it to the community. For a list of approved operating system base images, please see: [approved base images](https://go.microsoft.com/fwlink/?linkid=2245050). - Encrypted images are not supported. - Image resources need to be created in the same region as the gallery. For example, if you create a gallery in West US, the image definitions and image versions should be created in West US if you want to make them available. |
virtual-machines | Updates Maintenance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/updates-maintenance-overview.md | Enabling [automatic VM guest patching](automatic-vm-guest-patching.md) for your You can use [Update Management in Azure Automation](../automation/update-management/overview.md?context=/azure/virtual-machines/context/context) to manage operating system updates for your Windows and Linux virtual machines in Azure, in on-premises environments, and in other cloud environments. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. -## Update Manager (preview) +## Update Manager -[Update Manager (preview)](../update-center/overview.md) is a new-age unified service in Azure to manage and govern updates (Windows and Linux), both on-premises and other cloud platforms, across hybrid environments from a single dashboard. The new functionality provides native and out-of-the-box experience, granular access controls, flexibility to create schedules or take action now, ability to check updates automatically and much more. The enhanced functionality ensures that the administrators have visibility into the health of all systems in the environment. For more information, see [key benefits](../update-center/overview.md#key-benefits). +[Update Manager](../update-center/overview.md) is a new-age unified service in Azure to manage and govern updates (Windows and Linux), both on-premises and other cloud platforms, across hybrid environments from a single dashboard. The new functionality provides native and out-of-the-box experience, granular access controls, flexibility to create schedules or take action now, ability to check updates automatically and much more. The enhanced functionality ensures that the administrators have visibility into the health of all systems in the environment. 
For more information, see [key benefits](../update-center/overview.md#key-benefits). ## Maintenance control |
virtual-machines | Quick Create Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-terraform.md | In this article, you learn how to: > * Create an association between the network security group and the network interface using [azurerm_network_interface_security_group_association](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface_security_group_association). > * Generate a random value for a unique storage account name using [random_id](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id). > * Create a storage account for boot diagnostics using [azurerm_storage_account](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account).-> * Create a Windows VM with an IIS web server using [azurerm_windows_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine). +> * Create a Windows VM with an IIS web server using [azurerm_windows_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/windows_virtual_machine). > * Create a Windows VM extension using [azurerm_virtual_machine_extension](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_extension). ## Prerequisites |
virtual-network | Configure Public Ip Bastion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-bastion.md | In this section, you create an Azure Bastion host. You select the IP address you 2. In the search box at the top of the portal, enter **Bastion**. -3. In the search results, select **Bastions**. +3. In the search results, select **Bastion**. 4. Select **+ Create**. In this section, you create an Azure Bastion host. You select the IP address you | **Instance details** | | | Name | Enter **myBastionHost**. | | Region | Select **(US) West US 2**. |+ | Tier | Select **Basic**. | | **Configure virtual network** | |- | Virtual network | Select **Create new**. </br> Enter **myVNet** in **Name**. </br> Leave the default address space of **10.4.0.0/16**. </br> Leave the default subnet of **10.4.0.0/24**. </br> In the text box under the **default** subnet, enter **AzureBastionSubnet**. </br> In address range, enter **10.4.1.0/27**. </br> Select **OK**. | + | Virtual network | Select **Create new**. </br> Enter **myVNet** in **Name**. </br> Leave the default address space of **10.4.0.0/16**. </br> Leave the default subnet of **10.4.0.0/24**. </br> In the text box under the **default** subnet, enter **AzureBastionSubnet**. </br> In address range, enter **10.4.1.0/26**. </br> Select **OK**. | | Subnet | Select **AzureBastionSubnet**. | | **Public IP address** | | | Public IP address | Select **Use existing**. | |
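The diff above corrects the **AzureBastionSubnet** address range from /27 to /26, since Azure Bastion requires a subnet of /26 or larger. A minimal sketch with Python's standard `ipaddress` module (the helper name is ours, purely illustrative) verifies that a candidate range is large enough:

```python
import ipaddress

# Azure Bastion requires the AzureBastionSubnet to be /26 or larger.
BASTION_MIN_PREFIX = 26

def valid_bastion_subnet(cidr: str) -> bool:
    """Return True if the CIDR range is large enough for AzureBastionSubnet."""
    net = ipaddress.ip_network(cidr, strict=True)
    # A smaller prefix length means a larger subnet.
    return net.prefixlen <= BASTION_MIN_PREFIX

print(valid_bastion_subnet("10.4.1.0/26"))  # True: 64 addresses
print(valid_bastion_subnet("10.4.1.0/27"))  # False: only 32 addresses
```

This explains the correction in the table: the originally listed 10.4.1.0/27 provides only 32 addresses, below the Bastion minimum.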
virtual-network | Configure Public Ip Vpn Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-vpn-gateway.md | A VPN gateway is a virtual network gateway used to send encrypted traffic betwee VPN gateway supports standard and basic SKU public IP addresses depending on the SKU of the VPN gateway. Public IP prefixes aren't supported. -In this article, you'll learn how to create a VPN gateway using an existing public IP in your subscription. +In this article, you learn how to create a VPN gateway using an existing public IP in your subscription. ## Prerequisites In this article, you'll learn how to create a VPN gateway using an existing publ ## Create VPN gateway using existing public IP -In this section, you'll create a VPN gateway. You'll select the IP address you created in the prerequisites as the public IP for the VPN gateway. +In this section, you create a VPN gateway. You select the IP address you created in the prerequisites as the public IP for the VPN gateway. ### Create virtual network 1. Sign in to the [Azure portal](https://portal.azure.com). -2. In the search box at the top of the portal, enter **Virtual network**. +1. In the search box at the top of the portal, enter **Virtual network**. -3. In the search results, select **Virtual networks**. +1. In the search results, select **Virtual networks**. -4. Select **+ Create**. +1. Select **+ Create**. -5. In **Create virtual network**, enter or select the following information. +1. In **Create virtual network**, enter or select the following information. | Setting | Value | | - | -- | In this section, you'll create a VPN gateway. You'll select the IP address you c | Name | Enter **myVNet**. | | Region | Select **West US 2**. | -6. Select the **Review + create** tab, or select the blue **Review + create** button. +1. Select the **Review + create** tab, or select the blue **Review + create** button. 7. Select **Create**. |
virtual-network | Custom Ip Address Prefix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md | When ready, you can issue the command to have your range advertised from Azure a * The advertisements of IPs from a custom IP prefix over an Azure ExpressRoute Microsoft peering aren't currently supported. +* Custom IP prefixes don't support reverse DNS lookup using Azure-owned zones; customers must onboard their own reverse zones to Azure DNS. + * Once provisioned, custom IP prefix ranges can't be moved to another subscription. Custom IP address prefix ranges can't be moved within resource groups in a single subscription. It's possible to derive a public IP prefix from a custom IP prefix in another subscription with the proper permissions as described [here](manage-custom-ip-address-prefix.md#permissions). * IPs brought to Azure may have a delay of up to a week before they can be used for Windows Server Activation. |
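Since the addition above says customers must onboard their own reverse zones to Azure DNS for custom IP prefixes, you need the `in-addr.arpa` zone name that corresponds to your range. A hedged sketch using Python's standard `ipaddress` module derives it for an IPv4 /24; the helper name and the TEST-NET-3 example prefix are our own illustration, not from the article:

```python
import ipaddress

def reverse_zone_for_prefix(cidr: str) -> str:
    """Derive the in-addr.arpa zone name for an IPv4 /24 prefix (illustrative)."""
    net = ipaddress.ip_network(cidr)
    if net.version != 4 or net.prefixlen != 24:
        raise ValueError("this sketch only handles IPv4 /24 prefixes")
    # A /24 reverse zone reverses the first three octets.
    a, b, c, _ = str(net.network_address).split(".")
    return f"{c}.{b}.{a}.in-addr.arpa"

print(reverse_zone_for_prefix("203.0.113.0/24"))  # 113.0.203.in-addr.arpa
```

Ranges that don't fall on octet boundaries need classless delegation (RFC 2317) and aren't covered by this simple sketch.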
virtual-network | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md | Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/13/2023 Last updated : 09/19/2023 |
virtual-network | Tutorial Restrict Network Access To Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources.md | |