Updates from: 11/29/2023 02:12:11
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Changelog Release History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/changelog-release-history.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 11/27/2023
This reference article provides a version-based description of Document Intelligence feature and capability releases, changes, updates, and enhancements.
+#### November 2023 (preview) release
++
+### [**.NET (C#)**](#tab/csharp)
+
+* Document Intelligence **1.0.0-beta.1**
+* **Targets REST API 2023-10-31-preview by default**
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.DocumentIntelligence/1.0.0-beta.1)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/documentintelligence/Azure.AI.DocumentIntelligence/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/documentintelligence/Azure.AI.DocumentIntelligence/samples/README.md)
+
+### [**Java**](#tab/java)
+
+* Document Intelligence **1.0.0-beta.1**
+* **Targets REST API 2023-10-31-preview by default**
+
+[**Package (MVN)**](https://repo1.maven.org/maven2/com/azure/azure-ai-documentintelligence/1.0.0-beta.1/)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav#azure-documentintelligence-client-library-for-java)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/azure-ai-documentintelligence_1.0.0-beta.1/sdk/documentintelligence/azure-ai-documentintelligence/src/samples#examples)
+
+### [**JavaScript**](#tab/javascript)
+
+* Document Intelligence **1.0.0-beta.1**
+* **Targets REST API 2023-10-31-preview by default**
+
+[**Package (npm)**](https://www.npmjs.com/package/@azure-rest/ai-document-intelligence)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/documentintelligence/ai-document-intelligence-rest#readme)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/documentintelligence/ai-document-intelligence-rest/samples/v1-beta/typescript)
+
+### [**Python**](#tab/python)
+
+* Document Intelligence **1.0.0b1**
+* **Targets REST API 2023-10-31-preview by default**
+
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-documentintelligence/)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/7c42462ac662522a6fd21b17d2a20f4cd40d0356/sdk/documentintelligence/azure-ai-documentintelligence/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/7c42462ac662522a6fd21b17d2a20f4cd40d0356/sdk/documentintelligence/azure-ai-documentintelligence/samples)
+++
#### August 2023 (GA) release

### [**C#**](#tab/csharp)
-* **Version 4.1.0 (2023-08-10)**
-* **Targets API version2023-07-31 by default**
-* **Version 2023-02-28-preview is no longer supported**
+* Form Recognizer **4.1.0 (2023-08-10)**
+* **Targets REST API 2023-07-31 by default**
+* **REST API target 2023-02-28-preview is no longer supported**
* [**Breaking changes**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#breaking-changes-1)

[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
This reference article provides a version-based description of Document Intellig
### [**Java**](#tab/java)
-* **4.1.0 (2023-08-10)**
-* **Targets API version 2023-07-31 by default**
-* **Version 2023-02-28-preview is no longer supported**
+* Form Recognizer **4.1.0 (2023-08-10)**
+* **Targets REST API 2023-07-31 by default**
+* **REST API target 2023-02-28-preview is no longer supported**
* [**Breaking changes**](https://github.com/Azure/azure-sdk-for-jav#breaking-changes)

[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
This reference article provides a version-based description of Document Intellig
### [**JavaScript**](#tab/javascript)
-* **Version 5.0.0 (2023-08-08)**
-* **Targets API version 2023-07-31 by default**
-* **Version 2023-02-28-preview is no longer supported**
+* Form Recognizer **5.0.0 (2023-08-08)**
+* **Targets REST API 2023-07-31 by default**
+* **REST API target 2023-02-28-preview is no longer supported**
* [**Breaking changes**](https://github.com/witemple-msft/azure-sdk-for-js/blob/ai-form-recognizer/5.0.0-release/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#breaking-changes)

[**Changelog/Release History**](https://github.com/witemple-msft/azure-sdk-for-js/blob/ai-form-recognizer/5.0.0-release/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
This reference article provides a version-based description of Document Intellig
### [**Python**](#tab/python)
-* **Version 3.3.0 (2023-08-08)**
-* **Targets API version 2023-07-31 by default**
-* **Version 2023-02-28-preview is no longer supported**
+* Form Recognizer **3.3.0 (2023-08-08)**
+* **Targets REST API 2023-07-31 by default**
+* **REST API target 2023-02-28-preview is no longer supported**
* [**Breaking changes**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#breaking-changes)

[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
This release includes the following updates:
### [**C#**](#tab/csharp)
-* **Version 4.1.0-beta.1 (2023-04-13**)
+* Form Recognizer **4.1.0-beta.1 (2023-04-13)**
* **Targets 2023-02-28-preview by default**
* **No breaking changes**
This release includes the following updates:
### [**Java**](#tab/java)
-* **Version 4.1.0-beta.1 (2023-04-12**)
+* Form Recognizer **4.1.0-beta.1 (2023-04-12)**
* **Targets 2023-02-28-preview by default**
* **No breaking changes**

[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#410-beta1-2023-04-12)

[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1)
This release includes the following updates:
### [**JavaScript**](#tab/javascript)
-* **Version 4.1.0-beta.1 (2023-04-11**)
+* Form Recognizer **4.1.0-beta.1 (2023-04-11)**
* **Targets 2023-02-28-preview by default**
* **No breaking changes**
This release includes the following updates:
### [**Python**](#tab/python)
-* **Version 3.3.0b1 (2023-04-13**)
+* Form Recognizer **3.3.0b1 (2023-04-13)**
* **Targets 2023-02-28-preview by default**
* **No breaking changes**
This release includes the following updates:
This release includes the following updates:

> [!IMPORTANT]
-> The `DocumentAnalysisClient` and `DocumentModelAdministrationClient` now target API version v3.0 GA, released 2022-08-31. These clients are no longer supported by API versions 2020-06-30-preview or earlier.
+> The `DocumentAnalysisClient` and `DocumentModelAdministrationClient` now target API version v3.0 GA, released 2022-08-31. These clients no longer support API versions 2020-06-30-preview and earlier.
### [**C#**](#tab/csharp)
-* **Version 4.0.0 GA (2022-09-08)**
+* Form Recognizer **4.0.0 GA (2022-09-08)**
* **Supports REST API v3.0 and v2.0 clients**

[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
This release includes the following updates:
### [**Java**](#tab/java)
-* **Version 4.0.0 GA (2022-09-08)**
+* Form Recognizer **4.0.0 GA (2022-09-08)**
* **Supports REST API v3.0 and v2.0 clients**

[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
This release includes the following updates:
### [**JavaScript**](#tab/javascript)
-* **Version 4.0.0 GA (2022-09-08)**
+* Form Recognizer **4.0.0 GA (2022-09-08)**
* **Supports REST API v3.0 and v2.0 clients**

[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
This release includes the following updates:
> [!NOTE]
> Python 3.7 or later is required to use this package.
-* **Version 3.2.0 GA (2022-09-08)**
+* Form Recognizer **3.2.0 GA (2022-09-08)**
* **Supports REST API v3.0 and v2.0 clients** [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
This release includes the following updates:
### [**C#**](#tab/csharp)
-**Version 4.0.0-beta.5 (2022-08-09)**
-**Supports REST API 2022-06-30-preview clients**
+* Form Recognizer **4.0.0-beta.5 (2022-08-09)**
+* **Supports REST API 2022-06-30-preview clients**
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09)
This release includes the following updates:
### [**Java**](#tab/java)
-**Version 4.0.0-beta.6 (2022-08-10)**
-**Supports REST API 2022-06-30-preview and earlier clients**
+* Form Recognizer **4.0.0-beta.6 (2022-08-10)**
+* **Supports REST API 2022-06-30-preview and earlier clients**
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10)
This release includes the following updates:
### [**JavaScript**](#tab/javascript)
-**Version 4.0.0-beta.6 (2022-08-09)**
-**Supports REST API 2022-06-30-preview and earlier clients**
+* Form Recognizer **4.0.0-beta.6 (2022-08-09)**
+* **Supports REST API 2022-06-30-preview and earlier clients**
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
This release includes the following updates:
> [!IMPORTANT]
> Python 3.6 is no longer supported in this release. Use Python 3.7 or later.
-**Version 3.2.0b6 (2022-08-09)**
-**Supports REST API 2022-06-30-preview and earlier clients**
+* Form Recognizer **3.2.0b6 (2022-08-09)**
+* **Supports REST API 2022-06-30-preview and earlier clients**
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
This release includes the following updates:
### [**C#**](#tab/csharp)
-**Version 4.0.0-beta.4 (2022-06-08)**
+* Form Recognizer **4.0.0-beta.4 (2022-06-08)**
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
This release includes the following updates:
### [**Java**](#tab/java)
-**Version 4.0.0-beta.5 (2022-06-07)**
+* Form Recognizer **4.0.0-beta.5 (2022-06-07)**
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
This release includes the following updates:
### [**JavaScript**](#tab/javascript)
-**Version 4.0.0-beta.4 (2022-06-07)**
+* Form Recognizer **4.0.0-beta.4 (2022-06-07)**
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
This release includes the following updates:
### [**Python**](#tab/python)
-**Version 3.2.0b5 (2022-06-07**
+* Form Recognizer **3.2.0b5 (2022-06-07)**
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
ai-services Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration.md
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_version="2023-10-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
- )
+)
deployment_name='REPLACE_WITH_YOUR_DEPLOYMENT_NAME' # This will correspond to the custom name you chose for your deployment when you deployed a model.
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
OpenAI uses the `model` keyword argument to specify what model to use. Azure Ope
```python
completion = client.completions.create(
    model='gpt-3.5-turbo-instruct',
- prompt="<prompt>)
+ prompt="<prompt>")

chat_completion = client.chat.completions.create(
ai-services Real Time Synthesis Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/real-time-synthesis-avatar.md
To get started, make sure you have the following prerequisites:
- **Your speech resource key and region:** After your Speech resource is deployed, select **Go to resource** to view and manage keys. For more information about Azure AI services resources, see [Get the keys for your resource](/azure/ai-services/multi-service-resource?pivots=azportal&tabs=windows#get-the-keys-for-your-resource).
- If you build a real-time avatar application:
  - **Communication resource:** Create a [Communication resource](https://portal.azure.com/#create/Microsoft.Communication) in the Azure portal (for real-time avatar synthesis only).
- - You also need your network relay token for real-time avatar synthesis. After deploying your Communication resource, select **Go to resource** to view the endpoint and connection string under **Settings** -> **Keys** tab, and then follow [Access TURN relays](/azure/ai-services/speech-service/quickstarts/setup-platform#install-the-speech-sdk-for-javascript) to generate the relay token with the endpoint and connection string filled.
+ - You also need your network relay token for real-time avatar synthesis. After deploying your Communication resource, select **Go to resource** to view the endpoint and connection string under **Settings** -> **Keys** tab, and then follow [Access TURN relays](/azure/communication-services/quickstarts/relay-token) to generate the relay token with the endpoint and connection string filled.
## Set up environment
ai-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/translator-text-apis.md
public class TranslatorText {
```javascript
const axios = require('axios').default;
- const { v4: uuidv4 } = require('uuid');
+const { v4: uuidv4 } = require('uuid');
- let key = "<your-translator-key>";
- let endpoint = "https://api.cognitive.microsofttranslator.com";
+let key = "<your-translator-key>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- let location = "<YOUR-RESOURCE-LOCATION>";
+// location, also known as region.
+// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+let params = new URLSearchParams();
+params.append("api-version", "3.0");
+params.append("from", "en");
+params.append("to", "sw");
+params.append("to", "it");
axios({ baseURL: endpoint,
axios({
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': ['sw', 'it']
- },
+ params: params,
data: [{ 'text': 'Hello, friend! What did you do today?' }],
let endpoint = "https://api.cognitive.microsofttranslator.com";
// This is required if using an Azure AI multi-service resource. let location = "<YOUR-RESOURCE-LOCATION>";
+let params = new URLSearchParams();
+params.append("api-version", "3.0");
+params.append("to", "en");
+params.append("to", "it");
+ axios({ baseURL: endpoint, url: '/translate',
axios({
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
- params: {
- 'api-version': '3.0',
- 'to': ['en', 'it']
- },
+ params: params,
data: [{ 'text': 'Halo, rafiki! Ulifanya nini leo?' }],
let endpoint = "https://api.cognitive.microsofttranslator.com";
// This is required if using an Azure AI multi-service resource. let location = "<YOUR-RESOURCE-LOCATION>";
+let params = new URLSearchParams();
+params.append("api-version", "3.0");
+ axios({ baseURL: endpoint, url: '/detect',
axios({
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
- params: {
- 'api-version': '3.0'
- },
+ params: params,
data: [{ 'text': 'Hallo Freund! Was hast du heute gemacht?' }],
let endpoint = "https://api.cognitive.microsofttranslator.com";
// This is required if using an Azure AI multi-service resource. let location = "<YOUR-RESOURCE-LOCATION>";
+let params = new URLSearchParams();
+params.append("api-version", "3.0");
+params.append("to", "th");
+params.append("toScript", "latn");
+ axios({ baseURL: endpoint, url: '/translate',
axios({
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
- params: {
- 'api-version': '3.0',
- 'to': 'th',
- 'toScript': 'latn'
- },
+ params: params,
data: [{ 'text': 'Hello, friend! What did you do today?' }],
let endpoint = "https://api.cognitive.microsofttranslator.com";
// This is required if using an Azure AI multi-service resource. let location = "<YOUR-RESOURCE-LOCATION>";
+let params = new URLSearchParams();
+params.append("api-version", "3.0");
+params.append("language", "th");
+params.append("fromScript", "thai");
+params.append("toScript", "latn");
+ axios({ baseURL: endpoint, url: '/transliterate',
axios({
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
- params: {
- 'api-version': '3.0',
- 'language': 'th',
- 'fromScript': 'thai',
- 'toScript': 'latn'
- },
+ params: params,
data: [{ 'text': 'สวัสดีเพื่อน! วันนี้คุณทำอะไร' }],
let endpoint = "https://api.cognitive.microsofttranslator.com";
// This is required if using an Azure AI multi-service resource. let location = "<YOUR-RESOURCE-LOCATION>";
+let params = new URLSearchParams();
+params.append("api-version", "3.0");
+params.append("to", "es");
+params.append("includeSentenceLength", true);
+ axios({ baseURL: endpoint, url: '/translate',
axios({
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
- params: {
- 'api-version': '3.0',
- 'to': 'es',
- 'includeSentenceLength': true
- },
+ params: params,
data: [{ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.' }],
let endpoint = "https://api.cognitive.microsofttranslator.com";
// This is required if using an Azure AI multi-service resource. let location = "<YOUR-RESOURCE-LOCATION>";
+let params = new URLSearchParams();
+params.append("api-version", "3.0");
+ axios({ baseURL: endpoint, url: '/breaksentence',
axios({
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
- params: {
- 'api-version': '3.0'
- },
+ params: params,
data: [{ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.' }],
let endpoint = "https://api.cognitive.microsofttranslator.com";
// This is required if using an Azure AI multi-service resource. let location = "<YOUR-RESOURCE-LOCATION>";
+let params = new URLSearchParams();
+params.append("api-version", "3.0");
+params.append("from", "en");
+params.append("to", "es");
+ axios({ baseURL: endpoint, url: '/dictionary/lookup',
axios({
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
- },
+ params: params,
data: [{ 'text': 'sunlight' }],
let endpoint = "https://api.cognitive.microsofttranslator.com";
// This is required if using an Azure AI multi-service resource. let location = "<YOUR-RESOURCE-LOCATION>";
+let params = new URLSearchParams();
+params.append("api-version", "3.0");
+params.append("from", "en");
+params.append("to", "es");
+ axios({ baseURL: endpoint, url: '/dictionary/examples',
axios({
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
- },
+ params: params,
data: [{ 'text': 'sunlight', 'translation': 'luz solar'
ai-studio Configure Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-managed-network.md
__Outbound__ service tag rules:
__Inbound__ service tag rules:

* `AzureMachineLearning`
-> [!NOTE]
-> For an Azure AI resource using a managed virtual network, a private endpoint is automatically created for a connection if the target resource is an Azure Private Link supported resource (Key Vault, Storage Account, Container Registry, Azure AI, Azure OpenAI, Azure Cognitive Search). For more on connections, see [How to add a new connection in Azure AI Studio](connections-add.md).
-
## List of scenario specific outbound rules

### Scenario: Access public machine learning packages
When you create a private endpoint, you provide the _resource type_ and _subreso
When you create a private endpoint for Azure AI dependency resources, such as Azure Storage, Azure Container Registry, and Azure Key Vault, the resource can be in a different Azure subscription. However, the resource must be in the same tenant as the Azure AI.
+A private endpoint is automatically created for a connection if the target resource is one of the Azure resources listed above. The connection is expected to supply a valid target ID for the private endpoint; this can be the ARM ID of the target resource or of its parent resource, provided either in the connection's target or in `metadata.resourceid`. For more on connections, see [How to add a new connection in Azure AI Studio](connections-add.md).
+
## Pricing

The Azure AI managed VNet feature is free. However, you're charged for the following resources that are used by the managed VNet:
ai-studio Connections Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/connections-add.md
When you [create a new connection](#create-a-new-connection), you enter the foll
-
## Next steps

- [Connections in Azure AI Studio](../concepts/connections.md)
- [How to create vector indexes](../how-to/index-add.md)
+- [How to configure a managed network](configure-managed-network.md)
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Az
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster. Previously updated : 11/01/2023 Last updated : 11/24/2023 # Use Azure Blob storage Container Storage Interface (CSI) driver
Azure Blob storage CSI driver supports the following features:
- You need the Azure CLI version 2.42 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
- Perform the steps in this [link][csi-blob-storage-open-source-driver-uninstall-steps] if you previously installed the [CSI Blob Storage open-source driver][csi-blob-storage-open-source-driver] to access Azure Blob storage from your cluster.
+> [!NOTE]
+> If the blobfuse-proxy is not enabled during the installation of the open source driver, the uninstallation of the open source driver will disrupt existing blobfuse mounts. However, NFS mounts will remain unaffected.
## Enable CSI driver on a new or existing AKS cluster
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 11/04/2023 Last updated : 11/28/2023 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
az aks update --name $clusterName \
The `--pod-cidr` parameter is required when upgrading from legacy CNI because the pods need to get IPs from a new overlay space, which doesn't overlap with the existing node subnet. The pod CIDR also can't overlap with any VNet address of the node pools. For example, if your VNet address is *10.0.0.0/8*, and your nodes are in the subnet *10.240.0.0/16*, the `--pod-cidr` can't overlap with *10.0.0.0/8* or the existing service CIDR on the cluster.
-### Kubenet Cluster Upgrade
+### Kubenet Cluster Upgrade (Preview)
++
+You must register the `Microsoft.ContainerService` `AzureOverlayDualStackPreview` feature flag.
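Registering the flag follows the standard preview-feature flow; a minimal sketch with the Azure CLI:

```azurecli-interactive
# Register the preview feature flag.
az feature register --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview"

# Check until the state reports "Registered", then refresh the resource provider.
az feature show --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview" --query "properties.state"
az provider register --namespace Microsoft.ContainerService
```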
Update an existing Kubenet cluster to use Azure CNI Overlay using the [`az aks update`][az-aks-update] command.
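A sketch of that command, assuming `$clusterName` and `$resourceGroup` are already set and using an illustrative pod CIDR; the value you pick must satisfy the overlap rules described above:

```azurecli-interactive
az aks update --name $clusterName \
    --resource-group $resourceGroup \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```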
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 09/06/2023 Last updated : 11/28/2023 # Create and use a volume with Azure Blob storage in Azure Kubernetes Service (AKS)
For more information on Kubernetes volumes, see [Storage options for application
- To create an ADLS account using the driver in dynamic provisioning, specify `isHnsEnabled: "true"` in the storage class parameters.
- To enable blobfuse access to an ADLS account in static provisioning, specify the mount option `--use-adls=true` in the persistent volume.
+  - If you plan to enable the hierarchical namespace on a storage account, existing persistent volumes should be remounted with the `--use-adls=true` mount option.
## Dynamically provision a volume
A persistent volume claim (PVC) uses the storage class object to dynamically pro
kind: PersistentVolumeClaim metadata: name: azure-blob-storage
- annotations:
- volume.beta.kubernetes.io/storage-class: azureblob-nfs-premium
spec: accessModes: - ReadWriteMany
- storageClassName: my-blobstorage
+ storageClassName: azureblob-nfs-premium
resources: requests: storage: 5Gi
The following YAML creates a pod that uses the persistent volume claim **azure-b
volumeMounts: - mountPath: "/mnt/blob" name: volume
+ readOnly: false
volumes: - name: volume persistentVolumeClaim:
In this example, the following manifest configures mounting a Blob storage conta
protocol: nfs tags: environment=Development volumeBindingMode: Immediate
+ allowVolumeExpansion: true
+ mountOptions:
+ - nconnect=4
```

2. Create the storage class with the [kubectl apply][kubectl-apply] command:
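A sketch of that step, assuming the manifest above was saved as `blob-nfs-sc.yaml` (the file name is illustrative):

```bash
kubectl apply -f blob-nfs-sc.yaml
```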
The following example demonstrates how to mount a Blob storage container as a pe
- ReadWriteMany persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion storageClassName: azureblob-nfs-premium
+ mountOptions:
+ - nconnect=4
csi: driver: blob.csi.azure.com
- readOnly: false
# make sure volumeid is unique for every identical storage blob container in the cluster # character `#` and `/` are reserved for internal use and cannot be used in volumehandle
- volumeHandle: unique-volumeid
+ volumeHandle: account-name_container-name
volumeAttributes: resourceGroup: resourceGroupName storageAccount: storageAccountName
Kubernetes needs credentials to access the Blob storage container created earlie
- --file-cache-timeout-in-seconds=120 csi: driver: blob.csi.azure.com
- readOnly: false
# volumeid has to be unique for every identical storage blob container in the cluster # character `#`and `/` are reserved for internal use and cannot be used in volumehandle
- volumeHandle: unique-volumeid
+ volumeHandle: account-name_container-name
volumeAttributes: containerName: containerName nodeStageSecretRef:
The following YAML creates a pod that uses the persistent volume or persistent v
volumeMounts: - name: blob01 mountPath: "/mnt/blob"
+ readOnly: false
volumes: - name: blob01 persistentVolumeClaim:
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Disks for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 04/11/2023 Last updated : 11/28/2023 # Create and use a volume with Azure Disks in Azure Kubernetes Service (AKS)
After you create the persistent volume claim, you must verify it has a status of
volumeMounts: - mountPath: "/mnt/azure" name: volume
+ readOnly: false
volumes: - name: volume persistentVolumeClaim:
When you create an Azure disk for use with AKS, you can create the disk resource
storageClassName: managed-csi csi: driver: disk.csi.azure.com
- readOnly: false
volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk volumeAttributes: fsType: ext4
When you create an Azure disk for use with AKS, you can create the disk resource
volumeMounts: - name: azure mountPath: /mnt/azure
+ volumeMounts
volumes: - name: azure persistentVolumeClaim:
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 10/05/2023 Last updated : 11/28/2023 # Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
The following YAML creates a pod that uses the persistent volume claim *my-azure
volumeMounts: - mountPath: /mnt/azure name: volume
+ readOnly: false
volumes: - name: volume persistentVolumeClaim:
Kubernetes needs credentials to access the file share created in the previous st
storageClassName: azurefile-csi csi: driver: file.csi.azure.com
- readOnly: false
volumeHandle: unique-volumeid # make sure this volumeid is unique for every identical share in the cluster volumeAttributes: resourceGroup: resourceGroupName # optional, only set this when storage account is not in the same resource group as node
spec:
volumeMounts: - name: azure mountPath: /mnt/azure
+ readOnly: false
volumes: - name: azure csi: driver: file.csi.azure.com
- readOnly: false
volumeAttributes: secretName: azure-secret # required shareName: aksshare # required
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Ser
description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks. Previously updated : 09/12/2023 Last updated : 11/24/2023 # Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service (AKS)
az disk-encryption-set create -n myDiskEncryptionSetName -l myAzureRegionName
```

> [!IMPORTANT]
-> Ensure your AKS cluster identity has **read** permission of DiskEncryptionSet
+> Make sure that the DiskEncryptionSet is located in the same region as your AKS cluster and that the AKS cluster identity has **read** access to the DiskEncryptionSet.
## Grant the DiskEncryptionSet access to key vault
If you have already provided a disk encryption set during cluster creation, encr
> [!IMPORTANT]
> Ensure you have the proper AKS credentials. The managed identity needs to have contributor access to the resource group where the diskencryptionset is deployed. Otherwise, you'll get an error suggesting that the managed identity does not have permissions.
+To assign the AKS cluster identity the Contributor role for the diskencryptionset, execute the following commands:
+
+```azurecli-interactive
+aksIdentity=$(az aks show -g $RG_NAME -n $CLUSTER_NAME --query "identity.principalId" -o tsv)
+az role assignment create --role "Contributor" --assignee $aksIdentity --scope $diskEncryptionSetId
+```
+
Create a file called **byok-azure-disk.yaml** that contains the following information. Replace *myAzureSubscriptionId*, *myResourceGroup*, and *myDiskEncryptionSetName* with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed.

```yaml
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kub
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 10/07/2023 Last updated : 11/20/2023 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
parameters:
protocol: nfs mountOptions: - nconnect=4
+ - noresvport
+ - actimeo=30
- rsize=262144 - wsize=262144 ```
parameters:
protocol: nfs mountOptions: - nconnect=4
+ - noresvport
+ - actimeo=30
```

After editing and saving the file, create the storage class with the [kubectl apply][kubectl-apply] command:
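For example, assuming the edited manifest was saved as `azurefile-sc.yaml` (an illustrative file name):

```bash
kubectl apply -f azurefile-sc.yaml
```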
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
The AKS Linux Extension is an Azure VM extension that installs and configures mo
- [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, is able to scrape these metrics.
- [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the cluster's API server using Events and NodeConditions.
-- [Local-gadget](https://inspektor-gadget.io/docs/): Uses in-kernel eBPF helper programs to monitor events related to syscalls from userspace programs in a pod.
+- [ig](https://inspektor-gadget.io/docs/latest/ig/): An eBPF-powered open-source framework for debugging and observing Linux and Kubernetes systems. It provides a set of tools (or gadgets) designed to gather relevant information, allowing users to identify the cause of performance issues, crashes, or other anomalies. Notably, its independence from Kubernetes enables users to employ it for debugging control plane issues as well.
These tools help provide observability around many node health related problems, such as:
aks Kubelogin Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelogin-authentication.md
+
+ Title: Using Kubelogin with Azure Kubernetes Service (AKS)
+description: Learn about using Kubelogin to enable all of the supported Azure Active Directory authentication methods with Azure Kubernetes Service (AKS).
+ Last updated : 11/28/2023+++
+# Use Kubelogin with Azure Kubernetes Service (AKS)
+
+Kubelogin is a client-go credential [plugin][client-go-cred-plugin] that implements Microsoft Entra ID authentication. This plugin provides features that are not available in kubectl.
+
+Azure Kubernetes Service (AKS) clusters integrated with Microsoft Entra ID, running Kubernetes versions 1.24 and higher, automatically use the `kubelogin` format.
+
+This article provides an overview of the following authentication methods and examples on how to use them:
+
+* Device code
+* The Azure CLI
+* Interactive web browser
+* Service principal
+* Managed identity
+* Workload identity
+
+## Limitations
+
+* A maximum of 200 groups are included in the Microsoft Entra ID JSON Web Token (JWT). For more than 200 groups, consider using [Application Roles][entra-id-application-roles].
+* Groups created in Microsoft Entra ID are only included by their ObjectID and not by their display name. `sAMAccountName` is only available for groups synchronized from on-premises Active Directory.
+* On AKS, the service principal authentication method works only with managed Microsoft Entra ID, not legacy Azure Active Directory.
+* The device code authentication method doesn't work when a Conditional Access policy is configured on a Microsoft Entra tenant. Use web browser interactive authentication instead.
+
+## Authentication modes
+
+Most of the interaction with `kubelogin` is specific to the `convert-kubeconfig` subcommand, which takes the input kubeconfig specified by `--kubeconfig` or the `KUBECONFIG` environment variable and converts it to the final exec-format kubeconfig based on the specified authentication mode.
+
+### How authentication works
+
+The authentication modes that `kubelogin` implements are Microsoft Entra ID OAuth 2.0 token grant flows. The following flags are common across `kubelogin` subcommands. In general, these flags are already set up when you get the kubeconfig from AKS.
+
+* **--tenant-id**: Microsoft Entra ID tenant ID
+* **--client-id**: The application ID of the public client application. This client app is only used in the device code, web browser interactive, and ROPC sign-in modes.
+* **--server-id**: The application ID of the web app, or resource server. The token should be issued to this resource.
+
+> [!NOTE]
+> With each authentication method, the token isn't cached on the file system.
+
+## Using device code
+
+Device code is the default authentication mode in the `convert-kubeconfig` subcommand; specifying `-l devicecode` is optional. This authentication method prompts you with a device code so you can sign in from a browser session.
+
+Before `kubelogin` and the Exec plugin were introduced, the Azure authentication mode in `kubectl` only supported the device code flow. It used an old library that produces tokens whose `audience` claim has the `spn:` prefix, which isn't compatible with [AKS-managed Microsoft Entra ID][aks-managed-microsoft-entra-id] using the [on-behalf-of][oauth-on-behalf-of] (OBO) flow. When you run the `convert-kubeconfig` subcommand, `kubelogin` removes the `spn:` prefix from the audience claim. If you require the original functionality, add the `--legacy` argument.
+
+If you're using a kubeconfig from a legacy Azure AD cluster, `kubelogin` automatically adds the `--legacy` flag.
+
+In this sign-in mode, the access token and refresh token are cached in the `${HOME}/.kube/cache/kubelogin` directory. This path can be overridden by specifying the `--token-cache-dir` parameter.
+
+If your Azure AD integrated cluster uses a Kubernetes version earlier than 1.24, you need to manually convert the kubeconfig format by running the following commands.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+kubelogin convert-kubeconfig
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+To clean up cached tokens, run the following command.
+
+```bash
+kubelogin remove-tokens
+```
+
+> [!NOTE]
+> The device code sign-in method doesn't work when a Conditional Access policy is configured on a Microsoft Entra tenant. Use the [web browser interactive mode][web-browser-interactive-mode] instead.
+
+## Using the Azure CLI
+
+The Azure CLI method uses the signed-in context that the Azure CLI established to get the access token. The token is issued in the same Microsoft Entra tenant as with `az login`.
+
+`kubelogin` doesn't write the tokens to the token cache file because they're already managed by the Azure CLI.
+
+> [!NOTE]
+> This authentication method only works with AKS-managed Microsoft Entra ID.
+
+```bash
+az login
+
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l azurecli
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+When the Azure CLI's config directory is outside the `${HOME}` directory, specify the `--azure-config-dir` parameter in the `convert-kubeconfig` subcommand. It generates the kubeconfig with the environment variable configured. You can achieve the same configuration by setting the `AZURE_CONFIG_DIR` environment variable to this directory while running the `kubectl` command.
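For example, a minimal sketch assuming a custom config directory at `/custom/azure-dir`:

```bash
export KUBECONFIG=/path/to/kubeconfig

# Bake the custom Azure CLI config directory into the converted kubeconfig.
kubelogin convert-kubeconfig -l azurecli --azure-config-dir /custom/azure-dir

kubectl get nodes

# Equivalent alternative: set the environment variable yourself when running kubectl.
export AZURE_CONFIG_DIR=/custom/azure-dir
kubectl get nodes
```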
+
+## Using an interactive web browser
+
+Interactive web browser authentication automatically opens a web browser to log in the user. Once authenticated, the browser redirects back to a local web server with the credentials. This authentication method complies with Conditional Access policy.
+
+When you authenticate using this method, the access token is cached in the `${HOME}/.kube/cache/kubelogin` directory. This path can be overridden by specifying the `--token-cache-dir` parameter.
+
+The following example shows how to use a bearer token with interactive flow.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l interactive
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+The following example shows how to use Proof-of-Possession (PoP) tokens with interactive flow.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l interactive --pop-enabled --pop-claims "u=/ARM/ID/OF/CLUSTER"
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+## Using a service principal
+
+This authentication method uses a service principal to sign in. The credential can be provided through an environment variable or a command-line argument. The supported credentials are a password and a PFX client certificate.
+
+The following are limitations to consider before using this method:
+
+* This method only works with managed Microsoft Entra ID.
+* The service principal can be a member of a maximum of [200 Microsoft Entra ID groups][microsoft-entra-group-membership].
+
+The following examples show how to set up a client secret using an environment variable.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l spn
+
+export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<spn client id>
+export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<spn secret>
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l spn
+
+export AZURE_CLIENT_ID=<spn client id>
+export AZURE_CLIENT_SECRET=<spn secret>
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+The following example shows how to set up a client secret in a command-line argument.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l spn --client-id <spn client id> --client-secret <spn client secret>
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+> [!WARNING]
+> This method leaves the secret in the kubeconfig file.
+
+The following examples show how to set up authentication with a client certificate.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l spn
+
+export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<spn client id>
+export AAD_SERVICE_PRINCIPAL_CLIENT_CERTIFICATE=/path/to/cert.pfx
+export AAD_SERVICE_PRINCIPAL_CLIENT_CERTIFICATE_PASSWORD=<pfx password>
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l spn
+
+export AZURE_CLIENT_ID=<spn client id>
+export AZURE_CLIENT_CERTIFICATE_PATH=/path/to/cert.pfx
+export AZURE_CLIENT_CERTIFICATE_PASSWORD=<pfx password>
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+The following example shows how to set up a Proof-of-Possession (PoP) token using a client secret from environment variables.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l spn --pop-enabled --pop-claims "u=/ARM/ID/OF/CLUSTER"
+
+export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<spn client id>
+export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<spn secret>
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+## Using a managed identity
+
+Use the [managed identity][managed-identity-overview] authentication method for applications that connect to resources supporting Microsoft Entra authentication, for example, when running on Azure services such as Azure Virtual Machines, Azure Virtual Machine Scale Sets, or Azure Cloud Shell.
+
+The following example shows how to use the default managed identity.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l msi
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+The following example shows how to use a specific managed identity by passing its client ID.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l msi --client-id <msi-client-id>
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+## Using a workload identity
+
+This authentication method uses Microsoft Entra ID federated identity credentials to authenticate to Kubernetes clusters with Microsoft Entra ID integration. It works by setting the environment variables:
+
+* **AZURE_CLIENT_ID**: the Microsoft Entra ID application ID that is federated with workload identity
+* **AZURE_TENANT_ID**: the Microsoft Entra ID tenant ID
+* **AZURE_FEDERATED_TOKEN_FILE**: the file containing a signed assertion of the workload identity, for example, a Kubernetes projected service account (JWT) token
+* **AZURE_AUTHORITY_HOST**: the base URL of a Microsoft Entra ID authority. For example, `https://login.microsoftonline.com/`.
+
+With [workload identity][workload-identity], it's possible to access Kubernetes clusters from CI/CD systems such as GitHub or ArgoCD without storing service principal credentials in those external systems. To configure OIDC federation from GitHub, see the following [example][oidc-federation-github].
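A minimal sketch of the environment a CI/CD runner would provide; every value below is a placeholder that the platform typically injects for you:

```bash
# Placeholder values; in CI/CD these are usually injected by the platform.
export AZURE_CLIENT_ID=<federated-application-id>
export AZURE_TENANT_ID=<tenant-id>
export AZURE_FEDERATED_TOKEN_FILE=/path/to/federated-token
export AZURE_AUTHORITY_HOST=https://login.microsoftonline.com/
```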
+
+The following example shows how to use a workload identity.
+
+```bash
+export KUBECONFIG=/path/to/kubeconfig
+
+kubelogin convert-kubeconfig -l workloadidentity
+```
+
+Run `kubectl` command to get node information.
+
+```bash
+kubectl get nodes
+```
+
+## Using Kubelogin with AKS
+
+AKS uses a pair of first-party Azure AD applications. These application IDs are the same in all environments.
+
+The AKS Microsoft Entra ID server application ID used by the server side is `6dae42f8-4368-4678-94ff-3960e28e3630`. Access tokens for AKS clusters must be issued for this application. In most kubelogin authentication modes, `--server-id` is a required parameter for `kubelogin get-token`.
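For example, a sketch of requesting a token for the AKS server application directly, using the Azure CLI sign-in mode:

```bash
# Prints an ExecCredential JSON containing an access token issued for the AKS server app.
kubelogin get-token -l azurecli --server-id 6dae42f8-4368-4678-94ff-3960e28e3630
```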
+
+The AKS Microsoft Entra ID client application ID used by kubelogin to perform public client authentication on behalf of the user is `80faf920-1908-4b52-b5ef-a8e7bedfc67a`. The client application ID is used in the device code and web browser interactive authentication methods.
+
+## Next steps
+
+* Learn how to integrate AKS with Microsoft Entra ID with our [AKS-managed Microsoft Entra integration][aks-managed-microsoft-entra-integration-guide] how-to guide.
+* To get started with managed identities in AKS, see [Use a managed identity in AKS][use-managed-identity-aks].
+* To get started with workload identities in AKS, see [Use a workload identity in AKS][use-workload-identity-aks].
+
+<!-- LINKS - internal -->
+[aks-managed-microsoft-entra-id]: managed-azure-ad.md
+[oauth-on-behalf-of]: ../active-directory/develop/v2-oauth2-on-behalf-of-flow.md
+[web-browser-interactive-mode]: #using-an-interactive-web-browser
+[microsoft-entra-group-membership]: /entra/identity/hybrid/connect/how-to-connect-fed-group-claims
+[managed-identity-overview]: /entra/identity/managed-identities-azure-resources/overview
+[workload-identity]: /entra/workload-id/workload-identities-overview
+[entra-id-application-roles]: /entra/external-id/customers/how-to-use-app-roles-customers
+[aks-managed-microsoft-entra-integration-guide]: managed-azure-ad.md
+[use-managed-identity-aks]: use-managed-identity.md
+[use-workload-identity-aks]: workload-identity-overview.md
+
+<!-- LINKS - external -->
+[client-go-cred-plugin]: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
+[oidc-federation-github]: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure
aks Tutorial Kubernetes Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-acr.md
Title: Kubernetes on Azure tutorial - Create an Azure Container Registry and build images description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload sample application container images. Previously updated : 11/02/2023 Last updated : 11/28/2023 #Customer intent: As a developer, I want to learn how to create and use a container registry so that I can deploy my own applications to Azure Kubernetes Service.
Before creating an ACR instance, you need a resource group. An Azure resource gr
## Build and push container images to registry
-* Build and push the images to your ACR using the [`az acr build`][az-acr-build] command.
+* Build and push the images to your ACR using the Azure CLI [`az acr build`][az-acr-build] command.
> [!NOTE]
+ > For this step, there isn't an equivalent Azure PowerShell cmdlet that performs this task.
+ >
> In the following example, we don't build the `rabbitmq` image. This image is available from the Docker Hub public repository and doesn't need to be built or pushed to your ACR instance.

```azurecli-interactive
api-management Forward Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/forward-request-policy.md
Previously updated : 07/14/2023 Last updated : 10/19/2023
The `forward-request` policy forwards the incoming request to the backend servic
## Policy statement

```xml
-<forward-request http-version="1 | 2or1 | 2" timeout="time in seconds" continue-timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/>
+<forward-request http-version="1 | 2or1 | 2" timeout="time in seconds (alternatively, use timeout-ms)" | timeout-ms="time in milliseconds (alternatively, use timeout)" continue-timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/>
```

## Attributes

| Attribute | Description | Required | Default |
| --------- | ----------- | -------- | ------- |
-| timeout | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, because the underlying network infrastructure can drop idle connections after this time. Policy expressions are allowed. | No | 300 |
-| continue-timeout | The amount of time in seconds to wait for a `100 Continue` status code to be returned by the backend service before a timeout error is raised. Policy expressions are allowed. | No | N /A |
+| timeout | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, because the underlying network infrastructure can drop idle connections after this time. Policy expressions are allowed. You can specify either `timeout` or `timeout-ms` but not both. | No | 300 |
+| timeout-ms | The amount of time in milliseconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 ms. Policy expressions are allowed. You can specify either `timeout` or `timeout-ms` but not both. | No | N/A |
+| continue-timeout | The amount of time in seconds to wait for a `100 Continue` status code to be returned by the backend service before a timeout error is raised. Policy expressions are allowed. | No | N/A |
| http-version | The HTTP spec version to use when sending the HTTP request to the backend service. When using `2or1`, the gateway favors HTTP/2 over HTTP/1, but falls back to HTTP/1 if HTTP/2 doesn't work. | No | 1 |
| follow-redirects | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. Policy expressions are allowed. | No | `false` |
| buffer-request-body | When set to `true`, request is buffered and will be reused on [retry](retry-policy.md). | No | `false` |
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
Previously updated : 12/08/2022 Last updated : 10/19/2023
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
### Usage notes

* You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Microsoft Entra authentication by applying the `validate-azure-ad-token` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.
-* When using a custom header (`header-name`), the header value cannot be prefixed with `Bearer ` and should be removed.
## Examples
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
Previously updated : 12/08/2022 Last updated : 10/19/2023
The `validate-jwt` policy enforces existence and validity of a supported JSON we
* The policy supports tokens encrypted with symmetric keys using the following encryption algorithms: A128CBC-HS256, A192CBC-HS384, A256CBC-HS512.
* To configure the policy with one or more OpenID configuration endpoints for use with a self-hosted gateway, the OpenID configuration endpoints URLs must also be reachable by the cloud gateway.
* You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Microsoft Entra authentication by applying the `validate-jwt` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.
-* When using a custom header (`header-name`), the header value cannot be prefixed with `Bearer ` and should be removed.
## Examples
app-service App Service App Service Environment Control Inbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-control-inbound-traffic.md
# How To Control Inbound Traffic to an App Service Environment

> [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+>
+> As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
>

## Overview
app-service App Service App Service Environment Create Ilb Ase Resourcemanager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-create-ilb-ase-resourcemanager.md
# How To Create an ILB ASEv1 Using Azure Resource Manager Templates

> [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service App Service Environment Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-intro.md
# Introduction to App Service Environment v1 > [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service App Service Environment Layered Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-layered-security.md
# Implementing a Layered Security Architecture with App Service Environments > [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Since App Service Environments provide an isolated runtime environment deployed into a virtual network, developers can create a layered security architecture providing differing levels of network access for each physical application tier.
app-service App Service App Service Environment Network Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-architecture-overview.md
# Network Architecture Overview of App Service Environments > [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> App Service Environments are always created within a subnet of a [virtual network][virtualnetwork] - apps running in an App Service Environment can communicate with private endpoints located within the same virtual network topology. Since customers may lock down parts of their virtual network infrastructure, it is important to understand the types of network communication flows that occur with an App Service Environment.
app-service App Service App Service Environment Network Configuration Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-configuration-expressroute.md
# Network configuration details for App Service Environment for Power Apps with Azure ExpressRoute > [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Customers can connect an [Azure ExpressRoute][ExpressRoute] circuit to their virtual network infrastructure to extend their on-premises network to Azure. App Service Environment is created in a subnet of the [virtual network][virtualnetwork] infrastructure. Apps that run on App Service Environment establish secure connections to back-end resources that are accessible only over the ExpressRoute connection.
To get started with App Service Environment for Power Apps, see [Introduction to
[UDRHowTo]: ../../virtual-network/tutorial-create-route-table-powershell.md [AzureDownloads]: https://azure.microsoft.com/downloads/ [DownloadCenterAddressRanges]: https://www.microsoft.com/download/details.aspx?id=41653
-[NetworkSecurityGroups]: ../../virtual-network/virtual-network-vnet-plan-design-arm.md
[IntroToAppServiceEnvironment]: app-service-app-service-environment-intro.md <!-- IMAGES -->
app-service App Service App Service Environment Securely Connecting To Backend Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-securely-connecting-to-backend-resources.md
# Connect securely to back end resources from an App Service environment > [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Since an App Service Environment is always created in **either** an Azure Resource Manager virtual network, **or** a classic deployment model [virtual network][virtualnetwork], outbound connections from an App Service Environment to other backend resources can flow exclusively over the virtual network. As of June 2016, ASEs can also be deployed into virtual networks that use either public address ranges or RFC1918 address spaces (private addresses).
app-service App Service Environment Auto Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-environment-auto-scale.md
# Autoscaling and App Service Environment v1 > [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Azure App Service environments support *autoscaling*. You can autoscale individual worker pools based on metrics or schedule.
app-service App Service Web Configure An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-configure-an-app-service-environment.md
# Configuring an App Service Environment v1 > [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service Web Scale A Web App In An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-scale-a-web-app-in-an-app-service-environment.md
# Scaling apps in an App Service Environment v1 > [!IMPORTANT]
-> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> In the Azure App Service there are normally three things you can scale:
app-service Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/certificates.md
# Certificates and the App Service Environment v2 > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The App Service Environment (ASE) is a deployment of the Azure App Service that runs within your Azure Virtual Network (VNet). It can be deployed with an internet-accessible application endpoint or an application endpoint that is in your VNet. If you deploy the ASE with an internet-accessible endpoint, that deployment is called an External ASE. If you deploy the ASE with an endpoint in your VNet, that deployment is called an ILB ASE. You can learn more about the ILB ASE from the [Create and use an ILB ASE](./create-ilb-ase.md) document.
app-service Create External Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-external-ase.md
# Create an External App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
To learn more about ASEv1, see [Introduction to the App Service Environment v1][
[mobileapps]: /previous-versions/azure/app-service-mobile/app-service-mobile-value-prop [Functions]: ../../azure-functions/index.yml [Pricing]: https://azure.microsoft.com/pricing/details/app-service/
-[ARMOverview]: ../../azure-resource-manager/management/overview.md
+[ARMOverview]: ../../azure-resource-manager/management/overview.md
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-from-template.md
## Overview > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Azure App Service environments (ASEs) can be created with an internet-accessible endpoint or an endpoint on an internal address in an Azure Virtual Network. When created with an internal endpoint, that endpoint is provided by an Azure component called an internal load balancer (ILB). The ASE on an internal IP address is called an ILB ASE. The ASE with a public endpoint is called an External ASE.
app-service Create Ilb Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md
# Create and use an Internal Load Balancer App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
ILB ASEs that were made before May 2019 required you to set the domain suffix du
[ASEWAF]: integrate-with-application-gateway.md [AppGW]: ../../web-application-firewall/ag/ag-overview.md [customdomain]: ../app-service-web-tutorial-custom-domain.md
-[linuxapp]: ../overview.md#app-service-on-linux
+[linuxapp]: ../overview.md#app-service-on-linux
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/firewall-integration.md
# Locking down an App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The App Service Environment (ASE) has many external dependencies that it requires access to in order to function properly. The ASE lives in the customer Azure Virtual Network. Customers must allow the ASE dependency traffic, which is a problem for customers that want to lock down all egress from their virtual network.
Linux isn't available in US Gov regions and is thus not listed as an optional co
[2]: ./media/firewall-integration/firewall-serviceendpoints.png [3]: ./media/firewall-integration/firewall-ntprule.png [5]: ./media/firewall-integration/firewall-topology.png
-[6]: ./media/firewall-integration/firewall-ntprule-monitor.png
+[6]: ./media/firewall-integration/firewall-ntprule-monitor.png
app-service Forced Tunnel Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/forced-tunnel-support.md
# Configure your App Service Environment with forced tunneling > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The App Service Environment (ASE) is a deployment of Azure App Service in a customer's Azure Virtual Network. Many customers configure their Azure virtual networks to be extensions of their on-premises networks with VPNs or Azure ExpressRoute connections. Forced tunneling is when you redirect internet bound traffic to your VPN or a virtual appliance instead. Virtual appliances are often used to inspect and audit outbound network traffic.
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/intro.md
# Introduction to App Service Environment v2 > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/management-addresses.md
# App Service Environment management addresses
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans
->
+> [!IMPORTANT]
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+ ## Summary
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/network-info.md
# Networking considerations for App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> [App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network. There are two deployment types for an App Service Environment:
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
App Service Environment is a single-tenant deployment of Azure App Service that hosts Windows and Linux containers, web apps, API apps, logic apps, and function apps. When you install an App Service Environment, you pick the Azure virtual network that you want it to be deployed in. All of the inbound and outbound application traffic is inside the virtual network you specify. You deploy into a single subnet in your virtual network, and nothing else can be deployed into that subnet. > [!NOTE]
-> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
+> This article is about App Service Environment v3, which is used with Isolated v2 App Service plans.
## Subnet requirements
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using-an-ase.md
# Manage an App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> An App Service Environment (ASE) is a deployment of Azure App Service into a subnet in a customer's Azure Virtual Network instance. An ASE consists of:
app-service Using https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using.md
App Service Environment is a single-tenant deployment of Azure App Service. You use it with an Azure virtual network, and you're the only user of this system. Apps deployed are subject to the networking features that are applied to the subnet. There aren't any additional features that need to be enabled on your apps to be subject to those networking features. > [!NOTE]
-> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
+> This article is about App Service Environment v3, which is used with Isolated v2 App Service plans.
## Create an app
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
App Service Environment has three versions. App Service Environment v3 is the latest version and provides advantages and feature differences over earlier versions. > [!IMPORTANT]
-> App Service Environment v1 and v2 [will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024/). After that date, those versions will no longer be supported and any remaining App Service Environment v1 and v2s and the applications running on them will be deleted.
+> App Service Environment v1 and v2 [will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). After that date, those versions will no longer be supported, and any remaining App Service Environment v1 and v2 deployments, along with the applications running on them, will be deleted.
+
+There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1 or v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v1 or v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
## Comparison between versions
Due to hardware changes between the versions, there are some regions where App S
> [App Service Environment v3 Networking](networking.md) > [!div class="nextstepaction"]
-> [Using an App Service Environment v3](using.md)
--
+> [Using an App Service Environment v3](using.md)
app-service Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/zone-redundancy.md
# Availability Zone support for App Service Environment v2 > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+
+As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> App Service Environment v2 (ASE) can be deployed into Availability Zones (AZ). Customers can deploy internal load balancer (ILB) ASEs into a specific AZ within an Azure region. If you pin your ILB ASE to a specific AZ, the resources used by the ILB ASE are either pinned to the specified AZ or deployed in a zone-redundant manner.
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
description: In this quickstart, you learn how to use the Azure portal to create
Previously updated : 11/06/2023 Last updated : 11/28/2023
To do this, you'll:
4. Accept the other defaults and then select **Next: Disks**.
5. Accept the **Disks** tab defaults and then select **Next: Networking**.
6. On the **Networking** tab, verify that **myVNet** is selected for the **Virtual network** and the **Subnet** is set to **myBackendSubnet**. Accept the other defaults and then select **Next: Management**.<br>Application Gateway can communicate with instances outside of the virtual network that it is in, but you need to ensure there's IP connectivity.
-7. Select **Next: Monitoring** tab, set **Boot diagnostics** to **Disable**. Accept the other defaults and then select **Review + create**.
+7. Select **Next: Monitoring** and set **Boot diagnostics** to **Disable**. Accept the other defaults and then select **Review + create**.
8. On the **Review + create** tab, review the settings, correct any validation errors, and then select **Create**.
9. Wait for the virtual machine creation to complete before continuing.
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
Previously updated : 07/15/2022 Last updated : 11/28/2023
az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id
If you'd like to use the Azure portal to enable the AGIC add-on, go to [https://aka.ms/azure/portal/aks/agic](https://aka.ms/azure/portal/aks/agic) and navigate to your AKS cluster through the portal link. From there, go to the Networking tab within your AKS cluster. You'll see an application gateway ingress controller section, which allows you to enable/disable the ingress controller add-on using the Azure portal. Select the box next to **Enable ingress controller**, and then select the application gateway you created, **myApplicationGateway**, from the dropdown menu. Select **Save**.
-> [!CAUTION]
-> When you use an application gateway in a different resource group, the managed identity created **_ingressapplicationgateway-{AKSNAME}_** once this add-on is enabled in the AKS nodes resource group must have Contributor role set in the Application Gateway resource as well as Reader role set in the Application Gateway resource group.
+> [!IMPORTANT]
+> When you use an application gateway in a different resource group than the AKS cluster resource group, the managed identity **_ingressapplicationgateway-{AKSNAME}_** that is created must have **Contributor** and **Reader** roles set in the application gateway resource group.
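For example, the role assignments could be granted with the Azure CLI. A sketch, where the principal ID, subscription, and resource group names are placeholders for values from your environment:

```azurecli
# Sketch: grant the AGIC add-on identity access to the application gateway's
# resource group. All identifiers below are placeholders.
az role assignment create --assignee <agic-identity-principal-id> --role "Contributor" \
    --scope /subscriptions/<subscription-id>/resourceGroups/<appgw-resource-group>
az role assignment create --assignee <agic-identity-principal-id> --role "Reader" \
    --scope /subscriptions/<subscription-id>/resourceGroups/<appgw-resource-group>
```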
:::image type="content" source="./media/tutorial-ingress-controller-add-on-existing/portal-ingress-controller-add-on.png" alt-text="Screenshot showing how to enable application gateway ingress controller from the networking page of the Azure Kubernetes Service.":::
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 11/21/2023 Last updated : 11/28/2023
For example: if you're executing a runbook for a SharePoint automation scenario
The following are the current limitations and known issues with PowerShell runbooks:
+# [PowerShell 7.2](#tab/lps72)
+
+**Limitations**
+
+> [!NOTE]
+> Currently, the PowerShell 7.2 runtime version is supported for both cloud and hybrid jobs in all public regions except Central India, UAE Central, Israel Central, Italy North, Germany North, and Gov clouds.
+
+- For the PowerShell 7.2 runtime version, the module activities aren't extracted for the imported modules. Use the [Azure Automation extension for VS Code](automation-runbook-authoring.md) to simplify the runbook authoring experience.
+- PowerShell 7.x doesn't support workflows. For more information, see [PowerShell workflow](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow).
+- PowerShell 7.x currently doesn't support signed runbooks.
+- Source control integration doesn't support PowerShell 7.2. Also, PowerShell 7.2 runbooks in source control get created in the Automation account as Runtime 5.1.
+- Az module 8.3.0 is installed by default. The complete list of component modules for the selected Az module version is shown after the Az version is configured again using the Azure portal or the API.
+- The imported PowerShell 7.2 module is validated during job execution. Ensure that all dependencies for the selected module are also imported for successful job execution.
+- Azure Automation runbooks don't support `Start-Job` with `-Credential`.
+- Azure doesn't support all PowerShell input parameters. [Learn more](runbook-input-parameters.md).
+
+**Known issues**
+- Runbooks taking dependency on internal file paths such as `C:\modules` might fail due to changes in service backend infrastructure. Change runbook code to ensure there are no dependencies on internal file paths and use [Get-ChildItem](/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.3) to get the required module information.
+- The `Get-AzStorageAccount` cmdlet might fail with the error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
+- Executing child scripts using `.\child-runbook.ps1` isn't supported.
+  **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from the *Az.Automation* module) to start another runbook from the parent runbook; see the sketch after this list.
+- When you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true) module version: 3.0.0 or higher, you can experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](/powershell/module/powershellget/) and [PackageManagement](/powershell/module/packagemanagement/) modules.
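A sketch of the child-runbook workaround mentioned above, where the account, resource group, and runbook names are placeholders:

```powershell
# Sketch: start a child runbook from a parent runbook instead of invoking
# .\child-runbook.ps1 directly. All names below are placeholders.
Start-AzAutomationRunbook `
    -AutomationAccountName "myAutomationAccount" `
    -ResourceGroupName "myResourceGroup" `
    -Name "child-runbook" `
    -Parameters @{ Environment = "test" }
```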
# [PowerShell 5.1](#tab/lps51)

**Limitations**
The following are the current limitations and known issues with PowerShell runbo
- When you start a PowerShell 7 runbook using the webhook, it auto-converts the webhook input parameter to invalid JSON.
- We recommend that you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true) module version 3.0.0 or lower, because version 3.0.0 or higher may lead to job failures.
- If you import module Az.Accounts with version 2.12.3 or newer, ensure that you import the **Newtonsoft.Json** v10 module explicitly if PowerShell 7.1 runbooks have a dependency on this version of the module. The workaround for this issue is to use PowerShell 7.2 runbooks.
-# [PowerShell 7.2](#tab/lps72)
-
-**Limitations**
-
-> [!NOTE]
-> Currently, PowerShell 7.2 runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Central India, UAE Central, Israel Central, Italy North, Germany North and Gov clouds.
-
-- For the PowerShell 7.2 runtime version, the module activities aren't extracted for the imported modules.
-- PowerShell 7.x doesn't support workflows. For more information, see [PowerShell workflow](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow) for more details.
-- PowerShell 7.x currently doesn't support signed runbooks.
-- Source control integration doesn't support PowerShell 7.2. Also, PowerShell 7.2 runbooks in source control get created in Automation account as Runtime 5.1.
-- Az module 8.3.0 is installed by default. The complete list of component modules of selected Az module version is shown once Az version is configured again using Azure portal or API.
-- The imported PowerShell 7.2 module would be validated during job execution. Ensure that all dependencies for the selected module are also imported for successful job execution.
-- Azure runbook doesn't support `Start-Job` with `-credential`.
-- Azure doesn't support all PowerShell input parameters. [Learn more](runbook-input-parameters.md).
-
-**Known issues**
-- Runbooks taking dependency on internal file paths such as `C:\modules` might fail due to changes in service backend infrastructure. Change runbook code to ensure there are no dependencies on internal file paths and use [Get-ChildItem](/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.3) to get the required module information.
-- `Get-AzStorageAccount` cmdlet might fail with an error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
-- Executing child scripts using `.\child-runbook.ps1` is not supported in this preview.
- **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook.
-- When you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true) module version: 3.0.0 or higher, you can experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](/powershell/module/powershellget/) and [PackageManagement](/powershell/module/packagemanagement/) modules.

## PowerShell Workflow runbooks
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
description: Significant updates to Azure Automation updated each month.
Previously updated : 10/27/2023 Last updated : 11/28/2023
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
-## October 2023
+## November 2023
+
+### General Availability: Azure Automation supports PowerShell 7.2 runbooks
+Azure Automation announces General Availability of PowerShell 7.2 runbooks. This enables you to author runbooks in the long-term supported version of PowerShell using the [Azure Automation extension for VS Code](how-to/runbook-authoring-extension-for-vscode.md) and execute them on a secure and reliable platform. [Learn more](automation-runbook-types.md).
+
+## October 2023
-## General Availability: Automation extension for Visual Studio Code
+### General Availability: Automation extension for Visual Studio Code
Azure Automation now provides an advanced editing experience for PowerShell and Python scripts along with [runbook management operations](how-to/runbook-authoring-extension-for-vscode.md). For more information, see the [Key features and limitations](automation-runbook-authoring.md).
azure-app-configuration Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview-managed-identity.md
The following steps will walk you through creating an App Configuration store an
Creating an App Configuration store with a user-assigned identity requires that you create the identity and then assign its resource identifier to your store.
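For example, with the Azure CLI the two steps might look like the following sketch against an existing store (all resource names and the subscription ID are placeholders):

```azurecli
# Sketch: create a user-assigned identity, then assign it to an existing store.
# Resource names and the subscription ID are placeholders.
az identity create --name myUserAssignedIdentity --resource-group myResourceGroup
az appconfig identity assign --name myAppConfigStore --resource-group myResourceGroup \
    --identities /subscriptions/<subscription-id>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myUserAssignedIdentity
```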
+> [!NOTE]
+> You can add up to 10 user-assigned managed identities to an App Configuration store.
+
### Using the Azure CLI

To set up a managed identity using the Azure CLI, use the [az appconfig identity assign] command against an existing configuration store. You have three options for running the examples in this section:
azure-app-configuration Quickstart Javascript Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-javascript-provider.md
+
+ Title: Quickstart for using Azure App Configuration with JavaScript apps
+description: In this quickstart, create a Node.js app with Azure App Configuration to centralize storage and management of application settings separate from your code.
+++
+ms.devlang: javascript
++ Last updated : 10/12/2023+
+#Customer intent: As a JavaScript developer, I want to manage all my app settings in one place.
+
+# Quickstart: Create a JavaScript app with Azure App Configuration
+
+In this quickstart, you'll use Azure App Configuration to centralize storage and management of application settings using the [Azure App Configuration JavaScript provider client library](https://github.com/Azure/AppConfiguration-JavaScriptProvider).
+
+The App Configuration provider for JavaScript is built on top of the [Azure SDK for JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/appconfiguration/app-configuration) and is designed to be easier to use, with richer features.
+It enables access to key-values in App Configuration as a `Map` object.
+It offers features such as configuration composition from multiple labels, key prefix trimming, and automatic resolution of Key Vault references.
+As an example, this tutorial shows how to use the JavaScript provider in a Node.js app.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
+- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule). For information about installing Node.js either directly on Windows or using the Windows Subsystem for Linux (WSL), see [Get started with Node.js](/windows/dev-environment/javascript/nodejs-overview).
+
+## Add key-values
+
+Add the following key-values to the App Configuration store. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
+
+| Key | Value | Content type |
+|-|-|--|
+| *message* | *Message from Azure App Configuration* | Leave empty |
+| *app.greeting* | *Hello World* | Leave empty |
+| *app.json* | *{"myKey":"myValue"}* | *application/json* |
+
+## Setting up the Node.js app
+
+In this tutorial, you'll create a Node.js console app and load data from your App Configuration store.
+
+1. Create a new directory for the project named *app-configuration-quickstart*.
+
+ ```console
+ mkdir app-configuration-quickstart
+ ```
+
+1. Switch to the newly created *app-configuration-quickstart* directory.
+
+ ```console
+ cd app-configuration-quickstart
+ ```
+
+1. Install the Azure App Configuration provider by using the `npm install` command.
+
+ ```console
+ npm install @azure/app-configuration-provider
+ ```
+
+1. Create a new file called *app.js* in the *app-configuration-quickstart* directory and add the following code:
+
+ ```javascript
+ const { load } = require("@azure/app-configuration-provider");
+ const connectionString = process.env.AZURE_APPCONFIG_CONNECTION_STRING;
+
+ async function run() {
+ let settings;
+
+ // Sample 1: Connect to Azure App Configuration using a connection string and load all key-values with null label.
+ settings = await load(connectionString);
+
+ // Find the key "message" and print its value.
+ console.log(settings.get("message")); // Output: Message from Azure App Configuration
+
+ // Find the key "app.json" as an object, and print its property "myKey".
+ const jsonObject = settings.get("app.json");
+ console.log(jsonObject.myKey); // Output: myValue
+
+ // Sample 2: Load all key-values with null label and trim "app." prefix from all keys.
+ settings = await load(connectionString, {
+ trimKeyPrefixes: ["app."]
+ });
+
+ // From the keys with trimmed prefixes, find a key with "greeting" and print its value.
+ console.log(settings.get("greeting")); // Output: Hello World
+
+ // Sample 3: Load all keys starting with "app." prefix and null label.
+ settings = await load(connectionString, {
+ selectors: [{
+ keyFilter: "app.*"
+ }],
+ });
+
+ // Print true or false indicating whether a setting is loaded.
+ console.log(settings.has("message")); // Output: false
+ console.log(settings.has("app.greeting")); // Output: true
+ console.log(settings.has("app.json")); // Output: true
+ }
+
+ run().catch(console.error);
+ ```
+
+## Run the application locally
+
+1. Set an environment variable named **AZURE_APPCONFIG_CONNECTION_STRING** to the connection string of your App Configuration store. At the command line, run the following command:
+
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+ To run the app locally using the Windows command prompt, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```cmd
+ setx AZURE_APPCONFIG_CONNECTION_STRING "<app-configuration-store-connection-string>"
+ ```
+
+ ### [PowerShell](#tab/powershell)
+
+ If you use Windows PowerShell, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```azurepowershell
+ $Env:AZURE_APPCONFIG_CONNECTION_STRING = "<app-configuration-store-connection-string>"
+ ```
+
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+1. To validate that the environment variable is set properly, print its value with the command below.
+
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+    If you use the Windows command prompt, restart it to allow the change to take effect, and then run the following command:
+
+ ```cmd
+ echo %AZURE_APPCONFIG_CONNECTION_STRING%
+ ```
+
+ ### [PowerShell](#tab/powershell)
+
+ If you use Windows PowerShell, run the following command:
+
+ ```azurepowershell
+ $Env:AZURE_APPCONFIG_CONNECTION_STRING
+ ```
+
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command:
+
+ ```console
+ echo "$AZURE_APPCONFIG_CONNECTION_STRING"
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command:
+
+ ```console
+ echo "$AZURE_APPCONFIG_CONNECTION_STRING"
+ ```
+
+1. After the environment variable is properly set, run the app locally with the following command:
+
+ ```bash
+ node app.js
+ ```
+
+ You should see the following output:
+
+ ```Output
+ Message from Azure App Configuration
+ myValue
+ Hello World
+ false
+ true
+ true
+ ```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you created a new App Configuration store and learned how to access key-values using the App Configuration JavaScript provider in a Node.js app.
+
+For more code samples, visit:
+
+> [!div class="nextstepaction"]
+> [Azure App Configuration JavaScript provider](https://github.com/Azure/AppConfiguration-JavaScriptProvider/tree/main/examples)
azure-app-configuration Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-javascript.md
Title: Quickstart for using Azure App Configuration with JavaScript apps | Microsoft Docs
-description: In this quickstart, create a Node.js app with Azure App Configuration to centralize storage and management of application settings separate from your code.
+ Title: Using Azure App Configuration in JavaScript apps with the Azure SDK for JavaScript | Microsoft Docs
+description: This document shows examples of how to use the Azure SDK for JavaScript to access key-values in Azure App Configuration.
ms.devlang: javascript--++ Last updated 03/20/2023 #Customer intent: As a JavaScript developer, I want to manage all my app settings in one place.
-# Quickstart: Create a JavaScript app with Azure App Configuration
+# Create a Node.js app with the Azure SDK for JavaScript
-In this quickstart, you will use Azure App Configuration to centralize storage and management of application settings using the [App Configuration client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/appconfiguration/app-configuration/README.md).
+This document shows examples of how to use the [Azure SDK for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/appconfiguration/app-configuration) to access key-values in Azure App Configuration.
+
+>[!TIP]
+> App Configuration offers a JavaScript provider library that is built on top of the JavaScript SDK and is designed to be easier to use with richer features. It enables configuration settings to be used like a Map object, and offers other features like configuration composition from multiple labels, key name trimming, and automatic resolution of Key Vault references. Go to the [JavaScript quickstart](./quickstart-javascript-provider.md) to learn more.
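+
+For a quick comparison, the snippet below is a minimal sketch of the provider usage covered in that quickstart. It assumes the `@azure/app-configuration-provider` package is installed and the same connection-string environment variable used later in this article is set:
+
+```javascript
+const { load } = require("@azure/app-configuration-provider");
+
+async function main() {
+    // load() fetches key-values once and exposes them with Map-style access.
+    const settings = await load(process.env.AZURE_APPCONFIG_CONNECTION_STRING);
+    console.log(settings.get("message"));
+}
+
+main().catch(console.error);
+```
+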
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An Azure account with an active subscription - [Create one for free](https://azure.microsoft.com/free/)
- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store). - [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule). For information about installing Node.js either directly on Windows or using the Windows Subsystem for Linux (WSL), see [Get started with Node.js](/windows/dev-environment/javascript/nodejs-overview)
-## Add a key-value
+## Create a key-value
Add the following key-value to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
Add the following key-value to the App Configuration store and leave **Label** a
## Setting up the Node.js app
-1. In this tutorial, you'll create a new directory for the project named *app-configuration-quickstart*.
+1. Create a new directory for the project named *app-configuration-example*.
```console
- mkdir app-configuration-quickstart
+ mkdir app-configuration-example
```
-1. Switch to the newly created *app-configuration-quickstart* directory.
+1. Switch to the newly created *app-configuration-example* directory.
```console
- cd app-configuration-quickstart
+ cd app-configuration-example
``` 1. Install the Azure App Configuration client library by using the `npm install` command.
Add the following key-value to the App Configuration store and leave **Label** a
npm install @azure/app-configuration ```
-1. Create a new file called *app.js* in the *app-configuration-quickstart* directory and add the following code:
+1. Create a new file called *app-configuration-example.js* in the *app-configuration-example* directory and add the following code:
+
+ ```javascript
+ const { AppConfigurationClient } = require("@azure/app-configuration");
+
+ async function run() {
+ console.log("Azure App Configuration - JavaScript example");
+ // Example code goes here
+ }
+
+ run().catch(console.error);
+ ```
+
+> [!NOTE]
+> The code snippets in this example will help you get started with the App Configuration client library for JavaScript. For your application, you should also consider handling exceptions according to your needs. To learn more about exception handling, please refer to our [JavaScript SDK documentation](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/appconfiguration/app-configuration).
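+
+As a minimal sketch of what such handling might look like (assuming the `client` instance created later in this guide, and assuming failed service calls surface an error that carries a `statusCode`):
+
+```javascript
+    try {
+        const setting = await client.getConfigurationSetting({ key: "TestApp:Settings:Message" });
+        console.log(`Key: ${setting.key}, Value: ${setting.value}`);
+    } catch (error) {
+        // A 404 status means the key doesn't exist in the store (an assumption
+        // about the error shape; adjust the handling to your needs).
+        if (error.statusCode === 404) {
+            console.log("Configuration setting not found.");
+        } else {
+            throw error;
+        }
+    }
+```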
- ```javascript
- const appConfig = require("@azure/app-configuration");
- ```
+## Configure your App Configuration connection string
-## Configure your connection string
+1. Set an environment variable named **AZURE_APPCONFIG_CONNECTION_STRING** to the connection string of your App Configuration store. At the command line, run the following command:
-1. Set an environment variable named **AZURE_APP_CONFIG_CONNECTION_STRING**, and set it to the access key to your App Configuration store. At the command line, run the following command:
+ ### [Windows command prompt](#tab/windowscommandprompt)
- ### [PowerShell](#tab/azure-powershell)
+ To run the app locally using the Windows command prompt, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```cmd
+ setx AZURE_APPCONFIG_CONNECTION_STRING "<app-configuration-store-connection-string>"
+ ```
+
+ ### [PowerShell](#tab/powershell)
+
+ If you use Windows PowerShell, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
```azurepowershell
- $Env:AZURE_APP_CONFIG_CONNECTION_STRING = "connection-string-of-your-app-configuration-store"
+ $Env:AZURE_APPCONFIG_CONNECTION_STRING = "<app-configuration-store-connection-string>"
```
- ### [Command line](#tab/command-line)
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+1. To validate that the environment variable is set properly, print its value with the command below.
+
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+    If you use the Windows command prompt, restart it to allow the change to take effect, and then run the following command:
```cmd
- setx AZURE_APP_CONFIG_CONNECTION_STRING "connection-string-of-your-app-configuration-store"
+ echo %AZURE_APPCONFIG_CONNECTION_STRING%
```
- ### [macOS](#tab/macOS)
+ ### [PowerShell](#tab/powershell)
+
+ If you use Windows PowerShell, run the following command:
+
+ ```azurepowershell
+ $Env:AZURE_APPCONFIG_CONNECTION_STRING
+ ```
+
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command:
+
+ ```console
+ echo "$AZURE_APPCONFIG_CONNECTION_STRING"
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command:
+ ```console
- export AZURE_APP_CONFIG_CONNECTION_STRING='connection-string-of-your-app-configuration-store'
+ echo "$AZURE_APPCONFIG_CONNECTION_STRING"
```
-
+## Code samples
+
+The sample code snippets in this section show you how to perform common operations with the App Configuration client library for JavaScript. Add these code snippets to the body of the `run` function in the *app-configuration-example.js* file you created earlier.
+
+> [!NOTE]
+> The App Configuration client library refers to a key-value object as `ConfigurationSetting`. Therefore, in this article, the **key-values** in the App Configuration store are referred to as **configuration settings**.
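+
+For orientation, the parts of a `ConfigurationSetting` that this article touches can be sketched as a plain object. The values below are illustrative only; real settings also carry fields such as `label`, `contentType`, and `etag`:
+
+```javascript
+// Illustrative shape only; the values are hypothetical.
+const exampleConfigSetting = {
+    key: "TestApp:Settings:Message",
+    value: "Data from Azure App Configuration",
+    isReadOnly: false // lock status, toggled with client.setReadOnly()
+};
+```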
-2. Restart the command prompt to allow the change to take effect. Print out the value of the environment variable to validate that it is set properly.
+The following sections show how to:
-## Connect to an App Configuration store
+- [Connect to an App Configuration store](#connect-to-an-app-configuration-store)
+- [Get a configuration setting](#get-a-configuration-setting)
+- [Add a configuration setting](#add-a-configuration-setting)
+- [Get a list of configuration settings](#get-a-list-of-configuration-settings)
+- [Lock a configuration setting](#lock-a-configuration-setting)
+- [Unlock a configuration setting](#unlock-a-configuration-setting)
+- [Update a configuration setting](#update-a-configuration-setting)
+- [Delete a configuration setting](#delete-a-configuration-setting)
+
+### Connect to an App Configuration store
The following code snippet creates an instance of **AppConfigurationClient** using the connection string stored in your environment variables. ```javascript
-const connection_string = process.env.AZURE_APP_CONFIG_CONNECTION_STRING;
-const client = new appConfig.AppConfigurationClient(connection_string);
+const connection_string = process.env.AZURE_APPCONFIG_CONNECTION_STRING;
+const client = new AppConfigurationClient(connection_string);
```
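+
+The client can also authenticate with Microsoft Entra ID instead of a connection string. The sketch below assumes the `@azure/identity` package is installed and that a hypothetical `AZURE_APPCONFIG_ENDPOINT` environment variable holds the store endpoint; neither is set up by this guide:
+
+```javascript
+const { DefaultAzureCredential } = require("@azure/identity");
+
+// The store endpoint looks like https://<store-name>.azconfig.io
+const aadClient = new AppConfigurationClient(
+    process.env.AZURE_APPCONFIG_ENDPOINT,
+    new DefaultAzureCredential()
+);
+```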
-## Get a configuration setting
+### Get a configuration setting
-The following code snippet retrieves a configuration setting by `key` name. The key shown in this example was created in the previous steps of this article.
+The following code snippet retrieves a configuration setting by `key` name.
```javascript
-async function run() {
-
- let retrievedSetting = await client.getConfigurationSetting({
- key: "TestApp:Settings:Message"
- });
+ const retrievedConfigSetting = await client.getConfigurationSetting({
+ key: "TestApp:Settings:Message"
+ });
+ console.log("\nRetrieved configuration setting:");
+ console.log(`Key: ${retrievedConfigSetting.key}, Value: ${retrievedConfigSetting.value}`);
+```
+
+### Add a configuration setting
+
+The following code snippet creates a `ConfigurationSetting` object with `key` and `value` fields and invokes the `addConfigurationSetting` method.
+This method throws an exception if you try to add a configuration setting that already exists in your store. To avoid the exception, you can use the [setConfigurationSetting](#update-a-configuration-setting) method instead, as sketched after the snippet below.
+
+```javascript
+ const configSetting = {
+ key:"TestApp:Settings:NewSetting",
+ value:"New setting value"
+ };
+ const addedConfigSetting = await client.addConfigurationSetting(configSetting);
+ console.log("\nAdded configuration setting:");
+ console.log(`Key: ${addedConfigSetting.key}, Value: ${addedConfigSetting.value}`);
+```
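+
+If the setting might already exist, for example on a second run of the app, the upsert below sidesteps the exception. It's a minimal sketch that reuses the `configSetting` object from the previous snippet and isn't part of the final listing in this guide:
+
+```javascript
+    // setConfigurationSetting creates the setting if it's missing and
+    // overwrites it if it exists, so it never throws "already exists".
+    const upsertedConfigSetting = await client.setConfigurationSetting(configSetting);
+    console.log(`Key: ${upsertedConfigSetting.key}, Value: ${upsertedConfigSetting.value}`);
+```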
- console.log("Retrieved value:", retrievedSetting.value);
+### Get a list of configuration settings
+
+The following code snippet retrieves a list of configuration settings. The `keyFilter` and `labelFilter` arguments can be provided to filter key-values based on `key` and `label` respectively. For more information on filtering, see how to [query configuration settings](./concept-key-value.md#query-key-values).
+
+```javascript
+ const filteredSettingsList = client.listConfigurationSettings({
+ keyFilter: "TestApp*"
+ });
+ console.log("\nRetrieved list of configuration settings:");
+ for await (const filteredSetting of filteredSettingsList) {
+ console.log(`Key: ${filteredSetting.key}, Value: ${filteredSetting.value}`);
+ }
+```
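+
+The `listConfigurationSettings` method also accepts a `labelFilter`. The sketch below uses a hypothetical `dev` label; the key-values created for this guide have no label, so against this store the loop would print nothing:
+
+```javascript
+    // List only the settings that carry the label "dev".
+    const devSettingsList = client.listConfigurationSettings({
+        keyFilter: "TestApp*",
+        labelFilter: "dev"
+    });
+    for await (const devSetting of devSettingsList) {
+        console.log(`Key: ${devSetting.key}, Label: ${devSetting.label}, Value: ${devSetting.value}`);
+    }
+```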
+
+### Lock a configuration setting
+
+The lock status of a key-value in App Configuration is denoted by the `isReadOnly` attribute of the `ConfigurationSetting` object. If `isReadOnly` is `true`, the setting is locked. The `setReadOnly` method can be invoked with `true` as the second argument to lock the configuration setting.
+
+```javascript
+ const lockedConfigSetting = await client.setReadOnly(addedConfigSetting, true /** readOnly */);
+ console.log(`\nRead-only status for ${lockedConfigSetting.key}: ${lockedConfigSetting.isReadOnly}`);
+```
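+
+While a setting is locked, write operations against it fail. As a sketch (assuming the rejection surfaces as a thrown error on the write call; this snippet isn't part of the final listing):
+
+```javascript
+    try {
+        lockedConfigSetting.value = "This write should be rejected";
+        await client.setConfigurationSetting(lockedConfigSetting);
+    } catch (error) {
+        console.log("Write rejected: the setting is read-only.");
+    }
+```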
+
+### Unlock a configuration setting
+
+If the `isReadOnly` attribute of a `ConfigurationSetting` is `false`, the setting is unlocked. The `setReadOnly` method can be invoked with `false` as the second argument to unlock the configuration setting.
+
+```javascript
+ const unlockedConfigSetting = await client.setReadOnly(lockedConfigSetting, false /** readOnly */);
+ console.log(`\nRead-only status for ${unlockedConfigSetting.key}: ${unlockedConfigSetting.isReadOnly}`);
+```
+
+### Update a configuration setting
+
+The `setConfigurationSetting` method can be used to update an existing setting or create a new setting. The following code snippet changes the value of an existing configuration setting.
+
+```javascript
+ addedConfigSetting.value = "Value has been updated!";
+ const updatedConfigSetting = await client.setConfigurationSetting(addedConfigSetting);
+ console.log("\nUpdated configuration setting:");
+ console.log(`Key: ${updatedConfigSetting.key}, Value: ${updatedConfigSetting.value}`);
+```
+
+### Delete a configuration setting
+
+The following code snippet deletes a configuration setting by `key` name.
+
+```javascript
+ const deletedConfigSetting = await client.deleteConfigurationSetting({
+ key: "TestApp:Settings:NewSetting"
+ });
+ console.log("\nDeleted configuration setting:");
+ console.log(`Key: ${deletedConfigSetting.key}, Value: ${deletedConfigSetting.value}`);
+```
+
+## Run the app
+
+In this example, you created a Node.js app that uses the Azure App Configuration client library to retrieve a configuration setting created through the Azure portal, add a new setting, retrieve a list of existing settings, lock and unlock a setting, update a setting, and finally delete a setting.
+
+At this point, your *app-configuration-example.js* file should have the following code:
+
+```javascript
+const { AppConfigurationClient } = require("@azure/app-configuration");
+
+async function run() {
+ console.log("Azure App Configuration - JavaScript example");
+
+ const connection_string = process.env.AZURE_APPCONFIG_CONNECTION_STRING;
+ const client = new AppConfigurationClient(connection_string);
+
+ const retrievedConfigSetting = await client.getConfigurationSetting({
+ key: "TestApp:Settings:Message"
+ });
+ console.log("\nRetrieved configuration setting:");
+ console.log(`Key: ${retrievedConfigSetting.key}, Value: ${retrievedConfigSetting.value}`);
+
+ const configSetting = {
+ key: "TestApp:Settings:NewSetting",
+ value: "New setting value"
+ };
+ const addedConfigSetting = await client.addConfigurationSetting(configSetting);
+ console.log("Added configuration setting:");
+ console.log(`Key: ${addedConfigSetting.key}, Value: ${addedConfigSetting.value}`);
+
+ const filteredSettingsList = client.listConfigurationSettings({
+ keyFilter: "TestApp*"
+ });
+ console.log("Retrieved list of configuration settings:");
+ for await (const filteredSetting of filteredSettingsList) {
+ console.log(`Key: ${filteredSetting.key}, Value: ${filteredSetting.value}`);
+ }
+
+ const lockedConfigSetting = await client.setReadOnly(addedConfigSetting, true /** readOnly */);
+ console.log(`Read-only status for ${lockedConfigSetting.key}: ${lockedConfigSetting.isReadOnly}`);
+
+ const unlockedConfigSetting = await client.setReadOnly(lockedConfigSetting, false /** readOnly */);
+ console.log(`Read-only status for ${unlockedConfigSetting.key}: ${unlockedConfigSetting.isReadOnly}`);
+
+ addedConfigSetting.value = "Value has been updated!";
+ const updatedConfigSetting = await client.setConfigurationSetting(addedConfigSetting);
+ console.log("Updated configuration setting:");
+ console.log(`Key: ${updatedConfigSetting.key}, Value: ${updatedConfigSetting.value}`);
+
+ const deletedConfigSetting = await client.deleteConfigurationSetting({
+ key: "TestApp:Settings:NewSetting"
+ });
+ console.log("Deleted configuration setting:");
+ console.log(`Key: ${deletedConfigSetting.key}, Value: ${deletedConfigSetting.value}`);
}
-run().catch((err) => console.log("ERROR:", err));
+run().catch(console.error);
```
-## Build and run the app locally
+In your console window, navigate to the directory containing the *app-configuration-example.js* file and execute the following command to run the app:
-1. Run the following command to run the Node.js app:
+```console
+node app-configuration-example.js
+```
- ```powershell
- node app.js
- ```
-1. You should see the following output at the command prompt:
+You should see the following output:
- ```powershell
- Retrieved value: Data from Azure App Configuration
- ```
-## Clean up resources
+```output
+Azure App Configuration - JavaScript example
+
+Retrieved configuration setting:
+Key: TestApp:Settings:Message, Value: Data from Azure App Configuration
+Added configuration setting:
+Key: TestApp:Settings:NewSetting, Value: New setting value
+
+Retrieved list of configuration settings:
+Key: TestApp:Settings:Message, Value: Data from Azure App Configuration
+Key: TestApp:Settings:NewSetting, Value: New setting value
+
+Read-only status for TestApp:Settings:NewSetting: true
+
+Read-only status for TestApp:Settings:NewSetting: false
+
+Updated configuration setting:
+Key: TestApp:Settings:NewSetting, Value: Value has been updated!
+
+Deleted configuration setting:
+Key: TestApp:Settings:NewSetting, Value: Value has been updated!
+```
+
+## Clean up resources
[!INCLUDE [azure-app-configuration-cleanup](../../includes/azure-app-configuration-cleanup.md)] ## Next steps
-In this quickstart, you created a new App Configuration store and learned how to access key-values from a Node.js app.
+This guide showed you how to use the Azure SDK for JavaScript to access key-values in Azure App Configuration.
For additional code samples, visit: > [!div class="nextstepaction"] > [Azure App Configuration client library samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/appconfiguration/app-configuration/samples/v1/javascript)+
+To learn how to use Azure App Configuration with JavaScript apps, go to:
+
+> [!div class="nextstepaction"]
+> [Create a JavaScript app with Azure App Configuration](./quickstart-javascript-provider.md)
azure-arc About Arcdata Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/about-arcdata-extension.md
To access the latest reference documentation:
- [`sql instance-failover-group-arc`](/cli/azure/sql/instance-failover-group-arc) - [`az postgres server-arc`](/cli/azure/postgres/server-arc)
-## Next steps
+## Related content
[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md)
azure-arc Active Directory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-introduction.md
Last updated 10/11/2022
-# Azure Arc-enabled SQL Managed Instance with Active Directory authentication
+# SQL Managed Instance enabled by Azure Arc with Active Directory authentication
-Azure Arc-enabled data services support Active Directory (AD) for Identity and Access Management (IAM). The Arc-enabled SQL Managed Instance uses an existing on-premises Active Directory (AD) domain for authentication.
+Azure Arc-enabled data services support Active Directory (AD) for Identity and Access Management (IAM). SQL Managed Instance enabled by Azure Arc uses an existing on-premises Active Directory (AD) domain for authentication.
-This article describes how to enable Azure Arc-enabled SQL Managed Instance with Active Directory (AD) Authentication. The article demonstrates two possible AD integration modes:
+This article describes how to enable SQL Managed Instance enabled by Azure Arc with Active Directory (AD) Authentication. The article demonstrates two possible AD integration modes:
- Customer-managed keytab (CMK) - Service-managed keytab (SMK)
To enable Active Directory authentication for SQL Server on Linux and Linux cont
- [Deploy a customer-managed keytab AD connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a service-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md) - [Deploy SQL managed instances](deploy-active-directory-sql-managed-instance.md)
-The following diagram shows how to enable Active Directory authentication for Azure Arc-enabled SQL Managed Instance:
+The following diagram shows how to enable Active Directory authentication for SQL Managed Instance enabled by Azure Arc:
![Active Directory Deployment User journey](media/active-directory-deployment/active-directory-user-journey.png)
The following diagram shows how to enable Active Directory authentication for Az
In order to enable Active Directory authentication for SQL Managed Instance, the instance must be deployed in an environment that allows it to communicate with the Active Directory domain.
-To facilitate this, Azure Arc-enabled data services introduces a new Kubernetes-native [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called `Active Directory Connector`. It provides Azure Arc-enabled SQL managed instances running on the same data controller the ability to perform Active Directory authentication.
+To facilitate this, Azure Arc-enabled data services introduces a new Kubernetes-native [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called `Active Directory Connector`. It provides instances running on the same data controller with the ability to perform Active Directory authentication.
## Compare AD integration modes What is the difference between the two Active Directory integration modes?
-To enable Active Directory authentication for Arc-enabled SQL Managed Instance, you need an Active Directory connector where you specify the Active Directory integration deployment mode. The two Active Directory integration modes are:
+To enable Active Directory authentication for SQL Managed Instance enabled by Azure Arc, you need an Active Directory connector where you specify the Active Directory integration deployment mode. The two Active Directory integration modes are:
- Customer-managed keytab - Service-managed keytab
The following section compares these modes.
For either mode, you need a specific Active Directory account, keytab, and Kubernetes secret for each SQL managed instance.
-## Enable Active Directory authentication in Arc-enabled SQL Managed Instance
+## Enable Active Directory authentication
-When you deploy SQL Managed Instance with the intention to enable Active Directory authentication, the deployment needs to reference an Active Directory connector instance to use. Referencing the Active Directory connector in managed instance specification automatically sets up the needed environment in the SQL Managed Instance container for the managed instance to authenticate with Active Directory.
+When you deploy an instance with the intention to enable Active Directory authentication, the deployment needs to reference an Active Directory connector instance to use. Referencing the Active Directory connector in the managed instance specification automatically sets up the needed environment in the instance container to authenticate with Active Directory.
-## Next steps
+## Related content
* [Deploy a customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) * [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md)
-* [Deploy an Azure Arc-enabled SQL Managed Instance in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md)
-* [Connect to Azure Arc-enabled SQL Managed Instance using Active Directory authentication](connect-active-directory-sql-managed-instance.md)
+* [Deploy SQL Managed Instance enabled by Azure Arc in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md)
+* [Connect to SQL Managed Instance enabled by Azure Arc using Active Directory authentication](connect-active-directory-sql-managed-instance.md)
azure-arc Active Directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-prerequisites.md
Last updated 10/11/2022
-# Azure Arc-enabled SQL Managed Instance in Active Directory authentication with system-managed keytab - prerequisites
+# SQL Server enabled by Azure Arc in Active Directory authentication with system-managed keytab - prerequisites
This document explains how to prepare to deploy Azure Arc-enabled data services with Active Directory (AD) authentication. Specifically, the article describes the Active Directory objects you need to configure before the deployment of Kubernetes resources.
Whether you have created a new account for the DSA or are using an existing Acti
- Select **OK** twice more to close open dialog boxes.
-## Next steps
+## Related content
* [Deploy a customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) * [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md)
-* [Deploy an Azure Arc-enabled SQL Managed Instance in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md)
-* [Connect to Azure Arc-enabled SQL Managed Instance using Active Directory authentication](connect-active-directory-sql-managed-instance.md)
+* [Deploy a SQL Managed Instance enabled by Azure Arc in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md)
+* [Connect to SQL Managed Instance enabled by Azure Arc using Active Directory authentication](connect-active-directory-sql-managed-instance.md)
azure-arc Automated Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md
# Tutorial: Automated validation testing
-As part of each commit that builds up Arc-enabled data services, Microsoft runs automated CI/CD pipelines that perform end-to-end tests. These tests are orchestrated via two containers that are maintained alongside the core-product (Data Controller, Azure Arc-enabled SQL Managed Instance & PostgreSQL server). These containers are:
+As part of each commit that builds up Arc-enabled data services, Microsoft runs automated CI/CD pipelines that perform end-to-end tests. These tests are orchestrated via two containers that are maintained alongside the core product (Data Controller, SQL Managed Instance enabled by Azure Arc & PostgreSQL server). These containers are:
- `arc-ci-launcher`: Containing deployment dependencies (for example, CLI extensions), as well as product deployment code (using Azure CLI) for both Direct and Indirect connectivity modes. Once Kubernetes is onboarded with the Data Controller, the container leverages [Sonobuoy](https://sonobuoy.io/) to trigger parallel integration tests. - `arc-sb-plugin`: A [Sonobuoy plugin](https://sonobuoy.io/plugins/) containing [Pytest](https://docs.pytest.org/en/7.1.x/)-based end-to-end integration tests, ranging from simple smoke-tests (deployments, deletes), to complex high-availability scenarios, chaos-tests (resource deletions) etc.
kubectl delete -k arc_data_services/test/launcher/overlays/aks
This cleans up the resource manifests deployed as part of the launcher.
-## Next steps
+## Related content
> [!div class="nextstepaction"] > [Pre-release testing](preview-testing.md)
azure-arc Azure Data Studio Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/azure-data-studio-dashboards.md
The **Diagnose and solve problems** tab on the left, launches the PostgreSQL tro
For Azure support, select the **New support request** tab. This launches the Azure portal in context to the server group. Create an Azure support request from there.
-## Next steps
+## Related content
- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md)
azure-arc Backup Controller Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-controller-database.md
When you deploy Azure Arc data services, the Azure Arc Data Controller is one of the most critical components that is deployed. The functions of the data controller include: - Provision, de-provision and update resources-- Orchestrate most of the activities for Azure Arc-enabled SQL Managed Instance such as upgrades, scale out etc.
+- Orchestrate most of the activities for SQL Managed Instance enabled by Azure Arc, such as upgrades, scale out, etc.
- Capture the billing and usage information of each Arc SQL managed instance. In order to perform above functions, the Data controller needs to store an inventory of all the current Arc SQL managed instances, billing, usage and the current state of all these SQL managed instances. All this data is stored in a database called `controller` within the SQL Server instance that is deployed into the `controldb-0` pod.
Follow these steps to restore the controller database from a backup with new sto
16. Scale the controller ReplicaSet back up to 1 replica using the `kubectl scale` command.
-## Next steps
+## Related content
[Azure Data Studio dashboards](azure-data-studio-dashboards.md)
azure-arc Backup Restore Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-restore-postgresql.md
Update the backup retention period for an Azure Arc-enabled PostgreSQL server:
az postgres server-arc update -n pg01 -k test --retention-days <number of days> --use-k8s ```
-## Next steps
+## Related content
- [Restore Azure Arc-enabled PostgreSQL servers](restore-postgresql.md) - [Scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server.
azure-arc Change Postgresql Port https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/change-postgresql-port.md
In addition, note the value for `primaryEndpoint`.
"primaryEndpoint": "12.345.67.890:866", ```
-## Next steps
+## Related content
- Read about [how to connect to your server group](get-connection-endpoints-and-connection-strings-postgresql-server.md). - Read about how you can configure other aspects of your server group in the section How-to\Manage\Configure & scale section of the documentation.
azure-arc Clean Up Past Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/clean-up-past-installation.md
kubectl delete apiservice v1.sql.arcdata.microsoft.com
kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{namespace} ```
-## Next steps
+## Related content
[Start by creating a Data Controller](create-data-controller-indirect-cli.md)
-Already created a Data Controller? [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+Already created a Data Controller? [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
azure-arc Configure Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-managed-instance.md
Title: Configure Azure Arc-enabled SQL managed instance
-description: Configure Azure Arc-enabled SQL managed instance
+ Title: Configure SQL Managed Instance enabled by Azure Arc
+description: Configure SQL Managed Instance enabled by Azure Arc
Last updated 05/05/2023
-# Configure Azure Arc-enabled SQL managed instance
+# Configure SQL Managed Instance enabled by Azure Arc
-This article explains how to configure Azure Arc-enabled SQL managed instance.
+This article explains how to configure SQL Managed Instance enabled by Azure Arc.
## Configure resources such as cores, memory ### Configure using CLI
-You can update the configuration of Azure Arc-enabled SQL Managed Instances with the CLI. Run the following command to see configuration options.
+To update the configuration of an instance with the CLI, run the following command to see the available configuration options.
```azurecli az sql mi-arc update --help ```
-You can update the available memory and cores for an Azure Arc-enabled SQL managed instance using the following command:
+To update the available memory and cores for an instance, use:
```azurecli az sql mi-arc update --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s
The following example sets the cpu core and memory requests and limits.
az sql mi-arc update --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n sqlinstance1 --k8s-namespace arc --use-k8s ```
-To view the changes made to the Azure Arc-enabled SQL managed instance, you can use the following commands to view the configuration yaml file:
+To view the changes made to the instance, you can use the following command to view the configuration YAML file:
```azurecli az sql mi-arc show -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s
az sql mi-arc update --name sqlmi1 --replicas 2 --k8s-namespace mynamespace --us
## Configure Server options
-You can configure certain server configuration settings for Azure Arc-enabled SQL managed instance either during or after creation time. This article describes how to configure settings like enabling "Ad Hoc Distributed Queries" or "backup compression default" etc.
+You can configure certain server configuration settings for SQL Managed Instance enabled by Azure Arc either at or after creation time. This article describes how to configure settings such as enabling "Ad Hoc Distributed Queries" or "backup compression default".
Currently the following server options can be configured: - Ad Hoc Distributed Queries
azure-arc Configure Security Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-security-postgresql.md
For audit scenarios please configure your server group to use the `pgaudit` exte
SSL is required for client connections. In connection string, the SSL mode parameter should not be disabled. [Form connection strings](get-connection-endpoints-and-connection-strings-postgresql-server.md#form-connection-strings).
-## Next steps
+## Related content
- See [`pgcrypto` extension](https://www.postgresql.org/docs/current/pgcrypto.html) - See [Use PostgreSQL extensions](using-extensions-in-postgresql-server.md)
azure-arc Configure Transparent Data Encryption Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-manually.md
Title: Encrypt a database with transparent data encryption manually in Azure Arc-enabled SQL Managed Instance
-description: How-to guide to turn on transparent data encryption in an Azure Arc-enabled SQL Managed Instance
+ Title: Encrypt a database with transparent data encryption manually in SQL Managed Instance enabled by Azure Arc
+description: How-to guide to turn on transparent data encryption in a SQL Managed Instance enabled by Azure Arc
Last updated 05/22/2022
-# Encrypt a database with transparent data encryption on Azure Arc-enabled SQL Managed Instance
+# Encrypt a database with transparent data encryption on SQL Managed Instance enabled by Azure Arc
-This article describes how to enable transparent data encryption on a database created in an Azure Arc-enabled SQL Managed Instance. In this article, the term *managed instance* refers to a deployment of Azure Arc-enabled SQL Managed Instance.
+This article describes how to enable transparent data encryption on a database created in a SQL Managed Instance enabled by Azure Arc. In this article, the term *managed instance* refers to a deployment of SQL Managed Instance enabled by Azure Arc.
## Prerequisites
-Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created and connect to it.
+Before you proceed with this article, you must have a SQL Managed Instance enabled by Azure Arc resource created and be connected to it.
-- [Create an Azure Arc-enabled SQL Managed Instance](./create-sql-managed-instance.md)-- [Connect to Azure Arc-enabled SQL Managed Instance](./connect-managed-instance.md)
+- [Create a SQL Managed Instance enabled by Azure Arc](./create-sql-managed-instance.md)
+- [Connect to SQL Managed Instance enabled by Azure Arc](./connect-managed-instance.md)
## Turn on transparent data encryption on a database in the managed instance
Similar to above, to restore the credentials, copy them into the container and r
kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" ```
-## Next steps
+## Related content
[Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption)
azure-arc Configure Transparent Data Encryption Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-sql-managed-instance.md
Title: Turn on transparent data encryption in Azure Arc-enabled SQL Managed Instance (preview)
-description: How-to guide to turn on transparent data encryption in an Azure Arc-enabled SQL Managed Instance (preview)
+ Title: Turn on transparent data encryption in SQL Managed Instance enabled by Azure Arc (preview)
+description: How-to guide to turn on transparent data encryption in a SQL Managed Instance enabled by Azure Arc (preview)
Last updated 06/06/2023
-# Enable transparent data encryption on Azure Arc-enabled SQL Managed Instance (preview)
+# Enable transparent data encryption on SQL Managed Instance enabled by Azure Arc (preview)
-This article describes how to enable and disable transparent data encryption (TDE) at-rest on an Azure Arc-enabled SQL Managed Instance. In this article, the term *managed instance* refers to a deployment of Azure Arc-enabled SQL Managed Instance and enabling/disabling TDE will apply to all databases running on a managed instance.
+This article describes how to enable and disable transparent data encryption (TDE) at rest on a SQL Managed Instance enabled by Azure Arc. In this article, the term *managed instance* refers to a deployment of SQL Managed Instance enabled by Azure Arc, and enabling or disabling TDE applies to all databases running on a managed instance.
For more information on TDE, see [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption).
Turning on the TDE feature does the following:
## Prerequisites
-Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created and connect to it.
+Before you proceed with this article, you must have a SQL Managed Instance enabled by Azure Arc resource created and be connected to it.
-- [Create an Azure Arc-enabled SQL Managed Instance](./create-sql-managed-instance.md)-- [Connect to Azure Arc-enabled SQL Managed Instance](./connect-managed-instance.md)
+- [Create a SQL Managed Instance enabled by Azure Arc](./create-sql-managed-instance.md)
+- [Connect to SQL Managed Instance enabled by Azure Arc](./connect-managed-instance.md)
## Limitations
The following limitations apply when you enable automatic TDE:
## Create a managed instance with TDE enabled (Azure CLI)
-The following example creates an Azure Arc-enabled SQL managed instance with one replica, TDE enabled:
+The following example creates a SQL Managed Instance enabled by Azure Arc with one replica, TDE enabled:
```azurecli az sql mi-arc create --name sqlmi-tde --k8s-namespace arc --tde-mode ServiceManaged --use-k8s
When TDE is enabled on Arc-enabled SQL Managed Instance, the data service automa
3. Adds the associated Database Encryption Keys (DEK) on all databases on the managed instance. 4. Enables encryption on all databases on the managed instance.
-You can set Azure Arc-enabled SQL Managed Instance TDE in one of two modes:
+You can set SQL Managed Instance enabled by Azure Arc TDE in one of two modes:
- Service-managed - Customer-managed
Similar to above, to restore the credentials, copy them into the container and r
kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" ```
-## Next steps
+## Related content
[Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption)
azure-arc Connect Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-active-directory-sql-managed-instance.md
Title: Connect to AD-integrated Azure Arc-enabled SQL Managed Instance
-description: Connect to AD-integrated Azure Arc-enabled SQL Managed Instance
+ Title: Connect to AD-integrated SQL Managed Instance enabled by Azure Arc
+description: Connect to AD-integrated SQL Managed Instance enabled by Azure Arc
Last updated 10/11/2022
-# Connect to AD-integrated Azure Arc-enabled SQL Managed Instance
+# Connect to AD-integrated SQL Managed Instance enabled by Azure Arc
-This article describes how to connect to SQL Managed Instance endpoint using Active Directory (AD) authentication. Before you proceed, make sure you have an AD-integrated Azure Arc-enabled SQL Managed Instance deployed already.
+This article describes how to connect to the SQL Managed Instance endpoint using Active Directory (AD) authentication. Before you proceed, make sure you already have an AD-integrated SQL Managed Instance enabled by Azure Arc deployed.
-See [Tutorial – Deploy AD-integrated SQL Managed Instance](deploy-active-directory-sql-managed-instance.md) to deploy Azure Arc-enabled SQL Managed Instance with Active Directory authentication enabled.
+See [Tutorial – Deploy AD-integrated SQL Managed Instance](deploy-active-directory-sql-managed-instance.md) to deploy SQL Managed Instance enabled by Azure Arc with Active Directory authentication enabled.
> [!NOTE] > Ensure that a DNS record for the SQL endpoint is created in Active Directory DNS servers before continuing on this page.
CREATE LOGIN [CONTOSO\admin] FROM WINDOWS;
GO ```
-## Connect to Azure Arc-enabled SQL Managed Instance
+## Connect to SQL Managed Instance enabled by Azure Arc
-From your domain joined Windows-based client machine or a Linux-based domain aware machine, you can use `sqlcmd` utility, or open [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio (ADS)](/azure-data-studio/download-azure-data-studio) to connect to the Azure Arc-enabled SQL Managed Instance using AD authentication.
+From your domain-joined Windows-based client machine or a domain-aware Linux-based machine, you can use the `sqlcmd` utility, or open [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio (ADS)](/azure-data-studio/download-azure-data-studio), to connect to the instance with AD authentication.
A domain-aware Linux-based machine is one where you can use Kerberos authentication through kinit. Such a machine should have its /etc/krb5.conf file set to point to the Active Directory domain (realm) being used, and its /etc/resolv.conf file set so that DNS lookups can be run against the Active Directory domain.
azure-arc Connect Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-managed-instance.md
Title: Connect to Azure Arc-enabled SQL Managed Instance
-description: Connect to Azure Arc-enabled SQL Managed Instance
+ Title: Connect to SQL Managed Instance enabled by Azure Arc
+description: Connect to SQL Managed Instance enabled by Azure Arc
Last updated 07/30/2021
-# Connect to Azure Arc-enabled SQL Managed Instance
+# Connect to SQL Managed Instance enabled by Azure Arc
-This article explains how you can connect to your Azure Arc-enabled SQL Managed Instance.
+This article explains how you can connect to your SQL Managed Instance enabled by Azure Arc.
-## View Azure Arc-enabled SQL Managed Instances
+## View SQL Managed Instance enabled by Azure Arc
-To view the Azure Arc-enabled SQL Managed Instance and the external endpoints use the following command:
+To view the instance and its external endpoints, use the following command:
```azurecli az sql mi-arc list --k8s-namespace <namespace> --use-k8s -o table
Replace the value of the `--destination-port-ranges` parameter below with the po
az network nsg rule create -n db_port --destination-port-ranges 30913 --source-address-prefixes '*' --nsg-name azurearcvmNSG --priority 500 -g azurearcvm-rg --access Allow --description 'Allow port through for db access' --destination-address-prefixes '*' --direction Inbound --protocol Tcp --source-port-ranges '*' ```
-## Next steps
+## Related content
- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards) - [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md)
azure-arc Create Complete Managed Instance Directly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md
When you complete the steps in this article, you will have:
- An Arc-enabled Azure Kubernetes cluster. - A data controller in directly connected mode.-- An instance of Azure Arc-enabled SQL Managed Instance.
+- An instance of SQL Managed Instance enabled by Azure Arc.
- A connection to the instance with Azure Data Studio. Azure Arc allows you to run Azure data services on-premises, at the edge, and in public clouds via Kubernetes. Deploy SQL Managed Instance and PostgreSQL server (preview) data services with Azure Arc. The benefits of using Azure Arc include staying current with constant service patches, elastic scale, self-service provisioning, unified management, and support for disconnected mode.
NAME STATE
<namespace> Ready ```
-## Create an instance of Azure Arc-enabled SQL Managed Instance
+## Deploy SQL Managed Instance enabled by Azure Arc
1. In the portal, locate the resource group. 1. In the resource group, select **Create**.
NAME STATE
## Connect with Azure Data Studio
-To connect with Azure Data Studio, see [Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md).
+To connect with Azure Data Studio, see [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md).
azure-arc Create Complete Managed Instance Indirectly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-indirectly-connected.md
When you complete the steps in this article, you will have:
- A Kubernetes cluster on Azure Kubernetes Services (AKS). - A data controller in indirectly connected mode.-- An instance of Azure Arc-enabled SQL Managed Instance.
+- SQL Managed Instance enabled by Azure Arc.
- A connection to the instance with Azure Data Studio. Use these objects to experience Azure Arc-enabled data services.
NAME STATE
<namespace> Ready ```
-## Create an instance of Azure Arc-enabled SQL Managed Instance
+## Deploy an instance of SQL Managed Instance enabled by Azure Arc
Now, we can create the Azure MI for indirectly connected mode with the following command:
NAME STATE
## Connect to managed instance on Azure Data Studio
-To connect with Azure Data Studio, see [Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md).
+To connect with Azure Data Studio, see [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md).
## Upload usage and metrics to Azure portal
After you are done with the resources you created in this article.
Follow the steps in [Delete data controller in indirectly connected mode](uninstall-azure-arc-data-controller.md#delete-data-controller-in-indirectly-connected-mode).
-## Next steps
+## Related content
> [!div class="nextstepaction"] > [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md).
azure-arc Create Custom Configuration Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-custom-configuration-template.md
From the Azure Arc data controller create screen, select "Configure custom templ
After ensuring the values are correct, click Apply to proceed with the Azure Arc data controller deployment.
-## Next steps
+## Related content
* For direct connectivity mode: [Deploy data controller - direct connect mode (prerequisites)](create-data-controller-direct-prerequisites.md)
azure-arc Create Data Controller Direct Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-azure-portal.md
The progress of Azure Arc data controller deployment can be monitored as follows
- Check if the custom location is created by running ```az customlocation list --resource-group <resourcegroup> -o table``` - Check the status of pod deployment by running ```kubectl get pods -ns <namespace>```
-## Next steps
+## Related information
-[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+[Deploy SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
[Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md)
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
The deployment status of the Arc data controller on the cluster can be monitored
kubectl get datacontrollers --namespace arc ```
-## Next steps
+## Related content
[Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md)
-[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+[Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
kubectl logs <pod name> --namespace arc
If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md).
-## Next steps
+## Related content
- [Create a SQL managed instance using Kubernetes-native tools](./create-sql-managed-instance-using-kubernetes-native-tools.md) - [Create a PostgreSQL server using Kubernetes-native tools](./create-postgresql-server-kubernetes-native-tools.md)
azure-arc Create Postgresql Server Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server-azure-data-studio.md
It is important you set the storage class right at the time you deploy a server
- setting the storage class for backups has been temporarily removed, along with the backup/restore functionality, while we finalize designs and experiences.
-## Next steps
+## Related content
- [Manage your server using Azure Data Studio](manage-postgresql-server-with-azure-data-studio.md) - [Monitor your server](monitor-grafana-kibana.md)
azure-arc Create Postgresql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server.md
You can now connect with psql:
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655 ```
-## Next steps
+## Related content
- Connect to your Azure Arc-enabled PostgreSQL server: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgresql-server.md)
azure-arc Create Sql Managed Instance Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-azure-data-studio.md
Title: Create Azure Arc-enabled SQL Managed Instance using Azure Data Studio
-description: Create Azure Arc-enabled SQL Managed Instance using Azure Data Studio
+ Title: Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio
+description: Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio
Last updated 06/16/2021
-# Create Azure Arc-enabled SQL Managed Instance using Azure Data Studio
+# Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio
This document demonstrates how to install SQL Managed Instance enabled by Azure Arc using Azure Data Studio. [!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)]
-## Create Azure Arc-enabled SQL Managed Instance
+## Steps
1. Launch Azure Data Studio 2. On the Connections tab, select the three dots on the top left and choose **New Deployment...**.
This document demonstrates how to install Azure SQL Managed Instance - Azure Arc
After you select the deploy button, the Azure Arc data controller initiates the deployment. The deployment process takes a few minutes to create the managed instance.
-## Connect to Azure Arc-enabled SQL Managed Instance from Azure Data Studio
+## Connect from Azure Data Studio
-View all the Azure SQL Managed Instances provisioned to this data controller. Use the following command:
+View all the SQL Managed Instances provisioned to this data controller. Use the following command:
```azurecli az sql mi-arc list --k8s-namespace <namespace> --use-k8s
View all the Azure SQL Managed Instances provisioned to this data controller. Us
1. Optionally, select **Add New Server Group** as appropriate 1. Select **Connect** to connect to the Azure SQL Managed Instance - Azure Arc
-## Next Steps
+## Related information
Now try to [monitor your SQL instance](monitor-grafana-kibana.md)
azure-arc Create Sql Managed Instance Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md
kubectl describe pod/<pod name> --namespace arc
If you encounter any troubles with the deployment, please see the [troubleshooting guide](troubleshoot-guide.md).
-## Next steps
+## Related content
[Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md)
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md
Title: Create an Azure Arc-enabled SQL Managed Instance
-description: Deploy Azure Arc-enabled SQL Managed Instance
+ Title: Create a SQL Managed Instance enabled by Azure Arc
+description: Deploy SQL Managed Instance enabled by Azure Arc
Last updated 07/30/2021
-# Create an Azure Arc-enabled SQL Managed Instance
+# Create a SQL Managed Instance enabled by Azure Arc
[!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)]
You can copy the external IP and port number from here and connect to it using y
[!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
-## Next steps
-- [Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md)
+## Related content
+- [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md)
- [Register your instance with Azure and upload metrics and logs about your instance](upload-metrics-and-logs-to-azure-monitor.md) - [Deploy Azure SQL Managed Instance using Azure Data Studio](create-sql-managed-instance-azure-data-studio.md)
azure-arc Delete Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-managed-instance.md
Title: Delete an Azure Arc-enabled SQL Managed Instance
-description: Learn how to delete an Azure Arc-enabled SQL Managed Instance and optionally, reclaim associated Kubernetes persistent volume claims (PVCs).
+ Title: Delete a SQL Managed Instance enabled by Azure Arc
+description: Learn how to delete a SQL Managed Instance enabled by Azure Arc and, optionally, reclaim associated Kubernetes persistent volume claims (PVCs).
Last updated 07/30/2021
-# Delete an Azure Arc-enabled SQL Managed Instance
+# Delete a SQL Managed Instance enabled by Azure Arc
-In this how-to guide, you'll find and then delete an Azure Arc-enabled SQL Managed Instance. Optionally, after deleting managed instances, you can reclaim associated Kubernetes persistent volume claims (PVCs).
+In this how-to guide, you'll find and then delete a SQL Managed Instance enabled by Azure Arc. Optionally, after deleting managed instances, you can reclaim associated Kubernetes persistent volume claims (PVCs).
-1. Find existing Azure Arc-enabled SQL Managed Instances:
+1. Find existing instances:
```azurecli az sql mi-arc list --k8s-namespace <namespace> --use-k8s
By design, deleting a SQL Managed Instance doesn't remove its associated [PVCs](
persistentvolumeclaim "logs-demo-mi-0" deleted ```
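A sketch of the reclaim step, assuming an instance named `demo-mi` in namespace `arc` (the PVC names follow the `data-`/`logs-` prefix pattern shown in the output above):

```console
# List the PVCs the deleted instance left behind
kubectl get pvc --namespace arc

# Remove the data and log volume claims for demo-mi
kubectl delete pvc data-demo-mi-0 logs-demo-mi-0 --namespace arc
```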
-## Next steps
+## Related content
-Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
+Learn more about [Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md)
[Start by creating a Data Controller](create-data-controller-indirect-cli.md)
-Already created a Data Controller? [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+Already created a Data Controller? [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
azure-arc Deploy Active Directory Connector Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-cli.md
# Tutorial – Deploy Active Directory connector using Azure CLI
-This article explains how to deploy an Active Directory (AD) connector using Azure CLI. The AD connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance.
+This article explains how to deploy an Active Directory (AD) connector using Azure CLI. The AD connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc.
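As a preview of the deployment, a hedged sketch of the create command in indirectly connected mode (all values are placeholders; run `az arcdata ad-connector create --help` for the authoritative option list):

```azurecli
az arcdata ad-connector create \
  --name arcadc \
  --k8s-namespace arc \
  --realm CONTOSO.LOCAL \
  --nameserver-addresses 10.10.10.11 \
  --account-provisioning manual \
  --use-k8s
```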
## Prerequisites
az arcdata ad-connector delete --name arcadc --data-controller-name arcdc --reso
-## Next steps
+## Related content
* [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md) * [Tutorial – Deploy AD connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md) * [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
azure-arc Deploy Active Directory Connector Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-portal.md
# Tutorial – Deploy Active Directory connector using Azure portal
-Active Directory (AD) connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instances.
+Active Directory (AD) connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc.
This article explains how to deploy, manage, and delete an Active Directory (AD) connector in directly connected mode from the Azure portal.
To delete multiple AD connectors at one time:
1. Click **Delete** in the management bar to delete the AD connectors that you selected.
-## Next steps
+## Related content
* [Tutorial – Deploy Active Directory connector using Azure CLI](deploy-active-directory-connector-cli.md) * [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md) * [Tutorial – Deploy Active Directory connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md)
azure-arc Deploy Active Directory Postgresql Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-postgresql-server-cli.md
az postgres server-arc update
--use-k8s ```
-## Next steps
+## Related content
- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
azure-arc Deploy Active Directory Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance-cli.md
Title: Deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance using Azure CLI
-description: Explains how to deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance using Azure CLI
+ Title: Deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI
+description: Explains how to deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI
Last updated 10/11/2022
-# Deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance using Azure CLI
+# Deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI
-This article explains how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory (AD) authentication using Azure CLI.
+This article explains how to deploy SQL Managed Instance enabled by Azure Arc with Active Directory (AD) authentication using Azure CLI.
See these articles for specific instructions:
Before you proceed, install the following tools:
For further details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md)
-## Deploy and update Active Directory integrated Azure Arc-enabled SQL Managed Instance
+## Deploy and update Active Directory integrated SQL Managed Instance
### [Customer-managed keytab mode](#tab/Customer-managed-keytab-mode)
-#### Create an Azure Arc-enabled SQL Managed Instance
+#### Create an instance
-To view available options for create command for Azure Arc-enabled SQL Managed Instance, use the following command:
+To view the available options for the create command for SQL Managed Instance enabled by Azure Arc, use the following command:
```azurecli az sql mi-arc create --help
az sql mi-arc create
--resource-group arc-rg ```
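For reference, a sketch of a customer-managed keytab create in directly connected mode (every value is a placeholder, and the AD-specific parameters shown are assumptions to verify against `az sql mi-arc create --help`):

```azurecli
az sql mi-arc create \
  --name contososqlmi \
  --resource-group arc-rg \
  --custom-location private-location \
  --ad-connector-name arcadc \
  --ad-account-name arcuser \
  --keytab-secret arcuser-keytab-secret \
  --primary-dns-name sqlmi.contoso.local \
  --primary-port-number 31433
```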
-#### Update an Azure Arc-enabled SQL Managed Instance
+#### Update an instance
To update a SQL Managed Instance, use `az sql mi-arc update`. See the following examples for different connectivity modes:
az sql mi-arc update
### [System-managed keytab mode](#tab/system-managed-keytab-mode)
-#### Create an Azure Arc-enabled SQL Managed Instance
+#### Create an instance
-To view available options for create command for Azure Arc-enabled SQL Managed Instance, use the following command:
+To view the available options for the create command for SQL Managed Instance enabled by Azure Arc, use the following command:
```azurecli az sql mi-arc create --help
az sql mi-arc create
-## Delete an Azure Arc-enabled SQL Managed Instance in directly connected mode
+## Delete an instance in directly connected mode
To delete a SQL Managed Instance, use `az sql mi-arc delete`. See the following examples for both connectivity modes:
Example:
az sql mi-arc delete --name contososqlmi --resource-group arc-rg ```
+## Related content
--
-## Next steps
* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
-* [Connect to Active Directory integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
+* [Connect to Active Directory integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md).
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
Title: Deploy Active Directory-integrated Azure Arc-enabled SQL Managed Instance
-description: Learn how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory authentication.
+ Title: Deploy Active Directory-integrated SQL Managed Instance enabled by Azure Arc
+description: Learn how to deploy SQL Managed Instance enabled by Azure Arc with Active Directory authentication.
Last updated 10/11/2022
-# Deploy Active Directory-integrated Azure Arc-enabled SQL Managed Instance
+# Deploy Active Directory-integrated SQL Managed Instance enabled by Azure Arc
In this article, learn how to deploy SQL Managed Instance enabled by Azure Arc with Active Directory authentication.
To prepare for deployment in system-managed keytab mode:
## Set properties for Active Directory authentication
-To deploy an Azure Arc-enabled SQL Managed Instance for Azure Arc Active Directory authentication, update your deployment specification file to reference the Active Directory connector instance to use. Referencing the Active Directory connector in the SQL specification file automatically sets up SQL for Active Directory authentication.
+To deploy SQL Managed Instance enabled by Azure Arc for Azure Arc Active Directory authentication, update your deployment specification file to reference the Active Directory connector instance to use. Referencing the Active Directory connector in the SQL specification file automatically sets up SQL for Active Directory authentication.
### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
To support Active Directory authentication on SQL in system-managed keytab mode,
Next, prepare a YAML specification file to deploy SQL Managed Instance. For the mode you use, enter your deployment values in the specification file. > [!NOTE]
-> In the specification file for both modes, the `admin-login-secret` value in the YAML example provides basic authentication. You can use the parameter value to log in to the managed instance, and then create logins for Active Directory users and groups. For more information, see [Connect to Active Directory-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
+> In the specification file for both modes, the `admin-login-secret` value in the YAML example provides basic authentication. You can use the parameter value to log in to the managed instance, and then create logins for Active Directory users and groups. For more information, see [Connect to Active Directory-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md).
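A rough sketch of the Active Directory portion of such a specification, for customer-managed keytab mode (the `apiVersion` and field names are illustrative assumptions; copy the exact schema from the published sample file):

```yaml
apiVersion: sql.arcdata.microsoft.com/v3
kind: SqlManagedInstance
metadata:
  name: sqlmi
  namespace: arc
spec:
  security:
    adminLoginSecret: sqlmi-admin-login-secret   # basic authentication, as noted above
    activeDirectory:
      connector:
        name: adarc                              # the AD connector instance to reference
        namespace: arc
      accountName: sqlmi-account                 # pre-created AD account
      keytabSecret: sqlmi-keytab-secret          # customer-managed keytab mode only
```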
### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
For both customer-managed keytab mode and system-managed keytab mode, deploy the
kubectl apply -f sqlmi.yaml ```
-## Next steps
+## Related content
-- [Connect to Active Directory-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md)
+- [Connect to Active Directory-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md)
- [Upgrade your Active Directory connector](upgrade-active-directory-connector.md)
azure-arc Deploy Customer Managed Keytab Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-customer-managed-keytab-active-directory-connector.md
# Tutorial – Deploy Active Directory (AD) connector in customer-managed keytab mode
-This article explains how to deploy Active Directory (AD) connector in customer-managed keytab mode. The connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance.
+This article explains how to deploy Active Directory (AD) connector in customer-managed keytab mode. The connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc.
## Active Directory connector in customer-managed keytab mode
After submitting the deployment of AD Connector instance, you may check the stat
kubectl get adc -n <namespace> ```
-## Next steps
+## Related content
* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md) * [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
-* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
+* [Connect to AD-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md).
azure-arc Deploy System Managed Keytab Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-system-managed-keytab-active-directory-connector.md
# Tutorial – Deploy Active Directory connector in system-managed keytab mode
-This article explains how to deploy Active Directory connector in system-managed keytab mode. It is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance.
+This article explains how to deploy Active Directory connector in system-managed keytab mode. It is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc.
## Active Directory connector in system-managed keytab mode
After submitting the deployment for the AD connector instance, you may check the
kubectl get adc -n <namespace> ```
-## Next steps
+## Related content
* [Deploy a customer-managed keytab Active Directory connector](deploy-customer-managed-keytab-active-directory-connector.md) * [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
-* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
+* [Connect to AD-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md).
azure-arc Deploy Telemetry Router https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-telemetry-router.md
metricsdc-fm7jh 2/2 Running 0 15h
metricsui-qqgbv 2/2 Running 0 15h ```
-## Next steps
+## Related content
- [Add exporters and pipelines to your telemetry router](adding-exporters-and-pipelines.md)
azure-arc Get Connection Endpoints And Connection Strings Postgresql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/get-connection-endpoints-and-connection-strings-postgresql-server.md
dbname='postgres' user='postgres' host='192.168.1.121' password='{your_password_
host=192.168.1.121; dbname=postgres user=postgres password={your_password_here} port=24276 sslmode=require ```
-## Next steps
+## Related content
- Read about [scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server group
azure-arc Install Arcdata Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/install-arcdata-extension.md
If you already have the extension, you can update it with the following command:
az extension update --name arcdata ```
-## Next steps
+## Related content
[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md)
azure-arc Install Client Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/install-client-tools.md
The following table lists common tools required for creating and managing Azure
<sup>3</sup> For PowerShell, `curl` is an alias to the Invoke-WebRequest cmdlet.
-## Next steps
+## Related content
[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md)
azure-arc Least Privilege https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/least-privilege.md
kubectl logs <pod name> --namespace arc
#kubectl logs control-2g7b1 --namespace arc ```
-## Next steps
+## Related content
You have several additional options for creating the Azure Arc data controller:
azure-arc Limitations Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-managed-instance.md
Title: Limitations of Azure Arc-enabled SQL Managed Instance
-description: Limitations of Azure Arc-enabled SQL Managed Instance
+ Title: Limitations of SQL Managed Instance enabled by Azure Arc
+description: Limitations of SQL Managed Instance enabled by Azure Arc
Last updated 09/07/2021
-# Limitations of Azure Arc-enabled SQL Managed Instance
+# Limitations of SQL Managed Instance enabled by Azure Arc
-This article describes limitations of Azure Arc-enabled SQL Managed Instance.
+This article describes limitations of SQL Managed Instance enabled by Azure Arc.
## Back up and restore
This article describes limitations of Azure Arc-enabled SQL Managed Instance.
### Point-in-time restore (PITR) -- Doesn't support restore from one Azure Arc-enabled SQL Managed Instance to another Azure Arc-enabled SQL Managed Instance. The database can only be restored to the same Arc-enabled SQL Managed Instance where the backups were created.
+- Doesn't support restore from one SQL Managed Instance enabled by Azure Arc to another. The database can only be restored to the same SQL Managed Instance enabled by Azure Arc where the backups were created.
- Renaming databases during point-in-time restore is currently not supported. - Restoring a TDE-enabled database is currently not supported. - A deleted database cannot currently be restored.
This article describes limitations of Azure Arc-enabled SQL Managed Instance.
## Roles and responsibilities
-The roles and responsibilities between Microsoft and its customers differ between Azure PaaS services (Platform As A Service) and Azure hybrid (like Azure Arc-enabled SQL Managed Instance).
+The roles and responsibilities between Microsoft and its customers differ between Azure PaaS (platform as a service) offerings and Azure hybrid services (like SQL Managed Instance enabled by Azure Arc).
### Frequently asked questions
This table summarizes answers to frequently asked questions regarding support ro
__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Customers and their partners own and operate the infrastructure that Azure Arc hybrid services run on, so Microsoft can't provide the SLA.
-## Next steps
+## Related content
- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. - **Create your own.** Follow these steps to create on your own Kubernetes cluster: 1. [Install the client tools](install-client-tools.md) 2. [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md)
- 3. [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+ 3. [Deploy SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
- **Learn** - [Read more about Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
azure-arc Limitations Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-postgresql.md
The table below summarizes answers to frequently asked questions regarding suppo
__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Because with a hybrid service, you or your provider owns the infrastructure.
-## Next steps
+## Related content
- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
azure-arc List Servers Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/list-servers-postgresql.md
NAME STATE READY-PODS PRIMARY-ENDPOINT AGE
postgres01 Ready 5/5 12.345.67.890:5432 12d ```
-## Next steps:
+## Related content
* [Read the article about how to get the connection end points and form the connection strings to connect to your server group](get-connection-endpoints-and-connection-strings-postgresql-server.md) * [Read the article about showing the configuration of an Azure Arc-enabled PostgreSQL server](show-configuration-postgresql-server.md)
azure-arc Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/maintenance-window.md
During setup, specify a duration, recurrence, and start date and time. After the
## Prerequisites
-An Azure Arc-enabled SQL Managed Instance with the [`desiredVersion` property set to `auto`](upgrade-sql-managed-instance-auto.md).
+A SQL Managed Instance enabled by Azure Arc with the [`desiredVersion` property set to `auto`](upgrade-sql-managed-instance-auto.md).
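One way to satisfy this prerequisite, sketched with placeholder names (the linked article has the authoritative steps):

```azurecli
az sql mi-arc upgrade --name <instance> --k8s-namespace <namespace> --desired-version auto --use-k8s
```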
## Limitations
Example:
az arcdata dc update --maintenance-start "2022-04-15T23:00" --k8s-namespace arc --use-k8s ```
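A fuller sketch that also sets duration and recurrence (the values and the extra `--maintenance-*` parameters are illustrative; confirm them with `az arcdata dc update --help`):

```azurecli
az arcdata dc update \
  --maintenance-start "2022-04-15T23:00" \
  --maintenance-duration 3 \
  --maintenance-recurrence "Monthly First Saturday" \
  --maintenance-time-zone US/Pacific \
  --k8s-namespace arc \
  --use-k8s
```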
-## Next steps
+## Related content
[Enable automatic upgrades of a SQL Managed Instance](upgrade-sql-managed-instance-auto.md)
azure-arc Managed Instance Business Continuity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-business-continuity-overview.md
Title: Business continuity overview - Azure Arc-enabled SQL Managed Instance
-description: Overview business continuity for Azure Arc-enabled SQL Managed Instance
+ Title: Business continuity overview - SQL Managed Instance enabled by Azure Arc
+description: Overview business continuity for SQL Managed Instance enabled by Azure Arc
Last updated 01/27/2022
-# Overview: Azure Arc-enabled SQL Managed Instance business continuity
+# Overview: SQL Managed Instance enabled by Azure Arc business continuity
Business continuity is a combination of people, processes, and technology that enables businesses to recover and continue operating in the event of disruptions. In hybrid scenarios, there is a joint responsibility between Microsoft and the customer: the customer owns and manages the on-premises infrastructure while the software is provided by Microsoft. ## Features
-This overview describes the set of capabilities that come built-in with Azure Arc-enabled SQL Managed Instance and how you can leverage them to recover from disruptions.
+This overview describes the set of capabilities that come built-in with SQL Managed Instance enabled by Azure Arc and how you can leverage them to recover from disruptions.
| Feature | Use case | Service Tier | |--|--|| | Point in time restore | Use the built-in point in time restore (PITR) feature to recover from situations such as data corruptions caused by human errors. Learn more about [Point in time restore](.\point-in-time-restore.md) | Available in both General Purpose and Business Critical service tiers|
-| High availability | Deploy the Azure Arc enabled SQL Managed Instance in high availability mode to achieve local high availability. This mode automatically recovers from scenarios such as hardware failures, pod/node failures, and etc. The built-in listener service automatically redirects new connections to another replica while Kubernetes attempts to rebuild the failed replica. Learn more about [high-availability in Azure Arc-enabled SQL Managed Instance](.\managed-instance-high-availability.md) |This feature is only available in the Business Critical service tier. <br> For General Purpose service tier, Kubernetes provides basic recoverability from scenarios such as node/pod crashes. |
-|Disaster recovery| Configure disaster recovery by setting up another Azure Arc-enabled SQL Managed Instance in a geographically separate data center to synchronize data from the primary data center. This scenario is useful for recovering from events when an entire data center is down due to disruptions such as power outages or other events. | Available in both General Purpose and Business Critical service tiers|
+| High availability | Deploy SQL Managed Instance enabled by Azure Arc in high availability mode to achieve local high availability. This mode automatically recovers from scenarios such as hardware failures and pod/node failures. The built-in listener service automatically redirects new connections to another replica while Kubernetes attempts to rebuild the failed replica. Learn more about [high-availability in SQL Managed Instance enabled by Azure Arc](.\managed-instance-high-availability.md) |This feature is only available in the Business Critical service tier. <br> For the General Purpose service tier, Kubernetes provides basic recoverability from scenarios such as node/pod crashes. |
+|Disaster recovery| Configure disaster recovery by setting up another SQL Managed Instance enabled by Azure Arc in a geographically separate data center to synchronize data from the primary data center. This scenario is useful for recovering from events when an entire data center is down due to disruptions such as power outages or other events. | Available in both General Purpose and Business Critical service tiers|
|
-## Next steps
+## Related content
[Learn more about configuring point in time restore](.\point-in-time-restore.md)
-[Learn more about configuring high availability in Azure Arc-enabled SQL Managed Instance](.\managed-instance-high-availability.md)
+[Learn more about configuring high availability in SQL Managed Instance enabled by Azure Arc](.\managed-instance-high-availability.md)
-[Learn more about setting up and configuring disaster recovery in Azure Arc-enabled SQL Managed Instance](.\managed-instance-disaster-recovery.md)
+[Learn more about setting up and configuring disaster recovery in SQL Managed Instance enabled by Azure Arc](.\managed-instance-disaster-recovery.md)
azure-arc Managed Instance Disaster Recovery Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-cli.md
Title: Configure failover group - CLI
-description: Describes how to configure disaster recovery with a failover group for Azure Arc-enabled SQL Managed Instance with the CLI
+description: Describes how to configure disaster recovery with a failover group for SQL Managed Instance enabled by Azure Arc with the CLI
# Configure failover group - CLI
-This article explains how to configure disaster recovery for Azure Arc-enabled SQL Managed Instance with the CLI. Before you proceed, review the information and prerequisites in [Azure Arc-enabled SQL Managed Instance - disaster recovery](managed-instance-disaster-recovery.md).
+This article explains how to configure disaster recovery for SQL Managed Instance enabled by Azure Arc with the CLI. Before you proceed, review the information and prerequisites in [SQL Managed Instance enabled by Azure Arc - disaster recovery](managed-instance-disaster-recovery.md).
[!INCLUDE [failover-group-prerequisites](includes/failover-group-prerequisites.md)]
This article explains how to configure disaster recovery for Azure Arc-enabled S
Follow the steps below if the Azure Arc data services are deployed in `directly` connected mode.
-Once the prerequisites are met, run the below command to set up Azure failover group between the two Azure Arc-enabled SQL managed instances:
+Once the prerequisites are met, run the following command to set up an Azure failover group between the two instances:
```azurecli az sql instance-failover-group-arc create --name <name of failover group> --mi <primary SQL MI> --partner-mi <Partner MI> --resource-group <name of RG> --partner-resource-group <name of partner MI RG>
Once the failover group is set up between the managed instances, different failo
Possible failover scenarios are: -- The Azure Arc-enabled SQL managed instances at both sites are in healthy state and a failover needs to be performed:
+- The instances at both sites are in a healthy state and a failover needs to be performed:
+ perform a manual failover from primary to secondary without data loss by setting `role=secondary` on the primary SQL MI (see the sketch after this list). - Primary site is unhealthy/unreachable and a failover needs to be performed:
- + the primary Azure Arc-enabled SQL managed instance is down/unhealthy/unreachable
- + the secondary Azure Arc-enabled SQL managed instance needs to be force-promoted to primary with potential data loss
- + when the original primary Azure Arc-enabled SQL managed instance comes back online, it will report as `Primary` role and unhealthy state and needs to be forced into a `secondary` role so it can join the failover group and data can be synchronized.
+ + the primary SQL Managed Instance enabled by Azure Arc is down/unhealthy/unreachable
+ + the secondary SQL Managed Instance enabled by Azure Arc needs to be force-promoted to primary with potential data loss
+ + when the original primary SQL Managed Instance enabled by Azure Arc comes back online, it will report as `Primary` role and unhealthy state and needs to be forced into a `secondary` role so it can join the failover group and data can be synchronized.
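A sketch of the first, no-data-loss scenario (the failover group and namespace names are placeholders):

```azurecli
az sql instance-failover-group-arc update --name <name of failover group> --role secondary --k8s-namespace <primary namespace> --use-k8s
```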
## Manual failover (without data loss)
Once you perform a failover from primary site to secondary site, either with or
- Update the connection string for your applications to connect to the newly promoted primary Arc SQL managed instance - If you plan to continue running the production workload off of the secondary site, update the `--license-type` to either `BasePrice` or `LicenseIncluded` to initiate billing for the vCores consumed.
-## Next steps
+## Related content
-- [Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
+- [Overview: SQL Managed Instance enabled by Azure Arc business continuity](managed-instance-business-continuity-overview.md)
- [Configure failover group - portal](managed-instance-disaster-recovery-portal.md)
azure-arc Managed Instance Disaster Recovery Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-portal.md
Title: Disaster recovery - Azure Arc-enabled SQL Managed Instance - portal
-description: Describes how to configure disaster recovery for Azure Arc-enabled SQL Managed Instance in the portal
+ Title: Disaster recovery - SQL Managed Instance enabled by Azure Arc - portal
+description: Describes how to configure disaster recovery for SQL Managed Instance enabled by Azure Arc in the portal
# Configure failover group - portal
-This article explains how to configure disaster recovery for Azure Arc-enabled SQL Managed Instance with Azure portal. Before you proceed, review the information and prerequisites in [Azure Arc-enabled SQL Managed Instance - disaster recovery](managed-instance-disaster-recovery.md).
+This article explains how to configure disaster recovery for SQL Managed Instance enabled by Azure Arc with Azure portal. Before you proceed, review the information and prerequisites in [SQL Managed Instance enabled by Azure Arc - disaster recovery](managed-instance-disaster-recovery.md).
[!INCLUDE [failover-group-prerequisites](includes/failover-group-prerequisites.md)]
After you initiate the change, the portal automatically refreshes the status eve
1. Select **Delete failover group** to proceed. Otherwise select **Cancel**, to not delete the group.
-## Next steps
+## Related content
-- [Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
+- [Overview: SQL Managed Instance enabled by Azure Arc business continuity](managed-instance-business-continuity-overview.md)
- [Configure failover group - CLI](managed-instance-disaster-recovery-cli.md)
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
Title: Disaster recovery - Azure Arc-enabled SQL Managed Instance
-description: Describes disaster recovery for Azure Arc-enabled SQL Managed Instance
+ Title: Disaster recovery - SQL Managed Instance enabled by Azure Arc
+description: Describes disaster recovery for SQL Managed Instance enabled by Azure Arc
Last updated 08/02/2023
-# Azure Arc-enabled SQL Managed Instance - disaster recovery
+# SQL Managed Instance enabled by Azure Arc - disaster recovery
-To configure disaster recovery in Azure Arc-enabled SQL Managed Instance, set up Azure failover groups. This article explains failover groups.
+To configure disaster recovery in SQL Managed Instance enabled by Azure Arc, set up Azure failover groups. This article explains failover groups.
## Background
-Azure failover groups use the same distributed availability groups technology that is in SQL Server. Because Azure Arc-enabled SQL Managed Instance runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups).
+Azure failover groups use the same distributed availability groups technology that is in SQL Server. Because SQL Managed Instance enabled by Azure Arc runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups).
> [!NOTE]
-> - The Azure Arc-enabled SQL Managed Instance in both geo-primary and geo-secondary sites need to be identical in terms of their compute & capacity, as well as service tiers they are deployed in.
+> - The instances in both geo-primary and geo-secondary sites need to be identical in terms of their compute & capacity, as well as service tiers they are deployed in.
> - Distributed availability groups can be set up for either General Purpose or Business Critical service tiers. You can configure failover groups with the CLI or in the portal. For prerequisites and instructions, see the respective content below:
You can configure failover groups with the CLI or in the portal. For prerequi
- [Configure failover group - portal](managed-instance-disaster-recovery-portal.md) - [Configure failover group - CLI](managed-instance-disaster-recovery-cli.md)
-## Next steps
+## Related content
-- [Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
+- [Overview: SQL Managed Instance enabled by Azure Arc business continuity](managed-instance-business-continuity-overview.md)
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md
Title: Azure Arc-enabled SQL Managed Instance high availability-
-description: Learn how to deploy Azure Arc-enabled SQL Managed Instance with high availability.
+ Title: SQL Managed Instance enabled by Azure Arc high availability
+
+description: Learn how to deploy SQL Managed Instance enabled by Azure Arc with high availability.
-# High Availability with Azure Arc-enabled SQL Managed Instance
+# High Availability with SQL Managed Instance enabled by Azure Arc
-Azure Arc-enabled SQL Managed Instance is deployed on Kubernetes as a containerized application. It uses Kubernetes constructs such as stateful sets and persistent storage to provide built-in health monitoring, failure detection, and failover mechanisms to maintain service health. For increased reliability, you can also configure Azure Arc-enabled SQL Managed Instance to deploy with extra replicas in a high availability configuration. Monitoring, failure detection, and automatic failover are managed by the Arc data services data controller. Arc-enabled data service provides this service is provided without user intervention. The service sets up the availability group, configures database mirroring endpoints, adds databases to the availability group, and coordinates failover and upgrade. This document explores both types of high availability.
+SQL Managed Instance enabled by Azure Arc is deployed on Kubernetes as a containerized application. It uses Kubernetes constructs such as stateful sets and persistent storage to provide built-in health monitoring, failure detection, and failover mechanisms to maintain service health. For increased reliability, you can also configure SQL Managed Instance enabled by Azure Arc to deploy with extra replicas in a high availability configuration. Monitoring, failure detection, and automatic failover are managed by the Arc data services data controller. Arc-enabled data services provide this capability without user intervention. The service sets up the availability group, configures database mirroring endpoints, adds databases to the availability group, and coordinates failover and upgrade. This document explores both types of high availability.
-Azure Arc-enabled SQL Managed Instance provides different levels of high availability depending on whether the SQL managed instance was deployed as a *General Purpose* service tier or *Business Critical* service tier.
+SQL Managed Instance enabled by Azure Arc provides different levels of high availability depending on whether the SQL managed instance was deployed as a *General Purpose* service tier or *Business Critical* service tier.
## High availability in General Purpose service tier
To verify the built-in high availability provided by Kubernetes, you can delete
### Prerequisites - Kubernetes cluster must have [shared, remote storage](storage-configuration.md#factors-to-consider-when-choosing-your-storage-configuration) -- An Azure Arc-enabled SQL Managed Instance deployed with one replica (default)
+- A SQL Managed Instance enabled by Azure Arc deployed with one replica (default)
1. View the pods.
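For example, assuming the instance runs in namespace `arc`:

```console
kubectl get pods --namespace arc
```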
After all containers within the pod have recovered, you can connect to the manag
## High availability in Business Critical service tier
-In the Business Critical service tier, in addition to what is natively provided by Kubernetes orchestration, Azure SQL Managed Instance for Azure Arc provides a contained availability group. The contained availability group is built on SQL Server Always On technology. It provides higher levels of availability. Azure Arc-enabled SQL managed instance deployed with *Business Critical* service tier can be deployed with either 2 or 3 replicas. These replicas are always kept in sync with each other. With contained availability groups, any pod crashes or node failures are transparent to the application as there is at least one other pod that has the instance that has all the data from the primary and is ready to take on connections.
+In the Business Critical service tier, in addition to what is natively provided by Kubernetes orchestration, SQL Managed Instance enabled by Azure Arc provides a contained availability group. The contained availability group is built on SQL Server Always On technology. It provides higher levels of availability. An instance deployed in the *Business Critical* service tier can be deployed with either 2 or 3 replicas. These replicas are always kept in sync with each other. With contained availability groups, any pod crashes or node failures are transparent to the application, as there is at least one other pod whose instance has all the data from the primary and is ready to accept connections.
## Contained availability groups An availability group binds one or more user databases into a logical group so that when there is a failover, the entire group of databases fails over to the secondary replica as a single unit. An availability group only replicates data in the user databases but not the data in system databases such as logins, permissions, or agent jobs. A contained availability group includes metadata from system databases such as `msdb` and `master` databases. When logins are created or modified in the primary replica, they're automatically also created in the secondary replicas. Similarly, when an agent job is created or modified in the primary replica, the secondary replicas also receive those changes.
-Azure Arc-enabled SQL Managed Instance takes this concept of contained availability group and adds Kubernetes operator so these can be deployed and managed at scale.
+SQL Managed Instance enabled by Azure Arc takes this concept of a contained availability group and adds a Kubernetes operator so these groups can be deployed and managed at scale.
Capabilities that contained availability groups enable: -- When deployed with multiple replicas, a single availability group named with the same name as the Arc enabled SQL managed instance is created. By default, contained AG has three replicas, including primary. All CRUD operations for the availability group are managed internally, including creating the availability group or joining replicas to the availability group created. Additional availability groups cannot be created in the Azure Arc-enabled SQL Managed Instance.
+- When deployed with multiple replicas, a single availability group with the same name as the Arc-enabled SQL managed instance is created. By default, the contained AG has three replicas, including the primary. All CRUD operations for the availability group are managed internally, including creating the availability group and joining replicas to it. Additional availability groups cannot be created in an instance.
- All databases are automatically added to the availability group, including all user and system databases like `master` and `msdb`. This capability provides a single-system view across the availability group replicas. Notice both `containedag_master` and `containedag_msdb` databases if you connect directly to the instance. The `containedag_*` databases represent the `master` and `msdb` inside the availability group. - An external endpoint is automatically provisioned for connecting to databases within the availability group. This endpoint `<managed_instance_name>-external-svc` plays the role of the availability group listener.
-### Deploy Azure Arc-enabled SQL Managed Instance with multiple replicas using Azure portal
+### Deploy SQL Managed Instance enabled by Azure Arc with multiple replicas using the Azure portal
From Azure portal, on the create Azure Arc-enabled SQL Managed Instance page: 1. Select **Configure Compute + Storage** under Compute + Storage. The portal shows advanced settings.
From Azure portal, on the create Azure Arc-enabled SQL Managed Instance page:
-### Deploy Azure Arc-enabled SQL Managed Instance with multiple replicas using Azure CLI
+### Deploy with multiple replicas using Azure CLI
-When an Azure Arc-enabled SQL Managed Instance is deployed in Business Critical service tier, this enables multiple replicas to be created. The setup and configuration of contained availability groups among those instances is automatically done during provisioning.
+When a SQL Managed Instance enabled by Azure Arc is deployed in the Business Critical service tier, multiple replicas can be created. The setup and configuration of contained availability groups among those instances is done automatically during provisioning.
For instance, the following command creates a managed instance with 3 replicas.
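A minimal sketch, assuming an instance named `sqldemo` in namespace `arc`:

```azurecli
az sql mi-arc create --name sqldemo --k8s-namespace arc --use-k8s --tier BusinessCritical --replicas 3
```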
Additional steps are required to restore a database into an availability group.
### Limitations
-Azure Arc-enabled SQL Managed Instance availability groups has the same limitations as Big Data Cluster availability groups. For more information, see [Deploy SQL Server Big Data Cluster with high availability](/sql/big-data-cluster/deployment-high-availability#known-limitations).
+SQL Managed Instance enabled by Azure Arc availability groups have the same limitations as Big Data Cluster availability groups. For more information, see [Deploy SQL Server Big Data Cluster with high availability](/sql/big-data-cluster/deployment-high-availability#known-limitations).
-## Next steps
+## Related content
-Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
+Learn more about [Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md)
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-overview.md
Title: Azure Arc-enabled SQL Managed Instance Overview
-description: Azure Arc-enabled SQL Managed Instance Overview
+ Title: SQL Managed Instance enabled by Azure Arc Overview
+description: SQL Managed Instance enabled by Azure Arc Overview
Last updated 07/19/2023
-# Azure Arc-enabled SQL Managed Instance Overview
+# SQL Managed Instance enabled by Azure Arc Overview
-Azure Arc-enabled SQL Managed Instance is an Azure SQL data service that can be created on the infrastructure of your choice.
+SQL Managed Instance enabled by Azure Arc is an Azure SQL data service that can be created on the infrastructure of your choice.
## Description
-Azure Arc-enabled SQL Managed Instance has near 100% compatibility with the latest SQL Server database engine, and enables existing SQL Server customers to lift and shift their applications to Azure Arc data services with minimal application and database changes while maintaining data sovereignty. At the same time, SQL Managed Instance includes built-in management capabilities that drastically reduce management overhead.
+SQL Managed Instance enabled by Azure Arc has near 100% compatibility with the latest SQL Server database engine, and enables existing SQL Server customers to lift and shift their applications to Azure Arc data services with minimal application and database changes while maintaining data sovereignty. At the same time, SQL Managed Instance includes built-in management capabilities that drastically reduce management overhead.
To learn more about these capabilities, watch these introductory videos.
-### Azure Arc-enabled SQL Managed Instance - indirect connected mode
+### SQL Managed Instance enabled by Azure Arc - indirect connected mode
> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny]
-### Azure Arc-enabled SQL Managed Instance - direct connected mode
+### SQL Managed Instance enabled by Azure Arc - direct connected mode
> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny]
-## Next steps
+## Related content
-Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
+Learn more about [Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md)
[Azure Arc-enabled Managed Instance high availability](managed-instance-high-availability.md) [Start by creating a Data Controller](create-data-controller-indirect-cli.md)
-Already created a Data Controller? [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+Already created a Data Controller? [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
azure-arc Migrate To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-to-managed-instance.md
Title: Migrate a database from SQL Server to Azure Arc-enabled SQL Managed Instance
-description: Migrate database from SQL Server to Azure Arc-enabled SQL Managed Instance
+ Title: Migrate a database from SQL Server to SQL Managed Instance enabled by Azure Arc
+description: Migrate database from SQL Server to SQL Managed Instance enabled by Azure Arc
Last updated 07/30/2021
-# Migrate: SQL Server to Azure Arc-enabled SQL Managed Instance
+# Migrate: SQL Server to SQL Managed Instance enabled by Azure Arc
This scenario walks you through the steps for migrating a database from a SQL Server instance to Azure SQL managed instance in Azure Arc via two different backup and restore methods. ## Use Azure blob storage
-Use Azure blob storage for migrating to Azure Arc-enabled SQL Managed Instance.
+Use Azure blob storage for migrating to SQL Managed Instance enabled by Azure Arc.
This method uses Azure Blob Storage as a temporary storage location that you can back up to and then restore from.
WITH MOVE 'test' to '/var/opt/mssql/datf'
GO ```
-## Next steps
+## Related content
-[Learn more about Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
+[Learn more about Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md)
[Start by creating a Data Controller](create-data-controller-indirect-cli.md)
-[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+[Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
azure-arc Monitor Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitor-certificates.md
Make sure the services are listed as subject alternative names (SANs) and the ce
- `certificate.pem` containing the base64 encoded certificate - `privatekey.pem` containing the private key
-## Next steps
+## Related content
- Try [Upload metrics and logs to Azure Monitor](upload-metrics-and-logs-to-azure-monitor.md) - Read about Grafana: - [Getting started](https://grafana.com/docs/grafana/latest/getting-started/getting-started)
azure-arc Monitor Grafana Kibana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitor-grafana-kibana.md
Kibana and Grafana web dashboards are provided to bring insight and clarity to t
## Monitor Azure SQL managed instances on Azure Arc
-To access the logs and monitoring dashboards for Azure Arc-enabled SQL Managed Instance, run the following `azdata` CLI command
+To access the logs and monitoring dashboards for SQL Managed Instance enabled by Azure Arc, run the following `az` CLI command
```azurecli az sql mi-arc endpoint list -n <name of SQL instance> --use-k8s
az network nsg rule create -n ports_30777 --nsg-name azurearcvmNSG --priority 60
```
-## Next steps
+## Related content
- Try [Upload metrics and logs to Azure Monitor](upload-metrics-and-logs-to-azure-monitor.md) - Read about Grafana: - [Getting started](https://grafana.com/docs/grafana/latest/getting-started/getting-started)
azure-arc Monitoring Log Analytics Azure Portal Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitoring-log-analytics-azure-portal-managed-instance.md
This article lists additional experiences you can have with Azure Arc-enabled da
[!INCLUDE [azure-arc-common-monitoring](../../../includes/azure-arc-common-monitoring.md)]
-## Next steps
+## Related content
- [Read about the overview of Azure Arc-enabled data services](overview.md)
- [Read about connectivity modes and requirements for Azure Arc-enabled data services](connectivity.md)
azure-arc Monitoring Log Analytics Azure Portal Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitoring-log-analytics-azure-portal-postgresql.md
This article lists additional experiences you can have with Azure Arc-enabled da
[!INCLUDE [azure-arc-common-monitoring](../../../includes/azure-arc-common-monitoring.md)]
-## Next steps
+## Related content
- [Read about the overview of Azure Arc-enabled data services](overview.md)
- [Read about connectivity modes and requirements for Azure Arc-enabled data services](connectivity.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md
For an introduction to how Azure Arc-enabled data services supports your hybrid
## Always current
-Azure Arc-enabled data services such as Azure Arc-enabled SQL managed instance and Azure Arc-enabled PostgreSQL server receive updates on a frequent basis including servicing patches and new features similar to the experience in Azure. Updates from the Microsoft Container Registry are provided to you and deployment cadences are set by you in accordance with your policies. This way, on-premises databases can stay up to date while ensuring you maintain control. Because Azure Arc-enabled data services are a subscription service, you will no longer face end-of-support situations for your databases.
+Azure Arc-enabled data services such as SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL server receive updates on a frequent basis, including servicing patches and new features, similar to the experience in Azure. Updates from the Microsoft Container Registry are provided to you, and you set deployment cadences in accordance with your policies. This way, on-premises databases can stay up to date while ensuring you maintain control. Because Azure Arc-enabled data services are a subscription service, you will no longer face end-of-support situations for your databases.
## Elastic scale
To see the regions that currently support Azure Arc-enabled data services, go to
[!INCLUDE [arc-region-note](../includes/arc-region-note.md)]
-## Next steps
+## Related content
> **Just want to try things out?**
> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
>
->In addition, deploy [Jumpstart ArcBox for DataOps](https://azurearcjumpstart.com/azure_jumpstart_arcbox/DataOps), an easy to deploy sandbox for all things Azure Arc-enabled SQL Managed Instance. ArcBox is designed to be completely self-contained within a single Azure subscription and resource group, which will make it easy for you to get hands-on with all available Azure Arc-enabled technology with nothing more than an available Azure subscription.
>In addition, deploy [Jumpstart ArcBox for DataOps](https://azurearcjumpstart.com/azure_jumpstart_arcbox/DataOps), an easy-to-deploy sandbox for all things SQL Managed Instance enabled by Azure Arc. ArcBox is designed to be completely self-contained within a single Azure subscription and resource group, making it easy to get hands-on with all available Azure Arc-enabled technology using nothing more than an available Azure subscription.
[Install the client tools](install-client-tools.md)
[Plan your Azure Arc data services deployment](plan-azure-arc-data-services.md) (requires installing the client tools first)
-[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first)
+[Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first)
[Create an Azure Database for PostgreSQL server on Azure Arc](create-postgresql-server.md) (requires creation of an Azure Arc data controller first)
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
In order to experience Azure Arc-enabled data services, you'll need to complete
1. [Create Azure Arc data controller in direct connectivity mode (prerequisites)](create-data-controller-direct-prerequisites.md).
- For other ways to create a data controller see the links under [Next steps](#next-steps).
+ For other ways to create a data controller, see the links under [Related content](#related-content).
1. Create data services.
- For example, [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md).
+ For example, [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md).
1. Connect with Azure Data Studio.
Verify that:
```console
kubectl cluster-info
```
-- You have an Azure subscription that resources such as an Azure Arc data controller, Azure Arc-enabled SQL managed instance, or Azure Arc-enabled PostgreSQL server will be projected and billed to.
+- You have an Azure subscription to which resources such as an Azure Arc data controller, SQL Managed Instance enabled by Azure Arc, or Azure Arc-enabled PostgreSQL server will be projected and billed.
- The Microsoft.AzureArcData provider is registered for the subscription where the Azure Arc-enabled data services will be deployed.
After you've prepared the infrastructure, deploy Azure Arc-enabled data services in the following way:
1. Create an Azure Arc-enabled data controller on one of the validated distributions of a Kubernetes cluster.
-1. Create an Azure Arc-enabled SQL managed instance and/or an Azure Arc-enabled PostgreSQL server.
+1. Create a SQL Managed Instance enabled by Azure Arc and/or an Azure Arc-enabled PostgreSQL server.
> [!CAUTION]
> Some of the data services tiers and modes are in [general availability (GA)](release-notes.md), and some are in preview. We recommend that you don't mix GA and preview services on the same data controller. If you mix GA and preview services on the same data controller, you can't upgrade in place. In that scenario, when you want to upgrade, you must remove and re-create the data controller and data services.
When you're creating Azure Arc-enabled data services, regardless of the service
- **Password**: The password for the Kibana/Grafana administrator user.
- **Name of your Kubernetes namespace**: The name of the Kubernetes namespace where you want to create the data controller.
- **Connectivity mode**: Determines the degree of connectivity from your Azure Arc-enabled data services environment to Azure. Your choice of connectivity mode determines the options for deployment methods. For more information, see [Connectivity modes and requirements](./connectivity.md).
-- **Azure subscription ID**: The Azure subscription GUID for where you want to create the data controller resource in Azure. All Azure Arc-enabled SQL managed instances and Azure Arc-enabled PostgreSQL servers are also created in and billed to this subscription.
-- **Azure resource group name**: The name of the resource group where you want to create the data controller resource in Azure. All Azure Arc-enabled SQL managed instances and Azure Arc-enabled PostgreSQL servers are also created in this resource group.
+- **Azure subscription ID**: The Azure subscription GUID for where you want to create the data controller resource in Azure. All deployments of SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL are also created in and billed to this subscription.
+- **Azure resource group name**: The name of the resource group where you want to create the data controller resource in Azure. All deployments of SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL are also created in this resource group.
- **Azure location**: The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc) page for Azure global infrastructure. The metadata and billing information about the Azure resources that are managed by your deployed data controller is stored only in the location in Azure that you specify as the location parameter. If you're deploying in direct connectivity mode, the location parameter for the data controller is the same as the location of your targeted custom location resource.
- **Service principal information**:
  - If you're deploying in **indirect** connectivity mode, you'll need service principal information to upload usage and metrics data. For more information, see the "Assign roles to the service principal" section of [Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md).
As outlined in [Connectivity modes and requirements](./connectivity.md), you can
You can perform all three of these steps in a single step by using the Azure Arc data controller creation wizard in the Azure portal.
-After you've installed the Azure Arc data controller, you can create and access data services such as Azure Arc-enabled SQL Managed Instance or Azure Arc-enabled PostgreSQL server.
+After you've installed the Azure Arc data controller, you can create and access data services such as SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgreSQL server.
## Known limitations
Currently, only one Azure Arc data controller per Kubernetes cluster is supported. However, you can create multiple Arc data services, such as Arc-enabled SQL managed instances and Arc-enabled PostgreSQL servers, that are managed by the same Azure Arc data controller.
-## Next steps
+## Related content
You have several additional options for creating the Azure Arc data controller:
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md
Title: Restore a database in Azure Arc-enabled SQL Managed Instance to a previous point-in-time
-description: Explains how to restore a database to a specific point-in-time on Azure Arc-enabled SQL Managed Instance.
+ Title: Restore a database in SQL Managed Instance enabled by Azure Arc to a previous point-in-time
+description: Explains how to restore a database to a specific point-in-time on SQL Managed Instance enabled by Azure Arc.
# Perform a point-in-time restore
-Use the point-in-time restore (PITR) to create a database as a copy of another database from some time in the past that is within the retention period. This article describes how to do a point-in-time restore of a database in Azure Arc-enabled SQL managed instance.
+Use point-in-time restore (PITR) to create a database as a copy of another database from some time in the past that is within the retention period. This article describes how to do a point-in-time restore of a database in SQL Managed Instance enabled by Azure Arc.
Point-in-time restore can restore a database:
- From an existing database
-- To a new database on the same Azure Arc-enabled SQL managed instance
+- To a new database on the same SQL Managed Instance enabled by Azure Arc
You can restore a database to a point-in-time within a pre-configured retention setting.
-You can check the retention setting for an Azure Arc-enabled SQL managed instance as follows:
+You can check the retention setting for a SQL Managed Instance enabled by Azure Arc as follows:
For **Direct** connected mode:
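As a minimal sketch (the instance and resource group names are placeholders), you can show the instance and inspect the backup settings in the returned JSON:

```azurecli
az sql mi-arc show --name sqlmi1 --resource-group myrg
```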
Currently, point-in-time restore can restore a database:
## Automatic backups
-Azure Arc-enabled SQL managed instance has built-in automatic backups feature enabled. Whenever you create or restore a new database, Azure Arc-enabled SQL managed instance initiates a full backup immediately and schedules differential and transaction log backups automatically. SQL managed instance stores these backups in the storage class specified during the deployment.
+SQL Managed Instance enabled by Azure Arc has the built-in automatic backups feature enabled. Whenever you create or restore a new database, SQL Managed Instance enabled by Azure Arc initiates a full backup immediately and schedules differential and transaction log backups automatically. SQL managed instance stores these backups in the storage class specified during the deployment.
Point-in-time restore enables a database to be restored to a specific point-in-time, within the retention period. To restore a database to a specific point-in-time, Azure Arc-enabled data services applies the backup files in a specific order. For example:
Currently, full backups are taken once a week, differential backups are taken ev
## Retention period
-The default retention period for a new Azure Arc-enabled SQL managed instance is seven days, and can be adjusted with values of 0, or 1-35 days. The retention period can be set during deployment of the SQL managed instance by specifying the `--retention-days` property. Backup files older than the configured retention period are automatically deleted.
+The default retention period for a new SQL Managed Instance enabled by Azure Arc is seven days, and can be adjusted to 0 or to a value from 1 to 35 days. The retention period can be set during deployment of the SQL managed instance by specifying the `--retention-days` property. Backup files older than the configured retention period are automatically deleted.
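For instance, a minimal sketch of setting retention at deployment time, assuming an indirect-mode deployment with placeholder names:

```azurecli
# A sketch; instance name and namespace are placeholders.
az sql mi-arc create --name sqlmi1 --k8s-namespace arc --use-k8s --retention-days 14
```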
## Create a database from a point-in-time using az CLI
az sql midb-arc restore --managed-instance sqlmi1 --name Testdb1 --dest-name myn
1. Edit the properties as follows (see the sketch after this list):
   1. `name:` Unique string for each custom resource (CR). Required by Kubernetes.
- 1. `namespace:` Kubernetes namespace where the Azure Arc-enabled SQL managed instance is.
+ 1. `namespace:` Kubernetes namespace where the instance is.
1. `source: ... name:` Name of the source instance.
1. `source: ... database:` Name of the source database that the restore is applied from.
1. `restorePoint:` Point-in-time for the restore operation in UTC datetime.
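A minimal sketch of such a custom resource, applied with `kubectl`; the task name, namespace, instance and database names, and timestamp are placeholders, and the exact `apiVersion` and destination fields may vary by release:

```console
cat <<EOF | kubectl apply -f -
apiVersion: tasks.sql.arcdata.microsoft.com/v1
kind: SqlManagedInstanceRestoreTask
metadata:
  name: myrestoretask
  namespace: arc
spec:
  source:
    name: sqlmi1
    database: Testdb1
  restorePoint: "2021-10-29T01:42:14Z"
  destination:
    name: sqlmi1
    database: mynewdb
EOF
```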
You can also restore a database to a point-in-time from Azure Data Studio as fol
1. Launch Azure Data Studio.
2. Ensure you have the required Arc extensions as described in [Tools](install-client-tools.md).
3. Connect to the Azure Arc data controller.
-4. Expand the data controller node and right click on the Azure Arc-enabled SQL managed instance and select "Manage". Azure Data Studio launches the SQL managed instance dashboard.
+4. Expand the data controller node, right-click on the instance and select **Manage**. Azure Data Studio launches the SQL managed instance dashboard.
5. Click on the **Backups** tab in the dashboard.
6. You should see a list of databases on the SQL managed instance, their earliest and latest restore time windows, and an icon to initiate the **Restore**.
7. Click on the icon for the database you want to restore from. Azure Data Studio launches a blade towards the right side.
kubectl describe sqlmirestoretask <nameoftask> -n <namespace>
## Configure retention period
-The Retention period for an Azure Arc-enabled SQL managed instance can be reconfigured from their original setting as follows:
+The retention period for a SQL Managed Instance enabled by Azure Arc can be reconfigured from its original setting as follows:
> [!WARNING] > If you reduce the current retention period, you lose the ability to restore to points in time older than the new retention period. Backups that are no longer needed to provide PITR within the new retention period are deleted. If you increase the current retention period, you do not immediately gain the ability to restore to older points in time within the new retention period. You gain that ability over time, as the system starts to retain backups for longer.
az sql mi-arc update --name sqlmi --k8s-namespace arc --use-k8s --retention-da
## Disable automatic backups
-You can disable the built-in automated backups for a specific instance of Azure Arc-enabled SQL managed instance by setting the `--retention-days` property to 0, as follows. The below command applies to both ```direct``` and ```indirect``` modes.
+You can disable the built-in automated backups for a specific instance of SQL Managed Instance enabled by Azure Arc by setting the `--retention-days` property to 0, as follows. The following command applies to both `direct` and `indirect` modes.
> [!WARNING]
-> If you disable Automatic Backups for an Azure Arc-enabled SQL managed instance, then any Automatic Backups configured will be deleted and you lose the ability to do a point-in-time restore. You can change the `retention-days` property to re-initiate automatic backups if needed.
+> If you disable automatic backups for a SQL Managed Instance enabled by Azure Arc, any automatic backups already configured will be deleted and you lose the ability to do a point-in-time restore. You can change the `retention-days` property to re-initiate automatic backups if needed.
```azurecli
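# A sketch; instance name and namespace are placeholders. Setting
# --retention-days to 0 disables automatic backups.
az sql mi-arc update --name sqlmi1 --k8s-namespace arc --use-k8s --retention-days 0
```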
The backups are stored under `/var/opt/mssql/backups/archived/<dbname>/<datetime
## Limitations
-Point-in-time restore to Azure Arc-enabled SQL Managed Instance has the following limitations:
+Point-in-time restore to SQL Managed Instance enabled by Azure Arc has the following limitations:
- Point-in-time restore is a database-level feature, not an instance-level feature. You cannot restore the entire instance with point-in-time restore.
-- You can only restore to the same Azure Arc-enabled SQL managed instance from where the backup was taken.
+- You can only restore to the same SQL Managed Instance enabled by Azure Arc where the backup was taken.
-## Next steps
+## Related content
-[Learn more about Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
+[Learn more about Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md)
[Start by creating a Data Controller](create-data-controller-indirect-cli.md)
-[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+[Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
When you deploy with this method, the most recent pre-release version will alway
At this time, pre-release testing is supported for certain customers and partners that have established agreements with Microsoft. Participants have points of contact on the product engineering team. Email your points of contact with any issues that are found during pre-release testing.
-## Next steps
+## Related content
[Release notes - Azure Arc-enabled data services](release-notes.md)
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/privacy-data-collection-and-reporting.md
This article describes the data that Azure Arc-enabled data services transmit to
Neither Azure Arc-enabled data services nor any of the applicable data services store any customer data. This applies to: -- Azure Arc-enabled SQL Managed Instance
+- SQL Managed Instance enabled by Azure Arc
- Azure Arc-enabled PostgreSQL
## Azure Arc-enabled data services
Azure Arc-enabled data services may use some or all of the following products:
-- Azure Arc-enabled SQL Managed Instance
+- SQL Managed Instance enabled by Azure Arc
- Azure Arc-enabled PostgreSQL
- Azure Data Studio
Every database instance and the data controller itself will be reflected in Azur
There are three resource types:
-- Azure Arc-enabled SQL Managed Instance
+- SQL Managed Instance enabled by Azure Arc
- Azure Arc-enabled PostgreSQL server
- Data controller
In support situations, you may be asked to provide database instance logs, Kuber
|Crash dumps – customer data | Maximum 30-day retention of crash dumps – may contain access control data <br/><br/> Statistics objects, data values within rows, query texts could be in customer crash dumps |
|Crash dumps – personal data | Machine, logins/user names, emails, location information, customer identification – require user consent to be included |
-## Next steps
+## Related content
[Upload usage data to Azure Monitor](upload-usage-data.md)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
For complete release version information, review [Version log](version-log.md#au
### Release notes -- Support for configuring and managing Azure Failover groups between two Azure Arc-enabled SQL managed instances using Azure portal. For details, review [Configure failover group - portal](managed-instance-disaster-recovery-portal.md).
+- Support for configuring and managing Azure failover groups between two instances of SQL Managed Instance enabled by Azure Arc using the Azure portal. For details, review [Configure failover group - portal](managed-instance-disaster-recovery-portal.md).
- Upgraded OpenSearch and OpenSearch Dashboards from 2.7.0 to 2.8.0
- Improvements and examples to [Back up and recover controller database](backup-controller-database.md).
For complete release version information, review [Version log](version-log.md#ju
### Release notes
-- Azure Arc-enabled SQL Managed Instance
+- SQL Managed Instance enabled by Azure Arc
- [Added Azure CLI support to manage transparent data encryption (TDE)](configure-transparent-data-encryption-sql-managed-instance.md).
## May 9, 2023
New for this release:
- Error-handling in the `az` CLI is improved during data controller upgrade
- Fixed a bug to preserve the resource limits for Azure Arc Data Controller where the resource limits could get reset during an upgrade.
-- Azure Arc-enabled SQL Managed Instance
- - General Purpose: Customer-managed TDE encryption keys (preview). For information, review [Enable transparent data encryption on Azure Arc-enabled SQL Managed Instance](configure-transparent-data-encryption-sql-managed-instance.md).
- - Support for customer-managed keytab rotation. For information, review [Rotate Azure Arc-enabled SQL Managed Instance customer-managed keytab](rotate-customer-managed-keytab.md).
- - Support for `sp_configure` to manage configuration. For information, review [Configure Azure Arc-enabled SQL managed instance](configure-managed-instance.md).
+- SQL Managed Instance enabled by Azure Arc
+ - General Purpose: Customer-managed TDE encryption keys (preview). For information, review [Enable transparent data encryption on SQL Managed Instance enabled by Azure Arc](configure-transparent-data-encryption-sql-managed-instance.md).
+ - Support for customer-managed keytab rotation. For information, review [Rotate SQL Managed Instance enabled by Azure Arc customer-managed keytab](rotate-customer-managed-keytab.md).
+ - Support for `sp_configure` to manage configuration. For information, review [Configure SQL Managed Instance enabled by Azure Arc](configure-managed-instance.md).
- Service-managed credential rotation. For information, review [How to rotate service-managed credentials in a managed instance](rotate-sql-managed-instance-credentials.md#how-to-rotate-service-managed-credentials-in-a-managed-instance).
## April 12, 2023
For complete release version information, see [Version log](version-log.md#april
New for this release:
-- Azure Arc-enabled SQL Managed Instance
+- SQL Managed Instance enabled by Azure Arc
- Direct mode for failover groups is generally available in the az CLI
- Schedule the HA orchestrator replicas on different nodes when available
For complete release version information, see [Version log](version-log.md#march
New for this release:
-- Azure Arc-enabled SQL Managed Instance
- - [Rotate Azure Arc-enabled SQL Managed Instance service-managed credentials (preview)](rotate-sql-managed-instance-credentials.md)
+- SQL Managed Instance enabled by Azure Arc
+ - [Rotate SQL Managed Instance enabled by Azure Arc service-managed credentials (preview)](rotate-sql-managed-instance-credentials.md)
- Azure Arc-enabled PostgreSQL
  - Require client connections to use SSL
- - Extended Azure Arc-enabled SQL Managed Instance authentication control plane to PostgreSQL
+ - Extended SQL Managed Instance enabled by Azure Arc authentication control plane to PostgreSQL
## February 14, 2023
Reminders and warnings are implemented in Azure portal, custom resource status,
### SQL Managed Instance
-General Availability of Business Critical service tier. Azure Arc-enabled SQL Managed Instance instances that have a version greater than or equal to v1.7.0 will be charged through Azure billing meters.
+General Availability of Business Critical service tier. Instances of SQL Managed Instance enabled by Azure Arc that are version v1.7.0 or greater are charged through Azure billing meters.
### User experience improvements
General Availability of Business Critical service tier. Azure Arc-enabled SQL M
Added ability to create AD Connectors from Azure portal.
-Preview expected costs for Azure Arc-enabled SQL Managed Instance Business Critical tier when you create new instances.
+Preview expected costs for SQL Managed Instance enabled by Azure Arc Business Critical tier when you create new instances.
#### Azure Data Studio
-Added ability to upgrade Azure Arc-enabled SQL Managed Instances from Azure Data Studio in the indirect and direct connectivity modes.
+Added ability to upgrade instances from Azure Data Studio in the indirect and direct connectivity modes.
-Preview expected costs for Azure Arc-enabled SQL Managed Instance Business Critical tier when you create new instances.
+Preview expected costs for SQL Managed Instance enabled by Azure Arc Business Critical tier when you create new instances.
## May 4, 2022
For complete release version information, see [Version log](version-log.md#april
Not supported because one or more minor versions are skipped. - Updates to open source projects included in Azure Arc-enabled data services to patch vulnerabilities.
-### Azure Arc-enabled SQL Managed Instance
+### SQL Managed Instance enabled by Azure Arc
You can create a maintenance window on the data controller, and if you have SQL managed instances with a desired version set to `auto`, they will be upgraded in the next maintenance windows after a data controller upgrade.
AD authentication connectors can now be set up in an `automatic mode` or *system
Backup and point-in-time-restore when a database has Transparent Data Encryption (TDE) enabled is now supported.
-Change Data Capture (CDC) is now enabled in Azure Arc-enabled SQL Managed Instance.
+Change Data Capture (CDC) is now enabled in SQL Managed Instance enabled by Azure Arc.
Bug fixes for replica scaling in Arc SQL MI Business Critical and database restore when there is insufficient disk space.
For complete release version information, see [Version log](version-log.md#febru
- [ReadWriteMany (RWX) capable storage class](../../aks/concepts-storage.md#azure-disk) is required for backups, for both General Purpose and Business Critical service tiers. Specifying a non-ReadWriteMany storage class will cause the SQL Managed Instance to be stuck in "Pending" status during deployment.
- Billing support when using multiple read replicas.
-For additional information about service tiers, see [High Availability with Azure Arc-enabled SQL Managed Instance (preview)](managed-instance-high-availability.md).
+For additional information about service tiers, see [High Availability with SQL Managed Instance enabled by Azure Arc (preview)](managed-instance-high-availability.md).
### User experience improvements
For complete release version information, see [Version log](version-log.md#janua
### SQL Managed Instance
-- Azure Arc-enabled SQL Managed Instance Business Critical instances can be upgraded from the January release and going forward (preview)
+- SQL Managed Instance enabled by Azure Arc Business Critical instances can be upgraded from the January release and going forward (preview)
- Business critical distributed availability group failover can now be done through a Kubernetes-native experience or the Azure CLI (indirect mode only) (preview)
- Added support for `LicenseType: DisasterRecovery`, which ensures that instances used for Business Critical distributed availability group secondary replicas:
  - Are not billed for
This release introduces directly connected mode availability in the following Az
For complete list, see [Supported regions](overview.md#supported-regions).
-### Azure Arc-enabled SQL Managed Instance
+### SQL Managed Instance enabled by Azure Arc
-- Upgrade instances of Azure Arc-enabled SQL Managed Instance General Purpose in-place
+- Upgrade instances of SQL Managed Instance enabled by Azure Arc General Purpose in-place
- The SQL binaries are updated to a new version
- Direct connected mode deployment of Azure Arc-enabled SQL Managed Instance using Azure CLI
- Point-in-time restore for Azure Arc-enabled SQL Managed Instance is generally available with this release. Currently, point-in-time restore is only supported for General Purpose SQL Managed Instance. Point-in-time restore for Business Critical SQL Managed Instance is still in preview.
az arcdata sql mi-arc update
- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was at the create time of the server group and prevents the user from creating additional extensions. The only workaround available when that happens is to delete the server group and redeploy it.
-#### Azure Arc-enabled SQL Managed Instance
+#### SQL Managed Instance enabled by Azure Arc
- When a pod is re-provisioned, SQL Managed Instance starts a new set of full backups for all databases.
- If your data controller is directly connected, before you can provision a SQL Managed Instance, you must upgrade your data controller to the most recent version first. Attempting to provision a SQL Managed Instance with a data controller imageVersion of `v1.0.0_2021-07-30` will not succeed.
az arcdata sql mi-arc update
This release is published July 30, 2021.
-This release announces general availability for Azure Arc-enabled SQL Managed Instance [General Purpose service tier](service-tiers.md) in indirectly connected mode.
+This release announces general availability for SQL Managed Instance enabled by Azure Arc [General Purpose service tier](service-tiers.md) in indirectly connected mode.
> [!NOTE]
> In addition, this release provides the following Azure Arc-enabled services in preview:
Use the following tools:
- Exporting usage/billing information, metrics, and logs using the command `az arcdata dc export` requires bypassing SSL verification for now. You will be prompted to bypass SSL verification or you can set the `AZDATA_VERIFY_SSL=no` environment variable to avoid prompting. There is no way to configure an SSL certificate for the data controller export API currently.
-#### Azure Arc-enabled SQL Managed Instance
+#### SQL Managed Instance enabled by Azure Arc
- Automated backup and point-in-time restore is in preview.
-- Supports point-in-time restore from an existing database in an Azure Arc-enabled SQL Managed Instance to a new database within the same instance.
+- Supports point-in-time restore from an existing database in a SQL Managed Instance enabled by Azure Arc to a new database within the same instance.
- If the current datetime is given as the point-in-time in UTC format, it resolves to the latest valid restore time and restores the given database up to the last valid transaction.
- A database can be restored to any point-in-time where the transactions took place.
-- To set a specific recovery point objective for an Azure Arc-enabled SQL Managed Instance, edit the SQL Managed Instance CRD to set the `recoveryPointObjectiveInSeconds` property. Supported values are from 300 to 600.
+- To set a specific recovery point objective for a SQL Managed Instance enabled by Azure Arc, edit the SQL Managed Instance CRD to set the `recoveryPointObjectiveInSeconds` property. Supported values are from 300 to 600.
- To disable the automated backups, edit the SQL instance CRD and set the `recoveryPointObjectiveInSeconds` property to 0.
### Known issues
Use the following tools:
- In directly connected mode, upload of usage, metrics, and logs using `az arcdata dc upload` is blocked by design. Usage is automatically uploaded. Upload for a data controller created in indirect connected mode should continue to work.
- Automatic upload of usage data in direct connectivity mode will not succeed if using a proxy via `--proxy-cert <path-t-cert-file>`.
-- Azure Arc-enabled SQL Managed instance and Azure Arc-enabled PostgreSQL server are not GB18030 certified.
+- SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL server are not GB18030 certified.
- Currently, only one Azure Arc data controller per Kubernetes cluster is supported.
#### Data controller
-- When Azure Arc data controller is deleted from Azure portal, validation is done to block the delete if there any Azure Arc-enabled SQL Managed Instances deployed on this Arc data controller. Currently, this validation is applied only when the delete is performed from the Overview page of the Azure Arc data controller.
+- When the Azure Arc data controller is deleted from the Azure portal, validation is done to block the delete if there are any instances deployed on this Arc data controller. Currently, this validation is applied only when the delete is performed from the Overview page of the Azure Arc data controller.
#### Azure Arc-enabled PostgreSQL server
Use the following tools:
- Point-in-time restore is currently not supported on NFS storage.
-#### Azure Arc-enabled SQL Managed Instance
+#### SQL Managed Instance enabled by Azure Arc
##### Can't see resources in portal
-- Portal does not show Azure Arc-enabled SQL Managed Instance resources created in the June release. Delete the SQL Managed Instance resources from the resource group list view. You may need to delete the custom location resource first.
+- Portal does not show SQL Managed Instance enabled by Azure Arc resources created in the June release. Delete the SQL Managed Instance resources from the resource group list view. You may need to delete the custom location resource first.
##### Point-in-time restore (PITR) supportability and limitations:
-- Doesn't support restore from one Azure Arc-enabled SQL Managed Instance to another Azure Arc-enabled SQL Managed Instance. The database can only be restored to the same Azure Arc-enabled SQL Managed Instance where the backups were created.
+- Doesn't support restore from one SQL Managed Instance enabled by Azure Arc to another SQL Managed Instance enabled by Azure Arc. The database can only be restored to the same SQL Managed Instance enabled by Azure Arc where the backups were created.
- Renaming a database is currently not supported for point-in-time restore purposes.
- Currently, there is no CLI command or API to provide the allowed time window information for point-in-time restore. You can provide a time within a reasonable window since the time the database was created, and if the timestamp is valid, the restore works. If the timestamp is not valid, the allowed time window is provided via an error message.
- No support for restoring a TDE-enabled database.
This preview release is published July 13, 2021.
- Kubernetes native deployment templates have been modified for data controller, bootstrapper, & SQL Managed Instance. Update your .yaml templates. [Sample yaml files](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml)
-#### New Azure CLI extension for data controller and Azure Arc-enabled SQL Managed Instance
+#### New Azure CLI extension for data controller and SQL Managed Instance enabled by Azure Arc
This release introduces the `arcdata` extension to the Azure CLI. To add the extension, run the following command:
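```azurecli
az extension add --name arcdata
```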
The OpenDistro security pack has been removed. Log in to Kibana is now done thro
All CRDs have had the version bumped from `v1alpha1` to `v1beta1` for this release. Be sure to delete all CRDs as part of the uninstall process if you have deployed a version of Azure Arc-enabled data services prior to the June 2021 release. The new CRDs deployed with the June 2021 release will have v1beta1 as the version.
-#### Azure Arc-enabled SQL Managed Instance
+#### SQL Managed Instance enabled by Azure Arc
Automated backup service is available and on by default. Keep a close watch on space availability on the backup volume.
This release introduces `az` CLI extensions for Azure Arc-enabled data services.
- From the Azure portal, you can now view the list of PostgreSQL extensions created on your PostgreSQL server.
- From the Azure portal, you can delete Azure Arc-enabled PostgreSQL server groups on a data controller that is directly connected to Azure.
-#### Azure Arc-enabled SQL Managed Instance
+#### SQL Managed Instance enabled by Azure Arc
- Automated backups are now enabled.
- You can now restore a database backup as a new database on the same SQL instance by creating a new custom resource based on the `sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com` custom resource definition (CRD). See documentation for details. There is no command-line interface (`azdata` or `az`), Azure portal, or Azure Data Studio experience for restoring a database yet.
- The version of SQL engine binaries included in this release is aligned to the latest binaries that are deployed globally in Azure SQL Managed Instance (PaaS in Azure). This alignment enables backup/restore back and forth between Azure SQL Managed Instance PaaS and Azure Arc-enabled Azure SQL Managed Instance. More details on the compatibility will be provided later.
- You can now delete Azure Arc SQL Managed Instances from the Azure portal in direct connected mode.
-- You can now configure a SQL Managed Instance to have a pricing tier (`GeneralPurpose`, `BusinessCritical`), license type (`LicenseIncluded`, `BasePrice` (used for AHB pricing), and `developer`. There will be no charges incurred for using Azure Arc-enabled SQL Managed Instance until the General Availability date (publicly announced as scheduled for July 30, 2021) and until you upgrade to the General Availability version of the service.
+- You can now configure a SQL Managed Instance to have a pricing tier (`GeneralPurpose`, `BusinessCritical`) and a license type (`LicenseIncluded`, `BasePrice` (used for AHB pricing), or `developer`). There will be no charges incurred for using SQL Managed Instance enabled by Azure Arc until the General Availability date (publicly announced as scheduled for July 30, 2021) and until you upgrade to the General Availability version of the service.
- The `arcdata` extension for Azure Data Studio now has additional parameters that can be configured for deploying and editing SQL Managed Instances: enable/disable agent, admin login secret, annotations, labels, service annotations, service labels, SSL/TLS configuration settings, collation, language, and trace flags.
- New commands in `azdata`/custom resource tasks for setting up distributed availability groups. These commands are in early stages of preview; documentation will be provided soon.
This release introduces the following features or capabilities:
- Specify storage classes and PostgreSQL extensions when deploying Azure Arc-enabled PostgreSQL server from the Azure portal.
- Reduce the number of worker nodes in your Azure Arc-enabled PostgreSQL server. You can do this operation (known as scale in, as opposed to scale out when you increase the number of worker nodes) from the `azdata` command line.
-#### Azure Arc-enabled SQL Managed Instance
+#### SQL Managed Instance enabled by Azure Arc
-- New [Azure CLI extension](/cli/azure/azure-cli-extensions-overview) for Azure Arc-enabled SQL Managed Instance has the same commands as `az sql mi-arc <command>`. All Azure Arc-enabled SQL Managed Instance commands are located at `az sql mi-arc`. All Arc related `azdata` commands will be deprecated and moved to Azure CLI in a future release.
+- New [Azure CLI extension](/cli/azure/azure-cli-extensions-overview) for SQL Managed Instance enabled by Azure Arc has the same commands as `az sql mi-arc <command>`. All SQL Managed Instance enabled by Azure Arc commands are located at `az sql mi-arc`. All Arc related `azdata` commands will be deprecated and moved to Azure CLI in a future release.
To add the extension:
This section describes the new features introduced or enabled for this release.
- Azure Arc-enabled PostgreSQL server now supports configuring vCore and memory settings per role of the PostgreSQL instance in the server group.
- Azure Arc-enabled PostgreSQL server now supports configuring database engine/server settings per role of the PostgreSQL instance in the server group.
-#### Azure Arc-enabled SQL Managed Instance
+#### SQL Managed Instance enabled by Azure Arc
- Restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group.
- Connect to a secondary read-only endpoint on SQL Managed Instances deployed with three replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
-## Next steps
+## Related content
> **Just want to try things out?**
> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on AKS, AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
azure-arc Reprovision Replica https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reprovision-replica.md
Title: Reprovision replica
-description: This article explains how to rebuild a broken Azure Arc-enabled SQL Managed Instance replica. A replica may break due to storage corruption, for example.
+description: This article explains how to rebuild a broken SQL Managed Instance enabled by Azure Arc replica. A replica may break due to storage corruption, for example.
Last updated 10/05/2022
-# Reprovision replica - Azure Arc-enabled SQL Managed Instance
+# Reprovision replica - SQL Managed Instance enabled by Azure Arc
-This article describes how to provision a new replica to replace an existing replica in Azure Arc-enabled SQL Managed Instance.
+This article describes how to provision a new replica to replace an existing replica in SQL Managed Instance enabled by Azure Arc.
-When you reprovision a replica, you rebuild a new managed instance replica for an Azure Arc-enabled SQL Managed Instance deployment. Use this task to replace a replica that is failing to synchronize, for example, due to corruption of the data on the persistent volumes (PV) for that instance, or due to some recurring SQL issue.
+When you reprovision a replica, you rebuild a new managed instance replica for a SQL Managed Instance enabled by Azure Arc deployment. Use this task to replace a replica that is failing to synchronize, for example, due to corruption of the data on the persistent volumes (PV) for that instance, or due to some recurring SQL issue.
You can reprovision a replica [via `az` CLI](#via-az-cli) or [via `kubectl`](#via-kubectl). You can't reprovision a replica from the Azure portal.
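For example, a hedged sketch of the `az` path, assuming an instance named `sqlmi1` in namespace `arc` whose third replica is broken; replicas are addressed as `<instance-name>-<ordinal>`:

```azurecli
az sql mi-arc reprovision-replica -n sqlmi1-2 -k arc --use-k8s
```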
azure-arc Reserved Capacity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reserved-capacity-overview.md
Title: Save costs with reserved capacity
-description: Learn how to buy Azure Arc-enabled SQL Managed Instance reserved capacity to save costs.
+description: Learn how to buy SQL Managed Instance enabled by Azure Arc reserved capacity to save costs.
Last updated 10/27/2021
-# Reserved capacity - Azure Arc-enabled SQL Managed Instance
+# Reserved capacity - SQL Managed Instance enabled by Azure Arc
-Save money with Azure Arc-enabled SQL Managed Instance by committing to a reservation for Azure Arc services compared to pay-as-you-go prices. With reserved capacity, you make a commitment for Azure Arc-enabled SQL Managed Instance use for one or three years to get a significant discount on the service fee. To purchase reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
+Save money with SQL Managed Instance enabled by Azure Arc by committing to a reservation for Azure Arc services compared to pay-as-you-go prices. With reserved capacity, you make a commitment for SQL Managed Instance enabled by Azure Arc use for one or three years to get a significant discount on the service fee. To purchase reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
You do not need to assign the reservation to a specific database or managed instance. Existing deployments that match the reservation attributes, whether already running or newly deployed, automatically get the benefit. By purchasing a reservation, you commit to usage for the Azure Arc services cost for one or three years. As soon as you buy a reservation, the service charges that match the reservation attributes are no longer charged at the pay-as-you-go rates.
The following list demonstrates a scenario to project how you would reserve reso
1. Sign in to the [Azure portal](https://portal.azure.com).
2. Select **All services** > **Reservations**.
-3. Select **Add** and then in the **Purchase Reservations** pane, select **SQL Managed Instance** to purchase a new reservation for Azure Arc-enabled SQL Managed Instance.
+3. Select **Add** and then in the **Purchase Reservations** pane, select **SQL Managed Instance** to purchase a new reservation for SQL Managed Instance enabled by Azure Arc.
4. Fill in the required fields. Existing SQL Managed Instance resources that match the attributes you select qualify to get the reserved capacity discount. The actual number of databases or managed instances that get the discount depends on the scope and quantity selected. The following table describes required fields.
Reserved capacity pricing is only supported for features and products that are i
If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-## Next steps
+## Related content
The vCore reservation discount is applied automatically to the number of managed instances that match the capacity reservation scope and attributes. You can update the scope of the capacity reservation through the [Azure portal](https://portal.azure.com), PowerShell, Azure CLI, or the API.
-To learn about service tiers for Azure Arc-enabled SQL Managed Instance, see [Azure Arc-enabled SQL Managed Instance service tiers](service-tiers.md).
+To learn about service tiers for SQL Managed Instance enabled by Azure Arc, see [SQL Managed Instance enabled by Azure Arc service tiers](service-tiers.md).
- For information on Azure SQL Managed Instance service tiers for the vCore model, see [Azure SQL Managed Instance - Compute Hardware in the vCore Service Tier](/azure/azure-sql/managed-instance/service-tiers-managed-instance-vcore)
azure-arc Resize Persistent Volume Claim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/resize-persistent-volume-claim.md
This article explains how to resize an existing persistent volume to increase it
> [!NOTE]
> Resizing PVCs using this method only works if your `StorageClass` supports `AllowVolumeExpansion=True`.
-When you deploy an Azure Arc-enabled SQL managed instance, you can configure the size of the persistent volume (PV) for `data`, `logs`, `datalogs`, and `backups`. The deployment creates these volumes based on the values set by parameters `--volume-size-data`, `--volume-size-logs`, `--volume-size-datalogs`, and `--volume-size-backups`. When these volumes become full, you will need to resize the `PersistentVolumes`. Azure Arc-enabled SQL Managed Instance is deployed as part of a `StatefulSet` for both General Purpose or Business Critical service tiers. Kubernetes supports automatic resizing for persistent volumes but not for volumes attached to `StatefulSet`.
+When you deploy a SQL Managed Instance enabled by Azure Arc, you can configure the size of the persistent volume (PV) for `data`, `logs`, `datalogs`, and `backups`. The deployment creates these volumes based on the values set by the parameters `--volume-size-data`, `--volume-size-logs`, `--volume-size-datalogs`, and `--volume-size-backups`. When these volumes become full, you will need to resize the `PersistentVolumes`. SQL Managed Instance enabled by Azure Arc is deployed as part of a `StatefulSet` for both the General Purpose and Business Critical service tiers. Kubernetes supports automatic resizing for persistent volumes but not for volumes attached to a `StatefulSet`.
Following are the steps to resize persistent volumes attached to `StatefulSet`:
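At its core, the procedure patches the PVC's requested storage, which Kubernetes permits even though the `StatefulSet` volume template can't be mutated. A minimal sketch, with placeholder PVC name, namespace, and size:

```console
# A sketch; PVC name, namespace, and target size are placeholders.
kubectl patch pvc data-sqlmi1-0 -n arc --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```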
azure-arc Resource Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/resource-sync.md
https://management.azure.com/subscriptions/{{subscription}}/resourcegroups/{{res
- Resource sync rule does not project Azure Arc Active Directory connector
- Resource sync rule does not project Azure Arc Instance Failover Groups
-## Next steps
+## Related content
[Create Azure Arc data controller in direct connectivity mode using CLI](create-data-controller-direct-cli.md)
azure-arc Restore Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/restore-postgresql.md
For details about all the parameters available for restore, review the output of:
```azurecli
az postgres server-arc restore --help
```
-## Next steps
+## Related content
- [Configure automated backup - Azure Arc-enabled PostgreSQL servers](backup-restore-postgresql.md)
- [Scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server.
azure-arc Rotate Customer Managed Keytab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-customer-managed-keytab.md
Last updated 05/05/2023
-# Rotate Azure Arc-enabled SQL Managed Instance customer-managed keytab
+# Rotate SQL Managed Instance enabled by Azure Arc customer-managed keytab
-This article describes how to rotate customer-managed keytabs for Azure Arc-enabled SQL Managed Instance. These keytabs are used to enable Active Directory logins for the managed instance.
+This article describes how to rotate customer-managed keytabs for SQL Managed Instance enabled by Azure Arc. These keytabs are used to enable Active Directory logins for the managed instance.
## Prerequisites
-Before you proceed with this article, you must have an active directory connector in customer-managed keytab mode and an Azure Arc-enabled SQL Managed Instance created.
+Before you proceed with this article, you must have an Active Directory connector in customer-managed keytab mode and a SQL Managed Instance enabled by Azure Arc created.
- [Deploy a customer-managed keytab Active Directory connector](./deploy-customer-managed-keytab-active-directory-connector.md)
-- [Deploy and connect an Azure Arc-enabled SQL Managed Instance](./deploy-active-directory-sql-managed-instance.md)
+- [Deploy and connect a SQL Managed Instance enabled by Azure Arc](./deploy-active-directory-sql-managed-instance.md)
## How to rotate customer-managed keytabs in a managed instance
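In outline, rotation amounts to generating a new keytab for the Active Directory account and replacing the Kubernetes secret that the managed instance references. A minimal sketch; the secret name, namespace, and data key `keytab` are assumptions, so match them to your deployment:

```console
# A sketch; the secret name must match the one referenced by the SQL MI
# spec, and the data key is assumed to be "keytab".
kubectl create secret generic arcsqlmi-keytab-secret -n arc \
  --from-file=keytab=./updated.keytab --dry-run=client -o yaml | kubectl apply -f -
```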
Additionally, after getting the kerberos Ticket-Granting Ticket (TGT) by using `
We can also enable debug logging for the `kinit` command by running the following: `KRB5_TRACE=/dev/stdout kinit -V arcsqlmi@CONTOSO.COM`. This increases the verbosity and outputs the logs to stdout as the command is being executed.
-## Next steps
+## Related content
- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)
- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md)
azure-arc Rotate Sql Managed Instance Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-sql-managed-instance-credentials.md
Last updated 03/06/2023
-# Rotate Azure Arc-enabled SQL Managed Instance service-managed credentials (preview)
+# Rotate SQL Managed Instance enabled by Azure Arc service-managed credentials (preview)
-This article describes how to rotate service-managed credentials for Azure Arc-enabled SQL Managed Instance. Arc data services generate various service-managed credentials like certificates and SQL logins used for Monitoring, Backup/Restore, High Availability etc. These credentials are considered custom resource credentials managed by Azure Arc data services.
+This article describes how to rotate service-managed credentials for SQL Managed Instance enabled by Azure Arc. Arc data services generate various service-managed credentials, like certificates and SQL logins, used for monitoring, backup/restore, high availability, and so on. These credentials are considered custom resource credentials managed by Azure Arc data services.
Service-managed credential rotation is a user-triggered operation that you initiate during a security issue or when periodic rotation is required for compliance.
There's a brief moment of downtime when the failover occurs.
## Prerequisites
-Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created.
+Before you proceed with this article, you must have a SQL Managed Instance enabled by Azure Arc resource created.
-- [An Azure Arc-enabled SQL Managed Instance created](./create-sql-managed-instance.md)
+- [A SQL Managed Instance enabled by Azure Arc created](./create-sql-managed-instance.md)
## How to rotate service-managed credentials in a managed instance
kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{
Triggering rollback is the same as triggering a rotation of service-managed credentials, except that the target generation is the previous generation, and it doesn't generate a new generation or new credentials.
-## Next steps
+## Related content
- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)
- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md)
azure-arc Rotate User Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-user-tls-certificate.md
Title: Rotate user-provided TLS certificate in indirectly connected Azure Arc-enabled SQL Managed Instance
-description: Rotate user-provided TLS certificate in indirectly connected Azure Arc-enabled SQL Managed Instance
+ Title: Rotate user-provided TLS certificate in indirectly connected SQL Managed Instance enabled by Azure Arc
+description: Rotate user-provided TLS certificate in indirectly connected SQL Managed Instance enabled by Azure Arc
Last updated 12/15/2021
-# Rotate certificate Azure Arc-enabled SQL Managed Instance (indirectly connected)
+# Rotate certificate - SQL Managed Instance enabled by Azure Arc (indirectly connected)
-This article describes how to rotate user-provided Transport Layer Security(TLS) certificate for Azure Arc-enabled SQL Managed Instances in indirectly connected mode using Azure CLI or `kubectl` commands.
+This article describes how to rotate a user-provided Transport Layer Security (TLS) certificate for SQL Managed Instance enabled by Azure Arc in indirectly connected mode using the Azure CLI or `kubectl` commands.
Examples in this article use OpenSSL. [OpenSSL](https://www.openssl.org/) is an open-source command-line toolkit for general-purpose cryptography and secure communication.
## Prerequisites
* [Install the openssl utility](https://www.openssl.org/source/)
-* An Azure Arc-enabled SQL Managed Instance in indirectly connected mode
+* A SQL Managed Instance enabled by Azure Arc in indirectly connected mode
## Generate certificate request using `openssl`
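A minimal `openssl` sketch of producing the private key and certificate signing request; the file names and subject are placeholders:

```console
# A sketch; replace the subject with your instance's DNS name.
openssl req -newkey rsa:2048 -nodes -keyout sqlmi-key.pem \
  -out sqlmi-req.pem -subj "/CN=sql01.contoso.local"
```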
You can use the following kubectl command to apply this setting:
```console
kubectl apply -f <my-sql-mi-yaml-file>
```
-## Next steps
+## Related content
- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)
- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md)
azure-arc Scale Up Down Postgresql Server Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/scale-up-down-postgresql-server-using-cli.md
or
az postgres server-arc edit -n postgres01 --cores-request '' --cores-limit '' --k8s-namespace arc --use-k8s ```
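For example, to set values rather than clear them, the same command accepts numeric requests and limits (the values below are illustrative):

```bash
# Request 2 vCores and cap usage at 4 vCores for the postgres01 server group.
az postgres server-arc edit -n postgres01 \
  --cores-request 2 --cores-limit 4 \
  --k8s-namespace arc --use-k8s
```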
-## Next steps
+## Related content
- [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)
azure-arc Service Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/service-tiers.md
Title: Azure Arc-enabled SQL Managed Instance service tiers
-description: Explains the service tiers available for Azure Arc-enabled SQL Managed Instance deployments.
+ Title: SQL Managed Instance enabled by Azure Arc service tiers
+description: Explains the service tiers available for SQL Managed Instance enabled by Azure Arc deployments.
Last updated 07/19/2023
-# Azure Arc-enabled SQL Managed Instance service tiers
+# SQL Managed Instance enabled by Azure Arc service tiers
-As part of the family of Azure SQL products, Azure Arc-enabled SQL Managed Instance is available in two [vCore](/azure/azure-sql/database/service-tiers-vcore) service tiers.
+As part of the family of Azure SQL products, SQL Managed Instance enabled by Azure Arc is available in two [vCore](/azure/azure-sql/database/service-tiers-vcore) service tiers.
- **General Purpose** is a budget-friendly tier designed for most workloads with common performance and availability features. - **Business Critical** tier is designed for performance-sensitive workloads with higher availability features.
azure-arc Show Configuration Postgresql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/show-configuration-postgresql-server.md
az postgres server-arc show -n postgres01 --k8s-namespace arc --use-k8s
Returns the information in a format and content similar to the one returned by kubectl. Use the tool of your choice to interact with the system.
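A rough `kubectl` equivalent for inspecting the same resource, assuming the PostgreSQL custom resource's plural name is `postgresqls` (an assumption; verify with `kubectl api-resources`):

```bash
# Inspect the server group directly through the Kubernetes API.
# 'postgresqls' is an assumed CRD plural name; confirm it first with:
#   kubectl api-resources | grep -i postgres
kubectl get postgresqls postgres01 --namespace arc -o yaml
```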
-## Next steps
+## Related content
- [Read about how to scale up/down (increase or reduce memory and/or vCores) a server group](scale-up-down-postgresql-server-using-cli.md) - [Read about storage configuration](storage-configuration.md)
azure-arc Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/support-policy.md
# Azure Arc-enabled data services support policy
-This article describes the support policies and troubleshooting boundaries for Azure Arc-enabled data services. This article specifically explains support for Azure Arc data controller and Azure Arc-enabled SQL Managed Instance.
+This article describes the support policies and troubleshooting boundaries for Azure Arc-enabled data services. This article specifically explains support for Azure Arc data controller and SQL Managed Instance enabled by Azure Arc.
## Support policy - Azure Arc-enabled data services follow [Microsoft Modern Lifecycle Policy](https://support.microsoft.com/help/30881/modern-lifecycle-policy).
This article describes the support policies and troubleshooting boundaries for A
## Support versions
-Microsoft supports Azure Arc-enabled data services for one year from the date of the release of that specific version. This support applies to the data controller, and any supported data services. For example, this support also applies to Azure Arc-enabled SQL Managed Instance.
+Microsoft supports Azure Arc-enabled data services for one year from the date of the release of that specific version. This support applies to the data controller, and any supported data services. For example, this support also applies to SQL Managed Instance enabled by Azure Arc.
For descriptions, and instructions on how to identify a version release date, see [Supported versions](upgrade-overview.md#supported-versions).
To plan updates, see [Upgrade Azure Arc-enabled data services](upgrade-overview.
## Support by components
-Microsoft supports Azure Arc-enabled data services, including the data controller, and the data services (like Azure Arc-enabled SQL Managed Instance) that we provide. Arc-enabled data services require a Kubernetes distribution deployed in a customer operated environment. Microsoft does not provide support for the Kubernetes distribution. Support for the environment and hardware that hosts Kubernetes is provided by the operator of the environment and hardware.
+Microsoft supports Azure Arc-enabled data services, including the data controller, and the data services (like SQL Managed Instance enabled by Azure Arc) that we provide. Arc-enabled data services require a Kubernetes distribution deployed in a customer operated environment. Microsoft does not provide support for the Kubernetes distribution. Support for the environment and hardware that hosts Kubernetes is provided by the operator of the environment and hardware.
Microsoft has worked with industry partners to validate specific distributions for Azure Arc-enabled data services. You can see a list of partners and validated solutions in [Azure Arc-enabled data services Kubernetes validation](validation-program.md).
azure-arc Supported Versions Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/supported-versions-postgresql.md
In this example, this output indicates there is one CRD related to PostgreSQL: `
Come back and read this article. It's updated as appropriate.
-## Next steps:
+## Related content
- [Read about creating Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) - [Read about getting a list of the Azure Arc-enabled PostgreSQL servers created in your Arc Data Controller](list-servers-postgresql.md)
azure-arc Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-guide.md
If you see a message about insufficient CPU or memory, you should add more nodes
[View logs and metrics using Kibana and Grafana](monitor-grafana-kibana.md)
-## Next steps
+## Related content
[Scenario: View inventory of your instances in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md)
azure-arc Troubleshoot Managed Instance Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-managed-instance-configuration.md
Title: Troubleshoot configuration - Azure Arc-enabled SQL Managed Instance
-description: Describes how to troubleshoot configuration. Includes steps to provide configuration files for Azure Arc-enabled SQL Managed Instance Azure Arc-enabled data services
+ Title: Troubleshoot configuration - SQL Managed Instance enabled by Azure Arc
+description: Describes how to troubleshoot configuration. Includes steps to provide configuration files for SQL Managed Instance enabled by Azure Arc in Azure Arc-enabled data services
For Arc SQL Managed Instance, the supported configuration files that you can ove
- `mssql.json`: `/var/run/config/mssql/mssql.json` - `krb5.conf`: `/etc/krb5.conf`
-## Next steps
+## Related content
[Get logs to troubleshoot Azure Arc-enabled data services](troubleshooting-get-logs.md)
azure-arc Troubleshoot Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-managed-instance.md
Title: Troubleshoot connection to failover group - Azure Arc-enabled SQL Managed Instance
+ Title: Troubleshoot connection to failover group - SQL Server Managed Instance enabled by Azure Arc
description: Describes how to troubleshoot issues with connections to failover group resources in Azure Arc-enabled data services
Last updated 03/15/2023
-# Troubleshoot Azure Arc-enabled SQL Managed Instance deployments
+# Troubleshoot SQL Server Managed Instance enabled by Azure Arc deployments
This article identifies potential issues and describes how to diagnose root causes for these issues in deployments of Azure Arc-enabled data services.
-## Connection to Azure Arc-enabled SQL Managed Instance failover group
+## Connection to SQL Server Managed Instance enabled by Azure Arc failover group
This section describes how to troubleshoot issues connecting to a failover group.
On each side, there are two replicas for one failover group. Check the value of
If one of `connectedState` isn't equal to `CONNECTED`, see the instructions under [Check parameters](#check-parameters).
-If one of `synchronizationState` isn't equal to `HEALTHY`, focus on the instance which `synchronizationState` isn't equal to `HEALTHY`". Refer to [Can't connect to Arc-enabled SQL Managed Instance](#cant-connect-to-arc-enabled-sql-managed-instance) for how to debug.
+If one of `synchronizationState` isn't equal to `HEALTHY`, focus on the instance whose `synchronizationState` isn't `HEALTHY`. Refer to [Can't connect to SQL Server Managed Instance enabled by Azure Arc](#cant-connect-to-sql-server-managed-instance-enabled-by-azure-arc).
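To inspect these fields yourself, you can read the failover group custom resource; a sketch assuming the resource is exposed as `fog` (an assumption; verify with `kubectl api-resources`):

```bash
# Show failover group status, including connectedState and synchronizationState
# for each replica. 'fog' is an assumed resource name.
kubectl get fog --namespace $nameSpace -o yaml
```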
### Check parameters
kubectl exec -ti -n $nameSpace $sqlmiName-0 -c arc-sqlmi -- /opt/mssql-tools/bin
If SQL Server can use the external TDS endpoint, there is a good chance it can reach the external mirroring endpoint, because both are defined and activated in the same service, specifically `$sqlmiName-external-svc`.
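For illustration, a completed form of the `kubectl exec` test above might invoke `sqlcmd` from that tools path to confirm the TDS endpoint answers (user, password, and query are placeholders):

```bash
# Run a trivial query from inside the arc-sqlmi container to confirm that
# SQL Server answers on the TDS endpoint. Credentials are placeholders.
kubectl exec -ti -n $nameSpace $sqlmiName-0 -c arc-sqlmi -- \
  /opt/mssql-tools/bin/sqlcmd -S localhost -U sqluser -P '<password>' \
  -Q "SELECT @@SERVERNAME"
```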
-## Can't connect to Arc-enabled SQL Managed Instance
+## Can't connect to SQL Server Managed Instance enabled by Azure Arc
-This section identifies specific steps you can take to troubleshoot connections to Azure Arc-enabled SQL managed instances.
+This section identifies specific steps you can take to troubleshoot connections to SQL Managed Instance enabled by Azure Arc.
> [!NOTE]
-> You can't connect to an Azure Arc-enabled SQL Managed Instance if the instance license type is `DisasterRecovery`.
+> You can't connect to a SQL Managed Instance enabled by Azure Arc if the instance license type is `DisasterRecovery`.
### Check the managed instance status
kubectl -n $nameSpace cp $sqlmiName-ha-0:/var/log $localFolder/$sqlmiName-ha-0/
```
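Before copying logs, a quick status check on the custom resource shows whether the instance reports Ready (output columns vary by release):

```bash
# List SQL managed instances in the namespace and their reported state.
kubectl get sqlmi --namespace $nameSpace
```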
-## Next steps
+## Related content
[Get logs to troubleshoot Azure Arc-enabled data services](troubleshooting-get-logs.md)
azure-arc Uninstall Azure Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/uninstall-azure-arc-data-controller.md
This article describes how to delete Azure Arc-enabled data service resources fr
> [!WARNING] > When you delete resources as described in this article, these actions are irreversible.
-Deploying Azure Arc-enabled data services involves deploying an Azure Arc data controller and instances of data services Azure Arc-enabled SQL Managed Instance or Azure Arc-enabled PostgresQL server. Deployment creates several artifacts, such as:
+Deploying Azure Arc-enabled data services involves deploying an Azure Arc data controller and instances of data services such as SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgreSQL server. Deployment creates several artifacts, such as:
- Custom Resource Definitions (CRDs) - Cluster roles - Cluster role bindings
In directly connected mode, there are additional artifacts such as:
## Before
-Before you delete a resource such as Azure Arc-enabled SQL Managed Instance or data controller, ensure you complete the following actions first:
+Before you delete a resource such as SQL Managed Instance enabled by Azure Arc or data controller, ensure you complete the following actions first:
1. For an indirectly connected data controller, export and upload the usage information to Azure for accurate billing calculation by following the instructions described in [Upload billing data to Azure - Indirectly connected mode](view-billing-data-in-azure.md#upload-billing-data-to-azureindirectly-connected-mode). 2. Ensure all the data services that have been created on the data controller are uninstalled as described in:
- - [Delete Azure Arc-enabled SQL Managed Instance](delete-managed-instance.md)
+ - [Delete SQL Managed Instance enabled by Azure Arc](delete-managed-instance.md)
- [Delete an Azure Arc-enabled PostgreSQL server](delete-postgresql-hyperscale-server-group.md).
-After deleting any existing instances of Azure Arc-enabled SQL Managed Instances and/or Azure Arc-enabled PostgreSQL server, delete the data controller using one of the appropriate method for connectivity mode.
+After deleting any existing instances of SQL Managed Instance enabled by Azure Arc and/or Azure Arc-enabled PostgreSQL server, delete the data controller using the appropriate method for your connectivity mode.
> [!Note] > If you deployed the data controller in directly connected mode then follow the steps to:
azure-arc Update Service Principal Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/update-service-principal-credentials.md
YYYY-MM-DD HH:MM:SS.mmmm | ERROR | [AzureUpload] Upload task exception: A config
-## Next steps
+## Related content
[Create service principal](upload-metrics-and-logs-to-azure-monitor.md#create-service-principal)
azure-arc Upgrade Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-active-directory-connector.md
Title: Upgrade Active Directory connector for Azure SQL Managed Instance direct or indirect mode connected to Azure Arc
-description: The article describes how to upgrade an active directory connector for direct or indirect mode connected to Azure Arc-enabled SQL Managed Instance
+description: The article describes how to upgrade an Active Directory connector for direct or indirect mode connected to SQL Managed Instance enabled by Azure Arc
azure-arc Upgrade Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-overview.md
Upgrades are limited to the next incremental minor or major version. For example
## Upgrade order
-Upgrade the data controller before you upgrade any data service. Azure Arc-enabled SQL Managed Instance is an example of a data service.
+Upgrade the data controller before you upgrade any data service. SQL Managed Instance enabled by Azure Arc is an example of a data service.
A data controller may be up to one version ahead of a data service. A data service major version may not be ahead of the data controller, or more than one version behind it.
azure-arc Upgrade Sql Managed Instance Auto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-auto.md
# Enable automatic upgrades of an Azure SQL Managed Instance for Azure Arc
-You can set the `--desired-version` parameter of the `spec.update.desiredVersion` property of an Azure Arc-enabled SQL Managed Instance to `auto` to ensure that your managed instance will be upgraded after a data controller upgrade, with no interaction from a user. This setting simplifies management, as you don't need to manually upgrade every instance for every release.
+You can set the `--desired-version` parameter (the `spec.update.desiredVersion` property) of a SQL Managed Instance enabled by Azure Arc to `auto` to ensure that your managed instance is upgraded after a data controller upgrade, with no interaction from a user. This setting simplifies management, as you don't need to manually upgrade every instance for every release.
After setting the `--desired-version` parameter (the `spec.update.desiredVersion` property) to `auto` the first time, the Azure Arc-enabled data service begins an upgrade of the managed instance to the newest image version within five minutes, or within the next [Maintenance Window](maintenance-window.md). Thereafter, within five minutes of a data controller being upgraded, or within the next maintenance window, the managed instance begins the upgrade process. This setting works for both directly connected and indirectly connected modes.
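A minimal sketch of setting this property directly with Kubernetes tools (instance and namespace names are placeholders):

```bash
# Opt the instance into automatic upgrades after each data controller upgrade.
kubectl patch sqlmi sqlmi1 --namespace arc --type merge \
  --patch '{"spec": {"update": {"desiredVersion": "auto"}}}'
```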
azure-arc Upgrade Sql Managed Instance Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md
Title: Upgrade Azure SQL Managed Instance indirectly connected to Azure Arc using Kubernetes tools
-description: Article describes how to upgrade an indirectly connected Azure Arc-enabled SQL Managed Instance using Kubernetes tools
+description: Article describes how to upgrade an indirectly connected SQL Managed Instance enabled by Azure Arc using Kubernetes tools
azure-arc Upload Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-logs.md
kubectl edit datacontroller <DC name> --namespace <namespace>
## Upload logs to Azure Monitor in **indirect** mode
- To upload logs for your Azure Arc-enabled SQL managed instances and Azure Arc-enabled PostgreSQL servers run the following CLI commands-
+ To upload logs for SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL servers, run the following CLI commands:
1. Export all logs to the specified file:
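The export and upload pair typically looks like the following sketch (the file name is a placeholder; confirm the exact `az arcdata` arguments for your release):

```bash
# Export logs from the data controller to a local file, then upload them.
az arcdata dc export --type logs --path logs.json --k8s-namespace <namespace> --use-k8s
az arcdata dc upload --path logs.json
```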
watch -n 1200 ./myuploadscript.sh
You could also use a job scheduler like cron or Windows Task Scheduler or an orchestrator like Ansible, Puppet, or Chef.
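For example, a crontab entry that runs the upload script every 20 minutes, matching the `watch -n 1200` interval above:

```bash
# Add to the crontab (crontab -e): run myuploadscript.sh every 20 minutes.
*/20 * * * * /path/to/myuploadscript.sh
```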
-## Next steps
+## Related content
[Upload metrics, and logs to Azure Monitor](upload-metrics.md)
azure-arc Upload Metrics And Logs To Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics-and-logs-to-azure-monitor.md
Upload the usage only once per day. When usage information is exported and uploa
For uploading metrics, Azure Monitor only accepts the last 30 minutes of data ([Learn more](../../azure-monitor/essentials/metrics-store-custom-rest-api.md#troubleshooting)). The guidance for uploading metrics is to upload them immediately after creating the export file so you can view the entire data set in the Azure portal. For instance, if you exported the metrics at 2:00 PM but ran the upload command at 2:50 PM, Azure Monitor would only accept the last 30 minutes of data, and you might not see any data in the portal.
-## Next steps
+## Related content
[Learn about service principals](/powershell/azure/azurerm/create-azure-service-principal-azureps#what-is-a-service-principal)
azure-arc Upload Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics.md
echo %SPN_AUTHORITY%
### Upload metrics to Azure Monitor
-To upload metrics for your Azure Arc-enabled SQL managed instances and Azure Arc-enabled PostgreSQL servers run, the following CLI commands:
+To upload metrics for SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL, run the following CLI commands:
1. Export all metrics to the specified file:
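As with logs, the export and upload sequence is typically a pair of `az arcdata dc` commands (a sketch; the file name is a placeholder):

```bash
# Export metrics from the data controller to a local file, then upload them.
az arcdata dc export --type metrics --path metrics.json --k8s-namespace <namespace> --use-k8s
az arcdata dc upload --path metrics.json
```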
Upload the usage only once per day. When usage information is exported and uploa
For uploading metrics, Azure Monitor only accepts the last 30 minutes of data ([Learn more](../../azure-monitor/essentials/metrics-store-custom-rest-api.md#troubleshooting)). The guidance for uploading metrics is to upload them immediately after creating the export file so you can view the entire data set in the Azure portal. For instance, if you exported the metrics at 2:00 PM but ran the upload command at 2:50 PM, Azure Monitor would only accept the last 30 minutes of data, and you might not see any data in the portal.
-## Next steps
+## Related content
[Upload logs to Azure Monitor](upload-logs.md)
azure-arc Upload Usage Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-usage-data.md
watch -n 1200 ./myuploadscript.sh
You could also use a job scheduler like cron or Windows Task Scheduler or an orchestrator like Ansible, Puppet, or Chef.
-## Next steps
+## Related content
[Upload metrics, and logs to Azure Monitor](upload-metrics.md)
azure-arc Using Extensions In Postgresql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/using-extensions-in-postgresql-server.md
Connect to your database with the client tool of your choice and run the standar
select * from pg_extension; ```
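After confirming which extensions are installed, enabling another one is a single statement. A sketch using `psql` from a shell (connection details are placeholders, and some extensions must first be added to the server's configuration as described in the full article):

```bash
# Enable the pg_stat_statements extension, then list installed extensions.
# Host, user, and database below are placeholders.
psql "host=<server-ip> user=postgres dbname=postgres" \
  -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;" \
  -c "SELECT * FROM pg_extension;"
```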
-## Next steps
+## Related content
- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
Azure Arc-enabled data services team has worked with industry partners to valida
To see how all Azure Arc-enabled components are validated, see [Validation program overview](../validation-program/overview.md) > [!NOTE]
-> At the current time, Azure Arc-enabled SQL Managed Instance is generally available in select regions.
+> At the current time, SQL Managed Instance enabled by Azure Arc is generally available in select regions.
> > Azure Arc-enabled PostgreSQL server is available for preview in select regions.
These tests verify that the product is compliant with the requirements of runnin
The tests for data services cover the following in indirectly connected mode: 1. Deploy data controller in indirect mode
-2. Deploy [Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+2. Deploy [SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
3. Deploy [Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) More tests will be added in future releases of Azure Arc-enabled data services.
More tests will be added in future releases of Azure Arc-enabled data services.
- [Azure Arc-enabled Kubernetes validation](../kubernetes/validation-program.md) - [Azure Arc validation program - GitHub project](https://github.com/Azure/azure-arc-validation/)
-## Next steps
+## Related content
- [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) - [Create a data controller - indirectly connected with the CLI](create-data-controller-indirect-cli.md)
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
All other components are the same as previously released.
## July 30, 2021
-This release introduces general availability for Azure Arc-enabled SQL Managed Instance General Purpose and Azure Arc-enabled SQL Server. The following table describes the components in this release.
+This release introduces general availability for SQL Managed Instance enabled by Azure Arc General Purpose and SQL Server enabled by Azure Arc. The following table describes the components in this release.
|Component |Value | |--||
azure-arc View Arc Data Services Inventory In Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-arc-data-services-inventory-in-azure-portal.md
You can view your Azure Arc-enabled data services in the Azure portal or in your
## View resources in Azure portal
-After you upload your [metrics, logs](upload-metrics-and-logs-to-azure-monitor.md), or [usage](view-billing-data-in-azure.md), you can view your Azure Arc-enabled SQL managed instances or Azure Arc-enabled PostgreSQL servers in the Azure portal. To view your resource in the [Azure portal](https://portal.azure.com), follow these steps:
+After you upload your [metrics, logs](upload-metrics-and-logs-to-azure-monitor.md), or [usage](view-billing-data-in-azure.md), you can view your deployments of SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgreSQL servers in the Azure portal. To view your resource in the [Azure portal](https://portal.azure.com), follow these steps:
1. Go to **All services**. 1. Search for your database instance type.
azure-arc View Billing Data In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-billing-data-in-azure.md
In the indirectly connected mode, billing data is periodically exported out of t
To upload billing data to Azure, the following should happen first: 1. Create an Azure Arc-enabled data service if you don't have one already. For example, create one of the following:
- - [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+ - [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md)
- [Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) 2. Wait for at least 2 hours since the creation of the data service so that the billing telemetry collection process can collect some billing data. 3. Follow the steps described in [Upload resource inventory, usage data, metrics and logs to Azure Monitor](upload-metrics-and-logs-to-azure-monitor.md) to get setup with prerequisites for uploading usage/billing/logs data and then proceed to the [Upload usage data to Azure](upload-usage-data.md) to upload the billing data.
azure-arc View Data Controller In Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-data-controller-in-azure-portal.md
In the **indirect** connected mode, you must export and upload at least one type
## Azure portal
-After you complete your first [metrics or logs upload to Azure](upload-metrics-and-logs-to-azure-monitor.md) or [usage data upload](view-billing-data-in-azure.md), you can see the Azure Arc data controller and any Azure Arc-enabled SQL managed instances or Azure Arc-enabled PostgreSQL server resources in the [Azure portal](https://portal.azure.com).
+After you complete your first [metrics or logs upload to Azure](upload-metrics-and-logs-to-azure-monitor.md) or [usage data upload](view-billing-data-in-azure.md), you can see the Azure Arc data controller and any SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgreSQL server resources in the [Azure portal](https://portal.azure.com).
To find your data controller, search for it by name in the search bar and then select it.
azure-arc What Is Azure Arc Enabled Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/what-is-azure-arc-enabled-postgresql.md
Microsoft offers PostgreSQL database services in Azure in two ways:
Azure Arc-enabled PostgreSQL server is the community version of the [PostgreSQL 14](https://www.postgresql.org/) server with a curated set of available extensions. Most PostgreSQL application workloads should be capable of running against Azure Arc-enabled PostgreSQL server using standard drivers.
-## Next steps
+## Related content
### Try it out
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-custom-locations.md
description: "This article provides a conceptual overview of the custom location
# Custom locations on top of Azure Arc-enabled Kubernetes
-As an extension of the Azure location construct, the *custom locations* feature provides a way for tenant administrators to use their Azure Arc-enabled Kubernetes clusters as target locations for deploying Azure services instances. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL server.
+As an extension of the Azure location construct, the *custom locations* feature provides a way for tenant administrators to use their Azure Arc-enabled Kubernetes clusters as target locations for deploying Azure services instances. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL server.
Similar to Azure locations, end users within the tenant who have access to Custom Locations can deploy resources there using their company's private compute.
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
description: "Use custom locations to deploy Azure PaaS services on Azure Arc-en
# Create and manage custom locations on Azure Arc-enabled Kubernetes
- The *custom locations* feature provides a way for tenant or cluster administrators to configure their Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL server, or application instances, such as App Services, Functions, Event Grid, Logic Apps, and API Management.
+ The *custom locations* feature provides a way for tenant or cluster administrators to configure their Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL server, or application instances, such as App Services, Functions, Event Grid, Logic Apps, and API Management.
A custom location has a one-to-one mapping to a namespace within the Azure Arc-enabled Kubernetes cluster. The custom location Azure resource combined with Azure role-based access control (Azure RBAC) can be used to grant granular permissions to application developers or database admins, enabling them to deploy resources such as databases or application instances on top of Arc-enabled Kubernetes clusters in a multi-tenant manner.
azure-arc Network Requirements Consolidated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md
For more information, see [Connectivity modes and requirements](data/connectivit
Connectivity to Arc-enabled server endpoints is required for: -- Azure Arc-enabled SQL Server
+- SQL Server enabled by Azure Arc
- Azure Arc-enabled VMware vSphere (preview) <sup>*</sup> - Azure Arc-enabled System Center Virtual Machine Manager (preview) <sup>*</sup> - Azure Arc-enabled Azure Stack (HCI) (preview) <sup>*</sup>
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
For information, see the [Azure pricing page](https://azure.microsoft.com/pricin
* Learn about [Azure Arc-enabled servers](./servers/overview.md). * Learn about [Azure Arc-enabled Kubernetes](./kubernetes/overview.md). * Learn about [Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/).
-* Learn about [Azure Arc-enabled SQL Server](/sql/sql-server/azure-arc/overview).
+* Learn about [SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/overview).
* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md). * Learn about [Azure Arc-enabled VM Management on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview). * Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md).
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/platform/conceptual-custom-locations.md
As an extension of the Azure location construct, a *custom location* provides a
Since the custom location is an Azure Resource Manager resource that supports [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md), an administrator or operator can determine which users have access to create resource instances on:
-* A namespace within a Kubernetes cluster to target deployment of Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL servers.
+* A namespace within a Kubernetes cluster to target deployment of SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL servers.
* The compute, storage, networking, and other vCenter or Azure Stack HCI resources to deploy and manage VMs. For example, a cluster operator could create a custom location **Contoso-Michigan-Healthcare-App** representing a namespace on a Kubernetes cluster in your organization's Michigan Data Center. The operator can then assign Azure RBAC permissions to application developers on this custom location so that they can deploy healthcare-related web applications. The developers can then deploy these applications without having to know details of the namespace and Kubernetes cluster.
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Currently, private cloud providers differ in how they perform Arc resource bridg
For Arc-enabled VMware vSphere, manual upgrade is available, but appliances on version 1.0.15 and higher automatically receive cloud-managed upgrade as the default experience. Appliances that are earlier than version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This deploys a new Arc resource bridge using the latest version and reconnects pre-existing Azure resources.
-Azure Arc VM management (preview) on Azure Stack HCI supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade. However, HCI version 22H2 won't be supported for appliance version 1.0.15 or higher because it's being deprecated. Customers on HCI 22H2 will receive limited support. To use appliance version 1.0.15 or higher, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
+Azure Arc VM management (preview) on Azure Stack HCI supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade. However, Azure Stack HCI, version 22H2 won't be supported for appliance version 1.0.15 or higher, because it's being deprecated. Customers on Azure Stack HCI, version 22H2 will receive limited support. To use appliance version 1.0.15 or higher, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
For Arc-enabled System Center Virtual Machine Manager (SCVMM), the manual upgrade feature is available for appliance version 1.0.14 and higher. Appliances below version 1.0.14 need to perform the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Download for [Windows](https://download.microsoft.com/download/5/e/9/5e9081ed-2e
### New features - [azcmagent show](azcmagent-show.md) now reports extended security license status on Windows Server 2012 server machines.-- Introduced a new [proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) option, `ArcData`, that covers the Azure Arc-enabled SQL Server endpoints. This will enable you to use a private endpoint with Azure Arc-enabled servers with the public endpoints for Azure Arc-enabled SQL Server.
+- Introduced a new [proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) option, `ArcData`, that covers the SQL Server enabled by Azure Arc endpoints. This enables you to use a private endpoint for Azure Arc-enabled servers together with the public endpoints for SQL Server enabled by Azure Arc.
- The [CPU limit for extension operations](agent-overview.md#agent-resource-governance) on Linux is now 30%. This increase will help improve reliability of extension install, upgrade and uninstall operations. - Older extension manager and machine configuration agent logs are automatically zipped to reduce disk space requirements. - New executable names for the extension manager (`gc_extension_service`) and machine configuration (`gc_arc_service`) agents on Windows to help you distinguish the two services. For more information, see [Windows agent installation details](./agent-overview.md#windows-agent-installation-details).
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Proxy bypass value when set to `ArcData` only bypasses the traffic of the Azure
| `Arc` | `his.arc.azure.com`</br>`guestconfiguration.azure.com`</br> `san-af-<location>-prod.azurewebsites.net`</br>`telemetry.<location>.arcdataservices.com`| | `ArcData` <sup>1</sup> | `san-af-<region>-prod.azurewebsites.net`</br>`telemetry.<location>.arcdataservices.com` |
-<sup>1</sup> The proxy bypass value `ArcData` is available starting with Azure Connected Machine agent version 1.36 and Azure Extension for SQL Server version 1.1.2504.99. Earlier versions include the Azure Arc-enabled SQL Server endpoints in the "Arc" proxy bypass value.
+<sup>1</sup> The proxy bypass value `ArcData` is available starting with Azure Connected Machine agent version 1.36 and Azure Extension for SQL Server version 1.1.2504.99. Earlier versions include the SQL Server enabled by Azure Arc endpoints in the "Arc" proxy bypass value.
To send Microsoft Entra ID and Azure Resource Manager traffic through a proxy server but skip the proxy for Azure Arc traffic, run the following command:
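A sketch of that configuration with the `azcmagent` CLI (the proxy URL is a placeholder):

```bash
# Route agent traffic through the proxy, but bypass it for Azure Arc endpoints.
azcmagent config set proxy.url "http://proxy.contoso.com:8080"
azcmagent config set proxy.bypass "Arc"
```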
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 11/01/2023 Last updated : 11/22/2023
Other Azure services through Azure Arc-enabled servers are available as well, wi
## Prepare delivery of ESUs
-To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) and establishing a connection to Azure. Windows Server 2012 Extended Security Updates supports Windows Server 2012 and R2 Standard and Datacenter editions. Windows Server 2012 Storage is not supported.
+Plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) to establish a connection to Azure. Windows Server 2012 Extended Security Updates supports Windows Server 2012 and R2 Standard and Datacenter editions. Windows Server 2012 Storage is not supported.
We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and can enroll through the Azure portal or by using Azure Policy. Billing for this service starts from October 2023 (i.e., after Windows Server 2012 end of support). > [!NOTE] > In order to purchase ESUs, you must have Software Assurance through Volume Licensing Programs such as an Enterprise Agreement (EA), Enterprise Agreement Subscription (EAS), Enrollment for Education Solutions (EES), or Server and Cloud Enrollment (SCE). Alternatively, if your Windows Server 2012/2012 R2 machines are licensed through SPLA or with a Server Subscription, Software Assurance is not required to purchase ESUs.
+You must also download both the licensing package and servicing stack update (SSU) for the Azure Arc-enabled server as documented at [KB5031043: Procedure to continue receiving security updates after extended support has ended on October 10, 2023](https://support.microsoft.com/topic/kb5031043-procedure-to-continue-receiving-security-updates-after-extended-support-has-ended-on-october-10-2023-c1a20132-e34c-402d-96ca-1e785ed51d45).
+ ### Deployment options There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md).
azure-arc Troubleshoot Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md
If you're unable to successfully link your Azure Arc-enabled server to an activa
## ESU patches issues
-Ensure that both the licensing package and SSU are downloaded for the Azure Arc-enabled server as documented at [KB5031043: Procedure to continue receiving security updates after extended support has ended on October 10, 2023](https://support.microsoft.com/topic/kb5031043-procedure-to-continue-receiving-security-updates-after-extended-support-has-ended-on-october-10-2023-c1a20132-e34c-402d-96ca-1e785ed51d45). Ensure you are following all of the networking prerequisites as recorded at [Prepare to deliver Extended Security Updates for Windows Server 2012](prepare-extended-security-updates.md?tabs=azure-cloud#networking).
+Ensure that both the licensing package and servicing stack update (SSU) are downloaded for the Azure Arc-enabled server as documented at [KB5031043: Procedure to continue receiving security updates after extended support has ended on October 10, 2023](https://support.microsoft.com/topic/kb5031043-procedure-to-continue-receiving-security-updates-after-extended-support-has-ended-on-october-10-2023-c1a20132-e34c-402d-96ca-1e785ed51d45). Ensure you are following all of the networking prerequisites as recorded at [Prepare to deliver Extended Security Updates for Windows Server 2012](prepare-extended-security-updates.md?tabs=azure-cloud#networking).
If installing the Extended Security Update enabled by Azure Arc fails with errors such as "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12029)" or "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12002)", there is a known remediation approach:
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
||| |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.<br>Splitting on the **Azure Resource ID** column makes the specified resource into the alert target. If detected, the **ResourceID** column is selected automatically and changes the context of the fired alert to the record's resource.| |Operator|The operator used on the dimension name and value.|
- |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values.|
+ |Dimension values|The dimension values are based on data from the last 24 hours. Select **Add custom value** to add custom dimension values.|
|Include all future values| Select this field to include any future values added to the selected dimension.| 1. (Optional) In the **When to evaluate** section:
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
The current alert rule wizard is different from the earlier experience:
- Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to trigger an alert of the desired resource by using the resource ID column. - For more advanced customizations, [use Azure Logic Apps](alerts-logic-apps.md). + ## Manage alert rules created in previous versions in the Azure portal 1. In the [Azure portal](https://portal.azure.com/), select the resource you want. 1. Under **Monitoring**, select **Alerts**.
The current alert rule wizard is different from the earlier experience:
<!-- convertborder later --> :::image type="content" source="media/alerts-log/AlertsPreviewSuppress.png" lightbox="media/alerts-log/AlertsPreviewSuppress.png" alt-text="Screenshot that shows the Alert Details pane." border="false"::: 1. To make alerts stateful, select **Automatically resolve alerts (preview)**.
-1. Specify if the alert rule should trigger one or more [action groups](./action-groups.md) when the alert condition is met.
- > [!NOTE]
- > * For limits on the actions that can be performed, see [Azure subscription service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md).
- > * Search results were included in the payload of the triggered alert and its associated notifications. **Notice that**: The **email** included only **10 rows** from the unfiltered results while the **webhook payload** contained **1,000 unfiltered results**.
+1. Specify if the alert rule should trigger one or more [action groups](./action-groups.md) when the alert condition is met. For limits on the actions that can be performed, see [Azure Monitor service limits](../../azure-monitor/service-limits.md).
1. (Optional) Customize actions in log alert rules: - **Custom email subject**: Overrides the *email subject* of email actions. You can't modify the body of the mail and this field *isn't for email addresses*. - **Include custom Json payload for webhook**: Overrides the webhook JSON used by action groups, assuming that the action group contains a webhook action. Learn more about [webhook actions for log alerts](./alerts-log-webhook.md).
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
To diagnose the problem if you can't view status information or no results are r
ama-logs-windows-6drwq 1/1 Running 0 1d ```
+1. If the pods are in a running state, but there is no data in Log Analytics, or data appears to be sent only during a certain part of the day, it might indicate that the daily cap has been met. When this limit is met, data stops ingesting into the Log Analytics workspace, and ingestion resumes at the daily reset time. For more information, see [Log Analytics Daily Cap](../../azure-monitor/logs/daily-cap.md#determine-your-daily-cap).
+ ## Container insights agent ReplicaSet Pods aren't scheduled on a non-AKS cluster Container insights agent ReplicaSet Pods have a dependency on the following node selectors on the worker (or agent) nodes for the scheduling:
azure-monitor Integrate Keda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/integrate-keda.md
This article walks you through the steps to integrate KEDA into your AKS cluster
+ Azure Kubernetes Service (AKS) cluster + Prometheus sending metrics to an Azure Monitor workspace. For more information, see [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). + Microsoft Entra Workload ID. For more information, see [Azure AD Workload Identity](https://azure.github.io/azure-workload-identity/docs/). ## Set up a workload identity
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
For a complete list of data sources that can send data to Azure Monitor Metrics,
## REST API Azure Monitor provides REST APIs that allow you to get data in and out of Azure Monitor Metrics. - **Custom metrics API** - [Custom metrics](./metrics-custom-overview.md) allow you to load your own metrics into the Azure Monitor Metrics database. Those metrics can then be used by the same analysis tools that process Azure Monitor platform metrics. -- **Azure Monitor Metrics REST API** - Allows you to access Azure Monitor platform metrics definitions and values. For more information, see [Azure Monitor REST API](/rest/api/monitor/). For information on how to use the API, see the [Azure monitoring REST API walkthrough](./rest-api-walkthrough.md).
+- **Azure Monitor Metrics REST API** - Allows you to access Azure Monitor platform metrics definitions and values. For more information, see [Azure Monitor REST API](/rest/api/monitor/metrics/list). For information on how to use the API, see the [Azure monitoring REST API walkthrough](./rest-api-walkthrough.md).
- **Azure Monitor Metrics Batch REST API** - [Azure Monitor Metrics Batch API](/rest/api/monitor/metrics-batch/) is a high-volume API designed for customers with large volume metrics queries. It's similar to the existing standard Azure Monitor Metrics REST API, but provides the capability to retrieve metric data for up to 50 resource IDs in the same subscription and region in a single batch API call. This improves query throughput and reduces the risk of throttling. ## Security
Platform and custom metrics are stored for **93 days** with the following except
While platform and custom metrics are stored for 93 days, you can only query (in the **Metrics** tile) for a maximum of 30 days' worth of data on any single chart. This limitation doesn't apply to log-based metrics. If you see a blank chart or your chart displays only part of metric data, verify that the difference between start and end dates in the time picker doesn't exceed the 30-day interval. After you've selected a 30-day interval, you can [pan](./metrics-charts.md#pan) the chart to view the full retention window.
+> [!NOTE]
+> Moving or renaming an Azure resource may result in the loss of metric history for that resource.
### Prometheus metrics Prometheus metrics are stored for **18 months**, but a PromQL query can only span a maximum of 32 days.
azure-monitor Log Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md
From the **Export** menu in Log Analytics, select one of the two options for cre
- **Power BI (as an M query)**: This option exports the query (together with the connection string for the query) to a .txt file that you can use in Power BI Desktop. Use this option if you need to model or transform the data in ways that aren't available in the Power BI service. Otherwise, consider exporting the query as a new dataset. - **Power BI (new Dataset)**: This option creates a new dataset based on your query directly in the Power BI service. After the dataset has been created, you can create reports, use Analyze in Excel, share it with others, and use other Power BI features. For more information, see [Create a Power BI dataset directly from Log Analytics](/power-bi/connect-data/create-dataset-log-analytics).
+> [!NOTE]
+> The export operation is subject to the [Log Analytics Query API limits](../service-limits.md#la-query-api). If your query results exceed the maximum size of data returned by the Query API, the operation exports partial results.
+ ## Collect data with Power BI dataflows [Power BI dataflows](/power-bi/service-dataflows-overview) also allow you to collect and store data. A dataflow is a type of cloud ETL (extract, transform, and load) process that helps you collect and prepare your data. A dataset is the "model" designed to help you connect different entities and model them for your needs.
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Azure NetApp Files customer-managed keys is supported for the following regions:
* UAE North * UK South * UK West
-* US Gov Arizona
-* US Gov Texas
-* US Gov Virginia
* West Europe * West US * West US 2
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
## September 2023
-* [Azure NetApp Files customer-managed keys for Azure NetApp Files volume encryption is now available in select US Gov regions (Preview)](configure-customer-managed-keys.md#supported-regions)
-
- Customer keys are protected from attacks for maximum security of their Azure NetApp File volumes. This capability is now available US Gov Virginia (preview). This increased security complements the additional security for deployments in US Gov. Healthcare, Finance, Government, and many other customers can now protect their customer-managed encryption keys within the secure confines of US Gov Virginia region.
* [Standard network features in select US Gov regions (Preview)](azure-netapp-files-network-topologies.md) Azure NetApp Files now supports Standard network features for new volumes in select US Gov regions. Standard network features provide an enhanced virtual networking experience and a seamless, consistent security posture for all workloads, including Azure NetApp Files. You can now choose Standard or Basic network features when creating a new Azure NetApp Files volume. This feature is generally available in Azure commercial regions and in public preview in select US Gov regions.
azure-portal Azure Portal Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards.md
Dashboards are a focused and organized view of your cloud resources in the Azure
The Azure portal provides a default dashboard as a starting point. You can edit this default dashboard, and you can create and customize additional dashboards.
-All dashboards are private when created, and each user can create up to 100 private dashboards. If you publish and share a dashboard with other users in your organization](azure-portal-dashboard-share-access.md), the shared dashboard is implemented as an Azure resource in your subscription, and doesn't count towards the private dashboard limit.
+All dashboards are private when created, and each user can create up to 100 private dashboards. If you publish and [share a dashboard with other users in your organization](azure-portal-dashboard-share-access.md), the shared dashboard is implemented as an Azure resource in your subscription, and doesn't count towards the private dashboard limit.
## Create a new dashboard
bastion Bastion Connect Vm Ssh Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md
Use the following steps to authenticate using a password from Azure Key Vault.
* Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md). > [!NOTE]
- > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
+ > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess-linux.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
> 1. To work with the VM in a new browser tab, select **Open in new browser tab**.
Use the following steps to authenticate using a private key stored in Azure Key
* Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md). > [!NOTE]
- > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
+ > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess-linux.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
> * **Azure Key Vault Secret**: Select the Key Vault secret containing the value of your SSH private key.
bastion Bastion Connect Vm Ssh Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-windows.md
Use the following steps to authenticate using a password from Azure Key Vault.
* Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md). > [!NOTE]
- > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
+ > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess-linux.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
> 1. To work with the VM in a new browser tab, select **Open in new browser tab**.
Use the following steps to authenticate using a private key stored in Azure Key
* Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md). > [!NOTE]
- > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
+ > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess-linux.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
> * **Azure Key Vault Secret**: Select the Key Vault secret containing the value of your SSH private key.
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-features.md
The following table compares the features available with each product.
| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and [Media Services](/azure/media-services/previous/media-services-portal-manage-streaming-endpoints) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** | | Management via [REST API](/rest/api/cdn/), [.NET](cdn-app-dev-net.md), [Node.js](cdn-app-dev-node.md), or [PowerShell](cdn-manage-powershell.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** | | [Compression MIME types](./cdn-improve-performance.md) |Configurable |Configurable |Configurable |Configurable |
-| Compression encodings |gzip, brotli |gzip |gzip, deflate, bzip2 |gzip, deflate, bzip2 |
+| Compression encodings |gzip, brotli |gzip |gzip, deflate, bzip2, brotli |gzip, deflate, bzip2, brotli |
## Migration
cdn Cdn Improve Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-improve-performance.md
These profiles support the following compression encodings:
- gzip (GNU zip) - DEFLATE - bzip2
+- brotli
-Azure CDN from Edgio doesn't support brotli compression. When the HTTP request has the header `Accept-Encoding: br`, the CDN responds with an uncompressed response.
+When the HTTP request has the header `Accept-Encoding: br`, the CDN responds with an uncompressed response.
### Azure CDN Standard from Akamai profiles
communication-services Azure Communication Services Azure Cognitive Services Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md
Previously updated : 08/17/2023 Last updated : 11/27/2023
# Connect Azure Communication Services with Azure AI services --
-Azure Communication Services Call Automation APIs provide developers the ability to steer and control the Azure Communication Services Telephony, VoIP or WebRTC calls using real-time event triggers to perform actions based on custom business logic specific to their domain. Within the Call Automation APIs developers can use simple AI powered APIs, which can be used to play personalized greeting messages, recognize conversational voice inputs to gather information on contextual questions to drive a more self-service model with customers, use sentiment analysis to improve customer service overall. These content specific APIs are orchestrated through **Azure Cognitive Services** with support for customization of AI models without developers needing to terminate media streams on their services and streaming back to Azure for AI functionality.
+Azure Communication Services Call Automation APIs provide developers the ability to steer and control Azure Communication Services telephony, VoIP, or WebRTC calls using real-time event triggers to perform actions based on custom business logic specific to their domain. Within the Call Automation APIs, developers can use simple AI-powered APIs to play personalized greeting messages, recognize conversational voice inputs that gather information on contextual questions to drive a more self-service model with customers, and use sentiment analysis to improve customer service overall. These content-specific APIs are orchestrated through **Azure AI Services**, with support for customization of AI models, without developers needing to terminate media streams on their services and stream them back to Azure for AI functionality.
All this is possible with one click, where enterprises can access a secure solution and link their models through the portal. Furthermore, developers and enterprises don't need to manage credentials. Connecting your Azure AI services uses managed identities to access user-owned resources. Developers can use managed identities to authenticate any resource that supports Microsoft Entra authentication. BYO Azure AI services can be easily integrated into any application regardless of the programming language. When creating an Azure resource in the Azure portal, enable the BYO option and provide the URL to the Azure AI services. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution. > [!NOTE]
-> This integration is only supported in limited regions for Azure AI services, for more information about which regions are supported please view the limitations section at the bottom of this document. This integration only supports Multi-service Cognitive Service resource, so we recommend if you're creating a new Azure Cognitive Service resource you create a Multi-service Cognitive Service resource or when you're connecting an existing resource confirm that it is a Multi-service Cognitive Service resource.
+> This integration is supported in limited regions for Azure AI services; for more information about which regions are supported, see the limitations section at the bottom of this document. This integration only supports multi-service Cognitive Services resources. If you're creating a new Azure AI services resource, create a multi-service Cognitive Services resource; if you're connecting an existing resource, confirm that it's a multi-service Cognitive Services resource.
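A minimal sketch of what connecting a multi-service Azure AI services resource can look like in the .NET Call Automation SDK; the `CallIntelligenceOptions` usage and endpoint URI are illustrative assumptions, and `incomingCallContext`, `callbackUri`, and `callAutomationClient` are presumed to come from your incoming-call handler:

```csharp
using Azure.Communication.CallAutomation;

// Sketch only: attach your Azure AI services endpoint when answering the call.
var answerOptions = new AnswerCallOptions(incomingCallContext, new Uri(callbackUri))
{
    CallIntelligenceOptions = new CallIntelligenceOptions
    {
        // Placeholder endpoint for your multi-service Azure AI services resource.
        CognitiveServicesEndpoint = new Uri("https://<your-ai-services>.cognitiveservices.azure.com/")
    }
};

AnswerCallResult result = await callAutomationClient.AnswerCallAsync(answerOptions);
```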
## Common use cases
This integration between Azure Communication Services and Azure AI services is o
- northcentralus - southcentralus - westcentralus-- westeu
+- westeurope
- uksouth
+- northeurope
+- southafricanorth
+- canadacentral
+- centralindia
+- eastasia
+- southeastasia
+- australiaeast
+- brazilsouth
+- uaenorth
## Next steps - Learn about [playing audio](../../concepts/call-automation/play-action.md) to callers using Text-to-Speech.
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
Azure Communication Services Call Automation provides developers the ability to
Some of the common use cases that can be built using Call Automation include: - Program VoIP or PSTN calls for transactional workflows such as click-to-call and appointment reminders to improve customer service.-- Build interactive interaction workflows to self-serve customers for use cases like order bookings and updates, using Play (Audio URL) and Recognize (DTMF) actions.
+- Build interactive workflows to self-serve customers for use cases like order bookings and updates, using Play (Audio URL, Text-to-Speech, and SSML) and Recognize (DTMF and Voice) actions.
- Integrate your communication applications with Contact Centers and your private telephony networks using Direct Routing. - Protect your customer's identity by building number masking services to connect buyers to sellers or users to partner vendors on your platform. - Increase engagement by building automated customer outreach programs for marketing and customer service.
The following list presents the set of features that are currently available in
| | Reject an incoming call | ✔️ | ✔️ | ✔️ | ✔️ | | Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | ✔️ | ✔️ | | | Play Audio from an audio file | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Play Audio using Text-to-Speech | ✔️ | ✔️ | ✔️ | ✔️ |
| | Recognize user input through DTMF | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Recognize user voice inputs | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Start continuous DTMF recognition | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Stop continuous DTMF recognition | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Send DTMF | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Mute participant | ✔️ | ✔️ | ✔️ | ✔️ |
| | Remove one or more endpoints from an existing call| ✔️ | ✔️ | ✔️ | ✔️ | | | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ | | | Hang up a call (remove the call leg) | ✔️ | ✔️ | ✔️ | ✔️ |
When your application answers a call or places an outbound call, you can play an
**Recognize input** After your application has played an audio prompt, you can request user input to drive business logic and navigation in your application. To learn more, view our [concepts](./recognize-action.md) and how-to guide for [Gathering user input](../../how-tos/call-automation/recognize-action.md).
+**Continuous DTMF recognition**
+Continuous DTMF recognition lets your application receive DTMF tones at any point in the call, without needing to trigger a specific recognize action. This can be useful in scenarios where an agent is on a call and needs the user to enter an ID or tracking number. To learn more about how to use this, view our [guide](../../how-tos/call-automation/control-mid-call-media-actions.md).
+
+**Send DTMF**
+Your application can send DTMF tones to an external participant, for purposes like dialing out to an external agent and providing an extension number, or navigating an external IVR menu.
+
+**Mute**
+Your application can mute certain users based on your business logic. The user would then need to unmute themselves manually if they want to speak.
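As a rough sketch (the exact method name and overload are assumptions here, not confirmed API), muting a participant from your application could look like:

```csharp
// Sketch only: assumes a MuteParticipantAsync method on CallConnection, an
// established callConnectionId, and targetUserId identifying the participant.
await callAutomationClient.GetCallConnection(callConnectionId)
    .MuteParticipantAsync(new CommunicationUserIdentifier(targetUserId));
```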
+ **Transfer** When your application answers a call or places an outbound call to an endpoint, that call can be transferred to another destination endpoint. Transferring a 1:1 call removes your application's ability to control the call using the Call Automation SDKs.
The Call Automation events are sent to the web hook callback URI specified when
| ParticipantsUpdated | The status of a participant changed while your applicationΓÇÖs call leg was connected to a call | | PlayCompleted | Your application successfully played the audio file provided | | PlayFailed | Your application failed to play audio |
-| PlayCanceled | The requested play action has been canceled. |
+| PlayCanceled | The requested play action has been canceled |
| RecognizeCompleted | Recognition of user input was successfully completed |
-| RecognizeCanceled | The requested recognize action has been canceled. |
+| RecognizeCanceled | The requested recognize action has been canceled |
| RecognizeFailed | Recognition of user input was unsuccessful <br/>*to learn more about recognize action events view our how-to guide for [gathering user input](../../how-tos/call-automation/recognize-action.md)*|
-|RecordingStateChanged | Status of recording action has changed from active to inactive or vice versa. |
+|RecordingStateChanged | Status of recording action has changed from active to inactive or vice versa |
+| ContinuousDtmfRecognitionToneReceived | StartContinuousDtmfRecognition completed successfully and a DTMF tone was received from the participant |
+| ContinuousDtmfRecognitionToneFailed | StartContinuousDtmfRecognition completed but an error occurred while handling a DTMF tone from the participant |
+| ContinuousDtmfRecognitionStopped | Successfully executed StopContinuousRecognition |
+| SendDtmfCompleted | SendDTMF completed successfully and the DTMF tones were sent to the target participant |
+| SendDtmfFailed | An error occurred while sending the DTMF tones |
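These events arrive as cloud events at your callback endpoint. A minimal sketch of dispatching them, assuming an ASP.NET Core minimal API and the `CallAutomationEventParser` type from the .NET SDK (route and variable names are illustrative):

```csharp
using Azure.Messaging;
using Azure.Communication.CallAutomation;

app.MapPost("/api/callbacks", (CloudEvent[] cloudEvents, ILogger<Program> logger) =>
{
    foreach (var cloudEvent in cloudEvents)
    {
        // Parse the raw cloud event into a typed Call Automation event.
        CallAutomationEventBase ev = CallAutomationEventParser.Parse(cloudEvent);
        switch (ev)
        {
            case PlayCompleted:
                logger.LogInformation("Play completed on call {Id}", ev.CallConnectionId);
                break;
            case ContinuousDtmfRecognitionToneReceived toneReceived:
                logger.LogInformation("DTMF tone received: {Tone}", toneReceived.Tone);
                break;
            case RecognizeFailed failed:
                logger.LogWarning("Recognize failed: {SubCode}", failed.ResultInformation?.SubCode);
                break;
        }
    }
    return Results.Ok();
});
```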
To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples and sequence diagrams for various call control flows.
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
The play action provided through the Azure Communication Services Call Automatio
- Providing Azure Communication Services access to prerecorded audio files in WAV format that Azure Communication Services can access, with support for authentication - Regular text that can be converted into speech output through the integration with Azure AI services.
-You can use the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages and locales see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). (Supported in public preview)
-
+You can use the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human-like prebuilt neural voices out of the box, or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages, and locales, see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md).
> [!NOTE]
-> Azure Communication Services currently supports two file formats, MP3 files and WAV files formatted as 16-bit PCM mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md).
+> Azure Communication Services currently supports two file formats: MP3 files with ID3V2TAG, and WAV files formatted as 16-bit PCM mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md).
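For orientation, a minimal sketch of playing a Text-to-Speech prompt with the .NET SDK; the `TextSource` usage, the voice name, and the established `callConnectionId` are illustrative assumptions:

```csharp
// Sketch only: requires a connected Azure AI services resource (see the
// integration article linked above).
var playSource = new TextSource("Welcome! Your appointment is confirmed.")
{
    VoiceName = "en-US-NancyNeural" // illustrative prebuilt neural voice
};

await callAutomationClient.GetCallConnection(callConnectionId)
    .GetCallMedia()
    .PlayToAllAsync(playSource);
```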
## Prebuilt Neural Text to Speech voices Microsoft uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis occur simultaneously, resulting in a more fluid and natural sounding output. You can use these neural voices to make interactions with your chatbots and voice assistants more natural and engaging. There are over 100 prebuilt voices to choose from. Learn more about [Azure Text-to-Speech voices](../../../../articles/cognitive-services/Speech-Service/language-support.md).
The play action can also be used to play hold music for callers. This action can
### Playing compliance messages As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call is recorded for quality purposes."
-## Sample architecture for playing audio in call using Text-To-Speech (Public preview)
+## Sample architecture for playing audio in call using Text-To-Speech
![Diagram showing sample architecture for Play with AI.](./media/play-ai.png)
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
Sending a high volume of messages has a set of limitations on the number of emai
|Total email request size (including attachments) |10 MB | ### Action to take
-This sandbox setup is designed to help developers begin building the application. Once the application is ready for production, you can gradually request to increase the sending volume. If you need to send more messages than the rate limits allow, submit a support request to raise your desired email sending limit. The reviewing team will consider your overall sender reputation, which includes factors such as your email delivery failure rates, your domain reputation, and reports of spam and abuse, when determining approval status.
+This sandbox setup is designed to help developers start building the application. Once you have established a sender reputation by sending mails, you can request to increase the sending volume limits. Submit a [support request](https://azure.microsoft.com/support/create-ticket/) to raise your desired email sending limit if you need to send more messages than the rate limits allow. Email quota increase requests aren't automatically approved. The reviewing team considers your overall sender reputation, which includes factors such as your email delivery failure rates, your domain reputation, and reports of spam and abuse, when determining approval status.
+
+> [!NOTE]
+> Email quota increase requests may take up to 72 hours to be evaluated and approved, especially for requests that come in on Friday afternoon.
## Chat
communication-services Data Channel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/data-channel.md
The Data Channel API enables real-time messaging during audio and video calls. W
## Common use cases
-The Data Channel feature has two common use cases:
+The Data Channel can be used in many different scenarios. Two common use cases are:
### Messaging between participants in a call
-The Data Channel API enables the transmission of binary type messages among call participants.
-With appropriate serialization in the application, it can deliver various message types for different purposes.
-There are also other libraries or services providing the messaging functionalities.
-Each of them has its advantages and disadvantages. You should choose the suitable one for your usage scenario.
-For example, the Data Channel API offers the advantage of low-latency communication, and simplifies user management as there's no need to maintain a separate participant list.
-However, the data channel feature doesn't provide message persistence and doesn't guarantee that message won't be lost in an end-to-end manner.
-If you need the stateful messaging or guaranteed delivery, you may want to consider alternative solutions.
+The Data Channel API enables the transmission of binary type messages among call participants. With appropriate serialization in the application, it can deliver various message types for different purposes. There are also other libraries or services providing messaging functionality; each has its advantages and disadvantages, and you should choose the one that suits your usage scenario. For example, the Data Channel API offers the advantage of low-latency communication, and simplifies user management as there's no need to maintain a separate participant list. However, the data channel feature doesn't provide message persistence and doesn't guarantee that messages won't be lost in an end-to-end manner. If you need stateful messaging or guaranteed delivery, you may want to consider alternative solutions.
### File sharing
-File sharing represents another common use cases for the Data Channel API.
-In a peer-to-peer call scenario, the Data Channel connection works on a peer-to-peer basis.
-This setup offers an efficient method for file transfer, taking full advantage of the direct, peer-to-peer connection to enhance speed and reduce latency.
+File sharing represents another common use case for the Data Channel API. In a peer-to-peer call scenario, the Data Channel connection works on a peer-to-peer basis. This setup offers an efficient method for file transfer, taking full advantage of the direct, peer-to-peer connection to enhance speed and reduce latency.
-In a group call scenario, files can still be shared among participants. However, there are better ways, such as Azure Storage or Azure Files.
-Additionally, broadcasting the file content to all participants in a call can be achieved by setting an empty participant list.
-However, it's important to keep in mind that, in addition to bandwidth limitations,
-there are further restrictions imposed during a group call when broadcasting messages, such as packet rate and back pressure from the receive bitrate.
+In a group call scenario, files can still be shared among participants. However, there are better ways, such as Azure Storage or Azure Files. Additionally, broadcasting the file content to all participants in a call can be achieved by setting an empty participant list. However, it's important to keep in mind that, in addition to bandwidth limitations, there are further restrictions imposed during a group call when broadcasting messages, such as packet rate and back pressure from the receive bitrate.
## Key concepts
The Data Channel API is designed for unidirectional communication, as opposed to
The decoupling of sender and receiver objects simplifies message handling in group call scenarios, providing a more streamlined and user-friendly experience. ### Channel
-Every Data Channel message is associated with a specific channel identified by `channelId`.
-It's important to clarify that this channelId isn't related to the `id` property in the WebRTC Data Channel.
-This channelId can be utilized to differentiate various application uses, such as using 1000 for control messages and 1001 for image transfers.
+Every Data Channel message is associated with a specific channel identified by `channelId`. It's important to clarify that this channelId isn't related to the `id` property in the WebRTC Data Channel. This channelId can be utilized to differentiate various application uses, such as using 1000 for control messages and 1001 for image transfers.
-The channelId is assigned during the creation of a DataChannelSender object,
-and can be either user-specified or determined by the SDK if left unspecified.
+The channelId is assigned during the creation of a DataChannelSender object, and can be either user-specified or determined by the SDK if left unspecified.
-The valid range of a channelId lies between 1 and 65535. If a channelId 0 is provided,
-or if no channelId is provided, the SDK assigns an available channelId from within the valid range.
+The valid range of a channelId lies between 1 and 65535. If a channelId 0 is provided, or if no channelId is provided, the SDK assigns an available channelId from within the valid range.
### Reliability Upon creation, a channel can be configured to be one of the two Reliability options: `lossy` or `durable`. A `lossy` channel means the order of messages isn't guaranteed and a message can be silently dropped when sending fails. It generally affords a faster data transfer speed.
-A `durable` channel means the SDK guarantees a lossless and ordered message delivery. In cases when a message can't be delivered, the SDK will throw an exception.
-In the Web SDK, the durability of the channel is ensured through a reliable SCTP connection. However, it doesn't imply that message won't be lost in an end-to-end manner.
-In the context of a group call, it signifies the prevention of message loss between the sender and server.
-In a peer-to-peer call, it denotes reliable transmission between the sender and remote endpoint.
+A `durable` channel means the SDK guarantees lossless and ordered message delivery. In cases when a message can't be delivered, the SDK throws an exception. In the Web SDK, the durability of the channel is ensured through a reliable SCTP connection. However, it doesn't imply that messages won't be lost in an end-to-end manner. In the context of a group call, it signifies the prevention of message loss between the sender and server. In a peer-to-peer call, it denotes reliable transmission between the sender and remote endpoint.
> [!Note] > In the current Web SDK implementation, data transmission is done through a reliable WebRTC Data Channel connection for both `lossy` and `durable` channels.
When creating a channel, a desirable bitrate can be specified for bandwidth allo
This Bitrate property notifies the SDK of the expected bandwidth requirement for a particular use case. Although the SDK generally can't match the exact bitrate, it tries to accommodate the request. - ### Session
-The Data Channel API introduces the concept of a session, which adheres to open-close semantics.
-In the SDK, the session is associated to the sender or the receiver object.
+The Data Channel API introduces the concept of a session, which adheres to open-close semantics. In the SDK, the session is associated to the sender or the receiver object.
-Upon creating a sender object with a new channelId, the sender object is in open state.
-If the `close()` API is invoked on the sender object, the session becomes closed and can no longer facilitate message sending.
-At the same time, the sender object notifies all participants in the call that the session is closed.
+Upon creating a sender object with a new channelId, the sender object is in open state. If the `close()` API is invoked on the sender object, the session becomes closed and can no longer facilitate message sending. At the same time, the sender object notifies all participants in the call that the session is closed.
If a sender object is created with an already existing channelId, the existing sender object associated with the channelId will be closed and any messages sent from the newly created sender object will be recognized as part of the new session.
-From the receiver's perspective, messages coming from different sessions on the sender's side are directed to distinct receiver objects.
-If the SDK identifies a new session associated with an existing channelId on the receiver's side, it creates a new receiver object.
-The SDK doesn't close the older receiver object; such closure takes place 1) when the receiver object receives a closure notification from the sender, or 2) if the session hasn't received any messages from the sender for over two minutes.
+From the receiver's perspective, messages coming from different sessions on the sender's side are directed to distinct receiver objects. If the SDK identifies a new session associated with an existing channelId on the receiver's side, it creates a new receiver object. The SDK doesn't close the older receiver object; such closure takes place 1) when the receiver object receives a closure notification from the sender, or 2) if the session hasn't received any messages from the sender for over two minutes.
In instances where the session of a receiver object is closed and no new session for the same channelId exists on the receiver's side, the SDK creates a new receiver object upon receipt of a message from the same session at a later time. However, if a new session for the same channelId exists on the receiver's side, the SDK discards any incoming messages from the previous session.
For instance, consider a scenario where a sender sends three messages. Initially
The maximum allowable size for a single message is 32 KB. If you need to send data larger than the limit, you'll need to divide the data into multiple messages. ### Participant list
-The maximum number of participants in a list is limited to 64. If you want to specify more participants, you'll need to manage participant list on your own. For example, if you want to send a message to 50 participants, you can create two different channels, each with 25 participants in their recipient lists.
-When calculating the limit, two endpoints with the same participant identifier will be counted as separate entities.
-As an alternative, you could opt for broadcasting messages. However, certain restrictions apply when broadcasting messages.
+The maximum number of participants in a list is limited to 64. If you want to specify more participants, you'll need to manage the participant list on your own. For example, if you want to send a message to 50 participants, you can create two different channels, each with 25 participants in their recipient lists. When calculating the limit, two endpoints with the same participant identifier are counted as separate entities. As an alternative, you could opt for broadcasting messages. However, certain restrictions apply when broadcasting messages.
### Rate limiting
-There's a limit on the overall send bitrate, currently set at 500 Kbps.
-However, when broadcasting messages, the send bitrate limit is dynamic and depends on the receive bitrate.
-In the current implementation, the send bitrate limit is calculated as the maximum send bitrate (500 Kbps) minus 80% of the receive bitrate.
+Currently, the calling SDK has rate limiting implemented, which prevents users from sending data at a higher speed even if their network allows it. The current bandwidth rate maximums for the data channel are:
+- Reliable channel (Durable): 64 kbps
+- Unreliable channel (Lossy): 512 kbps
+- High priority unreliable channel: 200 kbps
+
+However, when broadcasting messages, the send bitrate limit is dynamic and depends on the receive bitrate. In the current implementation, the send bitrate limit is calculated as the maximum send bitrate minus 80% of the receive bitrate.
-Furthermore, we also enforce a packet rate restriction when sending broadcast messages.
-The current limit is set at 80 packets per second, where every 1200 bytes in a message is counted as one packet.
-These measures are in place to prevent flooding when a significant number of participants in a group call are broadcasting messages.
+Furthermore, we also enforce a packet rate restriction when sending broadcast messages. The current limit is set at 80 packets per second, where every 1200 bytes in a message is counted as one packet. These measures are in place to prevent flooding when a significant number of participants in a group call are broadcasting messages.
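As a worked example of the broadcast formula above, with purely illustrative numbers:

```csharp
// Illustrative only: "maximum send bitrate minus 80% of the receive bitrate".
const double maxSendKbps = 512;  // unreliable (lossy) channel maximum from the list above
double receiveKbps = 300;        // hypothetical aggregate receive bitrate
double sendLimitKbps = maxSendKbps - 0.8 * receiveKbps; // 512 - 240 = 272 kbps
```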
## Next steps For more information, see the following articles:
communication-services Control Mid Call Media Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/control-mid-call-media-actions.md
Previously updated : 08/09/2023 Last updated : 11/16/2023
# How to control mid-call media actions with Call Automation - Call Automation uses a REST API interface to receive requests for actions and provide responses to notify whether the request was successfully submitted or not. Due to the asynchronous nature of calling, most actions have corresponding events that are triggered when the action completes successfully or fails. This guide covers the actions available to developers during calls, like Send DTMF and Continuous DTMF Recognition. Actions are accompanied with sample code on how to invoke the said action. Call Automation supports various other actions to manage calls and recording that aren't included in this guide.
You can send DTMF tones to an external participant, which may be useful when you
Send a list of DTMF tones to an external participant. ### [csharp](#tab/csharp) ```csharp
-var tones = new DtmfTone[] { DtmfTone.One, DtmfTone.Two, DtmfTone.Three, DtmfTone.Pound };
+var tones = new DtmfTone[] { DtmfTone.One, DtmfTone.Two, DtmfTone.Three, DtmfTone.Pound };
+var sendDtmfTonesOptions = new SendDtmfTonesOptions(tones, new PhoneNumberIdentifier(calleePhonenumber))
+{
+    OperationContext = "dtmfs-to-ivr"
+};
-await callAutomationClient.GetCallConnection(callConnectionId)
- .GetCallMedia()
- .SendDtmfTonesAsync(tones, new PhoneNumberIdentifier(c2Target), "dtmfs-to-ivr");
+var sendDtmfAsyncResult = await callAutomationClient.GetCallConnection(callConnectionId)
+ .GetCallMedia()
+        .SendDtmfTonesAsync(sendDtmfTonesOptions);
``` ### [Java](#tab/java) ```java List<DtmfTone> tones = Arrays.asList(DtmfTone.ONE, DtmfTone.TWO, DtmfTone.THREE, DtmfTone.POUND);
+SendDtmfTonesOptions options = new SendDtmfTonesOptions(tones, new PhoneNumberIdentifier(c2Target));
+options.setOperationContext("dtmfs-to-ivr");
callAutomationClient.getCallConnectionAsync(callConnectionId)
- .getCallMediaAsync()
- .sendDtmfTonesWithResponse(tones, new PhoneNumberIdentifier(c2Target), "dtmfs-to-ivr")
- .block();
+ .getCallMediaAsync()
+ .sendDtmfTonesWithResponse(options)
+ .block();
``` ### [JavaScript](#tab/javascript) ```javascript
await callAutomationClient.GetCallConnection(callConnectionId)
``` ### [Java](#tab/java) ```java
+ContinuousDtmfRecognitionOptions options = new ContinuousDtmfRecognitionOptions(new PhoneNumberIdentifier(c2Target));
+options.setOperationContext("dtmf-reco-on-c2");
callAutomationClient.getCallConnectionAsync(callConnectionId)
- .getCallMediaAsync()
- .startContinuousDtmfRecognitionWithResponse(new PhoneNumberIdentifier(c2Target), "dtmf-reco-on-c2")
- .block();
+ .getCallMediaAsync()
+ .startContinuousDtmfRecognitionWithResponse(options)
+ .block();
``` ### [JavaScript](#tab/javascript) ```javascript
When your application no longer wishes to receive DTMF tones from the participan
Stop detecting DTMF tones sent by participant. ### [csharp](#tab/csharp) ```csharp
-await callAutomationClient.GetCallConnection(callConnectionId)
- .GetCallMedia()
- .StopContinuousDtmfRecognitionAsync(new PhoneNumberIdentifier(c2Target), "dtmf-reco-on-c2");
+var continuousDtmfRecognitionOptions = new ContinuousDtmfRecognitionOptions(new PhoneNumberIdentifier(callerPhonenumber))
+{
+    OperationContext = "dtmf-reco-on-c2"
+};
+
+var stopContinuousDtmfRecognitionAsyncResult = await callAutomationClient.GetCallConnection(callConnectionId)
+    .GetCallMedia()
+    .StopContinuousDtmfRecognitionAsync(continuousDtmfRecognitionOptions);
``` ### [Java](#tab/java) ```java
+ContinuousDtmfRecognitionOptions options = new ContinuousDtmfRecognitionOptions(new PhoneNumberIdentifier(c2Target));
+options.setOperationContext("dtmf-reco-on-c2");
callAutomationClient.getCallConnectionAsync(callConnectionId)
- .getCallMediaAsync()
- .stopContinuousDtmfRecognitionWithResponse(new PhoneNumberIdentifier(c2Target), "dtmf-reco-on-c2")
- .block();
+ .getCallMediaAsync()
+ .stopContinuousDtmfRecognitionWithResponse(options)
+ .block();
``` ### [JavaScript](#tab/javascript) ```javascript
Your application receives event updates when these actions either succeed or fai
Example of how you can handle a DTMF tone successfully detected. ### [csharp](#tab/csharp) ``` csharp
-if (acsEvent is ContinuousDtmfRecognitionToneReceived continuousDtmfRecognitionToneReceived)
+if (acsEvent is ContinuousDtmfRecognitionToneReceived continuousDtmfRecognitionToneReceived)
{
- logger.LogInformation("Tone detected: sequenceId={sequenceId}, tone={tone}, context={context}",
- continuousDtmfRecognitionToneReceived.ToneInfo.SequenceId,
- continuousDtmfRecognitionToneReceived.ToneInfo.Tone,
- continuousDtmfRecognitionToneReceived.OperationContext);
+    logger.LogInformation("Tone detected: sequenceId={sequenceId}, tone={tone}",
+        continuousDtmfRecognitionToneReceived.SequenceId,
+        continuousDtmfRecognitionToneReceived.Tone);
} ``` ### [Java](#tab/java) ``` java
-if (acsEvent instanceof ContinuousDtmfRecognitionToneReceived) {
- ContinuousDtmfRecognitionToneReceived event = (ContinuousDtmfRecognitionToneReceived) acsEvent;
- log.info("Tone detected: sequenceId=" + event.getToneInfo().getSequenceId()
- + ", tone=" + event.getToneInfo().getTone().convertToString()
- + ", context=" + event.getOperationContext());
+if (acsEvent instanceof ContinuousDtmfRecognitionToneReceived) {
+    ContinuousDtmfRecognitionToneReceived event = (ContinuousDtmfRecognitionToneReceived) acsEvent;
+    log.info("Tone detected: sequenceId=" + event.getSequenceId()
+        + ", tone=" + event.getTone().convertToString()
+        + ", context=" + event.getOperationContext());
} ``` ### [JavaScript](#tab/javascript) ```javascript
-if (event.type === "Microsoft.Communication.ContinuousDtmfRecognitionToneReceived") {
- console.log("Tone detected: sequenceId=%s, tone=%s, context=%s",
- eventData.toneInfo.sequenceId,
- eventData.toneInfo.tone,
- eventData.operationContext);
-
-}
+if (event.type === "Microsoft.Communication.ContinuousDtmfRecognitionToneReceived") {
+    console.log("Tone detected: sequenceId=%s, tone=%s, context=%s",
+        eventData.sequenceId,
+        eventData.tone,
+        eventData.operationContext);
+}
``` ### [Python](#tab/python) ```python
-if event.type == "Microsoft.Communication.ContinuousDtmfRecognitionToneReceived":
- app.logger.info("Tone detected: sequenceId=%s, tone=%s, context=%s",
- event.data['toneInfo']['sequenceId'],
- event.data['toneInfo']['tone'],
- event.data['operationContext'])
+if event.type == "Microsoft.Communication.ContinuousDtmfRecognitionToneReceived":
+    app.logger.info("Tone detected: sequenceId=%s, tone=%s, context=%s",
+        event.data['sequenceId'],
+        event.data['tone'],
+        event.data['operationContext'])
``` --
communication-services Connect Whatsapp Business Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/advanced-messaging/whatsapp/connect-whatsapp-business-account.md
Get started with the Azure Communication Services Advanced Messaging, which exte
- Receive inquiries from your customers for product feedback or support, price quotes, and reschedule appointments. - Send your customer's notifications like appointment reminders, product discounts, transaction receipts, and one-time passcodes.
+## Overview
+
+This document provides information about registering a WhatsApp Business Account with Azure Communication Services. This [video](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=04c63978-6f27-4289-93d6-625d8569ee28) demonstrates the process.
+ ## Prerequisites - [Create Azure Communication Resource](../../create-communication-resource.md)
confidential-ledger Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-cli.md
For more information on Azure confidential ledger, and for examples of what can
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] + ## Create a resource group [!INCLUDE [Create resource group](../../includes/cli-rg-create.md)]
confidential-ledger Quickstart Ledger Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-ledger-explorer.md
+
+ Title: Use Ledger explorer to visually verify your transactions
+description: Learn to use the Microsoft Azure confidential ledger through Azure portal
+ Last updated : 11/08/2023
+# Quickstart: Upload, view and list ledger data with the Azure ledger explorer
+
+In this quickstart, learn how to use the [Azure portal](https://portal.azure.com) to list, view and verify the integrity and authenticity of the data stored in your Azure confidential ledger.
+
+## Prerequisites
+
+The ledger explorer is accessible through the Azure portal for your confidential ledger resource. You need to be signed in with an Entra ID user who has a Reader, Contributor, or Administrator role assigned to access the ledger explorer. For help managing Entra ID users for your ledger, see [Manage Microsoft Entra token-based users in Azure confidential ledger](./manage-azure-ad-token-based-users.md).
++
+## How to use the ledger explorer
+The ledger explorer allows you to view a list of all transactions on your ledger with their IDs and contents, filtered by collections. You can select a transaction row to see more details, such as the transaction ID, the transaction receipt, and the cryptographic proof.
+
+As the ledger is an append-only, sequential datastore, data is fetched sequentially starting from Transaction ID `2.1`, the start of the ledger.
+
+To use the ledger explorer, follow these steps:
+
+1) Open the Azure portal and log in as an Entra ID user who has a Reader, Contributor or Administrator role assigned for the confidential ledger resource.
+1) On the Overview page, navigate to the "Ledger explorer (preview)" tab
+
+### Searching for a transaction
+[CCF Transaction IDs](https://microsoft.github.io/CCF/main/use_apps/verify_tx.html#verifying-transactions) require both a view and a sequence number, separated by a `.`, for example `2.15`.
+
+Valid Transaction IDs start at `2.1`. Your transactions will receive a unique sequence number assigned by the system, and will be associated with a view.
+
+If you have previously recorded the specific Transaction ID of a past transaction, you may enter that Transaction ID in the search box to locate that transaction.
+
+- Search: You can use the filters and the search box to start your transaction search from any Transaction ID.
+
+### Creating an entry
+Entries can be created from the ledger explorer if you have the Administrator or Contributor role. You can use the ledger explorer to quickly create a new ledger entry by selecting the `Create` button in the command bar.
+
+Every entry requires a `Collection ID` along with some content. A default Collection ID `subledger:0` is assigned if you do not specify one.
+
+You can change the Collection ID using the dropdown, or specify a completely new collection by typing it in the `Collection Id` field.
+
+![Screenshot of how to post an entry in Ledger explorer.](./media/ledger-explorer-post.png)
+
+> [!WARNING]
+> Ledger entries are immutable. Once you have committed a transaction you cannot delete it.
+>
+
+## How to verify your ledger data
+One of the key features of Azure confidential ledger is that it provides cryptographic evidence, via transaction receipts, that your ledger data hasn't been tampered with.
+
+A transaction receipt is a JSON document that contains the metadata of a transaction, such as the transaction ID, cryptographic proofs and certificate information. You can use the transaction receipt to verify that a transaction exists on your ledger and that it has not been modified. To learn more about transaction receipts, please read [Write Transaction Receipts](./write-transaction-receipts.md).
+
+Ledger explorer performs the verification steps listed in [Verify Azure Confidential Ledger write transaction receipts](./verify-write-transaction-receipts.md) to verify the transaction receipt.
+
+To begin verifying a transaction:
+1. Click on a transaction in Ledger explorer
+1. Click on the `Proof` tab.
+
+### 1. Leaf node computation
+The transaction digest is computed from the `Claims Digest`, `Commit Evidence`, and `Write Set Digest`. This transaction digest is inserted as a leaf node into the Merkle tree.
+
+![Screenshot of the calculated transaction digest in Ledger explorer.](./media/ledger-explorer-transaction-digest.png)
+
+This step corresponds to [Leaf Node Computation](./verify-write-transaction-receipts.md#leaf-node-computation) in [Verify Azure Confidential Ledger write transaction receipts](./verify-write-transaction-receipts.md).
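A hedged sketch of that computation, assuming the receipt JSON carries hex-encoded `writeSetDigest` and `claimsDigest` fields and a UTF-8 `commitEvidence` string (see the linked receipt-verification article for the authoritative steps):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static string ComputeLeaf(string writeSetDigestHex, string commitEvidence, string claimsDigestHex)
{
    // Digest the commit evidence, then hash the concatenation of the three digests.
    byte[] commitEvidenceDigest = SHA256.HashData(Encoding.UTF8.GetBytes(commitEvidence));
    byte[] leaf = SHA256.HashData(
        Convert.FromHexString(writeSetDigestHex)
            .Concat(commitEvidenceDigest)
            .Concat(Convert.FromHexString(claimsDigestHex))
            .ToArray());
    return Convert.ToHexString(leaf).ToLowerInvariant();
}
```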
+
+### 2. Root node computation
+The transaction receipt provides a cryptographic proof with the Merkle tree branches that lead to the root of the Merkle tree.
+
+![Screenshot of the calculated Merkle root in Ledger explorer.](./media/ledger-explorer-calculated-root.png)
+
+This step corresponds to [Root node Computation](./verify-write-transaction-receipts.md#root-node-computation) in [Verify Azure Confidential Ledger write transaction receipts](./verify-write-transaction-receipts.md)
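A hedged sketch of folding that proof up to the root, modeling each proof element as a sibling hash marked left or right (the data shape is an assumption for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;

static byte[] ComputeRoot(byte[] leaf, IEnumerable<(bool IsLeft, byte[] Sibling)> proof) =>
    proof.Aggregate(leaf, (current, step) =>
        SHA256.HashData(step.IsLeft
            ? step.Sibling.Concat(current).ToArray()   // sibling hash on the left
            : current.Concat(step.Sibling).ToArray())); // sibling hash on the right
```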
+
+### 3. Verify signature
+When this transaction is committed, the primary node signs the Merkle root. To verify that this transaction was committed by your ledger and has not been tampered with, Ledger explorer uses the public key of the signing node and the digital signature to verify that the calculated Merkle root matches the signed value.
+
+Finally, we check that the signing node is endorsed by the ledger. If the transaction is committed and has not been tampered with, Ledger explorer will indicate that the `Globally Committed Status` is `verified`.
+
+![Screenshot of a verified signature in Ledger explorer.](./media/ledger-explorer-committed-status.png)
+
+This step corresponds to [Verify signature over root node](./verify-write-transaction-receipts.md#verify-signature-over-root-node) and [Verify signing node certificate endorsement](./verify-write-transaction-receipts.md#verify-signing-node-certificate-endorsement) in [Verify Azure Confidential Ledger write transaction receipts](./verify-write-transaction-receipts.md)
+
+## Next steps
+
+Learn more about using the SDK to write to and read from the ledger, and verify write transaction receipts:
+
+- [Quickstart: Microsoft Azure confidential ledger client library for Python](./quickstart-python.md)
+- [Verify write transaction receipts - Code Walkthrough](./verify-write-transaction-receipts.md#code-walkthrough)
+
confidential-ledger Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-portal.md
Azure confidential ledger is a cloud service that provides a high integrity stor
In this quickstart, you create a confidential ledger with the [Azure portal](https://portal.azure.com).
+## Prerequisites
++ ## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com).
confidential-ledger Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
In this quickstart, you create a confidential ledger with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+## Prerequisites
++ ## Create a resource group [!INCLUDE [Create resource group](../../includes/powershell-rg-create.md)]
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Microsoft Azure confidential ledger is a new and highly secure service for manag
## Prerequisites - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Python versions that are [supported by the Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python#prerequisites). - [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell).
confidential-ledger Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
[![Deploy To Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.confidentialledger%2Fconfidential-ledger-create%2Fazuredeploy.json) ## Prerequisites ### Azure subscription
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
Application rules allow or deny traffic based on the application layer. The foll
| Scenarios | FQDNs | Description | |--|--|--| | All scenarios | `mcr.microsoft.com`, `*.data.mcr.microsoft.com` | These FQDNs for Microsoft Container Registry (MCR) are used by Azure Container Apps and either these application rules or the network rules for MCR must be added to the allowlist when using Azure Container Apps with Azure Firewall. |
-| Azure Container Registry (ACR) | *Your-ACR-address*, `*.blob.windows.net`, `login.microsoft.com` | These FQDNs are required when using Azure Container Apps with ACR and Azure Firewall. |
+| Azure Container Registry (ACR) | *Your-ACR-address*, `*.blob.core.windows.net`, `login.microsoft.com` | These FQDNs are required when using Azure Container Apps with ACR and Azure Firewall. |
| Azure Key Vault | *Your-Azure-Key-Vault-address*, `login.microsoft.com` | These FQDNs are required in addition to the service tag required for the network rule for Azure Key Vault. | | Managed Identity | `*.identity.azure.net`, `login.microsoftonline.com`, `*.login.microsoftonline.com`, `*.login.microsoft.com` | These FQDNs are required when using managed identity with Azure Firewall in Azure Container Apps. | Docker Hub Registry | `hub.docker.com`, `registry-1.docker.io`, `production.cloudflare.docker.com` | If you're using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through the firewall, you need to add these FQDNs to the firewall. |
container-apps Tutorial Ci Cd Runners Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-ci-cd-runners-jobs.md
You can now create a job that uses the container image. In this section,
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" ` --trigger-type Event ` --replica-timeout 1800 `
- --replica-retry-limit 1 `
+ --replica-retry-limit 0 `
--replica-completion-count 1 ` --parallelism 1 ` --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" `
You can run a manual job to register an offline placeholder agent. The job runs
az containerapp job create -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Manual \ --replica-timeout 300 \
- --replica-retry-limit 1 \
+ --replica-retry-limit 0 \
--replica-completion-count 1 \ --parallelism 1 \ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$E
--polling-interval 30 \ --scale-rule-name "azure-pipelines" \ --scale-rule-type "azure-pipelines" \
- --scale-rule-metadata "poolName=container-apps" "targetPipelinesQueueLength=1" \
+ --scale-rule-metadata "poolName=$AZP_POOL" "targetPipelinesQueueLength=1" \
--scale-rule-auth "personalAccessToken=personal-access-token" "organizationURL=organization-url" \ --cpu "2.0" \ --memory "4Gi" \
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$E
--polling-interval 30 ` --scale-rule-name "azure-pipelines" ` --scale-rule-type "azure-pipelines" `
- --scale-rule-metadata "poolName=container-apps" "targetPipelinesQueueLength=1" `
+ --scale-rule-metadata "poolName=$AZP_POOL" "targetPipelinesQueueLength=1" `
--scale-rule-auth "personalAccessToken=personal-access-token" "organizationURL=organization-url" ` --cpu "2.0" ` --memory "4Gi" `
cosmos-db Analytical Store Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md
Previously updated : 07/11/2023 Last updated : 11/28/2023
-# Change Data Capture in Azure Cosmos DB analytical store (preview)
+# Change Data Capture in Azure Cosmos DB analytical store
[!INCLUDE[NoSQL, MongoDB](includes/appliesto-nosql-mongodb.md)]
cosmos-db Get Started Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-started-change-data-capture.md
Previously updated : 04/18/2023 Last updated : 11/28/2023
-# Get started with change data capture in the analytical store for Azure Cosmos DB (Preview)
+# Get started with change data capture in the analytical store for Azure Cosmos DB
[!INCLUDE[NoSQL, MongoDB](includes/appliesto-nosql-mongodb.md)]
cosmos-db How To Migrate From Change Feed Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-migrate-from-change-feed-library.md
Previously updated : 09/13/2021 Last updated : 11/28/2023 ms.devlang: csharp
For the delegate, you can have a static method to receive the events. If you wer
[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=Delegate)]
+## Health events and observability
+
+If you previously used `IHealthMonitor`, or you were leveraging `IChangeFeedObserver.OpenAsync` and `IChangeFeedObserver.CloseAsync`, use the [Notifications API](./change-feed-processor.md#life-cycle-notifications).
+
+* `IChangeFeedObserver.OpenAsync` can be replaced with `WithLeaseAcquireNotification`.
+* `IChangeFeedObserver.CloseAsync` can be replaced with `WithLeaseReleaseNotification`.
+* `IHealthMonitor.InspectAsync` can be replaced with `WithErrorNotification`.
+ ## State and lease container Similar to the change feed processor library, the change feed feature in .NET V3 SDK uses a [lease container](change-feed-processor.md#components-of-the-change-feed-processor) to store the state. However, the schemas are different.
cosmos-db Synapse Link Time Travel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link-time-travel.md
Previously updated : 06/01/2023 Last updated : 11/28/2023
-# Time travel in Azure Synapse Link for Azure Cosmos DB for NoSQL (preview)
+# Time travel in Azure Synapse Link for Azure Cosmos DB for NoSQL
[!INCLUDE[NoSQL, MongoDB](includes/appliesto-nosql-mongodb.md)]
Time travel enables you to access Azure Cosmos DB data in the analytical store,
This article covers how to do time travel analysis on your Azure Cosmos DB data stored in the analytical store. The analytical store is created when you enable Azure Synapse Link in your containers.
-> [!IMPORTANT]
-> Time Travel feature is currently in public preview. This preview version is provided without a service level agreement and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## How does it work? To perform time-travel operations on Azure Cosmos DB data, ensure that your Azure Cosmos DB account has been enabled for [Azure Synapse Link](synapse-link.md). Also, ensure that you have enabled Azure Synapse Link in your container. Azure Synapse Link enables the [analytical store](analytical-store-introduction.md) for your container, and is then used for Azure Synapse Link analysis including time travel.
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 11/14/2023 Last updated : 11/28/2023
For information about how to cancel a support plan, see [Cancel your Azure subsc
The following table describes product transfer support between the different agreement types. Links are provided for more information about each type of transfer.
-Currently transfer isn't supported for [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/) or [Azure in Open (AIO)](https://azure.microsoft.com/offers/ms-azr-0111p/) products. For a workaround, see [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
+Currently transfer isn't supported for [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/) products. For a workaround, see [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
Dev/Test products aren't shown in the following table. Transfers for Dev/Test products are handled in the same way as other product types. For example, an EA Dev/Test product transfer is handled in the same way as an EA product transfer.
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
You can buy savings plans in the [Azure portal](https://portal.azure.com/) or wi
## Why buy a savings plan?
-If you have consistent compute spend, but your use of disparate resources makes reservations infeasible, buying a savings plan gives you the ability to reduce your costs. For example, If you consistently spend at least $X every hour, but your usage comes from different resources and/or different datacenter regions, you likely can't effectively cover these costs with reservations. When you buy a savings plan, your hourly usage, up to your commitment amount, is discounted. For this usage, you no longer charged at the pay-as-you-go rates.
+If you have consistent compute spend, but your use of disparate resources makes reservations infeasible, buying a savings plan gives you the ability to reduce your costs. For example, if you consistently spend at least $X every hour, but your usage comes from different resources and/or different datacenter regions, you likely can't effectively cover these costs with reservations. When you buy a savings plan, your hourly usage, up to your commitment amount, is discounted. For this usage, you're no longer charged at the pay-as-you-go rates.
## How savings plan benefits are applied
If you have Azure savings plan for compute questions, contact your account team,
- Learn [how discounts apply to savings plans](discount-application.md). - [Trade in reservations for a savings plan](reservation-trade-in.md).-- [Buy a savings plan](buy-savings-plan.md).
+- [Buy a savings plan](buy-savings-plan.md).
data-factory Copy Activity Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-fault-tolerance.md
Previously updated : 10/20/2023 Last updated : 11/09/2023 # Fault tolerance of copy activity in Azure Data Factory and Synapse Analytics pipelines
Copy activity supports three scenarios for detecting, skipping, and logging inco
>[!NOTE] >- To load data into Azure Synapse Analytics using PolyBase, configure PolyBase's native fault tolerance settings by specifying reject policies via "[polyBaseSettings](connector-azure-sql-data-warehouse.md#azure-sql-data-warehouse-as-sink)" in copy activity. You can still enable redirecting PolyBase incompatible rows to Blob or ADLS as normal as shown below. >- This feature doesn't apply when copy activity is configured to invoke [Amazon Redshift Unload](connector-amazon-redshift.md#use-unload-to-copy-data-from-amazon-redshift).
->- This feature doesn't apply when copy activity is configured to invoke a [stored procedure from a SQL sink](./connector-azure-sql-database.md#invoke-a-stored-procedure-from-a-sql-sink).
+>- This feature doesn't apply when copy activity is configured to invoke a [stored procedure from a SQL sink](./connector-azure-sql-database.md#invoke-a-stored-procedure-from-a-sql-sink), or to use [Upsert](connector-azure-sql-database.md#upsert-data) to write data into a SQL sink.
### Configuration The following example provides a JSON definition to configure skipping the incompatible rows in copy activity:
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
Previously updated : 09/05/2023 Last updated : 11/28/2023
You can also enable the DDoS protection plan for an existing virtual network fro
Azure Firewall Manager is a platform to manage and protect your network resources at scale. You can associate your virtual networks with a DDoS protection plan within Azure Firewall Manager. This functionality is currently available in Public Preview. See [Configure an Azure DDoS Protection Plan using Azure Firewall Manager](../firewall-manager/configure-ddos.md). ## Enable DDoS protection for all virtual networks
The _MyVnet_ virtual network should be listed.
## View protected resources Under **Protected resources**, you can view your protected virtual networks and public IP addresses, or add more virtual networks to your DDoS protection plan: + ### Disable for a virtual network:
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
BreakingPoint Cloud offers:
- A simplified user interface and an "out-of-the-box" experience. - Pay-per-use model. - Predefined DDoS test sizing and test duration profiles enable safer validations by eliminating the potential of configuration errors.-- A free trail account.
+- A free trial account.
> [!NOTE] > For BreakingPoint Cloud, you must first [create a BreakingPoint Cloud account](https://www.ixiacom.com/products/breakingpoint-cloud).
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Defender for SQL servers on machines protects your SQL servers hosted in Azure,
- Learn more about [SQL Server on Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/). -- For on-premises SQL servers, you can learn more about [Azure Arc-enabled SQL Server](/sql/sql-server/azure-arc/overview) and how to [install Log Analytics agent on Windows computers without Azure Arc](../azure-monitor/agents/agent-windows.md).
+- For on-premises SQL servers, you can learn more about [SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/overview) and how to [install Log Analytics agent on Windows computers without Azure Arc](../azure-monitor/agents/agent-windows.md).
- For multicloud SQL servers:
defender-for-cloud Edit Devops Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/edit-devops-connector.md
After onboarding your Azure DevOps, GitHub, or GitLab environments to Microsoft
1. Navigate to **Configure access**. Here you can perform token exchange, change the organizations/groups onboarded, or toggle autodiscovery. > [!NOTE]
-> IF you are the owner of the connector, re-authorizing your environment to make changes is **optional**. For a user trying to take ownership of the connector, you must re-authorize using your access token. This change is irreversible as soon as you select 'Re-authorize'.
+> If you are the owner of the connector, re-authorizing your environment to make changes is **optional**.
+> If you are trying to take ownership of the connector, you must re-authorize using your access token. This change is irreversible as soon as you select 'Re-authorize'.
1. Use the **Edit connector account** component to make changes to onboarded inventory. If an organization/group is greyed out, ensure that you have proper permissions to the environment and that the scope isn't onboarded elsewhere in the tenant.
After onboarding your Azure DevOps, GitHub, or GitLab environments to Microsoft
## Next steps -- Learn more about [DevOps security in Defender for Cloud](defender-for-devops-introduction.md).
+- Learn more about [DevOps security in Defender for Cloud](defender-for-devops-introduction.md).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change | |--|--|--|
+| [Deprecation of two DevOps security recommendations](#deprecation-of-two-devops-security-recommendations) | November 30, 2023 | January 2024 |
| [Consolidation of Defender for Cloud's Service Level 2 names](#consolidation-of-defender-for-clouds-service-level-2-names) | November 1, 2023 | December 2023 | | [Changes to how Microsoft Defender for Cloud's costs are presented in Microsoft Cost Management](#changes-to-how-microsoft-defender-for-clouds-costs-are-presented-in-microsoft-cost-management) | October 25, 2023 | November 2023 | | [Four alerts are set to be deprecated](#four-alerts-are-set-to-be-deprecated) | October 23, 2023 | November 23, 2023 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Deprecation of two DevOps security recommendations
+
+**Announcement date: November 30, 2023**
+
+**Estimated date for change: January 2024**
+
+With the general availability of DevOps environment posture management, we're updating our approach so that recommendations are displayed in the subassessment format. Previously, broad recommendations encompassed multiple findings. Now, we're shifting to an individual recommendation for each specific finding. With this change, the following two broad recommendations will be deprecated:
+
+- `Azure DevOps Posture Management findings should be resolved`
+- `GitHub Posture Management findings should be resolved`
+
+This means that instead of a single recommendation for all discovered misconfigurations, we'll provide a distinct recommendation for each issue, such as "Azure DevOps service connections should not grant access to all pipelines". This change aims to enhance the clarity and visibility of specific issues.
+
+For more information, see the [new recommendations](recommendations-reference-devops.md).
+ ## Consolidation of Defender for Cloud's Service Level 2 names **Announcement date: November 1, 2023**
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
OT network sensors can detect the following protocols when identifying assets an
|**IETF** | ARP<br> DHCP<br> DCE RPC<br> DNS<br> FTP (FTP_ADAT<br> FTP_DATA)<br> GSSAPI (RFC2743)<br> HTTP<br> ICMP<br> IPv4<br> IPv6<br> LLDP<br> MDNS<br> NBNS<br> NTLM (NTLMSSP Auth Protocol)<br> RPC<br> SMB / Browse / NBDGM<br> SMB / CIFS<br> SNMP<br> SPNEGO (RFC4178)<br> SSH<br> Syslog<br> TCP<br> Telnet<br> TFTP<br> TPKT<br> UDP | |**ISO** | CLNP (ISO 8473)<br> COTP (ISO 8073)<br> ISO Industrial Protocol<br> MQTT (IEC 20922) | | **Jenesys** |FOX <br>Niagara |
-|**Medical** |ASTM<br> HL7 |
+|**Medical** |ASTM<br> HL7 <br> DICOM <br> POCT1 |
|**Microsoft** | Horizon community dissectors<br> Horizon proprietary dissectors (developed by customers) | |**Mitsubishi** | Melsoft / Melsec (Mitsubishi Electric) | |**Omron** | FINS <br>HTTP |
deployment-environments Best Practice Catalog Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/best-practice-catalog-structure.md
+
+ Title: "Best practices for structuring an Azure Deployment Environments catalog"
+description: "This article provides guidelines for structuring an Azure Deployment Environments catalog for efficient caching."
++++ Last updated : 11/27/2023+
+#customer intent: As a platform engineer, I want to structure my catalog so that Azure Deployment Environments can find and cache environment definitions efficiently.
+++
+# Best practices for Azure Deployment Environment catalogs
+
+This article describes the best practice guidelines for structuring an Azure Deployment Environments catalog.
+
+## Structure the catalog for efficient caching
+As a platform engineer, structure your catalog so that Azure Deployment Environments (ADE) can find and cache environment definitions efficiently. By organizing the repository into a specific structure, you target only the required files for caching and improve the overall performance of the deployment process.
+
+When you attach a catalog to a dev center, Deployment Environments scans the catalog for an environment.yaml file. On locating the file, ADE assumes that the files in that folder and its subfolders form an environment definition. ADE caches only the required files, not the entire repository.
+
+The following diagram shows the recommended structure for a repo. Each template resides within a single folder.
++
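+Although the diagram isn't reproduced here, the layout it describes looks roughly like the following sketch. The folder and file names are hypothetical; the point is that each environment definition keeps its environment.yaml and template files together in one self-contained folder.
+
+```
+catalog/
+    web-app-environment/          <- hypothetical environment definition
+        environment.yaml          <- ADE scans for this file
+        azuredeploy.json          <- template referenced by environment.yaml
+    data-pipeline-environment/    <- another self-contained definition
+        environment.yaml
+        main.bicep
+```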
+## Linked environment definitions
+In a linked environment definitions scenario, multiple .json files can point to a single ARM template. ADE checks linked environment definitions sequentially and retrieves the linked files and environment definitions from the repository. For best performance, these interactions should be minimized.
+
+## Update environment definitions and sync changes
+Over time, environment definitions need updates. Make those updates in your Git repository, and then manually sync the catalog to propagate the changes to ADE.
+
+## Files outside recommended structure
+In the following example, the Azuredeploy.json file sits above the environment.yaml file in the folder structure. This structure isn't valid for Azure Deployment Environments catalogs, because environment definitions can't reference content outside of the catalog item folder.
++
+## Related content
+- [Add and configure a catalog from GitHub or Azure DevOps in Azure Deployment Environments](/azure/deployment-environments/how-to-configure-catalog?tabs=DevOpsRepoMSI)
+- [Add and configure an environment definition in Azure Deployment Environments](/azure/deployment-environments/configure-environment-definition)
deployment-environments How To Install Devcenter Cli Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-install-devcenter-cli-extension.md
Title: Install the devcenter Azure CLI extension
-description: Learn how to install the Azure CLI and the Deployment Environments CLI extension so you can create Deployment Environments resources from the command line.
+description: Learn how to install the Azure CLI extension for Azure Deployment Environments so you can create resources from the command line.
Previously updated : 04/25/2023 Last updated : 11/22/2023 #Customer intent: As a platform engineer, I want to install the devcenter extension so that I can create Deployment Environments resources from the command line.
-# Azure Deployment Environments Azure CLI extension
+# Install the Azure CLI extension for Azure Deployment Environments
-In addition to the Azure admin portal and the developer portal, you can use the Deployment Environments Azure CLI extension to create resources. Azure Deployment Environments and Microsoft Dev Box use the same Azure CLI extension, which is called *devcenter*.
+In addition to the Azure admin portal and the developer portal, you can use the Azure Deployment Environments CLI extension to create resources. Azure Deployment Environments and Microsoft Dev Box use the same Azure CLI extension, which is called *devcenter*.
## Install the devcenter extension
-To install the devcenter extension, you first need to install the Azure CLI. The following steps show you how to install the Azure CLI, then the devcenter extension.
+You first need to install the Azure CLI, and then install the devcenter extension.
1. Download and install the [Azure CLI](/cli/azure/install-azure-cli).
-1. Install the devcenter extension
-``` azurecli
-az extension add --name devcenter
-```
-1. Check that the devcenter extension is installed
-``` azurecli
-az extension list
-```
+1. Install the devcenter extension by using the following command.
+ ``` azurecli
+ az extension add --name devcenter
+ ```
+
+1. Check that the devcenter extension was installed.
+ ``` azurecli
+ az extension list
+ ```
+ ### Update the devcenter extension
-You can update the devcenter extension if you already have it installed.
-To update a version of the extension that's installed
+If you already have the devcenter extension installed, you can update it.
``` azurecli az extension update --name devcenter ```+ ### Remove the devcenter extension
-To remove the extension, use the following command
+To remove the extension, use the following command.
```azurecli az extension remove --name devcenter ``` ## Get started with the devcenter extension
-You might find the following commands useful as you work with the devcenter extension.
+You might find the following commands useful while you work with the devcenter extension.
-1. Sign in to Azure CLI with your work account.
+1. Sign in to the Azure CLI with your account.
```azurecli az login
You might find the following commands useful as you work with the devcenter exte
1. Set your default subscription to the subscription where you're creating your specific Deployment Environments resources. ```azurecli
- az account set --subscription {subscriptionId}
+ az account set --subscription <subscriptionId>
```
-1. Set default resource group. Setting a default resource group means you don't need to specify the resource group for each command.
+1. Set a default resource group so that you don't need to specify the resource group for each command.
```azurecli
- az configure --defaults group={resourceGroupName}
+ az configure --defaults group=<resourceGroupName>
```
-1. Get Help for a command
+1. Get help for a command.
```azurecli az devcenter admin --help
You might find the following commands useful as you work with the devcenter exte
## Next steps
-For complete command listings, refer to the [Microsoft Deployment Environments and Azure Deployment Environments Azure CLI documentation](https://aka.ms/CLI-reference).
+For complete command listings, see the [Microsoft Dev Box and Azure Deployment Environments Azure CLI documentation](https://aka.ms/CLI-reference).
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 10/23/2023 Last updated : 11/27/2023 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Outbound endpoints have the following limitations:
- IPv6 enabled subnets aren't supported. - DNS private resolver does not support Azure ExpressRoute FastPath.
+- DNS private resolver inbound endpoint provisioning isn't compatible with [Azure Lighthouse](../lighthouse/overview.md).
+ - To see if Azure Lighthouse is in use, search for **Service providers** in the Azure portal and select **Service provider offers**.
## Next steps
event-grid Configure Firewall Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-firewall-mqtt.md
# Configure IP firewall for Azure Event Grid namespaces
-By default, Event Grid namespaces and entities in them such as Message Queuing Telemetry Transport (MQTT) topic spaces are accessible from internet as long as the request comes with valid authentication (access key) and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Publishers originating from any other IP address are rejected and receive a 403 (Forbidden) response. For more information about network security features supported by Event Grid, see [Network security for Event Grid](network-security.md).
+By default, Event Grid namespaces and the entities in them, such as Message Queuing Telemetry Transport (MQTT) topic spaces, are accessible from the internet as long as the request comes with valid authentication (access key) and authorization. With an IP firewall, you can restrict access further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Only the MQTT clients that fall into the allowed IP range can connect to publish and subscribe. Clients originating from any other IP address are rejected and receive a 403 (Forbidden) response. For more information about network security features supported by Event Grid, see [Network security for Event Grid](network-security.md).
This article describes how to configure IP firewall settings for an Event Grid namespace. For complete steps for creating a namespace, see [Create and manage namespaces](create-view-manage-namespaces.md).
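For illustration only, a namespace-level inbound IP rule configured with the Azure CLI might look like the following sketch. The `--inbound-ip-rules` parameter shape is an assumption modeled on the topic-level commands; verify with `az eventgrid namespace update --help` before use.

```azurecli
# Hypothetical sketch: restrict a namespace to a single IPv4 CIDR range.
# The --inbound-ip-rules shape is an assumption; verify with --help first.
az eventgrid namespace update \
    --resource-group my-resource-group \
    --name my-namespace \
    --public-network-access enabled \
    --inbound-ip-rules "[{ip-mask:'131.107.0.0/16',action:Allow}]"
```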
event-grid Configure Private Endpoints Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-private-endpoints-mqtt.md
# Configure private endpoints for Azure Event Grid namespaces with MQTT enabled
-You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow ingress of events directly from your virtual network to entities in your Event Grid namespaces securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. The private endpoint uses an IP address from the virtual network address space for your namespace. For more conceptual information, see [Network security](network-security.md).
+You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow ingress of events directly from your virtual network to entities in your Event Grid namespaces securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. The private endpoint uses an IP address from the virtual network address space for your namespace. When an MQTT client on a private network connects to the MQTT broker on a private link, the client can publish and subscribe to MQTT messages. For more conceptual information, see [Network security](network-security.md).
This article shows you how to enable private network access for an Event Grid namespace. For complete steps for creating a namespace, see [Create and manage namespaces](create-view-manage-namespaces.md).
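For reference, creating the private endpoint with the Azure CLI might look like the following sketch. The `--group-id` value is an assumption (list the valid sub-resources with `az network private-link-resource list`); the other names are placeholders.

```azurecli
# Sketch: create a private endpoint for an Event Grid namespace.
# The --group-id value is an assumption; list valid values with
# az network private-link-resource list before using this.
namespace_id=$(az eventgrid namespace show \
    --resource-group my-resource-group --name my-namespace \
    --query id --output tsv)
az network private-endpoint create \
    --resource-group my-resource-group \
    --name my-namespace-pe \
    --vnet-name my-vnet \
    --subnet my-subnet \
    --private-connection-resource-id "$namespace_id" \
    --group-id topicspace \
    --connection-name my-namespace-connection
```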
event-grid Mqtt Certificate Chain Client Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-certificate-chain-client-authentication.md
Use the generated CA files to create a certificate for the client.
## Upload the CA certificate to the namespace 1. In Azure portal, navigate to your Event Grid namespace.
-1. Under the MQTT section in left rail, navigate to CA certificates menu.
+1. Under the MQTT broker section in the left rail, navigate to the CA certificates menu.
1. Select **+ Certificate** to launch the Upload certificate page. 1. Add certificate name and browse to find the intermediate certificate (.step/certs/intermediate_ca.crt) and select **Upload**. You can upload a file of .pem, .cer, or .crt type.
-1. On the Upload certificate page, give a Certificate name and browse for the certificate file.
-1. Select **Upload** button to add the parent certificate.
+ :::image type="content" source="./media/mqtt-certificate-chain-client-authentication/event-grid-namespace-parent-certificate-added.png" alt-text="Screenshot showing the added CA certificate listed in the CA certificates page." lightbox="./media/mqtt-certificate-chain-client-authentication/event-grid-namespace-parent-certificate-added.png":::
expressroute Expressroute Howto Add Gateway Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-gateway-portal-resource-manager.md
The steps for this tutorial use the values in the following configuration refere
:::image type="content" source="./media/expressroute-howto-add-gateway-portal-resource-manager/add-gateway-subnet.png" alt-text="Screenshot that shows the button to add the gateway subnet.":::
-1. The **Name** for your subnet is automatically filled in with the value 'GatewaySubnet'. This value is required in order for Azure to recognize the subnet as the gateway subnet. Adjust the autofilled **Address range** values to match your configuration requirements. We recommend creating a gateway subnet with a /27 or larger (/26, /25, and so on.). If you plan on connecting 16 ExpressRoute circuits to your gateway, you **must** create a gateway subnet of /26 or larger.
+1. The **Name** for your subnet is automatically filled in with the value 'GatewaySubnet'. This value is required for Azure to recognize the subnet as the gateway subnet. Adjust the autofilled **Address range** values to match your configuration requirements. **You need to create the GatewaySubnet with a /27 or larger** (/26, /25, and so on). Subnets of /28 or smaller aren't supported for new deployments. If you plan on connecting 16 ExpressRoute circuits to your gateway, you **must** create a gateway subnet of /26 or larger.
If you're using a dual stack virtual network and plan to use IPv6-based private peering over ExpressRoute, select **Add IP6 address space** and enter **IPv6 address range** values.
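If you script this step instead of using the portal, creating a compliant gateway subnet might look like the following sketch. The resource names and address range are placeholders; only the subnet name **GatewaySubnet** is required.

```azurecli
# Sketch: add a /27 gateway subnet to an existing virtual network.
# The subnet name must be exactly "GatewaySubnet"; the address range
# is an example and must fit inside your virtual network's address space.
az network vnet subnet create \
    --resource-group my-resource-group \
    --vnet-name my-vnet \
    --name GatewaySubnet \
    --address-prefixes 10.0.255.0/27
```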
firewall Firewall Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-best-practices.md
+
+ Title: Azure Firewall best practices for performance
+description: Learn how to configure Azure Firewall to maximize performance
++++ Last updated : 11/17/2023+++
+# Best practices for Azure Firewall performance
+
+To maximize the [performance](firewall-performance.md) of your Azure Firewall and Firewall policy, it's important to follow best practices. However, certain network behaviors or features can affect the firewall's performance and latency, despite its performance optimization capabilities.
+
+## Common causes of performance issues
+
+- **Exceeding rule limitations**
+
+ Exceeding limitations, such as using more than 20,000 unique source/destination combinations in rules, can affect firewall traffic processing and cause latency. Even though this is a soft limit, surpassing this value can degrade overall firewall performance. For more information, see the [documented limits](../nat-gateway/tutorial-hub-spoke-nat-firewall.md).
+
+- **High traffic throughput**
+
+ Azure Firewall Standard supports up to 30 Gbps, while Premium supports up to 100 Gbps. For more information, see the [throughput limitations](firewall-performance.md#performance-data). You can monitor your throughput or data processing in Azure Firewall metrics. For more information, see [Azure Firewall metrics](logs-and-metrics.md#metrics).
+
+- **High number of connections**
+
+ An excessive number of connections passing through the firewall can lead to SNAT (Source Network Address Translation) port exhaustion.
+
+- **IDPS Alert + Deny Mode**
+
+ If you enable IDPS Alert + Deny Mode, the firewall drops packets that match an IDPS signature. This affects performance.
+
+## Recommendations
+
+- **Optimize rule configuration and processing**
+
+ - Organize rules using firewall policy into Rule Collection Groups and Rule Collections, prioritizing them based on their use frequency.
+ - Use [IP Groups](ip-groups.md) or IP prefixes to reduce the number of IP table rules.
+ - Prioritize rules with the highest number of hits.
+ - Ensure that you stay within the documented [rule limitations](../nat-gateway/tutorial-hub-spoke-nat-firewall.md).
+- **Use or migrate to Azure Firewall Premium**
+ - Azure Firewall Premium uses advanced hardware and offers a higher-performing underlying engine.
+ - Best for heavier workloads and higher traffic volumes.
+ - It also includes built-in accelerated networking software, which can achieve throughput of up to 100 Gbps, unlike the Standard version.
+- **Add multiple public IP addresses to the firewall to prevent SNAT port exhaustion**
+ - To prevent SNAT port exhaustion, consider adding multiple public IP addresses (PIPs) to your firewall. Azure Firewall provides [2,496 SNAT ports per additional PIP](../nat-gateway/tutorial-hub-spoke-nat-firewall.md). A CLI sketch for adding a PIP follows this list.
+ - If you prefer not to add more PIPs, you can add an Azure NAT Gateway to scale SNAT port usage. This provides advanced SNAT port allocation capabilities.
+- **Start with IDPS Alert mode before you enable Alert + Deny mode**
+ - While the *Alert + Deny* mode offers enhanced security by blocking suspicious traffic, it can also introduce more processing overhead. If you disable this mode, you might observe performance improvement, especially in scenarios where the firewall is primarily used for routing and not deep packet inspection.
+ - It's essential to remember that traffic through the firewall is denied by default until you explicitly configure *allow* rules. Therefore, even when IDPS *Alert + Deny* mode is disabled, your network remains protected, and only explicitly permitted traffic is allowed to pass through the firewall. It can be a strategic choice to disable this mode to optimize performance without compromising the core security features provided by the Azure Firewall.
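+
+As a sketch of the public IP recommendation above (all resource names are placeholders; the `az network firewall` commands require the `azure-firewall` CLI extension):
+
+```azurecli
+# Sketch: attach an additional public IP address to an existing firewall
+# to gain more SNAT ports. All resource names are placeholders.
+az network public-ip create \
+    --resource-group my-resource-group \
+    --name fw-pip-2 \
+    --sku Standard
+az network firewall ip-config create \
+    --resource-group my-resource-group \
+    --firewall-name my-firewall \
+    --name fw-ipconfig-2 \
+    --public-ip-address fw-pip-2
+```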
+
+## Testing and monitoring
+
+To ensure optimal performance for your Azure Firewall, you should continuously and proactively monitor it. It's crucial to regularly assess the health and key metrics of your firewall to identify potential issues and maintain efficient operation, especially during configuration changes.
+
+Use the following best practices for testing and monitoring:
+
+- **Test latency introduced by the firewall**
+ - To assess the latency added by the firewall, measure the latency of your traffic from the source to the destination by temporarily bypassing the firewall. To do this, reconfigure your routes to bypass the firewall. Compare the latency measurements with and without the firewall to understand its effect on traffic.
+- **Measure firewall latency using latency probe metrics**
+ - Use the *latency probe* metric to measure the average latency of the Azure Firewall. This metric provides an indirect measure of the firewall's performance. Remember that intermittent latency spikes are normal.
+- **Measure traffic throughput metric**
+ - Monitor the *traffic throughput* metric to understand how much data passes through the firewall. This helps you gauge the firewall's capacity and its ability to handle network traffic.
+- **Measure data processed**
+ - Keep track of the *data processed* metric to assess the volume of data processed by the firewall.
+- **Identify rule hits and performance spikes**
+ - Look for spikes in network performance or latency. Correlate rule hit timestamps, such as application rules hit count and network rules hit count, to determine if rule processing is a significant factor contributing to performance or latency issues. By analyzing these patterns, you can identify specific rules or configurations that you might need to optimize.
+- **Add alerts to key metrics**
+ - In addition to regular monitoring, it's crucial to set up alerts for key firewall metrics. This ensures that you're promptly notified when specific metrics surpass predefined thresholds. To configure alerts, see [Azure Firewall logs and metrics](logs-and-metrics.md#alert-on-azure-firewall-metrics) for detailed instructions about setting up effective alerting mechanisms. Proactive alerting enhances your ability to respond swiftly to potential issues and maintain optimal firewall performance. A hedged CLI sketch follows this list.
+
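+As a hedged sketch of the alerting recommendation above, the following creates a metric alert on SNAT port utilization. The metric name `SNATPortUtilization` and the threshold are assumptions; confirm the metric names available for your firewall before relying on this.
+
+```azurecli
+# Hedged sketch: alert when firewall SNAT port utilization exceeds 80%.
+# The metric name is an assumption; list real names with:
+#   az monitor metrics list-definitions --resource <firewall-resource-id>
+az monitor metrics alert create \
+    --resource-group my-resource-group \
+    --name fw-snat-alert \
+    --scopes <firewall-resource-id> \
+    --condition "avg SNATPortUtilization > 80" \
+    --description "Azure Firewall SNAT port utilization above 80 percent"
+```
+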
+## Next steps
+
+- [Azure Firewall performance](firewall-performance.md)
frontdoor Front Door Cdn Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-cdn-comparison.md
The following table provides a comparison between Azure Front Door and Azure CDN
| Easy integration with Azure services, such as Storage and Web Apps | Yes | Yes | Yes | Yes | Yes | Yes | | Management via REST API, .NET, Node.js, or PowerShell | Yes | Yes | Yes | Yes | Yes | Yes | | Compression MIME types | Configurable | Configurable | Configurable | Configurable | Configurable | Configurable |
-| Compression encodings | gzip, brotli | gzip, brotli | gzip, brotli | gzip, brotli | gzip, deflate, bzip2 | gzip, deflate, bzip2 |
+| Compression encodings | gzip, brotli | gzip, brotli | gzip, brotli | gzip, brotli | gzip, deflate, bzip2 | gzip, deflate, bzip2, brotli |
| Azure Policy integration | No | No | Yes | No | No | No | | Azure Advisory integration | Yes | Yes | No | No | Yes | Yes | | Managed Identities with Azure Key Vault | Yes | Yes | No | No | No | No |
hdinsight Apache Kafka Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-powershell.md
description: In this quickstart, you learn how to create an Apache Kafka cluster
Previously updated : 10/19/2022 Last updated : 11/28/2023 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
In this quickstart, you learn how to create an [Apache Kafka](https://kafka.apac
[!INCLUDE [delete-cluster-warning](../includes/hdinsight-delete-cluster-warning.md)]
-The Kafka API can only be accessed by resources inside the same virtual network. In this quickstart, you access the cluster directly using SSH. To connect other services, networks, or virtual machines to Kafka, you must first create a virtual network and then create the resources within the network. For more information, see the [Connect to Apache Kafka using a virtual network](apache-kafka-connect-vpn-gateway.md) document.
+Only resources within the same virtual network have access to the Kafka API. In this quickstart, you access the cluster directly using SSH. To connect other services, networks, or virtual machines to Kafka, you must first create a virtual network and then create the resources within the network. For more information, see the [Connect to Apache Kafka using a virtual network](apache-kafka-connect-vpn-gateway.md) document.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
In this section, you get the host information from the Apache Ambari REST API on
When prompted, enter the name of the Kafka cluster.
-3. To set an environment variable with Zookeeper host information, use the command below. The command retrieves all Zookeeper hosts, then returns only the first two entries. This is because you want some redundancy in case one host is unreachable.
+3. To set an environment variable with Zookeeper host information, use the following command. The command retrieves all Zookeeper hosts, then returns only the first two entries. This is because you want some redundancy in case one host is unreachable.
```bash export KAFKAZKHOSTS=`curl -sS -u admin -G https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2`
hdinsight Apache Spark Jupyter Spark Sql Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql-use-powershell.md
Title: 'Quickstart: Create Apache Spark cluster on Azure HDInsight with Azure PowerShell'
-description: This quickstart shows how to use Azure PowerShell to create an Apache Spark cluster in Azure HDInsight, and run a simple Spark SQL query.
+description: This quickstart shows how to use Azure PowerShell to create an Apache Spark cluster in Azure HDInsight, and run a Spark SQL query.
Previously updated : 10/18/2022 Last updated : 11/28/2023 #Customer intent: As a developer new to Apache Spark on Azure, I need to see how to create a spark cluster and query some data.
In this quickstart, you use Azure PowerShell to create an Apache Spark cluster i
[Overview: Apache Spark on Azure HDInsight](apache-spark-overview.md) | [Apache Spark](https://spark.apache.org/) | [Apache Hive](https://hive.apache.org/) | [Jupyter Notebook](https://jupyter.org/)
-If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md).
+If you're using multiple clusters together, you can create a virtual network, and if you're using a Spark cluster you can use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md).
## Prerequisite
Creating an HDInsight cluster includes creating the following Azure objects and
- An Azure resource group. An Azure resource group is a container for Azure resources. - An Azure storage account or Azure Data Lake Storage. Each HDInsight cluster requires a dependent data storage. In this quickstart, you create a cluster that uses Azure Storage Blobs as the cluster storage. For more information on using Data Lake Storage Gen2, see [Quickstart: Set up clusters in HDInsight](../hdinsight-hadoop-provision-linux-clusters.md).-- An cluster of different cluster types on HDInsight. In this quickstart, you create a Spark 2.3 cluster.
+- An HDInsight cluster of one of the supported cluster types. In this quickstart, you create a Spark 2.3 cluster.
You use a PowerShell script to create the resources.
When you run the PowerShell script, you are prompted to enter the following valu
|Parameter|Value| ||| |Azure resource group name | Provide a unique name for the resource group.|
-|Location| Specify the Azure region, for example 'Central US'. |
+|Location| Specify the Azure region, for example 'Central US'. |
|Default storage account name | Provide a unique name for the storage account. | |Cluster name | Provide a unique name for the HDInsight cluster.| |Cluster login credentials | You use this account to connect to the cluster dashboard later in the quickstart.|
If you run into an issue with creating HDInsight clusters, it could be that you
1. In the [Azure portal](https://portal.azure.com), search for and select **HDInsight clusters**.
- :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-powershell/azure-portal-search-hdinsight-cluster.png" alt-text="Screenshot shows the Azure portal search for H D Insight." border="true":::
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-powershell/azure-portal-search-hdinsight-cluster.png" alt-text="Screenshot shows the Azure portal search for HDInsight." border="true":::
1. From the list, select the cluster you created.
- :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-powershell/azure-portal-open-hdinsight-cluster.png" alt-text="Screenshot shows H D Insight clusters with the cluster that you created." border="true":::
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql-use-powershell/azure-portal-open-hdinsight-cluster.png" alt-text="Screenshot shows HDInsight clusters with the cluster that you created." border="true":::
1. On the cluster **Overview** page, select **Cluster dashboards**, and then select **Jupyter Notebook**. If prompted, enter the cluster login credentials for the cluster.
hdinsight Apache Spark Manage Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-manage-dependencies.md
Previously updated : 10/18/2022 Last updated : 11/28/2023 #Customer intent: As a developer for Apache Spark and Apache Spark in Azure HDInsight, I want to learn how to manage my Spark application dependencies and install packages on my HDInsight cluster.
When a Spark session starts in Jupyter Notebook on Spark kernel for Scala, you c
* [Maven Repository](https://search.maven.org/), or community-contributed packages at [Spark Packages](https://spark-packages.org/). * Jar files stored on your cluster's primary storage.
-You'll use the `%%configure` magic to configure the notebook to use an external package. In notebooks that use external packages, make sure you call the `%%configure` magic in the first code cell. This ensures that the kernel is configured to use the package before the session starts.
+You can use the `%%configure` magic to configure the notebook to use an external package. In notebooks that use external packages, make sure you call the `%%configure` magic in the first code cell. This ensures that the kernel is configured to use the package before the session starts.
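As an illustration, a first notebook cell that loads an external package might look like the following sketch. The `-f` option restarts the Spark session if one is already running; the Maven coordinate shown (the Cosmos DB Spark connector imported later in this article) is illustrative.

```
%%configure -f
{ "conf": {"spark.jars.packages": "com.microsoft.azure:azure-cosmosdb-spark_2.3.0_2.11:1.2.2"} }
```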
> >[!IMPORTANT]
import com.microsoft.azure.cosmosdb.spark._
## Jar libs for cluster In some cases, you may want to configure the jar dependencies at cluster level so that every application can be set up with same dependencies by default. The approach is to add your jar paths to Spark driver and executor class path.
-1. Run below sample script actions to copy jar files from primary storage `wasb://mycontainer@mystorageaccount.blob.core.windows.net/libs/*` to cluster local file system `/usr/libs/sparklibs`. The step is needed as linux uses `:` to separate class path list, but HDInsight only support storage paths with scheme like `wasb://`. The remote storage path won't work correctly if you directly add it to class path.
+1. Run the following sample script action to copy jar files from primary storage `wasb://mycontainer@mystorageaccount.blob.core.windows.net/libs/*` to the cluster local file system `/usr/libs/sparklibs`. This step is needed because Linux uses `:` to separate the class path list, but HDInsight only supports storage paths with a scheme like `wasb://`. The remote storage path won't work correctly if you add it directly to the class path.
```bash sudo mkdir -p /usr/libs/sparklibs
HDInsight cluster has built-in jar dependencies, and updates for these jar versi
## Python packages for one Spark job ### Use Jupyter Notebook
-HDInsight Jupyter Notebook PySpark kernel doesn't support installing Python packages from PyPi or Anaconda package repository directly. If you have `.zip`, `.egg`, or `.py` dependencies, and want to reference them for one Spark session, follow below steps:
-1. Run below sample script actions to copy `.zip`, `.egg` or `.py` files from primary storage `wasb://mycontainer@mystorageaccount.blob.core.windows.net/libs/*` to cluster local file system `/usr/libs/pylibs`. The step is needed as linux uses `:` to separate search path list, but HDInsight only support storage paths with scheme like `wasb://`. The remote storage path won't work correctly when you use `sys.path.insert`.
+The HDInsight Jupyter Notebook PySpark kernel doesn't support installing Python packages directly from the PyPi or Anaconda package repositories. If you have `.zip`, `.egg`, or `.py` dependencies and want to reference them for one Spark session, follow these steps:
+
+1. Run the following sample script action to copy `.zip`, `.egg`, or `.py` files from primary storage `wasb://mycontainer@mystorageaccount.blob.core.windows.net/libs/*` to the cluster local file system `/usr/libs/pylibs`. This step is needed because Linux uses `:` to separate the search path list, but HDInsight only supports storage paths with a scheme like `wasb://`. The remote storage path won't work correctly when you use `sys.path.insert`.
```bash sudo mkdir -p /usr/libs/pylibs sudo hadoop fs -copyToLocal wasb://mycontainer@mystorageaccount.blob.core.windows.net/libs/*.* /usr/libs/pylibs ```
-2. In your notebook, run below code in a code cell with PySpark kernel:
+2. In your notebook, run the following code in a code cell with the PySpark kernel:
```python import sys
iot-operations Howto Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-custom/howto-helm.md
var baseAkriValues = loadYamlContent('base.yml')
var overlayAkriValues = loadYamlContent('overlay.yml') var akriValues = union(baseAkriValues, overlayAkriValues)
-resource helmChart 'Microsoft.iotoperationsorchestrator/targets@2023-05-22-preview' = {
+resource helmChart 'Microsoft.iotoperationsorchestrator/targets@2023-10-04-preview' = {
name: 'akri-helm-chart-override' location: clusterLocation extendedLocation: {
iot-operations Howto K8s https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-custom/howto-k8s.md
param customLocationName string
var k8sConfigMap = loadYamlContent('config-map.yml')
-resource k8sResource 'Microsoft.iotoperationsorchestrator/targets@2023-05-22-preview' = {
+resource k8sResource 'Microsoft.iotoperationsorchestrator/targets@2023-10-04-preview' = {
name: 'k8s-resource' location: clusterLocation extendedLocation: {
machine-learning Concept What Is Managed Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-what-is-managed-feature-store.md
Managed feature store provides the following security capabilities:
## Next steps - [Understanding top-level entities in managed feature store](concept-top-level-entities-in-managed-feature-store.md)-- [Manage access control for managed feature store](how-to-setup-access-control-feature-store.md)
+- [Manage access control for managed feature store](how-to-setup-access-control-feature-store.md)
+- [Azure Machine Learning managed feature stores samples repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/featurestore_sample)
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
Azure Machine Learning workspaces have five built-in roles that are available
| Role | Access level | | | | | **AzureML Data Scientist** | Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself. |
-| **AzureML Compute Operator** | Can create, manage and access compute resources within a workspace.|
+| **AzureML Compute Operator** | Can create, manage, delete, and access compute resources within a workspace.|
| **Reader** | Read-only actions in the workspace. Readers can list and view assets, including [datastore](how-to-access-data.md) credentials, in a workspace. Readers can't create or update these assets. | | **Contributor** | View, create, edit, or delete (where applicable) assets in a workspace. For example, contributors can create an experiment, create or attach a compute cluster, submit a run, and deploy a web service. | | **Owner** | Full access to the workspace, including the ability to view, create, edit, or delete (where applicable) assets in a workspace. Additionally, you can change role assignments. |
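For instance, granting one of these built-in roles at workspace scope with the Azure CLI might look like the following sketch (the assignee and resource IDs are placeholders):

```azurecli
# Sketch: assign the built-in AzureML Compute Operator role at workspace scope.
# The assignee, subscription ID, resource group, and workspace are placeholders.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "AzureML Compute Operator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.MachineLearningServices/workspaces/my-workspace"
```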
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
To view log output, select the **Deployment logs** tab in the endpoint's **Detai
:::image type="content" source="media/how-to-deploy-online-endpoints/deployment-logs.png" lightbox="media/how-to-deploy-online-endpoints/deployment-logs.png" alt-text="A screenshot of observing deployment logs in the studio.":::
-By default, logs are pulled from the inference server. To see logs from the storage initializer container, use the Azure CLI or Python SDK (see each tab for details). For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
+By default, logs are pulled from the inference server. To see logs from the storage initializer container, use the Azure CLI or Python SDK (see each tab for details). Logs from the storage initializer container provide information on whether code and model data were successfully downloaded to the container. For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
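With the Azure CLI, pulling those logs looks like the following sketch (endpoint and deployment names are placeholders):

```azurecli
# Sketch: fetch logs from the storage initializer container of a deployment.
# Omit --container to get the inference server logs instead.
az ml online-deployment get-logs \
    --endpoint-name my-endpoint \
    --name blue \
    --container storage-initializer \
    --lines 100
```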
# [ARM template](#tab/arm)
If you aren't going to use the deployment, you should delete it by running the foll
-## Next steps
+## Related content
- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md) - [Deploy models with REST](how-to-deploy-with-rest.md)
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
There are three logs that can be enabled for online endpoints:
* You can also use this log for performance analysis in determining the time required by the model to process each request.
-* **AMLOnlineEndpointEventLog**: Contains event information regarding the containerΓÇÖs life cycle. Currently, we provide information on the following types of events:
+* **AMLOnlineEndpointEventLog**: Contains event information regarding the container's life cycle. Currently, we provide information on the following types of events:
| Name | Message | | -- | -- |
Curated environments include integration with Application Insights, and you can
See [Application Insights overview](../azure-monitor/app/app-insights-overview.md) for more.
+In the studio, you can use the **Monitoring** tab on an online endpoint's page to see high-level activity monitor graphs for the managed online endpoint. To use the monitoring tab, you must select **Enable Application Insight diagnostic and data collection** when you create your endpoint.
-## Next steps
++
+## Related content
* Learn how to [view costs for your deployed endpoint](./how-to-view-online-endpoints-costs.md). * Read more about [metrics explorer](../azure-monitor/essentials/metrics-charts.md).
machine-learning How To Safely Rollout Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-online-endpoints.md
Alternatively, you can use the **Models** page to add a deployment:
:::image type="content" source="media/how-to-safely-rollout-managed-endpoints/add-green-deployment-from-models-page.png" lightbox="media/how-to-safely-rollout-managed-endpoints/add-green-deployment-from-models-page.png" alt-text="A screenshot of Add deployment option from Models page."::: 1. Follow the previous steps 3 to 9 to finish creating the green deployment.
+> [!NOTE]
+> When adding a new deployment to an endpoint, you can adjust the traffic balance between deployments on the "Traffic" page. At this point, though, you should keep the default traffic allocation to the deployments (100% traffic to "blue" and 0% traffic to "green").
++ ### Test the new deployment Though `green` has 0% of traffic allocated, you can still invoke the endpoint and deployment. Use the **Test** tab in the endpoint's details page to test your managed online deployment. Enter sample input and view the results.
Alternatively, you can delete a managed online endpoint directly by selecting th
-## Next steps
+## Related content
+ - [Explore online endpoint samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints) - [Deploy models with REST](how-to-deploy-with-rest.md) - [Use network isolation with managed online endpoints](how-to-secure-online-endpoint.md)
machine-learning How To Use Managed Online Endpoint Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md
- Title: Use managed online endpoints in the studio-
-description: 'Learn how to create and use managed online endpoints using the Azure Machine Learning studio.'
-------- Previously updated : 09/07/2022--
-# Create and use managed online endpoints in the studio
-
-Learn how to use the studio to create and manage your managed online endpoints in Azure Machine Learning. Use managed online endpoints to streamline production-scale deployments. For more information on managed online endpoints, see [What are endpoints](concept-endpoints.md).
-
-In this article, you learn how to:
-
-> [!div class="checklist"]
-> * Create a managed online endpoint
-> * View managed online endpoints
-> * Add a deployment to a managed online endpoint
-> * Update managed online endpoints
-> * Delete managed online endpoints and deployments
-
-## Prerequisites
-- An Azure Machine Learning workspace. For more information, see [Create workspace resources](quickstart-create-resources.md).-- The examples repository - Clone the [Azure Machine Learning Example repository](https://github.com/Azure/azureml-examples). This article uses the assets in `/cli/endpoints/online`.-
-## Create a managed online endpoint
-
-Use the studio to create a managed online endpoint directly in your browser. When you create a managed online endpoint in the studio, you must define an initial deployment. You can't create an empty managed online endpoint.
-
-1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
-1. In the left navigation bar, select the **Endpoints** page.
-1. Select **+ Create**.
---
-### Register the model
-
-A model registration is a logical entity in the workspace that can contain a single model file, or a directory containing multiple files. The steps in this article assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model.
-
-To register the example model using Azure Machine Learning studio, use the following steps:
-
-1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
-1. In the left navigation bar, select the **Models** page.
-1. Select **Register**, and then **From local files**.
-1. Select __Unspecified type__ for the __Model type__, then select __Browse__, and __Browse folder__.
-
- :::image type="content" source="media/how-to-create-managed-online-endpoint-studio/register-model-folder.png" alt-text="A screenshot of the browse folder option.":::
-
-1. Select the `\azureml-examples\cli\endpoints\online\model-1\model` folder from the local copy of the repo you downloaded earlier. When prompted, select __Upload__. Once the upload completes, select __Next__.
-1. Enter a friendly __Name__ for the model. The steps in this article assume it's named `model-1`.
-1. Select __Next__, and then __Register__ to complete registration.
-
-For more information on working with registered models, see [Register and work with models](how-to-manage-models.md).
-
-### Follow the setup wizard to configure your managed online endpoint.
-
-You can also create a managed online endpoint from the **Models** page in the studio. This is an easy way to add a model to an existing managed online deployment.
-
-1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
-1. In the left navigation bar, select the **Models** page.
-1. Select a model by checking the circle next to the model name.
-1. Select **Deploy** > **Deploy to real-time endpoint**.
-
- :::image type="content" source="media/how-to-create-managed-online-endpoint-studio/deploy-from-models-page.png" lightbox="media/how-to-create-managed-online-endpoint-studio/deploy-from-models-page.png" alt-text="A screenshot of creating a managed online endpoint from the Models UI.":::
-
-1. Enter an __Endpoint name__ and select __Managed__ as the compute type.
-1. Select __Next__, accepting defaults, until you're prompted for the environment. Here, select the following:
-
- * __Select scoring file and dependencies__: Browse and select the `\azureml-examples\cli\endpoints\online\model-1\onlinescoring\score.py` file from the repo you downloaded earlier.
- * __Choose an environment__ section: Select the **Scikit-learn 0.24.1** curated environment.
-
-1. Select __Next__, accepting defaults, until you're prompted to create the deployment. Select the __Create__ button.
-
-## View managed online endpoints
-
-You can view your managed online endpoints in the **Endpoints** page. Use the endpoint details page to find critical information including the endpoint URI, status, testing tools, activity monitors, deployment logs, and sample consumption code:
-
-1. In the left navigation bar, select **Endpoints**.
-1. (Optional) Create a **Filter** on **Compute type** to show only **Managed** compute types.
-1. Select an endpoint name to view the endpoint detail page.
--
-### Test
-
-Use the **Test** tab in the endpoints details page to test your managed online deployment. Enter sample input and view the results.
-
-1. Select the **Test** tab in the endpoint's detail page.
-1. Use the dropdown to select the deployment you want to test.
-1. Enter sample input.
-1. Select **Test**.
--
-### Monitoring
-
-Use the **Monitoring** tab to see high-level activity monitor graphs for your managed online endpoint.
-
-To use the monitoring tab, you must select "**Enable Application Insight diagnostic and data collection**" when you create your endpoint.
--
-For more information on viewing other monitors and alerts, see [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md).
-
-### Deployment logs
-
-You can get logs from the containers that are running on the VM where the model is deployed. The amount of information you get depends on the provisioning status of the deployment. If the specified container is up and running, you'll see its console output; otherwise, you'll get a message to try again later.
-
-Use the **Deployment logs** tabs in the endpoint's details page to see log output from container.
-
-1. Select the **Deployment logs** tab in the endpoint's details page.
-1. Use the dropdown to select the deployment whose log you want to see.
--
-The logs are pulled from the inference server. Logs include the console log (from the inference server) which contains print/log statements from your scoring script (`score.py`).
-
-To get logs from the storage initializer container, use the Azure CLI or Python SDK. These logs contain information on whether code and model data were successfully downloaded to the container. See the [get container logs section in troubleshooting online endpoints deployment](how-to-troubleshoot-online-endpoints.md#get-container-logs).
-
-## Add a deployment to a managed online endpoint
-
-You can add a deployment to your existing managed online endpoint.
-
-From the **Endpoint details page**
-
-1. Select **+ Add Deployment** button in the [endpoint details page](#view-managed-online-endpoints).
-2. Follow the instructions to complete the deployment.
--
-Alternatively, you can use the **Models** page to add a deployment:
-
-1. In the left navigation bar, select the **Models** page.
-1. Select a model by checking the circle next to the model name.
-1. Select **Deploy** > **Deploy to real-time endpoint**.
-1. Choose to deploy to an existing managed online endpoint.
--
-> [!NOTE]
-> You can adjust the traffic balance between deployments in an endpoint when adding a new deployment.
->
-> :::image type="content" source="media/how-to-create-managed-online-endpoint-studio/adjust-deployment-traffic.png" lightbox="media/how-to-create-managed-online-endpoint-studio/adjust-deployment-traffic.png" alt-text="A screenshot of how to use sliders to control traffic distribution across multiple deployments.":::
-
-## Update managed online endpoints
-
-You can update deployment traffic percentage and instance count from Azure Machine Learning studio.
-
-### Update deployment traffic allocation
-
-Use **deployment traffic allocation** to control the percentage of incoming of requests going to each deployment in an endpoint.
-
-1. In the endpoint details page, Select **Update traffic**.
-2. Adjust your traffic and select **Update**.
-
-> [!TIP]
-> The **Total traffic percentage** must sum to either 0% (to disable traffic) or 100% (to enable traffic).
-
-### Update deployment instance count
-
-Use the following instructions to scale an individual deployment up or down by adjusting the number of instances:
-
-1. In the endpoint details page. Find the card for the deployment you want to update.
-1. Select the **edit icon** in the deployment detail card.
-1. Update the instance count.
-1. Select **Update**.
-
-## Delete managed online endpoints and deployments
-
-Learn how to delete an entire managed online endpoint and it's associated deployments. Or, delete an individual deployment from a managed online endpoint.
-
-### Delete a managed online endpoint
-
-Deleting a managed online endpoint also deletes any deployments associated with it.
-
-1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
-1. In the left navigation bar, select the **Endpoints** page.
-1. Select an endpoint by checking the circle next to the model name.
-1. Select **Delete**.
-
-Alternatively, you can delete a managed online endpoint directly in the [endpoint details page](#view-managed-online-endpoints).
-
-### Delete an individual deployment
-
-Use the following steps to delete an individual deployment from a managed online endpoint. This does affect the other deployments in the managed online endpoint:
-
-> [!NOTE]
-> You cannot delete a deployment that has allocated traffic. You must first [set traffic allocation](#update-deployment-traffic-allocation) for the deployment to 0% before deleting it.
-
-1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
-1. In the left navigation bar, select the **Endpoints** page.
-1. Select your managed online endpoint.
-1. In the endpoint details page, find the deployment you want to delete.
-1. Select the **delete icon**.
-
-## Next steps
-
-In this article, you learned how to use Azure Machine Learning managed online endpoints. See these next steps:
--- [What are endpoints?](concept-endpoints.md)-- [How to deploy online endpoints with the Azure CLI](how-to-deploy-online-endpoints.md)-- [Deploy models with REST](how-to-deploy-with-rest.md)-- [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)-- [Troubleshooting managed online endpoints deployment and scoring](./how-to-troubleshoot-online-endpoints.md)-- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints)
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/overview.md
This table provides an index of tools in prompt flow. If existing tools can't me
| [Embedding](./embedding-tool.md) | Use Open AI's embedding model to create an embedding vector representing the input text. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Open Source LLM](./open-source-llm-tool.md) | Use an Open Source model from the Azure Model catalog, deployed to an Azure Machine Learning Online Endpoint for LLM Chat or Completion API calls. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Serp API](./serp-api-tool.md) | Use Serp API to obtain search results from a specific search engine. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Content Safety (Text)](./content-safety-text-tool.md) | Use Azure Content Safety to detect harmful content. | Default | [promptflow-contentsafety](https://pypi.org/project/promptflow-contentsafety/) |
+| [Content Safety (Text)](./content-safety-text-tool.md) | Use Azure Content Safety to detect harmful content. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Faiss Index Lookup](./faiss-index-lookup-tool.md) | Search vector based query from the FAISS index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) | | [Vector DB Lookup](./vector-db-lookup-tool.md) | Search vector based query from existing Vector Database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) | | [Vector Index Lookup](./vector-index-lookup-tool.md) | Search text or vector based query from Azure Machine Learning Vector Index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
machine-learning Reference Checkpoint Performance For Large Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-checkpoint-performance-for-large-models.md
To enable full Nebula compatibility with PyTorch-based training scripts, modify
nm.init(persistent_storage_path, persistent_time_interval=2) ```
-1. To save checkpoints, replace the original `torch.save()` statement to save your checkpoint with Nebula:
+1. To save checkpoints, replace the original `torch.save()` statement with the Nebula equivalent. Ensure that the name of your checkpoint instance begins with "global_step", such as "global_step500" or "global_step1000":
```python
- checkpoint = nm.Checkpoint()
- checkpoint.save(<'CKPT_NAME'>, model)
+ checkpoint = nm.Checkpoint('global_step500')
+ checkpoint.save('<CKPT_NAME>', model)
```

> [!NOTE]
> ``<'CKPT_TAG_NAME'>`` is the unique ID for the checkpoint. A tag is usually the number of steps, the epoch number, or any user-defined name. The optional ``<'NUM_OF_FILES'>`` parameter specifies the number of states to save for this tag.
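As an illustration, here's a minimal sketch that saves more than one state object under a single `global_step` tag. It assumes that `save()` can be called once per state object for the same tag; `step`, `model`, and `optimizer` come from your training loop:

```python
import nebulaml as nm

# Tag the checkpoint with the current training step; the tag must begin
# with "global_step".
checkpoint = nm.Checkpoint(f"global_step{step}")
checkpoint.save("model", model)          # as in the snippet above
checkpoint.save("optimizer", optimizer)  # assumption: a second state under the same tag
```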
To enable full Nebula compatibility with PyTorch-based training scripts, modify
- list all checkpoints - get latest checkpoints
-```python
-# Managing checkpoints
-## List all checkpoints
-ckpts = nm.list_checkpoints()
-## Get Latest checkpoint path
-latest_ckpt_path = nm.get_latest_checkpoint_path("checkpoint", persisted_storage_path)
-```
+ ```python
+ # Managing checkpoints
+ ## List all checkpoints
+ ckpts = nm.list_checkpoints()
+ ## Get Latest checkpoint path
+ latest_ckpt_path = nm.get_latest_checkpoint_path("checkpoint", persisted_storage_path)
+ ```
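For example, here's a resume-from-latest sketch built only on the calls shown above, with the added assumptions that the returned path is readable by `torch.load()` and that the stored object is a state dict; `persisted_storage_path` and `model` come from your script:

```python
import torch
import nebulaml as nm

# Restore the most recent persisted checkpoint, if one exists.
latest_ckpt_path = nm.get_latest_checkpoint_path("checkpoint", persisted_storage_path)
if latest_ckpt_path:
    state = torch.load(latest_ckpt_path)  # assumption: path is torch.load-compatible
    model.load_state_dict(state)          # assumption: the checkpoint stores a state dict
```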
# [Using DeepSpeed](#tab/DEEPSPEED)
machine-learning Tutorial Cloud Workstation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-cloud-workstation.md
Previously updated : 09/27/2023 Last updated : 11/28/2023 #Customer intent: As a data scientist, I want to know how to prototype and develop machine learning models on a cloud workstation.
machine-learning Tutorial Develop Feature Set With Custom Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-develop-feature-set-with-custom-source.md
Previously updated : 10/27/2023 Last updated : 11/28/2023 - sdkv2
An Azure Machine Learning managed feature store lets you discover, create, and operationalize features. Features serve as the connective tissue in the machine learning lifecycle, starting from the prototyping phase, where you experiment with various features. That lifecycle continues to the operationalization phase, where you deploy your models, and inference steps look up the feature data. For more information about feature stores, see [feature store concepts](./concept-what-is-managed-feature-store.md).
-Part 1 of this tutorial series showed how to create a feature set specification with custom transformations, enable materialization and perform a backfill. Part 2 of this tutorial series showed how to experiment with features in the experimentation and training flows. Part 4 described how to run batch inference.
+Part 1 of this tutorial series showed how to create a feature set specification with custom transformations, enable materialization, and perform a backfill. Part 2 showed how to experiment with features in the experimentation and training flows. Part 3 explained recurrent materialization for the `transactions` feature set, and showed how to run a batch inference pipeline on the registered model. Part 4 showed how to enable online materialization and run online inference.
In this tutorial, you'll
In this tutorial, you'll
> [!NOTE] > This tutorial uses an Azure Machine Learning notebook with **Serverless Spark Compute**.
-* Make sure you execute the notebook from Tutorial 1. That notebook includes creation of a feature store and a feature set, followed by enabling of materialization and performance of backfill.
+* Make sure you complete the previous tutorials in this series. This tutorial reuses the feature store and other resources created in those earlier tutorials.
## Set up
This tutorial uses the Python feature store core SDK (`azureml-featurestore`). T
You don't need to explicitly install these resources for this tutorial, because in the set-up instructions shown here, the `conda.yml` file covers them.
-### Configure the Azure Machine Learning Spark notebook.
+### Configure the Azure Machine Learning Spark notebook
You can create a new notebook and execute the instructions in this tutorial step by step. You can also open and run the existing notebook *featurestore_sample/notebooks/sdk_only/5. Develop a feature set with custom source.ipynb*. Keep this tutorial open and refer to it for documentation links and more explanation.
You can create a new notebook and execute the instructions in this tutorial step
2. Configure the session:
- 1. When the toolbar displays **Configure session**, select it.
- 2. On the **Python packages** tab, select **Upload Conda file**.
- 3. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
- 4. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
+ 1. Select **Configure session** in the top status bar.
+ 2. Select the **Python packages** tab.
+ 3. Select **Upload Conda file**.
+ 4. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
+ 5. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
## Set up the root directory for the samples This code cell sets up the root directory for the samples. It needs about 10 minutes to install all dependencies and start the Spark session.
If you created a resource group for the tutorial, you can delete that resource g
## Next steps * [Network isolation with feature store](./tutorial-network-isolation-for-feature-store.md)
-* [Azure Machine Learning feature stores samples repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/featurestore_sample)
+* [Azure Machine Learning feature stores samples repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/featurestore_sample)
machine-learning Tutorial Enable Recurrent Materialization Run Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md
Previously updated : 10/27/2023 Last updated : 11/28/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
Before you proceed with this tutorial, be sure to complete the first and second
To run this tutorial, you can create a new notebook and execute the instructions step by step. You can also open and run the existing notebook named *3. Enable recurrent materialization and run batch inference*. You can find that notebook, and all the notebooks in this series, in the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation.
- 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
+ 1. In the **Compute** dropdown list in the top navigation, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
2. Configure the session:
- 1. When the toolbar displays **Configure session**, select it.
- 2. On the **Python packages** tab, select **Upload conda file**.
- 3. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
- 4. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
+ 1. Select **Configure session** in the top status bar.
+ 2. Select the **Python packages** tab.
+ 3. Select **Upload conda file**.
+ 4. Select the `azureml-examples/sdk/python/featurestore-sample/project/env/online.yml` file from your local machine.
+ 5. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
2. Start the Spark session.
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
Previously updated : 11/01/2023 Last updated : 11/28/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
Before you proceed with this tutorial, be sure to cover these prerequisites:
* An Azure Machine Learning workspace. For more information about workspace creation, see [Quickstart: Create workspace resources](./quickstart-create-resources.md).
-* On your user account, the Owner or Contributor role for the resource group where the feature store is created.
+* On your user account, the Owner role for the resource group where the feature store is created.
If you choose to use a new resource group for this tutorial, you can easily delete all the resources by deleting the resource group.
This tutorial uses an Azure Machine Learning Spark notebook for development.
:::image type="content" source="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" lightbox="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" alt-text="Screenshot that shows selection of the sample directory in Azure Machine Learning studio.":::
-1. The **Select target directory** panel opens. Select the user directory (in this case, **testUser**), and then select **Clone**.
+1. The **Select target directory** panel opens. Select the **Users** directory, then select _your user name_, and finally select **Clone**.
:::image type="content" source="media/tutorial-get-started-with-feature-store/select-target-directory.png" lightbox="media/tutorial-get-started-with-feature-store/select-target-directory.png" alt-text="Screenshot showing selection of the target directory location in Azure Machine Learning studio for the sample resource."::: 1. To configure the notebook environment, you must upload the *conda.yml* file: 1. Select **Notebooks** on the left pane, and then select the **Files** tab.
- 1. Browse to the *env* directory (select **Users** > **testUser** > **featurestore_sample** > **project** > **env**), and then select the *conda.yml* file. In this path, *testUser* is the user directory.
+ 1. Browse to the *env* directory (select **Users** > **your_user_name** > **featurestore_sample** > **project** > **env**), and then select the *conda.yml* file.
1. Select **Download**. :::image type="content" source="media/tutorial-get-started-with-feature-store/download-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/download-conda-file.png" alt-text="Screenshot that shows selection of the Conda YAML file in Azure Machine Learning studio.":::
+ 1. Select **Serverless Spark Compute** in the top navigation **Compute** dropdown. This operation might take one to two minutes. Wait for the status bar at the top to display **Configure session**.
+ 1. Select **Configure session** in the top status bar.
+ 1. Select **Python packages**.
+ 1. Select **Upload conda file**.
+ 1. Select the `conda.yml` file you downloaded to your local device.
+ 1. (Optional) Increase the session time-out (idle time in minutes) to avoid frequent serverless Spark cluster restarts.
+ 1. In the Azure Machine Learning environment, open the notebook, and then select **Configure session**. :::image type="content" source="media/tutorial-get-started-with-feature-store/open-configure-session.png" lightbox="media/tutorial-get-started-with-feature-store/open-configure-session.png" alt-text="Screenshot that shows selections for configuring a session for a notebook.":::
Not applicable.
### [SDK and CLI track](#tab/SDK-and-CLI-track)
-1. Install the Azure Machine Learning extension.
+1. Install the Azure Machine Learning CLI extension.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)]
As a best practice, entities help enforce use of the same join key definition ac
1. Initialize the feature store CRUD client.
- As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it is scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
+ As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
In this code sample, the client is scoped at feature store level.
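For reference, here's a minimal sketch of that scoping, assuming the `azure-ai-ml` package and placeholder names. A feature store is a workspace-kind resource, so its name is passed as the workspace name:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# CRUD client scoped to the feature store itself (not just the resource group).
fs_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<feature-store-name>",  # placeholder feature store name
)
```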
The Storage Blob Data Reader role must be assigned to your user account on the o
### [SDK track](#tab/SDK-track)
+#### Set spark.sql.shuffle.partitions in the yaml file according to the feature data size
+
+ The Spark configuration `spark.sql.shuffle.partitions` is an optional parameter that can affect the number of parquet files generated (per day) when the feature set is materialized into the offline store. The default value of this parameter is 200. As a best practice, avoid generating many small parquet files. If offline feature retrieval becomes slow after feature set materialization, go to the corresponding folder in the offline store to check whether the issue involves too many small parquet files (per day), and adjust the value of this parameter accordingly.
+
+ > [!NOTE]
+ > The sample data used in this notebook is small. Therefore, this parameter is set to 1 in the
+ > featureset_asset_offline_enabled.yaml file.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=enable-offline-mat-txns-fset)] ### [SDK and CLI track](#tab/SDK-and-CLI-track)
+#### Set spark.sql.shuffle.partitions in the yaml file according to the feature data size
+
+ The Spark configuration `spark.sql.shuffle.partitions` is an optional parameter that can affect the number of parquet files generated (per day) when the feature set is materialized into the offline store. The default value of this parameter is 200. As a best practice, avoid generating many small parquet files. If offline feature retrieval becomes slow after feature set materialization, go to the corresponding folder in the offline store to check whether the issue involves too many small parquet files (per day), and adjust the value of this parameter accordingly.
+
+ > [!NOTE]
+ > The sample data used in this notebook is small. Therefore, this parameter is set to 1 in the
+ > featureset_asset_offline_enabled.yaml file.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=enable-offline-mat-txns-fset-cli)]
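As an illustration of the same Spark setting outside the YAML file, the sketch below shows how the value could be set on an interactive Spark session; the tutorial itself configures it in *featureset_asset_offline_enabled.yaml*, and `spark` is the session object already available in the notebook:

```python
# Lower the shuffle partition count so materialization writes fewer, larger
# parquet files; 1 suits the small sample data used in this tutorial.
spark.conf.set("spark.sql.shuffle.partitions", "1")
print(spark.conf.get("spark.sql.shuffle.partitions"))
```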
You can explore feature materialization status for a feature set in the **Materi
- The data can have a maximum of 2,000 *data intervals*. If your data contains more than 2,000 *data intervals*, create a new feature set version. - You can provide a list of more than one data statuses (for example, `["None", "Incomplete"]`) in a single backfill job. - During backfill, a new materialization job is submitted for each *data interval* that falls within the defined feature window.-- If a materialization job is pending, or it is running for a *data interval* that hasn't yet been backfilled, a new job isn't submitted for that *data interval*.
+- If a materialization job is pending, or that job is running for a *data interval* that hasn't yet been backfilled, a new job isn't submitted for that *data interval*.
- You can retry a failed materialization job. > [!NOTE]
machine-learning Tutorial Online Materialization Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-online-materialization-inference.md
Previously updated : 10/27/2023 Last updated : 11/28/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
You don't need to explicitly install these resources for this tutorial, because
You can create a new notebook and execute the instructions in this tutorial step by step. You can also open and run the existing notebook *featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb*. Keep this tutorial open and refer to it for documentation links and more explanation.
- 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
+ 1. In the **Compute** dropdown list in the top navigation, select **Serverless Spark Compute**.
2. Configure the session:
- 1. Download *featurestore-sample/project/env/online.yml* file to your local machine.
- 2. When the toolbar displays **Configure session**, select it.
- 3. On the **Python packages** tab, select **Upload Conda file**.
- 4. Upload the *online.yml* file in the same way as described in [uploading *conda.yml* file in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
+ 1. Download the *azureml-examples/sdk/python/featurestore-sample/project/env/online.yml* file to your local machine.
+ 2. In **Configure session** in the top navigation, select **Python packages**.
+ 3. Select **Upload Conda file**.
+ 4. Upload the *online.yml* file from your local machine, following the same steps described in [uploading *conda.yml* file in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
5. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns. 2. This code cell starts the Spark session. It needs about 10 minutes to install all dependencies and start the Spark session.
managed-ccf Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-java.md
Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying
## Setup
-This quickstart uses the Azure Identity library, along with Azure CLI or Azure PowerShell, to authenticate user to Azure Services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
+This quickstart uses the Azure Identity library, along with Azure CLI or Azure PowerShell, to authenticate users to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
### Sign in to Azure
az group delete --resource-group myResourceGroup
## Next steps
-In this quickstart, you created a Managed CCF resource by using the Azure Python SDK for Confidential Ledger. To learn more about Azure Managed CCF and how to integrate it with your applications, continue on to these articles:
+In this quickstart, you created a Managed CCF resource by using the Azure SDK for Java. To learn more about Azure Managed CCF and how to integrate it with your applications, continue on to these articles:
- [Azure Managed CCF overview](overview.md) - [Quickstart: Azure portal](quickstart-portal.md)
managed-ccf Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-python.md
Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying
- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Python versions supported by the [Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python#prerequisites). - [OpenSSL](https://www.openssl.org/) on a computer running Windows or Linux.
+- The minimum supported version of the `azure-mgmt-confidentialledger` Python package is 2.0.0b3.
## Setup
Install the Azure Active Directory identity client library:
pip install azure-identity ```
-Install the Azure confidential ledger management plane client library.
+Install the Azure confidential ledger management plane client library. The minimum supported version is 2.0.0b3.
```terminal
-pip install azure.mgmt.confidentialledger
+pip install azure-mgmt-confidentialledger==2.0.0b3
``` ### Create a resource group
managed-ccf Quickstart Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-typescript.md
This quickstart uses the Azure Identity library, along with Azure CLI or Azure P
[!INCLUDE [Sign in to Azure](../../includes/confidential-ledger-sign-in-azure.md)]
-### Install the packages
-
-In a terminal or command prompt, create a suitable project folder, and then create and activate a Python virtual environment as described on [Use Python virtual environments](/azure/developer/python/configure-local-development-environment?tabs=cmd#use-python-virtual-environments).
+### Initialize a new npm project
+In a terminal or command prompt, create a suitable project folder and initialize an `npm` project. You can skip this step if you have an existing Node.js project.
+```terminal
+cd <work folder>
+npm init -y
+```
+### Install the packages
Install the Azure Active Directory identity client library. ```terminal
-npm install @azure/identity
+npm install --save @azure/identity
``` Install the Azure Confidential Ledger management plane client library. ```terminal
-npm install @azure/arm-confidentialledger@1.3.0-beta.1
+npm install --save @azure/arm-confidentialledger@1.3.0-beta.1
+```
+
+Install the TypeScript compiler and tools globally:
+
+```terminal
+npm install -g typescript
``` ### Create a resource group
npm install @azure/arm-confidentialledger@1.3.0-beta.1
### Use the Management plane client library
-The Azure SDK for JavaScript and TypeScript library (azure/arm-confidentialledger) allows operations on Managed CCF resources, such as creation and deletion, listing the resources associated with a subscription, and viewing the details of a specific resource. The following piece of code creates and views the properties of a Managed CCF resource.
+The Azure SDK for JavaScript and TypeScript library [@azure/arm-confidentialledger](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/confidentialledger/arm-confidentialledger) allows operations on Managed CCF resources, such as creation and deletion, listing the resources associated with a subscription, and viewing the details of a specific resource.
+
+To run the samples below, save each code snippet in a file with a `.ts` extension in your project folder. Compile it as part of your TypeScript project, or compile the script into JavaScript separately by running:
-```JavaScript
+```terminal
+tsc <filename.ts>
+```
+
+The compiled JavaScript file has the same name but a `.js` extension. Then run the script in Node.js:
+```terminal
+node <filename.js>
+```
+
+The following sample TypeScript code creates and views the properties of a Managed CCF resource.
+
+```TypeScript
import { ConfidentialLedgerClient, ManagedCCFProperties, ManagedCCF, KnownLanguageRuntime, DeploymentType, MemberIdentityCertificate } from "@azure/arm-confidentialledger"; import { DefaultAzureCredential } from "@azure/identity";
-import { Console } from "console";
-const subscriptionId = "0000000-0000-0000-0000-000000000001"; // replace
+// Please replace these variables with appropriate values for your project
+const subscriptionId = "0000000-0000-0000-0000-000000000001";
const rgName = "myResourceGroup";
-const ledgerId = "confidentialbillingapp";
+const ledgerId = "testApp";
+const memberCert0 = "--BEGIN CERTIFICATE--\nMIIBvjCCAUSgAwIBAg...0d71ZtULNWo\n--END CERTIFICATE--";
+const memberCert1 = "--BEGIN CERTIFICATE--\nMIIBwDCCAUagAwIBAgI...2FSyKIC+vY=\n--END CERTIFICATE--";
-let client: ConfidentialLedgerClient;
-
-export async function main() {
+async function main() {
console.log("Creating a new instance.")
- client = new ConfidentialLedgerClient(new DefaultAzureCredential(), subscriptionId);
+ const client = new ConfidentialLedgerClient(new DefaultAzureCredential(), subscriptionId);
- let properties = <ManagedCCFProperties> {
+ const properties = <ManagedCCFProperties> {
deploymentType: <DeploymentType> { appSourceUri: "", languageRuntime: KnownLanguageRuntime.JS }, memberIdentityCertificates: [ <MemberIdentityCertificate>{
- certificate: "--BEGIN CERTIFICATE--\nMIIBvjCCAUSgAwIBAg...0d71ZtULNWo\n--END CERTIFICATE--",
+ certificate: memberCert0,
encryptionkey: "", tags: { "owner":"member0" } }, <MemberIdentityCertificate>{
- certificate: "--BEGIN CERTIFICATE--\nMIIBwDCCAUagAwIBAgI...2FSyKIC+vY=\n--END CERTIFICATE--",
+ certificate: memberCert1,
encryptionkey: "", tags: { "owner":"member1"
export async function main() {
nodeCount: 3, };
- let mccf = <ManagedCCF> {
+ const mccf = <ManagedCCF> {
location: "SouthCentralUS", properties: properties, }
- let createResponse = await client.managedCCFOperations.beginCreateAndWait(rgName, ledgerId, mccf);
+ const createResponse = await client.managedCCFOperations.beginCreateAndWait(rgName, ledgerId, mccf);
console.log("Created. Instance id: " + createResponse.id); // Get details of the instance console.log("Getting instance details.");
- let getResponse = await client.managedCCFOperations.get(rgName, ledgerId);
+ const getResponse = await client.managedCCFOperations.get(rgName, ledgerId);
console.log(getResponse.properties?.identityServiceUri); console.log(getResponse.properties?.nodeCount); // List mccf instances in the RG console.log("Listing the instances in the resource group.");
- let instancePages = await client.managedCCFOperations.listByResourceGroup(rgName).byPage();
+ const instancePages = await client.managedCCFOperations.listByResourceGroup(rgName).byPage();
for await(const page of instancePages){ for(const instance of page) {
export async function main() {
console.log("Deleted."); }
-main().catch((err) => {
- console.error(err);
-});
+(async () => {
+ try {
+ await main();
+ } catch(err) {
+ console.error(err);
+ }
+})();
```
+## Delete the Managed CCF resource
+The following piece of code deletes the Managed CCF resource. Other Managed CCF articles can build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you might wish to leave these resources in place.
+
+```TypeScript
+import { ConfidentialLedgerClient } from "@azure/arm-confidentialledger";
+import { DefaultAzureCredential } from "@azure/identity";
+
+const subscriptionId = "0000000-0000-0000-0000-000000000001"; // replace
+const rgName = "myResourceGroup";
+const ledgerId = "confidentialbillingapp";
+
+async function deleteManagedCcfResource() {
+ const client = new ConfidentialLedgerClient(new DefaultAzureCredential(), subscriptionId);
+
+ console.log("Delete the instance.");
+ await client.managedCCFOperations.beginDeleteAndWait(rgName, ledgerId);
+ console.log("Deleted.");
+}
+
+(async () => {
+ try {
+ await deleteManagedCcfResource();
+ } catch(err) {
+ console.error(err);
+ }
+})();
+```
## Clean up resources Other Managed CCF articles can build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you might wish to leave these resources in place.
mysql Tutorial Php Database App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-php-database-app.md
When you're finished, you can delete all of the resources from your Azure subscr
## Frequently asked questions - [How much does this setup cost?](#how-much-does-this-setup-cost)-- [How do I connect to the MySQL database that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-mysql-database-thats-secured-behind-the-virtual-network-with-other-tools)
+- [How do I connect to a MySQL database that's secured behind a virtual network?](#how-do-i-connect-to-a-mysql-database-thats-secured-behind-a-virtual-network)
- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions) - [Why is the GitHub Actions deployment so slow?](#why-is-the-github-actions-deployment-so-slow)
Pricing for the create resources is as follows:
- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). - The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
-#### How do I connect to the MySQL database that's secured behind the virtual network with other tools?
-- For basic access from a commmand-line tool, you can run `mysql` from the app's SSH terminal.-- To connect from a desktop tool like MySQL Workbench, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.-- You can also [integrate Azure Cloud Shell](../../cloud-shell/private-vnet.md) with the virtual network.
+#### How do I connect to a MySQL database that's secured behind a virtual network?
+
+To connect to a MySQL database, you can use several methods, depending on the tools and environments at your disposal (a minimal connection sketch follows this list):
+
+- **Command-line tool access**:
+ - Use the `mysql` command from the app's SSH terminal for basic access.
+- **Desktop tools (for example, MySQL Workbench)**:
+ - **Using SSH tunneling with Azure CLI**:
+ - Create an [SSH session](../../app-service/configure-linux-open-ssh-session.md#open-ssh-session-from-remote-shell) to the web app by using the Azure CLI.
+ - Use the SSH session to tunnel the traffic to MySQL.
+ - **Using site-to-site VPN or Azure VM**:
+ - Your machine must be part of the virtual network.
+ - Consider using:
+ - An Azure VM linked to one of the subnets.
+ - A machine in an on-premises network that has a [site-to-site VPN connection](../../vpn-gateway/vpn-gateway-about-vpngateways.md) to the Azure virtual network.
+- **Azure Cloud Shell integration**:
+ - [Integrate Azure Cloud Shell](../../cloud-shell/private-vnet.md) with the virtual network for direct access.
+
+#### How does local app development work with GitHub Actions?
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
az aro create \
``` > [!NOTE]
-> The UserDefinedRouting flag can only be used when creating clusters with `--apiserver-visibility Private` and `--ingress-visibility Private` parameters.
+> The UserDefinedRouting flag can only be used when creating clusters with the `--apiserver-visibility Private` and `--ingress-visibility Private` parameters. Ensure you're using the latest Azure CLI; clusters deployed with Azure CLI 2.52.0 or older are deployed with public IPs.
> This User Defined Routing option prevents a public IP address from being provisioned. User Defined Routing (UDR) allows you to create custom routes in Azure to override the default system routes or to add more routes to a subnet's route table. See
orbital Concepts Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact-profile.md
Title: Azure Orbital Ground Station - contact profile
+ Title: Azure Orbital Ground Station - Contact profile resource
description: Learn more about the contact profile resource, including how to create, modify, and delete the profile.
The contact profile resource stores pass requirements such as links and endpoint
You can create many contact profiles to represent different types of passes depending on your mission operations. For example, you can create a contact profile for a command and control pass or a contact profile for a downlink-only pass.
-These resources are mutable and do not undergo an authorization process like the spacecraft resources do. One contact profile can be used with many spacecraft resources.
+These resources are mutable and don't undergo an authorization process like the spacecraft resources do. One contact profile can be used with many spacecraft resources.
See [how to configure a contact profile](contact-profile.md) for a full list of parameters.
The minimum pass time and minimum elevation parameters are used by Azure Orbital
## Understanding links and channels
-A whole band, unique in direction and polarity, is called a link. Channels, which are children under links, specify the center frequency, bandwidth, and endpoints. Typically there is only one channel per link, but some applications require multiple channels per link.
+A whole band, unique in direction and polarity, is called a link. Channels, which are children under links, specify the center frequency, bandwidth, and endpoints. Typically there's only one channel per link, but some applications require multiple channels per link.
You can specify EIRP and G/T requirements for each link. EIRP applies to uplinks and G/T applies to downlinks. You can provide a name for each link and channel to keep track of these properties. Each channel has a modem associated with it. Follow the steps in [how to setup software modem](modem-chain.md) to understand the options.
-Refer to the example below to understand how to specify an RHCP channel and an LHCP channel if your mission requires dual-polarization on downlink.
+Refer to the example below to understand how to specify an RHCP channel and an LHCP channel if your mission requires dual-polarization on downlink. To find this information about your contact profile, navigate to the contact profile resource overview and click 'JSON view.'
```json {
Refer to the example below to understand how to specify an RHCP channel and an L
## Modifying or deleting a contact profile
-You can modify or delete the contact profile via the [Azure portal](https://aka.ms/orbital/portal) or [Azure Orbital Ground Station API](/rest/api/orbital/).
+You can modify or delete the contact profile via the [Azure portal](https://aka.ms/orbital/portal) or [Azure Orbital Ground Station API](/rest/api/orbital/).
-## Configuring a contact profile for third party ground stations
+In the Azure portal, navigate to the contact profile resource.
+- To modify minimum viable contact duration, minimum elevation, auto tracking, or Event Hubs telemetry, click 'Overview' on the left panel, then click 'Edit properties.'
+- To edit links and channels, click 'Links' under 'Configurations' on the left panel, then click 'Edit link' on the desired link.
+- To edit third-party configurations, click 'Third-Party Configurations' under 'Configurations' on the left panel, then click 'Edit' on the desired configuration.
+- To delete a contact profile, click 'Overview' on the left panel, then click 'Delete.'
-When you onboard a third party network, you will receive a token that identifies your profile. Use this token in the contact profile resource to link a contact profile to the third party network.
+## Configuring a contact profile for applicable partner ground stations
+
+After onboarding with a partner ground station network, you receive a name that identifies your configuration file. When [creating your contact profile](contact-profile.md#create-a-contact-profile-resource), add this configuration name to your link in the 'Third-Party Configuration' parameter. This links your contact profile to the partner network.
## Next steps
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
To contact a satellite, it must be registered and authorized as a spacecraft res
- [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher to submit a spacecraft authorization request. - Private spacecraft: an active spacecraft license and [relevant ground station licenses](initiate-licensing.md). - An active contract with the partner network(s) you wish to integrate with Azure Orbital Ground Station.
+ - [KSAT Lite](https://azuremarketplace.microsoft.com/marketplace/apps/kongsbergsatelliteservicesas1657024593438.ksatlite?exp=ubp8&tab=Overview)
+ - [Viasat RTE](https://azuremarketplace.microsoft.com/marketplace/apps/viasatinc1628707641775.viasat-real-time-earth?tab=overview)
## Sign in to Azure
Submit a spacecraft authorization request in order to schedule [contacts](concep
| **Field** | **Value** | | | |
-| When did the problem start? | Select the current date & time |
-| Description | List your spacecraft's **frequency bands** and **desired ground stations** |
-| File upload | Upload any **pertinent licensing material**, if applicable |
+| When did the problem start? | Select the **current date & time**. |
+| Select Ground Stations | Select all desired **Microsoft or partner ground stations** that you're licensed for. If you don't see the appropriate partner ground stations, your subscription must be approved for access to those sites by the Azure Orbital Ground Station team. |
+| Do you Accept and Acknowledge the Azure Orbital Supplemental Terms? | Review the terms in the link by hovering over the information icon, then select **Yes**. |
+| Description | List your spacecraft's **frequency band(s)**. |
+| File upload | Upload any **pertinent licensing material**, if applicable. |
6. Complete the **Advanced diagnostic information** and **Support method** sections of the **Details** tab. 7. Select the **Review + create** tab, or select the **Review + create** button.
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|:-|:--|:-|:--| | Azure Key Vault | All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Key Vault.](../key-vault/general/private-link-service.md) | |Azure App Configuration | All public regions | | GA </br> [Learn how to create a private endpoint for Azure App Configuration](../azure-app-configuration/concept-private-endpoint.md) |
+|Azure Application Gateway | All public regions | | GA </br> [Azure Application Gateway Private Link](../application-gateway/private-link.md) |
+ ### Storage |Supported services |Available regions | Other considerations | Status |
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
>||||| >| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) | amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net<br/>inference.ml.azure.com | >| Azure AI services (Microsoft.CognitiveServices/accounts) | account | privatelink.cognitiveservices.azure.com <br/> privatelink.openai.azure.com | cognitiveservices.azure.com <br/> openai.azure.com |
->| Azure Bot Service (Microsoft.BotService/botServices) | Bot | privatelink.directline.botframework.com | directline.botframework.com </br> europe.directline.botframework.com |
->| Azure Bot Service (Microsoft.BotService/botServices) | Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com |
+>| Azure Bot Service (Microsoft.BotService/botServices) | Bot | privatelink.directline.botframework.com | directline.botframework.com |
+>| Azure Bot Service (Microsoft.BotService/botServices) | Token | privatelink.token.botframework.com | token.botframework.com |
### Analytics
For Azure services, use the recommended zone names as described in the following
>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders | >||||| >| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Sql | privatelink.sql.azuresynapse.net | sql.azuresynapse.net |
+>| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | SqlOnDemand | privatelink.sql.azuresynapse.net | sql.azuresynapse.net |
>| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Dev | privatelink.dev.azuresynapse.net | dev.azuresynapse.net | >| Azure Synapse Studio (Microsoft.Synapse/privateLinkHubs) | Web | privatelink.azuresynapse.net | azuresynapse.net | >| Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
For Azure services, use the recommended zone names as described in the following
>| Azure Data Factory (Microsoft.DataFactory/factories) | dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net | >| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.com | adf.azure.com | >| Azure HDInsight (Microsoft.HDInsight/clusters) | N/A | privatelink.azurehdinsight.net | azurehdinsight.net |
->| Azure Data Explorer (Microsoft.Kusto/Clusters) | cluster | privatelink.{regionName}.kusto.windows.net | {regionName}.kusto.windows.net |
+>| Azure Data Explorer (Microsoft.Kusto/Clusters) | cluster | privatelink.{regionName}.kusto.windows.net </br> privatelink.blob.core.windows.net </br> privatelink.queue.core.windows.net </br> privatelink.table.core.windows.net | {regionName}.kusto.windows.net </br> blob.core.windows.net </br> queue.core.windows.net </br> table.core.windows.net |
>| Microsoft Power BI (Microsoft.PowerBI/privateLinkServicesForPowerBI) | tenant | privatelink.analysis.windows.net </br> privatelink.pbidedicated.windows.net </br> privatelink.tip1.powerquery.microsoft.com | analysis.windows.net </br> pbidedicated.windows.net </br> tip1.powerquery.microsoft.com | >| Azure Databricks (Microsoft.Databricks/workspaces) | databricks_ui_api </br> browser_authentication | privatelink.azuredatabricks.net | azuredatabricks.net |
For Azure services, use the recommended zone names as described in the following
>| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | {regionName}.privatelink.batch.azure.com | {regionName}.batch.azure.com | >| Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | {regionName}.service.privatelink.batch.azure.com | {regionName}.service.batch.azure.com | >| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | global | privatelink-global.wvd.microsoft.com | wvd.microsoft.com |
->| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces </br> Microsoft.DesktopVirtualization/hostpools) | feed <br> connection | privatelink.wvd.microsoft.com | wvd.microsoft.com |
+>| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | feed | privatelink.wvd.microsoft.com | wvd.microsoft.com |
+>| Azure Virtual Desktop (Microsoft.DesktopVirtualization/hostpools) | connection | privatelink.wvd.microsoft.com | wvd.microsoft.com |
### Containers
For Azure services, use the recommended zone names as described in the following
>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders | >||||| >| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) | management | privatelink.{regionName}.azmk8s.io </br> {subzone}.privatelink.{regionName}.azmk8s.io | {regionName}.azmk8s.io |
->| Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.io </br> {regionName}.privatelink.azurecr.io | azurecr.io </br> {regionName}.azurecr.io |
+>| Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.io </br> {regionName}.data.privatelink.azurecr.io | azurecr.io </br> {regionName}.data.azurecr.io |
### Databases
For Azure services, use the recommended zone names as described in the following
>||||| >| Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.windows.net | database.windows.net | >| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.windows.net | {instanceName}.{dnsPrefix}.database.windows.net |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.com | documents.azure.com |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.com | mongo.cosmos.azure.com |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.com | cassandra.cosmos.azure.com |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.com | gremlin.cosmos.azure.com |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.com | table.cosmos.azure.com |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Sql | privatelink.documents.azure.com | documents.azure.com |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.com | mongo.cosmos.azure.com |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.com | cassandra.cosmos.azure.com |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.com | gremlin.cosmos.azure.com |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.com | table.cosmos.azure.com |
>| Azure Cosmos DB (Microsoft.DBforPostgreSQL/serverGroupsv2) | coordinator | privatelink.postgres.cosmos.azure.com | postgres.cosmos.azure.com | >| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com | >| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
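As a worked example of the Azure Cosmos DB rows above, the following Azure CLI sketch creates the recommended `privatelink.documents.azure.com` zone, links it to a virtual network, and wires it to a private endpoint for the `Sql` subresource. All resource names are placeholders, and an existing VNet, subnet, and Cosmos DB account are assumed.

```bash
# Hedged sketch: private DNS for a Cosmos DB (Sql) private endpoint; names are placeholders.
az network private-dns zone create \
  --resource-group myRG --name "privatelink.documents.azure.com"

az network private-dns link vnet create \
  --resource-group myRG --zone-name "privatelink.documents.azure.com" \
  --name cosmos-dns-link --virtual-network myVnet --registration-enabled false

az network private-endpoint create \
  --resource-group myRG --name pe-cosmos --vnet-name myVnet --subnet mySubnet \
  --private-connection-resource-id "$(az cosmosdb show -g myRG -n myCosmosAcct --query id -o tsv)" \
  --group-id Sql --connection-name pe-cosmos-conn

# Let Azure manage the A records in the zone for this endpoint.
az network private-endpoint dns-zone-group create \
  --resource-group myRG --endpoint-name pe-cosmos --name default \
  --private-dns-zone "privatelink.documents.azure.com" --zone-name cosmos-sql
```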
For Azure services, use the recommended zone names as described in the following
>| Azure Service Bus (Microsoft.ServiceBus/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net | >| Azure Event Grid (Microsoft.EventGrid/topics) | topic | privatelink.eventgrid.azure.net | eventgrid.azure.net | >| Azure Event Grid (Microsoft.EventGrid/domains) | domain | privatelink.eventgrid.azure.net | eventgrid.azure.net |
+>| Azure Event Grid (Microsoft.EventGrid/namespaces) | topic | privatelink.eventgrid.azure.net | eventgrid.azure.net |
+>| Azure Event Grid (Microsoft.EventGrid/partnerNamespaces) | partnernamespace | privatelink.eventgrid.azure.net | eventgrid.azure.net |
>| Azure API Management (Microsoft.ApiManagement/service) | gateway | privatelink.azure-api.net | azure-api.net | >| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) | healthcareworkspace | privatelink.workspace.azurehealthcareapis.com </br> privatelink.fhir.azurehealthcareapis.com </br> privatelink.dicom.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com |
For Azure services, use the recommended zone names as described in the following
>||||| >| Azure IoT Hub (Microsoft.Devices/IotHubs) | iotHub | privatelink.azure-devices.net<br/>privatelink.servicebus.windows.net<sup>1</sup> | azure-devices.net<br/>servicebus.windows.net | >| Azure IoT Hub Device Provisioning Service (Microsoft.Devices/ProvisioningServices) | iotDps | privatelink.azure-devices-provisioning.net | azure-devices-provisioning.net |
->| Azure Digital Twins (Microsoft.DigitalTwins/digitalTwinsInstances) | digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
+>| Device Update for IoT Hubs (Microsoft.DeviceUpdate/accounts) | DeviceUpdate | privatelink.api.adu.microsoft.com | api.adu.microsoft.com |
+>| Azure IoT Central (Microsoft.IoTCentral/IoTApps) | iotApp | privatelink.azureiotcentral.com | azureiotcentral.com |
+>| Azure Digital Twins (Microsoft.DigitalTwins/digitalTwinsInstances) | API | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
### Media
For Azure services, use the recommended zone names as described in the following
>| Azure Site Recovery (Microsoft.RecoveryServices/vaults) | AzureSiteRecovery | privatelink.siterecovery.windowsazure.com | {regionCode}.siterecovery.windowsazure.com | >| Azure Monitor (Microsoft.Insights/privateLinkScopes) | azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net | >| Microsoft Purview (Microsoft.Purview/accounts) | account | privatelink.purview.azure.com | purview.azure.com |
->| Microsoft Purview (Microsoft.Purview/accounts) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
+>| Microsoft Purview (Microsoft.Purview/accounts) | portal | privatelink.purviewstudio.azure.com | purviewstudio.azure.com |
>| Azure Migrate (Microsoft.Migrate/migrateProjects) | Default | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com | >| Azure Migrate (Microsoft.Migrate/assessmentProjects) | Default | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com | >| Azure Resource Manager (Microsoft.Authorization/resourceManagementPrivateLinks) | ResourceManagement | privatelink.azure.com | azure.com |
+>| Azure Managed Grafana (Microsoft.Dashboard/grafana) | grafana | privatelink.grafana.azure.com | grafana.azure.com |
### Security
For Azure services, use the recommended zone names as described in the following
>| Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.azure.net | vault.azure.net <br> vaultcore.azure.net | >| Azure Key Vault (Microsoft.KeyVault/managedHSMs) | managedhsm | privatelink.managedhsm.azure.net | managedhsm.azure.net >| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) | configurationStores | privatelink.azconfig.io | azconfig.io |
+>| Azure Attestation (Microsoft.Attestation/attestationProviders) | standard | privatelink.attest.azure.net | attest.azure.net |
### Storage
For Azure services, use the recommended zone names as described in the following
>| Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.windows.net | web.core.windows.net | >| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.windows.net | dfs.core.windows.net | >| Azure File Sync (Microsoft.StorageSync/storageSyncServices) | afs | privatelink.afs.azure.net | afs.azure.net |
+>| Azure Managed Disks (Microsoft.Compute/diskAccesses) | disks | privatelink.blob.core.windows.net | blob.core.windows.net |
### Web
For Azure services, use the recommended zone names as described in the following
>||||| >| Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.usgovcloudapi.net | database.usgovcloudapi.net | >| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.usgovcloudapi.net | {instanceName}.{dnsPrefix}.database.usgovcloudapi.net |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.us | documents.azure.us |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.us | mongo.cosmos.azure.us |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Sql | privatelink.documents.azure.us | documents.azure.us |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.us | mongo.cosmos.azure.us |
>| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net | >| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net | >| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |
For Azure services, use the recommended zone names as described in the following
>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders | >||||| >| Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.chinacloudapi.cn | database.chinacloudapi.cn |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.cn | documents.azure.cn |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.cn | mongo.cosmos.azure.cn |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.cn | cassandra.cosmos.azure.cn |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.cn | gremlin.cosmos.azure.cn |
->| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.cn | table.cosmos.azure.cn |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Sql | privatelink.documents.azure.cn | documents.azure.cn |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.cn | mongo.cosmos.azure.cn |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.cn | cassandra.cosmos.azure.cn |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.cn | gremlin.cosmos.azure.cn |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.cn | table.cosmos.azure.cn |
>| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.chinacloudapi.cn | postgres.database.chinacloudapi.cn | >| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn | >| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Private-link resource name | Resource type | Sub-resources | | | - | - |
-| Application Gateway | Microsoft.Network/applicationgateways | application gateway |
+| Application Gateway | Microsoft.Network/applicationgateways | Frontend IP Configuration name |
| Azure AI services | Microsoft.CognitiveServices/accounts | account | | Azure API for FHIR (Fast Healthcare Interoperability Resources) | Microsoft.HealthcareApis/services | fhir | | Azure App Configuration | Microsoft.Appconfiguration/configurationStores | configurationStores |
A private-link resource is the destination target of a specified private endpoin
> [!NOTE] > You can create private endpoints only on a General Purpose v2 (GPv2) storage account.
-
+ ## Network security of private endpoints When you use private endpoints, traffic is secured to a private-link resource. The platform validates network connections, allowing only those that reach the specified private-link resource. To access more subresources within the same Azure service, more private endpoints with corresponding targets are required. In the case of Azure Storage, for instance, you would need separate private endpoints to access the _file_ and _blob_ subresources.
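The last point generalizes: each subresource needs its own private endpoint, but the pattern is identical for each. A minimal sketch for the storage example, assuming placeholder resource names:

```bash
# Hedged sketch: one private endpoint per storage subresource (blob and file).
storage_id=$(az storage account show -g myRG -n mystorageacct --query id -o tsv)
for sub in blob file; do
  az network private-endpoint create \
    --resource-group myRG --name "pe-mystorage-$sub" \
    --vnet-name myVnet --subnet mySubnet \
    --private-connection-resource-id "$storage_id" \
    --group-id "$sub" --connection-name "conn-$sub"
done
```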
reliability Migrate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-vm.md
The following table describes the support matrix for moving virtual machines net
| NIC | Supported | By default, a new resource is created, however, you can specify an existing resource in the target configuration. | | VNET | Supported| By default, the source virtual network (VNET) is used, or you can specify an existing resource in the target configuration. |
+### How to move a VM from regional to zonal configuration
+
+Before moving a VM from regional to zonal configuration, see [FAQ - Move Azure single instance VM from regional to zonal](../virtual-machines/move-virtual-machines-regional-zonal-faq.md).
+
+To learn how to move VMs from regional to zonal configuration within the same region in the Azure portal, see [Move Azure single instance VMs from regional to zonal configuration](../virtual-machines/move-virtual-machines-regional-zonal-portal.md).
+
+To learn how to do the same using Azure PowerShell and CLI, see [Move a VM in an availability zone using Azure PowerShell and CLI](../virtual-machines/move-virtual-machines-regional-zonal-powershell.md).
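Before starting either flow, it can help to confirm that the VM size is actually offered in the target zones. A minimal check, with placeholder region and size values:

```bash
# Hedged sketch: show which availability zones support a given VM size in a region.
az vm list-skus --location eastus --size Standard_D2s_v3 --zone --output table
```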
## Migration Option 3: Azure Resource Mover
reliability Reliability Postgresql Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgresql-flexible-server.md
Azure Database for PostgreSQL - Flexible Server supports both [zone-redundant an
### SLA -- **Zone-Redundancy** model offers uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql).
+- **Zonal** model offers an uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql).
-- **Zonal** model offers uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql).
+- **Zone-redundancy** model offers an uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql).
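For reference, a minimal Azure CLI sketch that provisions a server under the zone-redundant model described above; names, region, zones, and SKU are placeholder values:

```bash
# Hedged sketch: zone-redundant HA with an explicit standby zone.
az postgres flexible-server create \
  --resource-group myRG --name my-pg-server --location eastus \
  --zone 1 --standby-zone 2 --high-availability ZoneRedundant \
  --tier GeneralPurpose --sku-name Standard_D2s_v3
```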
### Create an Azure Database for PostgreSQL - Flexible Server with availability zone enabled
sap Get Sap Installation Media https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md
Next, download the SAP installation media to the VM using a script.
1. Where `BOM_directory_path` is the absolute path to **SAP-automation-samples/SAP**. e.g. */home/loggedinusername/SAP-automation-samples/SAP*
-1. Where `orchestration_ansible_user` is the user with **admin** privileges like (e.g. root).
+1. Where `orchestration_ansible_user` is the user with **admin** privileges, e.g. *root*.
Now you can [install the SAP software](install-software.md) through Azure Center for SAP solutions.
sap High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md
When the cluster health attribute is set for a node, the location constraint tri
1. **[A]** Make sure that the package for the `azure-events-az` agent is already installed and up to date. ```bash
- sudo dnf info resource-agents
+ # RHEL 8.x:
+ sudo dnf info resource-agents
+ # RHEL 9.x:
+ sudo dnf info resource-agents-cloud
``` Minimum version requirements: * RHEL 8.4: `resource-agents-4.1.1-90.13` * RHEL 8.6: `resource-agents-4.9.0-16.9`
- * RHEL 8.8 and newer: `resource-agents-4.9.0-40.1`
- * RHEL 9.0 and newer: `resource-agents-cloud-4.10.0-34.2`
+ * RHEL 8.8: `resource-agents-4.9.0-40.1`
+ * RHEL 9.0: `resource-agents-cloud-4.10.0-9.6`
+ * RHEL 9.2 and newer: `resource-agents-cloud-4.10.0-34.1`
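A small helper to confirm each node has the right package installed before proceeding; the release detection below is an assumption, so adjust it to your environment:

```bash
# Hedged sketch: choose the agent package by RHEL major version, then report what's installed.
if grep -q 'release 9' /etc/redhat-release; then
  pkg=resource-agents-cloud   # RHEL 9.x
else
  pkg=resource-agents         # RHEL 8.x
fi
rpm -q "$pkg"
```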
1. **[1]** Configure the resources in Pacemaker.
search Cognitive Search Aml Skill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-aml-skill.md
Title: Custom AML skill in skillsets
description: Extend capabilities of Azure AI Search skillsets with Azure Machine Learning models. -
search Cognitive Search Create Custom Skill Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-create-custom-skill-example.md
Title: 'Custom skill example using Bing Entity Search API' description: Demonstrates using the Bing Entity Search service in a custom skill mapped to an AI-enriched indexing pipeline in Azure AI Search.--++ Last updated 12/01/2022
search Cognitive Search Custom Skill Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-scale.md
Title: 'Scale and manage custom skill'
description: Learn the tools and techniques for efficiently scaling out a custom skill for maximum throughput. Custom skills invoke custom AI models or logic that you can add to an AI-enriched indexing pipeline in Azure AI Search. --++ - ignite-2023
search Cognitive Search Skill Conditional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-conditional.md
Title: Conditional cognitive skill description: The conditional skill in Azure AI Search enables filtering, creating defaults, and merging values in a skillset definition.--++ - ignite-2023
search Cognitive Search Skill Custom Entity Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-custom-entity-lookup.md
Title: Custom Entity Lookup skill description: Extract different custom entities from text in an Azure AI Search enrichment pipeline.--++ - ignite-2023
search Cognitive Search Skill Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-deprecated.md
Title: Deprecated Cognitive Skills description: This page contains a list of cognitive skills that are considered deprecated and won't be supported moving forward.--++ - ignite-2023
search Cognitive Search Skill Entity Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-entity-recognition.md
Title: Entity Recognition cognitive skill (v2) description: Extract different types of entities from text in an enrichment pipeline in Azure AI Search.--++ - ignite-2023
search Cognitive Search Skill Keyphrases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-keyphrases.md
Title: Key Phrase Extraction cognitive skill description: Evaluates unstructured text, and for each record, returns a list of key phrases in an AI enrichment pipeline in Azure AI Search.--++ - ignite-2023
search Cognitive Search Skill Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-language-detection.md
Title: Language detection cognitive skill description: Evaluates unstructured text, and for each record, returns a language identifier with a score indicating the strength of the analysis in an AI enrichment pipeline in Azure AI Search.--++ - ignite-2023
search Cognitive Search Skill Named Entity Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-named-entity-recognition.md
Title: Named Entity Recognition skill (v2) description: Extract named entities for person, location and organization from text in an AI enrichment pipeline in Azure AI Search.--++ - ignite-2023
search Cognitive Search Skill Sentiment V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-sentiment-v3.md
Title: Sentiment cognitive skill (v3)
description: Provides sentiment labels for text in an AI enrichment pipeline in Azure AI Search. -
search Cognitive Search Skill Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-sentiment.md
Title: Sentiment cognitive skill (v2) description: Extract a positive-negative sentiment score from text in an AI enrichment pipeline in Azure AI Search.--++ - ignite-2023
search Cognitive Search Skill Shaper https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-shaper.md
Title: Shaper cognitive skill description: Extract metadata and structured information from unstructured data and shape it as a complex type in an AI enrichment pipeline in Azure AI Search.--++ - ignite-2023
search Cognitive Search Skill Textmerger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-textmerger.md
Title: Text Merge cognitive skill description: Merge text from a collection of fields into one consolidated field. Use this cognitive skill in an AI enrichment pipeline in Azure AI Search.--++ - ignite-2023
search Cognitive Search Skill Textsplit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-textsplit.md
Title: Text split skill description: Break text into chunks or pages of text based on length in an AI enrichment pipeline in Azure AI Search.--++ - ignite-2023
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
Last updated 11/27/2023
# Upgrade to the latest REST API in Azure AI Search
-Use this article to migrate data plane REST API calls to newer *stable* versions of the [**Search REST API**](/rest/api/searchservice/).
+Use this article to migrate data plane calls to newer *stable* versions of the [**Search REST API**](/rest/api/searchservice/).
+ [**2023-11-01**](/rest/api/searchservice/search-service-api-versions#2023-11-01) is the most recent stable version. Semantic ranking and vector search support are generally available in this version.
-+ [**2023-10-01-preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-preview) is the most recent preview version. Integrated data chunking and vectorization using the [Text Split](cognitive-search-skill-textsplit.md) skill and [AzureOpenAiEmbedding](cognitive-search-skill-azure-openai-embedding.md) skill are introduced in this version. There's no migration guidance for preview API versions, but you can review code samples and walkthroughs for guidance. See [Integrated vectorization (preview)](vector-search-integrated-vectorization.md) for your first step.
++ [**2023-10-01-preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-preview) is the most recent preview version. [Integrated data chunking and vectorization](vector-search-integrated-vectorization.md) using the [Text Split](cognitive-search-skill-textsplit.md) skill and [Azure OpenAI Embedding](cognitive-search-skill-azure-openai-embedding.md) skill are introduced in this version. There's no migration guidance for preview API versions, but you can review [code samples](https://github.com/Azure/azure-search-vector-samples) and [walkthroughs](vector-search-how-to-configure-vectorizer.md) for guidance. > [!NOTE]
-> New filter controls on the table of contents provide version-specific API reference pages. To get the right information, open a reference page and then apply the filter.
+> API reference docs are now versioned. To get the right information, open a reference page and then apply the version-specific filter located above the table of contents.
<a name="UpgradeSteps"></a>
If any of these situations apply to you, change your code to maintain existing f
This version has breaking changes and behavioral differences for semantic ranking and vector search support.
-[Semantic ranking](semantic-search-overview.md) no longer uses `queryLanguage`. It also requires a `semanticConfiguration` definition. If you're migrating from 2020-06-30-preview, a semantic configuration replaces `searchFields`. See [Migrate from preview version](semantic-how-to-query-request.md#migrate-from-preview-versions) for steps.
++ [Semantic ranking](semantic-search-overview.md) no longer uses `queryLanguage`. It also requires a `semanticConfiguration` definition. If you're migrating from 2020-06-30-preview, a semantic configuration replaces `searchFields`. See [Migrate from preview version](semantic-how-to-query-request.md#migrate-from-preview-versions) for steps.
-[Vector search](vector-search-overview.md) support was introduced in [Create or Update Index (2023-07-01-preview)](/rest/api/searchservice/preview-api/create-or-update-index). If you're migrating from that version, there are new options and several breaking changes. New options include vector filter mode, vector profiles, and an exhaustive K-nearest neighbors algorithm and query-time exhaustive k-NN flag. Breaking changes include renaming and restructuring the vector configuration in the index, and vector query syntax.
++ [Vector search](vector-search-overview.md) support was introduced in [Create or Update Index (2023-07-01-preview)](/rest/api/searchservice/preview-api/create-or-update-index). If you're migrating from that version, there are new options and several breaking changes. New options include vector filter mode, vector profiles, and an exhaustive K-nearest neighbors algorithm and query-time exhaustive k-NN flag. Breaking changes include renaming and restructuring the vector configuration in the index, and vector query syntax.
-If you added vector support using 2023-10-01-preview, there are no breaking changes. There's one behavior difference: the `vectorFilterMode` default changed from postfilter to prefilter. Change the API version and test your code to confirm the migration from the previous preview version (2023-07-01-preview).
+If you added vector support using 2023-10-01-preview, there are no breaking changes, but there's one behavior difference: the `vectorFilterMode` default changed from postfilter to prefilter for [filter expressions](vector-search-filters.md). The default is prefilter for indexes created after 2023-10-01. Indexes created before that date only support postfilter, regardless of how you set the filter mode.
> [!TIP]
-> You can upgrade a 2023-07-01-preview index in the Azure portal. The portal detects the previous version and provides a **Migrate** button. Select **Edit JSON** to review the updated schema before selecting **Migrate**. The new and changed schema conforms to the steps described in this section. Portal migration only handles indexes with one vector field. Indexes with more than one vector field require manual migration.
+> Azure portal supports a one-click upgrade path for 2023-07-01-preview indexes. The portal detects that version and provides a **Migrate** button. Before selecting **Migrate**, select **Edit JSON** to review the updated schema first. You should find a schema that conforms to the changes described in this section. Portal migration only handles indexes with one vector field. Indexes with more fields require manual migration.
Here are the steps for migrating from 2023-07-01-preview:
Here are the steps for migrating from 2023-07-01-preview:
```http { "search": (this parameter is ignored in vector search),
- "vectors": [{
+ "vectors": [
+ {
"value": [ 0.103, 0.0712,
Here are the steps for migrating from 2023-07-01-preview:
], "fields": "contentVector", "k": 5
- }],
+ }
+ ],
"select": "title, content, category" } ```
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
description: Set up a SharePoint indexer to automate indexing of document library content in Azure AI Search. -
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
description: Learn how to set up an indexer connection to an Azure Cosmos DB account via a managed identity -
search Search Index Azure Sql Managed Instance With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-index-azure-sql-managed-instance-with-managed-identity.md
description: Learn how to set up an Azure AI Search indexer connection to an Azure SQL Managed Instance using a managed identity -
search Search Indexer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-troubleshooting.md
Title: Indexer troubleshooting guidance
description: This article provides indexer problem and resolution guidance for cases when no error messages are returned from the service search. -
search Search Modeling Multitenant Saas Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-modeling-multitenant-saas-applications.md
Title: Multitenancy and content isolation description: Learn about common design patterns for multitenant SaaS applications while using Azure AI Search.--++ - ignite-2023
search Search Performance Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-analysis.md
Title: Analyze performance description: Learn about the tools, behaviors, and approaches for analyzing query and indexing performance in Azure AI Search.--++ - ignite-2023
search Search Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-tips.md
Title: Performance tips description: Learn about tips and best practices for maximizing performance on a search service.--++ - ignite-2023
search Tutorial Create Custom Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-create-custom-analyzer.md
- ignite-2023 Previously updated : 01/05/2023 Last updated : 11/28/2023 # Tutorial: Create a custom analyzer for phone numbers
With our character filters, tokenizer, and token filters in place, we're ready t
{ "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer", "name": "phone_analyzer",
- "tokenizer": "custom_tokenizer_phone",
+ "tokenizer": "keyword_v2",
"tokenFilters": [ "custom_ngram_filter" ],
search Vector Search How To Configure Vectorizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-configure-vectorizer.md
A *vectorizer* is a component of a [search index](search-what-is-an-index.md) th
A vectorizer is used during indexing and queries. It allows the search service to handle chunking and coding on your behalf.
-You can use the [**Import and vectorize data** wizard](search-get-started-portal-import-vectors.md), the [2023-10-01-Preview](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) REST APIs, or any Azure beta SDK package that's been updated to provide this feature.
+You can use the [**Import and vectorize data wizard**](search-get-started-portal-import-vectors.md), the [2023-10-01-Preview](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) REST APIs, or any Azure beta SDK package that's been updated to provide this feature.
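To make the schema concrete, here's a hedged sketch of an index definition that pairs a vector profile with an Azure OpenAI vectorizer under 2023-10-01-Preview. The service name, Azure OpenAI resource, deployment, and key are placeholders, and the 1536 dimensions assume a text-embedding-ada-002 deployment.

```bash
# Hedged sketch: index with an Azure OpenAI vectorizer (placeholders throughout).
curl -X PUT "https://<service>.search.windows.net/indexes/my-index?api-version=2023-10-01-Preview" \
  -H "Content-Type: application/json" -H "api-key: $SEARCH_ADMIN_KEY" \
  -d '{
    "name": "my-index",
    "fields": [
      { "name": "id", "type": "Edm.String", "key": true },
      { "name": "content", "type": "Edm.String", "searchable": true },
      { "name": "contentVector", "type": "Collection(Edm.Single)", "searchable": true,
        "dimensions": 1536, "vectorSearchProfile": "profile-1" }
    ],
    "vectorSearch": {
      "algorithms": [ { "name": "hnsw-1", "kind": "hnsw" } ],
      "profiles": [ { "name": "profile-1", "algorithm": "hnsw-1", "vectorizer": "openai-1" } ],
      "vectorizers": [
        { "name": "openai-1", "kind": "azureOpenAI",
          "azureOpenAIParameters": {
            "resourceUri": "https://<your-openai>.openai.azure.com",
            "deploymentId": "text-embedding-ada-002",
            "apiKey": "<azure-openai-key>"
          } }
      ]
    }
  }'
```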
## Prerequisites
sentinel Connect Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-defender-for-cloud.md
Title: Connect Microsoft Defender for Cloud alerts to Microsoft Sentinel
+ Title: Ingest Microsoft Defender for Cloud subscription-based alerts to Microsoft Sentinel
description: Learn how to connect security alerts from Microsoft Defender for Cloud and stream them into Microsoft Sentinel.
-# Connect Microsoft Defender for Cloud alerts to Microsoft Sentinel
+# Ingest Microsoft Defender for Cloud alerts to Microsoft Sentinel
-[Microsoft Defender for Cloud](../defender-for-cloud/index.yml)'s integrated cloud workload protections allow you to detect and quickly respond to threats across hybrid and multi-cloud workloads.
+[Microsoft Defender for Cloud](../defender-for-cloud/index.yml)'s integrated cloud workload protections allow you to detect and quickly respond to threats across hybrid and multicloud workloads.
-This connector allows you to stream [security alerts from Defender for Cloud](../defender-for-cloud/alerts-reference.md) into Microsoft Sentinel, so you can view, analyze, and respond to Defender alerts, and the incidents they generate, in a broader organizational threat context.
+This connector allows you to ingest [security alerts from Defender for Cloud](../defender-for-cloud/alerts-reference.md) into Microsoft Sentinel, so you can view, analyze, and respond to Defender alerts, and the incidents they generate, in a broader organizational threat context.
As [Microsoft Defender for Cloud Defender plans](../defender-for-cloud/defender-for-cloud-introduction.md#protect-cloud-workloads) are enabled per subscription, this data connector is also enabled or disabled separately for each subscription.
-Microsoft Defender for Cloud was formerly known as Azure Security Center. Defender for Cloud's enhanced security features were formerly known collectively as Azure Defender.
-
+The new **Tenant-based Microsoft Defender for Cloud connector**, in PREVIEW, allows you to collect Defender for Cloud alerts over your entire tenant, without having to enable each subscription separately. It also leverages [Defender for Cloud's integration with Microsoft Defender XDR](ingest-defender-for-cloud-incidents.md) (formerly Microsoft 365 Defender) to ensure that all of your Defender for Cloud alerts are fully included in any incidents you receive through [Microsoft Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md).
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
Enabling **bi-directional sync** will automatically sync the status of original
- You will need the `SecurityInsights` resource provider to be registered for each subscription where you want to enable the connector. Review the guidance on the [resource provider registration status](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) and the ways to register it. - To enable bi-directional sync, you must have the **Contributor** or **Security Admin** role on the relevant subscription.+ - Install the solution for **Microsoft Defender for Cloud** from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md). ## Connect to Microsoft Defender for Cloud
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-365-defender.md
Last updated 02/01/2023
# Connect data from Microsoft 365 Defender to Microsoft Sentinel
-Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft 365 Defender incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft 365 Defender incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft 365 Defender's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**, as well as alerts from other services such as **Microsoft Purview Data Loss Prevention (DLP)** and **Microsoft Entra ID Protection (AADIP)**.
+Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft 365 Defender incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft 365 Defender incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft 365 Defender's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, **Microsoft Defender for Cloud Apps**, and **Microsoft Defender for Cloud**, as well as alerts from other services such as **Microsoft Purview Data Loss Prevention** and **Microsoft Entra ID Protection**.
The connector also lets you stream **advanced hunting** events from *all* of the above Defender components into Microsoft Sentinel, allowing you to copy those Defender components' advanced hunting queries into Microsoft Sentinel, enrich Sentinel alerts with the Defender components' raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
sentinel Ingest Defender For Cloud Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ingest-defender-for-cloud-incidents.md
+
+ Title: Ingest Microsoft Defender for Cloud incidents with Microsoft Defender XDR integration
+description: Learn how Microsoft Defender for Cloud's integration with Microsoft Defender XDR lets you ingest Defender for Cloud incidents through Microsoft Defender XDR, adding them to your Microsoft Sentinel incidents queue while seamlessly applying Defender XDR's strengths to help investigate all your cloud workload security incidents.
+++ Last updated : 11/28/2023++
+# Ingest Microsoft Defender for Cloud incidents with Microsoft Defender XDR integration
+
+Microsoft Defender for Cloud is now [integrated with Microsoft Defender XDR](../defender-for-cloud/release-notes.md#defender-for-cloud-is-now-integrated-with-microsoft-365-defender-preview), formerly known as Microsoft 365 Defender. This integration, currently **in Preview**, allows Defender XDR to collect alerts from Defender for Cloud and create Defender XDR incidents from them.
+
+Thanks to this integration, Microsoft Sentinel customers who have enabled [Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md) will now be able to ingest and synchronize Defender for Cloud incidents, with all their alerts, through Microsoft Defender XDR.
+
+To support this integration, Microsoft Sentinel has added a new **Tenant-based Microsoft Defender for Cloud (Preview)** connector. This connector will allow Microsoft Sentinel customers to receive Defender for Cloud alerts and incidents across their entire tenants, without having to monitor and maintain the connector's enrollment to all their Defender for Cloud subscriptions.
+
+This connector can be used to ingest Defender for Cloud alerts, regardless of whether you have Defender XDR incident integration enabled.
+
+> [!IMPORTANT]
+> The Defender for Cloud integration with Defender XDR, and the Tenant-based Microsoft Defender for Cloud connector, are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Choose how to use this integration and the new connector
+
+How you choose to use this integration, and whether you want to ingest complete incidents or just alerts, will depend in large part on what you're already doing with respect to Microsoft Defender XDR incidents.
+
+- If you're already ingesting Defender XDR incidents, or if you're choosing to start doing so now, you're strongly advised to enable this new tenant-based connector. Your Defender XDR incidents will now include Defender for Cloud-based incidents with fully populated alerts from all Defender for Cloud subscriptions in your tenant.
+
+ If, in this situation, you remain with the legacy subscription-based Defender for Cloud connector and don't connect the new tenant-based one, you might receive Defender for Cloud incidents that contain empty alerts (for any subscription in which the connector isn't enrolled).
+
+- If you don't intend to enable [Microsoft Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md), you can still receive Defender for Cloud *alerts*, regardless of which version of the connector you enable. However, the new tenant-based connector still affords you the advantage of not needing the permissions to monitor and maintain your list of Defender for Cloud subscriptions in the connector.
+
+- If you *have* enabled Defender XDR integration, but you want to receive only Defender for Cloud *alerts* and not *incidents*, you can use [automation rules](create-manage-use-automation-rules.md) to immediately close Defender for Cloud incidents as they arrive.
+
+ If that's not an adequate solution, or if you still want to collect alerts from Defender for Cloud on a per-subscription basis, you can completely opt-out of the Defender for Cloud integration in the Microsoft Defender XDR portal, and then use the legacy, subscription-based version of the Defender for Cloud connector to receive those alerts.
+
+## Set up the integration in Microsoft Sentinel
+
+If you haven't already enabled [incident integration in your Microsoft 365 Defender connector](connect-microsoft-365-defender.md), do so first.
+
+Then, enable the new **Tenant-based Microsoft Defender for Cloud (Preview)** connector. This connector is available through the **Microsoft Defender for Cloud solution**, version 3.0.0, in the Content Hub. If you have an earlier version of this solution, you can upgrade it in the Content Hub.
+
+If you had previously enabled the legacy, subscription-based Defender for Cloud connector (which will be displayed as **Subscription-based Microsoft Defender for Cloud (Legacy)**), then you're advised to disable it to prevent duplication of alerts in your logs.
+
+If you have any [Scheduled or Microsoft Security analytics rules](detect-threats-built-in.md) that create incidents from Defender for Cloud alerts, you're encouraged to disable these rules, since you'll be receiving ready-made incidents created by&mdash;and synchronized with&mdash;Microsoft 365 Defender.
+
+If there are specific types of Defender for Cloud alerts for which you don't want to create incidents, you can use automation rules to close these incidents immediately, or you can use the built-in tuning capabilities in the Microsoft 365 Defender portal.
+
+## Next steps
+
+In this article, you learned how to use Microsoft Defender for Cloud's integration with Microsoft Defender XDR to ingest incidents and alerts into Microsoft Sentinel.
+
+Learn more about the Microsoft Defender for Cloud integration with Microsoft Defender XDR.
+- See [Microsoft Defender for Cloud in Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud), and particularly the [Impact to Microsoft Sentinel users](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud#impact-to-microsoft-sentinel-users) section, from the Microsoft Defender XDR documentation.
+- See [Alerts and incidents in Microsoft 365 Defender (Preview)](../defender-for-cloud/concept-integration-365.md) from the Microsoft Defender for Cloud documentation.
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md
Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/micros
This integration gives Microsoft 365 security incidents the visibility to be managed from within Microsoft Sentinel, as part of the primary incident queue across the entire organization, so you can see, and correlate, Microsoft 365 incidents together with those from all of your other cloud and on-premises systems. At the same time, it allows you to take advantage of the unique strengths and capabilities of Microsoft 365 Defender for in-depth investigations and a Microsoft 365-specific experience across the Microsoft 365 ecosystem. Microsoft 365 Defender enriches and groups alerts from multiple Microsoft 365 products, both reducing the size of the SOC's incident queue and shortening the time to resolve. The component services that are part of the Microsoft 365 Defender stack are: -- **Microsoft Defender for Endpoint (MDE)**-- **Microsoft Defender for Identity (MDI)**-- **Microsoft Defender for Office 365 (MDO)**-- **Microsoft Defender for Cloud Apps (MDA)**
+- **Microsoft Defender for Endpoint**
+- **Microsoft Defender for Identity**
+- **Microsoft Defender for Office 365**
+- **Microsoft Defender for Cloud Apps**
+- **Microsoft Defender for Cloud** (Preview)
Other services whose alerts are collected by Microsoft 365 Defender include: -- **Microsoft Purview Data Loss Prevention (DLP)** ([Learn more](/microsoft-365/security/defender/investigate-dlp))-- **Microsoft Entra ID Protection (AADIP)** ([Learn more](/defender-cloud-apps/aadip-integration))
+- **Microsoft Purview Data Loss Prevention** ([Learn more](/microsoft-365/security/defender/investigate-dlp))
+- **Microsoft Entra ID Protection** ([Learn more](/defender-cloud-apps/aadip-integration))
In addition to collecting alerts from these components and other services, Microsoft 365 Defender generates alerts of its own. It creates incidents from all of these alerts and sends them to Microsoft Sentinel.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## November 2023
+- [Take advantage of Microsoft Defender for Cloud integration with Microsoft Defender XDR (Preview)](#take-advantage-of-microsoft-defender-for-cloud-integration-with-microsoft-defender-xdr-preview)
- [Near-real-time rules now generally available](#near-real-time-rules-now-generally-available) - [Elevate your cybersecurity intelligence with enrichment widgets (Preview)](#elevate-your-cybersecurity-intelligence-with-enrichment-widgets-preview)
+### Take advantage of Microsoft Defender for Cloud integration with Microsoft Defender XDR (Preview)
+
+Microsoft Defender for Cloud is now [integrated with Microsoft Defender XDR](../defender-for-cloud/release-notes.md#defender-for-cloud-is-now-integrated-with-microsoft-365-defender-preview), formerly known as Microsoft 365 Defender. This integration, currently **in Preview**, allows Defender XDR to collect alerts from Defender for Cloud and create Defender XDR incidents from them.
+
+Thanks to this integration, Microsoft Sentinel customers who have enabled [Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md) will now be able to ingest and synchronize Defender for Cloud incidents, with all their alerts, through Microsoft Defender XDR.
+
+To support this integration, Microsoft has added a new **Tenant-based Microsoft Defender for Cloud (Preview)** connector. This connector will allow Microsoft Sentinel customers to receive Defender for Cloud alerts and incidents across their entire tenants, without having to monitor and maintain the connector's enrollment to all their Defender for Cloud subscriptions.
+
+This connector can be used to ingest Defender for Cloud alerts, regardless of whether you have Defender XDR incident integration enabled.
+
+- Learn more about [Microsoft Defender for Cloud integration with Microsoft Defender XDR](../defender-for-cloud/release-notes.md#defender-for-cloud-is-now-integrated-with-microsoft-365-defender-preview).
+- Learn more about [ingesting Defender for Cloud incidents into Microsoft Sentinel](ingest-defender-for-cloud-incidents.md).
+<!--
+- Learn how to [connect the tenant-based Defender for Cloud data connector](connect-defender-for-cloud-tenant.md) (in Preview).
+-->
+ ### Near-real-time rules now generally available Microsoft Sentinel's [near-real-time analytics rules](detect-threats-built-in.md#nrt) are now generally available (GA). These highly responsive rules provide up-to-the-minute threat detection by running their queries at intervals just one minute apart.
service-bus-messaging Service Bus Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-dr.md
Title: Azure Service Bus Geo-disaster recovery | Microsoft Docs description: How to use geographical regions for failover and disaster recovery in Azure Service Bus Previously updated : 10/27/2022 Last updated : 11/28/2023 # Azure Service Bus Geo-disaster recovery Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in some cases even required by industry regulations.
-Azure Service Bus already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter and it implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions when such failures occur. A premium namespace can have two or more messaging units and these messaging units will be spread across multiple failure domains within a datacenter, supporting an all-active Service Bus cluster model.
+Azure Service Bus already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter and it implements transparent failure detection and failover mechanisms such that the service continues to operate within the assured service-levels and typically without noticeable interruptions when such failures occur. A premium namespace can have two or more messaging units and these messaging units are spread across multiple failure domains within a datacenter, supporting an all-active Service Bus cluster model.
For a premium tier namespace, the outage risk is further spread across three physically separated facilities ([availability zones](#availability-zones)), and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of a datacenter. The all-active Azure Service Bus cluster model within a failure domain along with the availability zone support is superior to any on-premises message broker product in terms of resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures can't sufficiently defend against.
-The Service Bus Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this magnitude and abandon a failed Azure region for good and without having to change your application configurations. Abandoning an Azure region will typically involve several services and this feature primarily aims at helping to preserve the integrity of the composite application configuration. The feature is globally available for the Service Bus Premium SKU.
+The Service Bus Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this magnitude and abandon a failed Azure region for good and without having to change your application configurations. Abandoning an Azure region typically involves several services and this feature primarily aims at helping to preserve the integrity of the composite application configuration. The feature is globally available for the Service Bus Premium SKU.
-The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (queues, topics, subscriptions, filters) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move will repoint the chosen alias name for the namespace to the secondary namespace and then break the pairing. The failover is nearly instantaneous once initiated.
+The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (queues, topics, subscriptions, filters) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move re-points the chosen alias name for the namespace to the secondary namespace and then breaks the pairing. The failover is nearly instantaneous once initiated.
## Important points to consider -- The feature enables instant continuity of operations with the same configuration, but **doesn't replicate the messages held in queues or topic subscriptions or dead-letter queues**. To preserve queue semantics, such a replication will require not only the replication of message data, but of every state change in the broker. For most Service Bus namespaces, the required replication traffic would far exceed the application traffic and with high-throughput queues, most messages would still replicate to the secondary while they're already being deleted from the primary, causing excessively wasteful traffic. For high-latency replication routes, which applies to many pairings you would choose for Geo-disaster recovery, it might also be impossible for the replication traffic to sustainably keep up with the application traffic due to latency-induced throttling effects.
+- The feature enables instant continuity of operations with the same configuration, but **doesn't replicate the messages held in queues or topic subscriptions or dead-letter queues**. To preserve queue semantics, such a replication requires not only the replication of message data, but of every state change in the broker. For most Service Bus namespaces, the required replication traffic would far exceed the application traffic and with high-throughput queues, most messages would still replicate to the secondary while they're already being deleted from the primary, causing excessively wasteful traffic. For high-latency replication routes, which applies to many pairings you would choose for Geo-disaster recovery, it might also be impossible for the replication traffic to sustainably keep up with the application traffic due to latency-induced throttling effects.
- Microsoft Entra role-based access control (RBAC) assignments to Service Bus entities in the primary namespace aren't replicated to the secondary namespace. Create role assignments manually in the secondary namespace to secure access to them. - The following configurations aren't replicated. - Virtual network configurations
The Geo-Disaster recovery feature ensures that the entire configuration of a nam
- Identities and encryption settings (customer-managed key encryption or bring your own key (BYOK) encryption) - Enable auto scale - Disable local authentication-- Pairing a [partitioned namespace](enable-partitions-premium.md) with a non-partitioned namespace is not supported.
+- Pairing a [partitioned namespace](enable-partitions-premium.md) with a non-partitioned namespace isn't supported.
+- If `AutoDeleteOnIdle` is turned on for an entity, the entity might not be present in the secondary namespace when the failover occurs.
> [!TIP] > For replicating the contents of queues and topic subscriptions and operating corresponding namespaces in active/active configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the [replication guidance](service-bus-federation-overview.md).
It's important to note the distinction between "outages" and "disasters."
An *outage* is the temporary unavailability of Azure Service Bus, and can affect some components of the service, such as a messaging store, or even the entire datacenter. However, after the problem is fixed, Service Bus becomes available again. Typically, an outage doesn't cause the loss of messages or other data. An example of such an outage might be a power failure in the datacenter. Some outages are only short connection losses because of transient or network issues.
-A *disaster* is defined as the permanent, or longer-term loss of a Service Bus cluster, Azure region, or datacenter. The region or datacenter may or may not become available again, or may be down for hours or days. Examples of such disasters are fire, flooding, or earthquake. A disaster that becomes permanent might cause the loss of some messages, events, or other data. However, in most cases there should be no data loss and messages can be recovered once the data center is back up.
+A *disaster* is defined as the permanent, or longer-term loss of a Service Bus cluster, Azure region, or datacenter. The region or datacenter might or might not become available again, or might be down for hours or days. Examples of such disasters are fire, flooding, or earthquake. A disaster that becomes permanent might cause the loss of some messages, events, or other data. However, in most cases there should be no data loss and messages can be recovered once the data center comes back up.
The Geo-disaster recovery feature of Azure Service Bus is a disaster recovery solution. The concepts and workflow described in this article apply to disaster scenarios, and not to transient, or temporary outages. For a detailed discussion of disaster recovery in Microsoft Azure, see [this article](/azure/architecture/resiliency/disaster-recovery-azure-applications).
The following terms are used in this article:
- *Alias*: The name for a disaster recovery configuration that you set up. The alias provides a single stable Fully Qualified Domain Name (FQDN) connection string. Applications use this alias connection string to connect to a namespace. Using an alias ensures that the connection string is unchanged when the failover is triggered. -- *Primary/secondary namespace*: The namespaces that correspond to the alias. The primary namespace is "active" and receives messages (this can be an existing or new namespace). The secondary namespace is "passive" and doesn't receive messages. The metadata between both is in sync, so both can seamlessly accept messages without any application code or connection string changes. To ensure that only the active namespace receives messages, you must use the alias.
+- *Primary/secondary namespace*: The namespaces that correspond to the alias. The primary namespace is "active" and receives messages (it can be an existing or new namespace). The secondary namespace is "passive" and doesn't receive messages. The metadata between both is in sync, so both can seamlessly accept messages without any application code or connection string changes. To ensure that only the active namespace receives messages, you must use the alias.
- *Metadata*: Entities such as queues, topics, and subscriptions, and their properties that are associated with the namespace. Only entities and their settings are replicated automatically. Messages aren't replicated.
- *Failover*: The process of activating the secondary namespace.
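If you script your disaster recovery setup, the pairing can be created with the Az.ServiceBus PowerShell module. The following is a minimal sketch; all resource names and the subscription ID are placeholders, and cmdlet and parameter names follow older Az.ServiceBus releases (newer module versions may differ):

```powershell
# Minimal sketch: pair a primary namespace with a secondary namespace under an alias.
# All names and the subscription ID below are placeholders.
$secondaryId = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.ServiceBus/namespaces/<secondary-namespace>"

New-AzServiceBusGeoDRConfiguration -ResourceGroupName "<rg>" `
    -Namespace "<primary-namespace>" `
    -Name "<alias-name>" `
    -PartnerNamespace $secondaryId
```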
You first create or use an existing primary namespace, and a new secondary names
:::image type="content" source="./media/service-bus-geo-dr/failover-page.png" alt-text="Screenshot showing the Failover page."::: > [!IMPORTANT]
- > Failing over will activate the secondary namespace and remove the primary namespace from the Geo-Disaster Recovery pairing. Create another namespace to have a new geo-disaster recovery pair.
+ > Failing over activates the secondary namespace and removes the primary namespace from the Geo-Disaster Recovery pairing. Create another namespace to have a new geo-disaster recovery pair.
1. Finally, you should add some monitoring to detect if a failover is necessary. In most cases, the service is one part of a large ecosystem, so automatic failovers are rarely possible; failovers often must be performed in sync with the remaining subsystems or infrastructure.
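When your monitoring decides that a failover is warranted, it can also be triggered programmatically against the *secondary* namespace. A minimal sketch with the Az.ServiceBus module follows (names are placeholders; cmdlet naming may vary across module versions):

```powershell
# Sketch: trigger failover on the secondary namespace; the alias then points to it.
Set-AzServiceBusGeoDRConfigurationFailOver -ResourceGroupName "<rg>" `
    -Namespace "<secondary-namespace>" `
    -Name "<alias-name>"
```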
It's because, during migration, your Azure Service Bus standard namespace connec
Your client applications must utilize this alias (that is, the Azure Service Bus standard namespace connection string) to connect to the premium namespace where the disaster recovery pairing has been set up.
-If you use the Azure portal to set up the disaster recovery configuration, the portal will abstract this caveat from you.
+If you use the Azure portal to set up the disaster recovery configuration, the portal abstracts this caveat from you.
## Failover flow
The [samples on GitHub](https://github.com/Azure/azure-service-bus/tree/master/s
Keep the following considerations in mind with this release: - In your failover planning, you should also consider the time factor. For example, if you lose connectivity for longer than 15 to 20 minutes, you might decide to initiate the failover.--- The fact that no data is replicated means that currently active sessions aren't replicated. Additionally, duplicate detection and scheduled messages may not work. New sessions, new scheduled messages, and new duplicates will work. -
+- The fact that no data is replicated means that currently active sessions aren't replicated. Additionally, duplicate detection and scheduled messages might not work. New sessions, new scheduled messages, and new duplicates work.
- Failing over a complex distributed infrastructure should be [rehearsed](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan) at least once.- - Synchronizing entities can take some time, approximately 50-100 entities per minute. Subscriptions and rules also count as entities. ## Availability Zones
-The Service Bus Premium SKU supports [availability zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of the messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If the applications see transient disconnects from Service Bus, the [retry logic](/azure/architecture/best-practices/retry-service-specific#service-bus) in the SDK will automatically reconnect to Service Bus.
+The Service Bus Premium SKU supports [availability zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of the messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If the applications see transient disconnects from Service Bus, the [retry logic](/azure/architecture/best-practices/retry-service-specific#service-bus) in the SDK automatically reconnects to Service Bus.
When you use availability zones, both metadata and data (messages) are replicated across data centers in the availability zone. > [!NOTE] > The Availability Zones support for Azure Service Bus Premium is only available in [Azure regions](../availability-zones/az-region.md) where availability zones are present.
-When you create a premium tier namespace, the support for availability zones (if available in the selected region) is automatically enabled for the namespace. There's no additional cost for using this feature and you can't disable or enable this feature.
+When you create a premium tier namespace, the support for availability zones (if available in the selected region) is automatically enabled for the namespace. There's no extra cost for using this feature and you can't disable or enable this feature.
## Private endpoints This section provides more considerations when using Geo-disaster recovery with namespaces that use private endpoints. To learn about using private endpoints with Service Bus in general, see [Integrate Azure Service Bus with Azure Private Link](private-link-service.md). ### New pairings
-If you try to create a pairing between a primary namespace with a private endpoint and a secondary namespace without a private endpoint, the pairing will fail. The pairing will succeed only if both primary and secondary namespaces have private endpoints. We recommend that you use same configurations on the primary and secondary namespaces and on virtual networks in which private endpoints are created.
+If you try to create a pairing between a primary namespace with a private endpoint and a secondary namespace without a private endpoint, the pairing fails. The pairing succeeds only if both primary and secondary namespaces have private endpoints. We recommend that you use the same configurations on the primary and secondary namespaces, and on the virtual networks in which the private endpoints are created.
> [!NOTE]
-> When you try to pair the primary namespace with a private endpoint and the secondary namespace, the validation process only checks whether a private endpoint exists on the secondary namespace. It doesn't check whether the endpoint works or will work after failover. It's your responsibility to ensure that the secondary namespace with private endpoint will work as expected after failover.
+> When you try to pair the primary namespace with a private endpoint and the secondary namespace, the validation process only checks whether a private endpoint exists on the secondary namespace. It doesn't check whether the endpoint works, or whether it continues to work after failover. It's your responsibility to ensure that the secondary namespace with a private endpoint works as expected after failover.
> > To test that the private endpoint configurations are same, send a [Get queues](/rest/api/servicebus/controlplane-stable/queues/get) request to the secondary namespace from outside the virtual network, and verify that you receive an error message from the service. ### Existing pairings
-If pairing between primary and secondary namespace already exists, private endpoint creation on the primary namespace will fail. To resolve, create a private endpoint on the secondary namespace first and then create one for the primary namespace.
+If a pairing between the primary and secondary namespaces already exists, private endpoint creation on the primary namespace fails. To resolve the issue, create a private endpoint on the secondary namespace first, and then create one for the primary namespace.
> [!NOTE] > While we allow read-only access to the secondary namespace, updates to the private endpoint configurations are permitted.
If pairing between primary and secondary namespace already exists, private endpo
### Recommended configuration When creating a disaster recovery configuration for your application and Service Bus, you must create private endpoints for both primary and secondary Service Bus namespaces against virtual networks hosting both primary and secondary instances of your application.
-Let's say you have two virtual networks: VNET-1, VNET-2 and these primary and second namespaces: ServiceBus-Namespace1-Primary, ServiceBus-Namespace2-Secondary. You need to do the following steps:
+Let's say you have two virtual networks, VNET-1 and VNET-2, and these primary and secondary namespaces: `ServiceBus-Namespace1-Primary`, `ServiceBus-Namespace2-Secondary`. Complete the following steps:
-- On ServiceBus-Namespace1-Primary, create two private endpoints that use subnets from VNET-1 and VNET-2-- On ServiceBus-Namespace2-Secondary, create two private endpoints that use the same subnets from VNET-1 and VNET-2
+- On `ServiceBus-Namespace1-Primary`, create two private endpoints that use subnets from VNET-1 and VNET-2
+- On `ServiceBus-Namespace2-Secondary`, create two private endpoints that use the same subnets from VNET-1 and VNET-2
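If you prefer to script these four private endpoints, the sketch below creates one of them with Az PowerShell; repeat it for each namespace and virtual network combination. Resource group, subscription ID, and region are placeholders:

```powershell
# Sketch: one private endpoint for the primary namespace in VNET-1; repeat per namespace/VNet pair.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "<rg>" -Name "VNET-1"
$subnet = $vnet.Subnets[0]   # choose the subnet intended for private endpoints

$connection = New-AzPrivateLinkServiceConnection -Name "sb-primary-connection" `
    -PrivateLinkServiceId "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.ServiceBus/namespaces/ServiceBus-Namespace1-Primary" `
    -GroupId "namespace"

New-AzPrivateEndpoint -ResourceGroupName "<rg>" -Name "pe-primary-vnet1" `
    -Location "<region>" -Subnet $subnet -PrivateLinkServiceConnection $connection
```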
![Private endpoints and virtual networks](./media/service-bus-geo-dr/private-endpoints-virtual-networks.png) The advantage of this approach is that failover can happen at the application layer independently of the Service Bus namespace. Consider the following scenarios:
-**Application-only failover:** Here, the application won't exist in VNET-1 but will move to VNET-2. As both private endpoints are configured on both VNET-1 and VNET-2 for both primary and secondary namespaces, the application will just work.
+**Application-only failover:** Here, the application fails over from VNET-1 to VNET-2. Because private endpoints are configured on both VNET-1 and VNET-2 for both primary and secondary namespaces, the application just works.
-**Service Bus namespace-only failover**: Here again, since both private endpoints are configured on both virtual networks for both primary and secondary namespaces, the application will just work.
+**Service Bus namespace-only failover**: Here again, since both private endpoints are configured on both virtual networks for both primary and secondary namespaces, the application just works.
> [!NOTE] > For guidance on geo-disaster recovery of a virtual network, see [Virtual Network - Business Continuity](../virtual-network/virtual-network-disaster-recovery-guidance.md).
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
If you want to find a list of all the available Service Fabric runtime versions
| 10.1 RTO<br>10.1.1541.9590 | 9.1 CU6<br>9.1.1851.9590 | 9.0 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 10.0 CU1<br>10.0.1949.9590 | 9.0 CU10<br>9.0.1553.9590 | 9.0 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 10.0 RTO<br>10.0.1816.9590 | 9.0 CU10<br>9.0.1553.9590 | 9.0 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
-| 9.1 CU7<br>9.1.1993.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU6<br>9.1.1851.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU5<br>9.1.1833.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU4<br>9.1.1799.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU3<br>9.1.1653.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU2<br>9.1.1583.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU1<br>9.1.1436.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | January 31, 2024 |
+| 9.1 CU7<br>9.1.1993.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU6<br>9.1.1851.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU5<br>9.1.1833.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU4<br>9.1.1799.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU3<br>9.1.1653.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU2<br>9.1.1583.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU1<br>9.1.1436.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
| 9.1 RTO<br>9.1.1390.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 | | 9.0 CU12<br>9.0.1672.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | January 1, 2024 | | 9.0 CU11<br>9.0.1569.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 |
Support for Service Fabric on a specific OS ends when support for the OS version
| 10.1 RTO<br>10.1.1507.1 | 9.1 CU6<br>9.1.1642.1 | 9.0 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | | 10.0 CU1<br>10.0.1829.1 | 9.0 CU10<br>9.0.1489.1 | 9.0 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | | 10.0 RTO<br>10.0.1728.1 | 9.0 CU10<br>9.0.1489.1 | 9.0 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
-| 9.1 CU7<br>9.1.1740.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU6<br>9.1.1642.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU5<br>9.1.1625.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU4<br>9.1.1592.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU3<br>9.1.1457.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU2<br>9.1.1388.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | January 31, 2024 |
-| 9.1 CU1<br>9.1.1230.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | January 31, 2024 |
+| 9.1 CU7<br>9.1.1740.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU6<br>9.1.1642.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU5<br>9.1.1625.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU4<br>9.1.1592.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU3<br>9.1.1457.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU2<br>9.1.1388.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU1<br>9.1.1230.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | April 30, 2024 |
| 9.1 RTO<br>9.1.1206.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | April 30, 2024 | | 9.0 CU12<br>9.0.1554.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | .NET 6 | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | January 1, 2023 | | 9.0 CU11<br>9.0.1503.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | .NET 6 | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 |
site-recovery Replication Appliance Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/replication-appliance-support-matrix.md
E:\ <br>
#### If Antivirus software is active on source machine
-If source machine has an Antivirus software active, installation folder should be excluded. So, exclude folder C:\ProgramData\ASR\agent for smooth replication.
+If antivirus software is active on the source machine, exclude the installation folder `C:\Program Files (x86)\Microsoft Azure Site Recovery\` to ensure smooth replication.
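If the machine runs Microsoft Defender Antivirus, one way to add this exclusion is with the `Add-MpPreference` cmdlet, as in the sketch below; other antivirus products have their own exclusion mechanisms:

```powershell
# Exclude the Site Recovery installation folder from Microsoft Defender Antivirus scans.
Add-MpPreference -ExclusionPath "C:\Program Files (x86)\Microsoft Azure Site Recovery\"
```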
## Sizing and capacity
storage Archive Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-blob.md
Previously updated : 08/24/2022 Last updated : 11/28/2023 ms.devlang: powershell, azurecli
N/A
When moving a large number of blobs to the archive tier, use a batch operation for optimal performance. A batch operation sends multiple API calls to the service with a single request. The suboperations supported by the [Blob Batch](/rest/api/storageservices/blob-batch) operation include [Delete Blob](/rest/api/storageservices/delete-blob) and [Set Blob Tier](/rest/api/storageservices/set-blob-tier).
-> [!NOTE]
-> The [Set Blob Tier](/rest/api/storageservices/set-blob-tier) suboperation of the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchical namespace.
- To archive blobs with a batch operation, use one of the Azure Storage client libraries. The following code example shows how to perform a basic batch operation with the .NET client library: :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/AccessTiers.cs" id="Snippet_BulkArchiveContainerContents":::
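If you're working from PowerShell rather than a client library, a per-blob loop can archive a container's contents at smaller scale. Note this isn't a true batch: each call is a separate Set Blob Tier request. The sketch assumes a recent Az.Storage module in which `Get-AzStorageBlob` exposes a `BlobClient` property, and uses placeholder account and container names:

```powershell
# Sketch: archive every blob in a container, one Set Blob Tier call per blob.
$ctx = New-AzStorageContext -StorageAccountName "<account-name>" -UseConnectedAccount

Get-AzStorageBlob -Container "<container-name>" -Context $ctx |
    ForEach-Object { $_.BlobClient.SetAccessTier("Archive") }
```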
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
N/A
To rehydrate a large number of blobs at one time, call the [Blob Batch](/rest/api/storageservices/blob-batch) operation to call [Set Blob Tier](/rest/api/storageservices/set-blob-tier) as a bulk operation.
-> [!NOTE]
-> Rehydrating blobs by calling the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchial namespace.
- For a code example that shows how to perform the batch operation, see [AzBulkSetBlobTier](/samples/azure/azbulksetblobtier/azbulksetblobtier/). - ## Check the status of a rehydration operation While the blob is rehydrating, you can check its status and rehydration priority using the Azure portal, PowerShell, or Azure CLI. The status property may return *rehydrate-pending-to-hot* or *rehydrate-pending-to-cool*, depending on the target tier for the rehydration operation. The rehydration priority property returns either *Standard* or *High*.
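As a PowerShell sketch of that status check (placeholder names; assumes a recent Az.Storage module), inspecting a single blob might look like this:

```powershell
# Sketch: inspect the access tier and rehydration status of one blob.
$ctx  = New-AzStorageContext -StorageAccountName "<account-name>" -UseConnectedAccount
$blob = Get-AzStorageBlob -Container "<container-name>" -Blob "<blob-name>" -Context $ctx

$blob.BlobProperties.AccessTier      # current tier, for example Archive
$blob.BlobProperties.ArchiveStatus   # for example rehydrate-pending-to-hot; empty when complete
```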
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md
Title: Authorize access to blobs using Active Directory
+ Title: Authorize access to blobs using Microsoft Entra ID
description: Authorize access to Azure blobs using Microsoft Entra ID. Assign Azure roles for access rights. Access data with a Microsoft Entra account.
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
Each inventory run for a rule generates the following files:
## Pricing and billing
-Pricing for inventory is based on the number of blobs and containers that are scanned during the billing period. The [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) page shows the price per one million objects scanned. For example, if the price to scan one million objects is $0.003, your account contains three million objects, and you produce four reports in a month, then your bill would be 4 * 3 * $0.003 = $0.036.
+Pricing for inventory is based on the number of blobs and containers that are scanned during the billing period. The [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) page shows the price per one million objects scanned. For example, if the price to scan one million objects is `$0.003`, your account contains three million objects, and you produce four reports in a month, then your bill would be `4 * 3 * $0.003 = $0.036`.
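As a sanity check, the example reduces to reports per month × objects (in millions) × price per million objects scanned; the prices here are illustrative only:

```powershell
# The worked example above, as arithmetic (illustrative prices, not a rate card).
$pricePerMillionObjects = 0.003
$objectsInMillions      = 3
$reportsPerMonth        = 4
$reportsPerMonth * $objectsInMillions * $pricePerMillionObjects   # 0.036 (USD)
```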
After inventory files are created, additional standard data storage and operations charges will be incurred for storing, reading, and writing the inventory-generated files in the account.
An object replication policy can prevent an inventory job from writing inventory
- [Calculate the count and total size of blobs per container](calculate-blob-count-size.md) - [Tutorial: Analyze blob inventory reports](storage-blob-inventory-report-analytics.md) - [Manage the Azure Blob Storage lifecycle](./lifecycle-management-overview.md)
+- [Blob Inventory FAQ](storage-blob-faq.yml#azure-storage-blob-inventory)
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
description: Determine the level of support for each storage account feature giv
Previously updated : 07/28/2023 Last updated : 11/28/2023
The following table describes whether a feature is supported in a standard gener
| Storage feature | Default | HNS | NFS | SFTP | ||-|||--|
-| [Access tiers (hot, cool, cold, and archive)](access-tiers-overview.md) | &#x2705; | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> |
+| [Access tiers (hot, cool, cold, and archive)](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Microsoft Entra security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> | | [Azure DNS Zone endpoints (preview)](../common/storage-account-overview.md?toc=/azure/storage/blobs/toc.json#storage-account-endpoints) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Blob inventory](blob-inventory.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
The following table describes whether a feature is supported in a standard gener
<sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
-<sup>3</sup> Setting the tier of a blob by using the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchical namespace.
- ## Premium block blob accounts The following table describes whether a feature is supported in a premium block blob account when you enable a hierarchical namespace (HNS), NFS 3.0 protocol, or SFTP.
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
Title: Frequently asked questions (FAQ) for Azure Files
description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. Previously updated : 10/30/2023 Last updated : 11/28/2023
* <a id="ad-file-mount-cname"></a> **Can I use the canonical name (CNAME) to mount an Azure file share while using identity-based authentication?**
- No, this scenario isn't currently supported in single-forest AD environments. This is because when receiving the mount request, Azure Files depends on the Kerberos ticket's server name field to determine what storage account the request is intended for. If `storageaccount.file.core.windows.net` isn't present in the Kerberos ticket as the server name, then the service can't decide which storage account the request is for and is therefore unable to set up an SMB session for the user.
-
- As an alternative to CNAME, you can use DFS Namespaces with SMB Azure file shares. To learn more, see [How to use DFS Namespaces with Azure Files](files-manage-namespaces.md).
-
- As a workaround for mounting the file share, see the instructions in [Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain](storage-files-identity-ad-ds-mount-file-share.md#mount-the-file-share-from-a-non-domain-joined-vm-or-a-vm-joined-to-a-different-ad-domain).
+ Yes, this scenario is now supported in both [single-forest](storage-files-identity-ad-ds-mount-file-share.md#mount-file-shares-using-custom-domain-names) and [multi-forest](storage-files-identity-multiple-forests.md) environments for SMB Azure file shares. However, Azure Files only supports configuring CNAMEs using the storage account name as a domain prefix. If you don't want to use the storage account name as a prefix, consider using [DFS Namespaces](files-manage-namespaces.md) instead.
* <a id="ad-vm-subscription"></a> **Can I access Azure file shares with Microsoft Entra credentials from a VM under a different subscription?**
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
description: Learn how to configure Windows ACLs for directory and file level pe
Previously updated : 11/21/2023 Last updated : 11/28/2023 recommendations: false
If you're logged on to a domain-joined Windows client, you can use Windows File
## Next steps
-Now that the feature is enabled and configured, you can [mount a file share from a domain-joined VM](storage-files-identity-ad-ds-mount-file-share.md).
+Now that you've enabled and configured identity-based authentication with AD DS, you can [mount a file share](storage-files-identity-ad-ds-mount-file-share.md).
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
Title: Mount Azure file share to an AD DS-joined VM
-description: Learn how to mount an Azure file share to your on-premises Active Directory Domain Services domain-joined machines.
+ Title: Mount SMB Azure file share using AD DS credentials
+description: Learn how to mount an SMB Azure file share using your on-premises Active Directory Domain Services credentials.
Previously updated : 11/21/2023 Last updated : 11/28/2023 recommendations: false
recommendations: false
Before you begin this article, make sure you've read [configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
-The process described in this article verifies that your SMB file share and access permissions are set up correctly and that you can access an Azure file share from a domain-joined VM. Remember that share-level role assignment can take some time to take effect.
+The process described in this article verifies that your SMB file share and access permissions are set up correctly and that you can mount your SMB Azure file share. Remember that share-level role assignment can take some time to take effect.
Sign in to the client using the credentials of the identity that you granted permissions to.
Before you can mount the Azure file share, make sure you've gone through the fol
- If you're mounting the file share from a client that has previously connected to the file share using your storage account key, make sure that you've disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on how to remove cached credentials with storage account key and delete existing SMB connections before initializing a new connection with AD DS or Microsoft Entra credentials, follow the two-step process on the [FAQ page](./storage-files-faq.md#identity-based-authentication). - Your client must have unimpeded network connectivity to your AD DS. If your machine or VM is outside of the network managed by your AD DS, you'll need to enable VPN to reach AD DS for authentication.
-> [!NOTE]
-> Using the canonical name (CNAME) to mount an Azure file share isn't currently supported while using identity-based authentication in single-forest AD environments.
- ## Mount the file share from a domain-joined VM Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
-Mount Azure file shares using `file.core.windows.net`, even if you set up a private endpoint for your share.
+Unless you're using [custom domain names](#mount-file-shares-using-custom-domain-names), you should mount Azure file shares using the suffix `file.core.windows.net`, even if you set up a private endpoint for your share.
```powershell $connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
If you run into issues, see [Unable to mount Azure file shares with AD credentia
## Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain
-Non-domain-joined VMs or VMs that are joined to a different AD domain than the storage account can access Azure file shares if they have line-of-sight to the domain controllers and provide explicit credentials. The user accessing the file share must have an identity and credentials in the AD domain that the storage account is joined to.
+Non-domain-joined VMs or VMs that are joined to a different AD domain than the storage account can access Azure file shares if they have unimpeded network connectivity to the domain controllers and provide explicit credentials. The user accessing the file share must have an identity and credentials in the AD domain that the storage account is joined to.
To mount a file share from a non-domain-joined VM, use the notation **username@domainFQDN**, where **domainFQDN** is the fully qualified domain name. This will allow the client to contact the domain controller to request and receive Kerberos tickets. You can get the value of **domainFQDN** by running `(Get-ADDomain).Dnsroot` in Active Directory PowerShell.
For example:
net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:<username@domainFQDN> ```
+## Mount file shares using custom domain names
+
+If you don't want to mount Azure file shares using the suffix `file.core.windows.net`, you can modify the suffix of the storage account name associated with the Azure file share, and then add a canonical name (CNAME) record to route the new suffix to the endpoint of the storage account. The following instructions are for single-forest environments only. To learn how to configure environments that have two or more forests, see [Use Azure Files with multiple Active Directory forests](storage-files-identity-multiple-forests.md).
+
+> [!NOTE]
+> Azure Files only supports configuring CNAMEs using the storage account name as a domain prefix. If you don't want to use the storage account name as a prefix, consider using [DFS Namespaces](files-manage-namespaces.md).
+
+In this example, we have the Active Directory domain *onpremad1.com*, and we have a storage account called *mystorageaccount*, which contains SMB Azure file shares. First, we need to modify the SPN suffix of the storage account to map *mystorageaccount.onpremad1.com* to *mystorageaccount.file.core.windows.net*.
+
+This allows clients to mount the share with `net use \\mystorageaccount.onpremad1.com`, because clients in *onpremad1* know to search *onpremad1.com* to find the proper resource for that storage account.
+
+To use this method, complete the following steps:
+
+1. Make sure you've set up identity-based authentication and synced your AD user account(s) to Microsoft Entra ID.
+
+2. Modify the SPN of the storage account using the `setspn` tool. You can find `<DomainDnsRoot>` by running the following Active Directory PowerShell command: `(Get-AdDomain).DnsRoot`
+
+ ```
+ setspn -s cifs/<storage-account-name>.<DomainDnsRoot> <storage-account-name>
+ ```
+
+3. Add a CNAME entry using Active Directory DNS Manager. Follow these steps for each storage account in the domain that the storage account is joined to. If you're using a private endpoint, add the CNAME entry to map to the private endpoint name. (A PowerShell alternative is shown after these steps.)
+
+ 1. Open Active Directory DNS Manager.
+ 1. Go to your domain (for example, **onpremad1.com**).
+ 1. Go to "Forward Lookup Zones".
+ 1. Select the node named after your domain (for example, **onpremad1.com**) and right-click **New Alias (CNAME)**.
+ 1. For the alias name, enter your storage account name.
+ 1. For the fully qualified domain name (FQDN), enter **`<storage-account-name>`.`<domain-name>`**, such as **mystorageaccount.onpremad1.com**. The hostname part of the FQDN must match the storage account name. Otherwise you'll get an access denied error during the SMB session setup.
+ 1. For the target host FQDN, enter **`<storage-account-name>`.file.core.windows.net**
+ 1. Select **OK**.
+
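As an alternative to the DNS Manager UI, the same CNAME record can be created with the DnsServer PowerShell module. This sketch reuses the hypothetical names from the example above and must run on a DNS server, or on a machine with the RSAT DNS tools:

```powershell
# Sketch: CNAME mystorageaccount.onpremad1.com -> mystorageaccount.file.core.windows.net
Add-DnsServerResourceRecordCName -ZoneName "onpremad1.com" `
    -Name "mystorageaccount" `
    -HostNameAlias "mystorageaccount.file.core.windows.net"
```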
+You should now be able to mount the file share using *storageaccount.domainname.com*. You can also mount the file share using the storage account key.
+ ## Next steps If the identity you created in AD DS to represent the storage account is in a domain or OU that enforces password rotation, you might need to [update the password of your storage account identity in AD DS](storage-files-identity-ad-ds-update-password.md).
storage Storage Files Identity Auth Domain Services Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-domain-services-enable.md
description: Learn how to enable identity-based authentication over Server Messa
Previously updated : 11/22/2023 Last updated : 11/28/2023 recommendations: false
Sign in to the domain-joined VM using the Microsoft Entra identity to which you
Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. Because you've been authenticated, you won't need to provide the storage account key. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace `<storage-account-name>` and `<file-share-name>` with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
-Always mount Azure file shares using `file.core.windows.net`, even if you set up a private endpoint for your share.
+Unless you're using [custom domain names](storage-files-identity-ad-ds-mount-file-share.md#mount-file-shares-using-custom-domain-names), you should mount Azure file shares using the suffix `file.core.windows.net`, even if you set up a private endpoint for your share.
```powershell $connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
storage Storage Files Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md
description: Learn how to migrate to Azure file shares and find your migration g
Previously updated : 11/21/2023 Last updated : 11/28/2023
The following table lists supported metadata for Azure Files.
## Migration guides
-The following table lists detailed migration guides.
+The following table lists suggested tool combinations for migrating to SMB Azure file shares.
How to use the table:
How to use the table:
Select the target column that matches your choice.
-1. Within the intersection of source and target, a table cell lists available migration scenarios. Select one to directly link to the detailed migration guide.
+1. Within the intersection of source and target, a table cell lists available migration scenarios. Select one to directly link to the migration guide.
A scenario without a link doesn't yet have a published migration guide. Check this table occasionally for updates. New guides will be published when they're available.
A scenario without a link doesn't yet have a published migration guide. Check th
| | Tool combination:| Tool combination: | | Windows Server 2012 R2 and later | <ul><li>[Azure File Sync](../file-sync/file-sync-deployment-guide.md)</li><li>[Azure File Sync and Azure DataBox](storage-files-migration-server-hybrid-databox.md)</li></ul> | <ul><li>Via Azure Storage Mover</li><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li><li>Via Azure File Sync: Follow same steps as [Azure File Sync hybrid deployment](../file-sync/file-sync-deployment-guide.md) and [decommission server endpoint](../file-sync/file-sync-server-endpoint-delete.md) at the end.</li></ul> | | Windows Server 2012 and earlier | <ul><li>Via DataBox and Azure File Sync to recent server OS</li><li>Via Storage Migration Service to recent server with Azure File Sync, then upload</li></ul> | <ul><li>Via Azure Storage Mover</li><li>Via Storage Migration Service to recent server with Azure File Sync</li><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li></ul> |
-| Network-attached storage (NAS) | <ul><li>[Via Azure File Sync upload](storage-files-migration-nas-hybrid.md)</li><li>[Via DataBox + Azure File Sync](storage-files-migration-nas-hybrid-databox.md)</li></ul> | <ul><li>[Via DataBox](storage-files-migration-nas-cloud-databox.md)</li><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li></ul> |
-| Linux / Samba | <ul><li>[Azure File Sync and RoboCopy](storage-files-migration-linux-hybrid.md)</li></ul> | <ul><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li></ul> |
+| Network-attached storage (NAS) | <ul><li>[Via Azure File Sync upload](storage-files-migration-nas-hybrid.md)</li><li>[Via DataBox + Azure File Sync](storage-files-migration-nas-hybrid-databox.md)</li></ul> | <ul><li>Via Azure Storage Mover</li><li>[Via DataBox](storage-files-migration-nas-cloud-databox.md)</li><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li></ul> |
+| Linux (SMB only) | <ul><li>[Azure File Sync and RoboCopy](storage-files-migration-linux-hybrid.md)</li></ul> | <ul><li>Via Azure Storage Mover</li><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li></ul> |
## Migration toolbox
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
Previously updated : 11/21/2023 Last updated : 11/28/2023 # Kafka output from Azure Stream Analytics (Preview)
Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network docum
## Next steps
-> [!div class="nextstepaction"]
-> [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
-> [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md)
-> [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md)
+
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+* [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md)
+* [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md)
<!--Link references--> [stream.analytics.developer.guide]: ../stream-analytics-developer-guide.md
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
You can use four types of security protocols to connect to your Kafka clusters:
> [!NOTE] > For SASL_SSL and SASL_PLAINTEXT, Azure Stream Analytics supports only PLAIN SASL mechanism.
-> > You must upload certificates as secrets to key vault using Azure CLI.
+> You must upload certificates as secrets to Azure Key Vault using the Azure CLI.
|Property name |Description | |-|--|
Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network docum
## Next steps
-> [!div class="nextstepaction"]
-> [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
-> [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md)
-> [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md)
+
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+* [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md)
+* [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md)
<!--Link references-->
stream-analytics Stream Analytics Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-quick-create-portal.md
Title: Quickstart - Create a Stream Analytics job by using the Azure portal
description: This quickstart shows you how to get started by creating a Stream Analytic job, configuring inputs, outputs, and defining a query. Previously updated : 09/02/2022 Last updated : 11/28/2023
# Quickstart: Create a Stream Analytics job by using the Azure portal
-This quickstart shows you how to create a Stream Analytics job in the Azure portal. In this quickstart, you define a Stream Analytics job that reads real-time streaming data and filters messages with a temperature greater than 27. Your Stream Analytics job reads data from IoT Hub, transform the data, and write the output data to a container in blob storage. The input data used in this quickstart is generated by a Raspberry Pi online simulator.
+This quickstart shows you how to create a Stream Analytics job in the Azure portal. In this quickstart, you define a Stream Analytics job that reads real-time streaming data and filters messages with a temperature greater than 27. The Stream Analytics job reads data from IoT Hub, transforms the data, and writes the output data to a container in Azure Blob Storage. The input data used in this quickstart is generated by a Raspberry Pi online simulator.
## Before you begin If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
Before defining the Stream Analytics job, you should prepare the input data. The
:::image type="content" source="./media/stream-analytics-quick-create-portal/device-list.png" lightbox="./media/stream-analytics-quick-create-portal/device-list.png" alt-text="Screenshot showing the list of devices."::: 1. Select your device from the list.
-1. On the device page, select the copy button next to **Connection string - primary key**, and save it to a notepad to use later.
+1. On the device page, select the copy button next to **Primary Connection String**, and save it to a notepad to use later.
:::image type="content" source="./media/stream-analytics-quick-create-portal/save-iot-device-connection-string.png" lightbox="./media/stream-analytics-quick-create-portal/save-iot-device-connection-string.png" alt-text="Screenshot showing the copy button next to device connection string.":::
Before defining the Stream Analytics job, you should prepare the input data. The
1. On a separate tab of the same browser window or in a separate browser window, sign in to the [Azure portal](https://portal.azure.com). 2. Select **Create a resource** in the upper left-hand corner of the Azure portal.
-3. Select **Analytics** > **Stream Analytics job** from the results list.
+3. Select **Analytics** > **Stream Analytics job** from the results list. If you don't see **Stream Analytics job** in the list, search for **Stream Analytics job** using the search box at the top, and select it from the search results.
1. On the **New Stream Analytics job** page, follow these steps: 1. For **Subscription**, select your Azure subscription. 1. For **Resource group**, select the same resource that you used earlier in this quickstart.
Before defining the Stream Analytics job, you should prepare the input data. The
In this section, you configure an IoT Hub device input to the Stream Analytics job. Use the IoT Hub you created in the previous section of the quickstart.
-1. On the **Stream Analytics job** page, select **Input** under **Job topology** on the left menu.
-1. On the **Inputs** page, select **Add stream input** > **IoT Hub**.
+1. On the **Stream Analytics job** page, select **Inputs** under **Job topology** on the left menu.
+1. On the **Inputs** page, select **Add input** > **IoT Hub**.
:::image type="content" source="./media/stream-analytics-quick-create-portal/add-input-menu.png" lightbox="./media/stream-analytics-quick-create-portal/add-input-menu.png" alt-text="Screenshot showing the **Inputs** page with **Add stream input** > **IoT Hub** menu selected.**."::: 3. On the **IoT Hub** page, follow these steps:
In this section, you configure an IoT Hub device input to the Stream Analytics j
## Configure job output 1. Now, select **Outputs** under **Job topology** on the left menu.
-1. On the **Outputs** page, select **Add** > **Blob storage/ADLS Gen2**.
+1. On the **Outputs** page, select **Add output** > **Blob storage/ADLS Gen2**.
:::image type="content" source="./media/stream-analytics-quick-create-portal/add-output-menu.png" alt-text="Screenshot showing the **Outputs** page with **Add** -> **Blob storage** option selected on the menu."::: 1. On the **New output** page for **Blob storage/ADLS Gen2**, follow these steps:
In this section, you configure an IoT Hub device input to the Stream Analytics j
## Start the Stream Analytics job and check the output
-1. Return to the job overview page in the Azure portal, and select **Start**.
+1. Return to the job overview page in the Azure portal, and select **Start job**.
:::image type="content" source="./media/stream-analytics-quick-create-portal/start-job-menu.png" alt-text="Screenshot showing the **Overview** page with **Start** button selected."::: 1. On the **Start job** page, confirm that **Now** is selected for **Job output start time**, and then select **Start** at the bottom of the page.
synapse-analytics Quickstart Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace.md
This quickstart describes the steps to create an Azure Synapse workspace by usin
1. In the **Basics** tab, give the workspace a unique name. We'll use **mysworkspace** in this document 1. You need an ADLSGEN2 account to create a workspace. The simplest choice is to create a new one. If you want to re-use an existing one you'll need to perform some additional configuration. 1. OPTION 1 Creating a new ADLSGEN2 account
- 1. Under **Select Data Lake Storage Gen 2**, click **Create New** and name it **contosolake**.
- 1. Under **Select Data Lake Storage Gen 2**, click **File System** and name it **users**.
-1. OPTION 2 See the **Prepare a Storage Account** instructions at the bottom of this document.
+ 1. Under **Select Data Lake Storage Gen 2 / Account Name**, click **Create New** and provide a globally unique name, such as **contosolake**.
+ 1. Under **Select Data Lake Storage Gen 2 / File system name**, click **File System** and name it **users**.
+1. OPTION 2 See the [**Prepare a Storage Account**](#prepare-an-existing-storage-account-for-use-with-azure-synapse-analytics) instructions at the bottom of this document.
1. Your Azure Synapse workspace will use this storage account as the "primary" storage account and the container to store workspace data. The workspace stores data in Apache Spark tables. It stores Spark application logs under a folder called **/synapse/workspacename**. 1. Select **Review + create** > **Create**. Your workspace is ready in a few minutes.
synapse-analytics Sql Data Warehouse Get Started Connect Sqlcmd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-connect-sqlcmd.md
C:\>sqlcmd -S MySqlDw.database.windows.net -d Adventure_Works -G -I
``` > [!NOTE]
-> You need to [enable Microsoft Entra authentication](sql-data-warehouse-authentication.md) to authenticate using Active Directory.
+> You need to [enable Microsoft Entra authentication](sql-data-warehouse-authentication.md) to authenticate using Microsoft Entra ID.
## 2. Query
virtual-machines Copy Files To Vm Using Scp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/copy-files-to-vm-using-scp.md
The `-r` flag instructs SCP to recursively copy the files and directories from t
## Next steps
-* [Manage users, SSH, and check or repair disks on Azure Linux VMs using the 'VMAccess' Extension](./extensions/vmaccess.md)
+* [Manage users, SSH, and check or repair disks on Azure Linux VMs using the 'VMAccess' Extension](./extensions/vmaccess-linux.md)
virtual-machines Disks Convert Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-convert-types.md
Previously updated : 08/18/2023 Last updated : 11/28/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows
-There are five disk types of Azure managed disks: Azure Ultra Disks, Premium SSD v2, premium SSD, Standard SSD, and Standard HDD. You can easily switch between Premium SSD, Standard SSD, and Standard HDD based on your performance needs. Premium SSD and Standard SSD are also available with [Zone-redundant storage](disks-redundancy.md#zone-redundant-storage-for-managed-disks). You aren't yet able to switch from or to an Ultra Disk or a Premium SSD v2, you must deploy a new one.
+There are five disk types of Azure managed disks: Azure Ultra Disks, Premium SSD v2, Premium SSD, Standard SSD, and Standard HDD. You can easily switch between Premium SSD, Standard SSD, and Standard HDD based on your performance needs. Premium SSD and Standard SSD are also available with [Zone-redundant storage](disks-redundancy.md#zone-redundant-storage-for-managed-disks). You can't yet switch from or to an Ultra Disk or a Premium SSD v2; instead, you must deploy a new disk using a snapshot of an existing disk. See [Migrate to Premium SSD v2 or Ultra Disk](#migrate-to-premium-ssd-v2-or-ultra-disk) for details.
This functionality isn't supported for unmanaged disks. But you can easily convert an unmanaged disk to a managed disk with [CLI](linux/convert-unmanaged-to-managed-disks.md) or [PowerShell](windows/convert-unmanaged-to-managed-disks.md) to be able to switch between disk types.
Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
# [Azure CLI](#tab/azure-cli) ---------- ```azurecli #resource group that contains the managed disk
The disk type conversion is instantaneous. You can start your VM after the conve
## Migrate to Premium SSD v2 or Ultra Disk
-Currently, you can only migrate an existing disk to either an Ultra Disk or a Premium SSD v2 through snapshots stored on Standard Storage (Incremental Standard HDD Snapshot). Migration with snapshots stored on Premium storage and other options is not supported.
+Currently, you can only migrate an existing disk to either an Ultra Disk or a Premium SSD v2 through snapshots stored on Standard Storage (Incremental Standard HDD Snapshot). Migration with snapshots stored on Premium storage and other options isn't supported.
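A sketch of that snapshot path with Az PowerShell follows. Disk names, snapshot name, size, and the target zone are placeholders, and you should confirm Premium SSD v2 availability in your region and zone before running it:

```powershell
# Sketch: migrate a disk to Premium SSD v2 via an incremental snapshot on Standard storage.
$rg   = "<resource-group>"
$disk = Get-AzDisk -ResourceGroupName $rg -DiskName "<source-disk>"

# Incremental snapshot stored on Standard storage, as the migration path requires.
$snapConfig = New-AzSnapshotConfig -Location $disk.Location -CreateOption Copy `
    -SourceUri $disk.Id -Incremental -SkuName Standard_LRS
$snapshot = New-AzSnapshot -ResourceGroupName $rg -SnapshotName "<snapshot-name>" -Snapshot $snapConfig

# New Premium SSD v2 disk created from the snapshot, pinned to an availability zone.
$diskConfig = New-AzDiskConfig -Location $disk.Location -Zone "1" -SkuName PremiumV2_LRS `
    -CreateOption Copy -SourceResourceId $snapshot.Id -DiskSizeGB $disk.DiskSizeGB
New-AzDisk -ResourceGroupName $rg -DiskName "<new-premium-v2-disk>" -Disk $diskConfig
```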
Both Premium SSD v2 disks and Ultra Disks have their own set of restrictions. For example, neither can be used as an OS disk, and also aren't available in all regions. See the [Premium SSD v2 limitations](disks-deploy-premium-v2.md#limitations) and [Ultra Disk GA scope and limitations](disks-enable-ultra-ssd.md#ga-scope-and-limitations) sections of their articles for more information.
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Azure Premium SSD v2 is designed for IO-intense enterprise workloads that requir
Premium SSD v2 supports a 4k physical sector size by default, but can be configured to use a 512E sector size as well. While most applications are compatible with 4k sector sizes, some require 512 byte sector sizes. Oracle Database, for example, requires release 12.2 or later in order to support 4k native disks.
-## Limitations
--
-### Regional availability
-- ## Prerequisites - Install either the latest [Azure CLI](/cli/azure/install-azure-cli) or the latest [Azure PowerShell module](/powershell/azure/install-azure-powershell). ## Determine region availability programmatically
-To use a Premium SSD v2, you need to determine the regions and zones where it's supported. Not every region and zones support Premium SSD v2. To determine regions, and zones support premium SSD v2, replace `yourSubscriptionId` then run the following command:
+To use a Premium SSD v2, you need to determine the regions and zones where it's supported. Not every region and zone supports Premium SSD v2. For a list of regions, see [Regional availability](#regional-availability).
+
+To determine which regions and zones support Premium SSD v2, replace `yourSubscriptionId` and then run the following command:
# [Azure CLI](#tab/azure-cli)
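
A minimal sketch of such a query with `az vm list-skus`, assuming the `PremiumV2_LRS` SKU name:

```azurecli
# Set the subscription context (replace yourSubscriptionId)
az account set --subscription "yourSubscriptionId"

# List the regions and zones that offer Premium SSD v2 (PremiumV2_LRS)
az vm list-skus --resource-type disks --query "[?name=='PremiumV2_LRS'].{Region:locationInfo[0].location, Zones:locationInfo[0].zones}"
```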
Currently, adjusting disk performance is only supported with Azure CLI or the Azure PowerShell module.
+## Limitations
++
+### Regional availability
++
## Next steps

Add a data disk using either the [Azure portal](linux/attach-disk-portal.md), [CLI](linux/add-disk.md), or [PowerShell](windows/attach-disk-ps.md).
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/overview.md
Otherwise, specific troubleshooting information for each extension can be found
| microsoft.enterprisecloud.monitoring.omsagentforlinux | [Azure Monitor for Linux](oms-linux.md#troubleshoot-and-support) | microsoft.enterprisecloud.monitoring.microsoftmonitoringagent | [Azure Monitor for Windows](oms-windows.md#troubleshoot-and-support) | | stackify.linuxagent.extension.stackifylinuxagentextension | [Stackify Retrace for Linux](stackify-retrace-linux.md#troubleshoot-and-support) |
-| vmaccessforlinux.microsoft.ostcextensions | [Reset password for Linux](vmaccess.md#troubleshoot-and-support) |
+| vmaccessforlinux.microsoft.ostcextensions | [VMAccess for Linux](vmaccess-linux.md#troubleshoot-and-support) |
| microsoft.recoveryservices.vmsnapshot | [Snapshot for Linux](vmsnapshot-linux.md#troubleshoot-and-support) | | microsoft.recoveryservices.vmsnapshot | [Snapshot for Windows](vmsnapshot-windows.md#troubleshoot-and-support) |
virtual-machines Vmaccess Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess-linux.md
+
+ Title: Reset access to an Azure Linux VM
+description: Learn how to manage administrative users and reset access on Linux VMs by using the VMAccess extension and the Azure CLI.
++++++ Last updated : 04/12/2023+++
+# VMAccess Extension for Linux
+
+The VMAccess Extension is used to manage administrative users, configure SSH, and check or repair disks on Azure Linux virtual machines. The extension integrates with Azure Resource Manager templates. It can also be invoked using Azure CLI, Azure PowerShell, the Azure portal, and the Azure Virtual Machines REST API.
+
+This article describes how to run the VMAccess Extension from the Azure CLI and through an Azure Resource Manager template. This article also provides troubleshooting steps for Linux systems.
+
+> [!NOTE]
+> If you use the VMAccess extension to reset the password of your VM after you install the Microsoft Entra Login extension, rerun the Microsoft Entra Login extension to re-enable Microsoft Entra Login for your VM.
+
+## Prerequisites
+
+### Supported Linux distributions
+
+| **Linux Distro** | **x64** | **ARM64** |
+|:--|:--:|:--:|
+| Alma Linux | 9.x+ | 9.x+ |
+| CentOS | 7.x+, 8.x+ | 7.x+ |
+| Debian | 10+ | 11.x+ |
+| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| Azure Linux | 2.x | 2.x |
+| openSUSE | 12.3+ | Not Supported |
+| Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
+| Rocky Linux | 9.x+ | 9.x+ |
+| SLES | 12.x+, 15.x+ | 15.x SP4+ |
+| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
+
+### Tips
+* VMAccess is designed to regain access to a VM when that access is lost. Based on this principle, it grants sudo permission to the account specified in the username field. If you don't want a user to have sudo permissions, sign in to the VM and use built-in tools (for example, `usermod` and `chage`) to manage unprivileged users.
+* You can only have one version of the extension applied to a VM. To run a second action, update the existing extension with a new configuration.
+* During a user update, VMAccess alters the `sshd_config` file and takes a backup of it beforehand. To restore the original backed-up SSH configuration, run VMAccess with `restore_backup_ssh` set to `True`, as shown in the sketch after these tips.
+
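+A minimal sketch of that restore operation through `az vm extension set` (the resource names are placeholders):
+
+```azurecli-interactive
+az vm extension set \
+  --resource-group myResourceGroup \
+  --vm-name myVM \
+  --name VMAccessForLinux \
+  --publisher Microsoft.OSTCExtensions \
+  --version 1.5 \
+  --protected-settings '{"restore_backup_ssh": true}'
+```
+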
+## Extension schema
+
+The VMAccess Extension configuration includes settings for username, passwords, SSH keys, etc. You can store this information in configuration files, specify it on the command line, or include it in an Azure Resource Manager (ARM) template. The following JSON schema contains all the properties available to use in public and protected settings.
+
+```json
+{
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "name": "<name>",
+ "apiVersion": "2023-09-01",
+ "location": "<location>",
+ "dependsOn": [
+ "[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
+ ],
+ "properties": {
+ "publisher": "Microsoft.OSTCExtensions",
+ "type": "VMAccessForLinux",
+ "typeHandlerVersion": "1.5",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "check_disk": true,
+ "repair_disk": false,
+ "disk_name": "<disk-name>",
+ },
+ "protectedSettings": {
+ "username": "<username>",
+ "password": "<password>",
+ "ssh_key": "<ssh-key>",
+ "reset_ssh": false,
+ "remove_user": "<username>",
+ "expiration": "<expiration>",
+ "remove_prior_keys": false,
+ "restore_backup_ssh": true
+ }
+ }
+}
+```
+
+### Property values
+
+| Name | Value / Example | Data Type |
+| - | - | - |
+| apiVersion | 2023-09-01 | date |
+| publisher | Microsoft.OSTCExtensions | string |
+| type | VMAccessForLinux | string |
+| typeHandlerVersion | 1.5 | string |
+
+### Settings property values
+
+| Name | Data Type | Description |
+| - | - | - |
+| check_disk | boolean | Whether or not to check the disk (optional). Only one of `check_disk` and `repair_disk` can be set to `true`. |
+| repair_disk | boolean | Whether or not to repair the disk (optional). Only one of `check_disk` and `repair_disk` can be set to `true`. |
+| disk_name | string | Name of disk to repair (required when `repair_disk` is true). |
+| username | string | The name of the user to manage (required for all actions on a user account). |
+| password | string | The password to set for the user account. |
+| ssh_key | string | The SSH public key to add for the user account. The SSH key can be in `ssh-rsa`, `ssh-ed25519`, or `.pem` format. |
+| reset_ssh | boolean | Whether or not to reset the SSH configuration. If `true`, the `sshd_config` file is replaced with an internal resource file corresponding to the default SSH config for that distro. |
+| remove_user | string | The name of the user to remove. Can't be used with `reset_ssh`, `restore_backup_ssh`, and `password`. |
+| expiration | string | Expiration to set for the account, in the form `yyyy-mm-dd`. Defaults to never. |
+| remove_prior_keys | boolean | Whether or not to remove old SSH keys when adding a new one. Must be used with `ssh_key`. |
+| restore_backup_ssh | boolean | Whether or not to restore the original backed-up sshd_config. |
+
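+For example, a protected-settings payload that sets a user's password and an account expiration could look like the following sketch (the values are placeholders):
+
+```json
+{
+  "username": "azureuser",
+  "password": "<password>",
+  "expiration": "2025-12-31"
+}
+```
+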
+## Template deployment
+
+Azure VM Extensions can be deployed with Azure Resource Manager (ARM) templates. The JSON schema detailed in the previous section can be used in an ARM template to run the VMAccess Extension during the template's deployment. You can find a sample template that includes the VMAccess extension on [GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/demos/vmaccess-on-ubuntu/azuredeploy.json).
+
+The JSON configuration for a virtual machine extension must be nested inside the virtual machine resource fragment of the template: for a virtual machine, it goes inside the `"resources": []` array; for a virtual machine scale set, it goes under the `"virtualMachineProfile": { "extensionProfile": { "extensions": [] } }` object.
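+
+To illustrate the nesting for a virtual machine, the element placed inside the VM's `"resources": []` array could look like the following sketch (`myVM` and the credential values are placeholders, and the parent VM's own properties are omitted):
+
+```json
+{
+  "type": "extensions",
+  "apiVersion": "2023-09-01",
+  "name": "VMAccessForLinux",
+  "location": "[resourceGroup().location]",
+  "dependsOn": [
+    "[resourceId('Microsoft.Compute/virtualMachines', 'myVM')]"
+  ],
+  "properties": {
+    "publisher": "Microsoft.OSTCExtensions",
+    "type": "VMAccessForLinux",
+    "typeHandlerVersion": "1.5",
+    "autoUpgradeMinorVersion": true,
+    "protectedSettings": {
+      "username": "azureuser",
+      "ssh_key": "<ssh-public-key>"
+    }
+  }
+}
+```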
+
+## Azure CLI deployment
+
+### Using Azure CLI VM user commands
+
+The following CLI commands under [az vm user](/cli/azure/vm/user) use the VMAccess Extension. To use these commands, you need to [install the latest Azure CLI](/cli/azure/install-az-cli2) and sign in to an Azure account by using [az login](/cli/azure/reference-index).
+
+#### Update SSH key
+
+The following example updates the SSH key for the user `azureUser` on the VM named `myVM`:
+
+```azurecli-interactive
+az vm user update \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --username azureUser \
+ --ssh-key-value ~/.ssh/id_rsa.pub
+```
+
+> [!NOTE]
+> The [`az vm user update` command](/cli/azure/vm) appends the new public key text to the `~/.ssh/authorized_keys` file for the admin user on the VM. This command doesn't replace or remove any existing SSH keys. This command doesn't remove prior keys set at deployment time or subsequent updates by using the VMAccess Extension.
++
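+If you do need to replace all existing keys with the new one, a hedged sketch using the extension's `remove_prior_keys` setting (the resource names and key value are placeholders) is:
+
+```azurecli-interactive
+az vm extension set \
+  --resource-group myResourceGroup \
+  --vm-name myVM \
+  --name VMAccessForLinux \
+  --publisher Microsoft.OSTCExtensions \
+  --version 1.5 \
+  --protected-settings '{"username":"azureUser","ssh_key":"<ssh-public-key>","remove_prior_keys":true}'
+```
+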
+#### Reset password
+
+The following example resets the password for the user `azureUser` on the VM named `myVM`:
+
+```azurecli-interactive
+az vm user update \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --username azureUser \
+ --password myNewPassword
+```
+
+#### Restart SSH
+
+The following example restarts the SSH daemon and resets the SSH configuration to default values on a VM named `myVM`:
+
+```azurecli-interactive
+az vm user reset-ssh \
+ --resource-group myResourceGroup \
+ --name myVM
+```
+
+> [!NOTE]
+> The [`az vm user reset-ssh` command](/cli/azure/vm) replaces the sshd_config file with a default config file from the internal resources directory. This command doesn't restore the original SSH configuration found on the virtual machine.
+
+#### Create an administrative/sudo user
+
+The following example creates a user named `myNewUser` with sudo permissions. The account uses an SSH key for authentication on the VM named `myVM`. This method helps you regain access to a VM when current credentials are lost or forgotten. As a best practice, accounts with sudo permissions should be limited.
+
+```azurecli-interactive
+az vm user update \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --username myNewUser \
+ --ssh-key-value ~/.ssh/id_rsa.pub
+```
+
+#### Delete a user
+
+The following example deletes a user named `myNewUser` on the VM named `myVM`:
+
+```azurecli-interactive
+az vm user delete \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --username myNewUser
+```
+
+### Using Azure CLI VM/VMSS extension commands
+
+You can also use the [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) and [az vmss extension set](/cli/azure/vmss/extension#az-vmss-extension-set) commands to run the VMAccess Extension with the specified configuration.
+
+```azurecli-interactive
+az vm extension set \
+ --resource-group myResourceGroup \
+ --vm-name myVM \
+ --name VMAccessForLinux \
+ --publisher Microsoft.OSTCExtensions \
+ --version 1.5 \
+ --settings '{"check_disk":true}' \
+ --protected-settings '{"username":"user1","password":"userPassword"}'
+```
+
+The `--settings` and `--protected-settings` parameters also accept JSON file paths. For example, to update the SSH public key of a user, create a JSON file named `update_ssh_key.json` and add settings in the following format. Replace the values within the file with your own information:
+
+```json
+{
+ "username":"azureuser",
+ "ssh_key":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCZ3S7gGp3rcbKmG2Y4vGZFMuMZCwoUzZNGxxxxxx2XV2x9FfAhy8iGD+lF8UdjFX3t5ebMm6BnnMh8fHwkTRdOt3LDQq8o8ElTBrZaKPxZN2thMZnODs5Hlemb2UX0oRIGRcvWqsd4oJmxsXHNF8UfCZ1ToE4r2SdwTmZv00T2i5faeYnHzxiLPA3Enub7xxxxxxwFArnqad7MO1SY1kLemhX9eFjLWN4mJe56Fu4NiWJkR9APSZQrYeKaqru4KUC68QpVasNJHbuxPSf/PcjF3cjO1+X+4x6L1H5HTPuqUkyZGgDO4ynUHbko4dhlanALcriF7tIfQR9i2r2xOyv5gxJEW/zztGqWma/d4rBoPjnf6tO7rLFHXMt/DVTkAfn5wxxtLDwkn5FMyvThRmex3BDf0gujoI1y6cOWLe9Y5geNX0oj+MXg/W0cXAtzSFocstV1PoVqy883hNoeQZ3mIGB3Q0rIUm5d9MA2bMMt31m1g3Sin6EQ== azureuser@myVM"
+}
+```
+
+Run the VMAccess Extension through the following command:
+
+```azurecli-interactive
+az vm extension set \
+ --resource-group myResourceGroup \
+ --vm-name myVM \
+ --name VMAccessForLinux \
+ --publisher Microsoft.OSTCExtensions \
+ --version 1.5 \
+ --protected-settings update_ssh_key.json
+```
+
+## Azure PowerShell deployment
+
+Azure PowerShell can be used to deploy the VMAccess Extension to an existing virtual machine or virtual machine scale set. You can deploy the extension to a VM by running:
+
+```azurepowershell-interactive
+$username = "<username>"
+$sshKey = "<cert-contents>"
+
+$settings = @{"check_disk" = $true};
+$protectedSettings = @{"username" = $username; "ssh_key" = $sshKey};
+
+Set-AzVMExtension -ResourceGroupName "<resource-group>" `
+ -VMName "<vm-name>" `
+ -Location "<location>" `
+ -Publisher "Microsoft.OSTCExtensions" `
+ -ExtensionType "VMAccessForLinux" `
+ -Name "VMAccessForLinux" `
+ -TypeHandlerVersion "1.5" `
+ -Settings $settings `
+ -ProtectedSettings $protectedSettings
+```
+
+You can also provide and modify extension settings by using strings:
+
+```azurepowershell-interactive
+$username = "<username>"
+$sshKey = "<cert-contents>"
+
+$settingsString = '{"check_disk":true}';
+$protectedSettingsString = '{"username":"' + $username + '","ssh_key":"' + $sshKey + '"}';
+
+Set-AzVMExtension -ResourceGroupName "<resource-group>" `
+ -VMName "<vm-name>" `
+ -Location "<location>" `
+ -Publisher "Microsoft.OSTCExtensions" `
+ -ExtensionType "VMAccessForLinux" `
+ -Name "VMAccessForLinux" `
+ -TypeHandlerVersion "1.5" `
+ -SettingString $settingsString `
+ -ProtectedSettingString $protectedSettingsString
+```
+
+To deploy to a virtual machine scale set, run the following command:
+
+```azurepowershell-interactive
+$resourceGroupName = "<resource-group>"
+$vmssName = "<vmss-name>"
+
+$protectedSettings = @{
+ "username" = "azureUser"
+ "password" = "userPassword"
+}
+
+$publicSettings = @{
+ "repair_disk" = $true
+ "disk_name" = "<disk_name>"
+}
+
+$vmss = Get-AzVmss `
+ -ResourceGroupName $resourceGroupName `
+ -VMScaleSetName $vmssName
+
+Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
+ -Name "<extension-name>" `
+ -Publisher "Microsoft.OSTCExtensions" `
+ -Type "VMAccessForLinux" `
+ -TypeHandlerVersion "1.5" `
+ -AutoUpgradeMinorVersion $true `
+ -Setting $publicSettings `
+ -ProtectedSetting $protectedSettings
+
+Update-AzVmss `
+ -ResourceGroupName $resourceGroupName `
+ -Name $vmssName `
+ -VirtualMachineScaleSet $vmss
+```
+
+## Troubleshoot and support
+
+The VMAccess extension logs exist locally on the VM and are most informative when it comes to troubleshooting.
+
+| Location | Description |
+| - | - |
+| /var/log/waagent.log | Contains logs from the Linux Agent and shows when an update to the extension occurred. Check it to ensure the extension ran. |
+| /var/log/azure/Microsoft.OSTCExtensions.VMAccessForLinux/* | The VMAccess Extension produces its logs here. The directory contains `CommandExecution.log`, which records each executed command and its result, and `extension.log`, which contains individual logs for each execution. |
+| /var/lib/waagent/Microsoft.OSTCExtensions.VMAccessForLinux-\<most recent version\>/config/* | The configuration and binaries for VMAccess VM Extension. |
+|||
+
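+As a quick sketch of inspecting these locations on the VM (assuming sudo access):
+
+```bash
+# Confirm the Linux Agent processed the extension
+sudo grep -i vmaccess /var/log/waagent.log
+
+# Review each executed command and its result
+sudo cat /var/log/azure/Microsoft.OSTCExtensions.VMAccessForLinux/CommandExecution.log
+```
+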
+You can also retrieve the execution state of the VMAccess Extension, along with other extensions on a given VM, by running the following command:
+
+```azurecli-interactive
+az vm extension list --resource-group myResourceGroup --vm-name myVM -o table
+```
+
+For more help, you can contact the Azure experts at [Azure Community Support](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to [Azure support](https://azure.microsoft.com/support/options/) and select **Get support**. For more information about Azure Support, read the [Azure support plans FAQ](https://azure.microsoft.com/support/faq/).
+
+## Next steps
+
+To see the code, current versions, and more documentation, see [VMAccess Linux - GitHub](https://github.com/Azure/azure-linux-extensions/tree/master/VMAccess).
virtual-machines Vmaccess https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess.md
- Title: Reset access to an Azure Linux VM
-description: Learn how to manage administrative users and reset access on Linux VMs by using the VMAccess extension and the Azure CLI.
------ Previously updated : 04/12/2023---
-# Manage administrative users, SSH, and check or repair disks on Linux VMs by using the VMAccess extension with the Azure CLI
-
-The VMAccess extension with the Azure CLI allows you to manage administrative users and reset access on Linux VMs.
-
-This article shows you how to:
-
-* Use the Azure VMAccess extension to check or repair a disk.
-* Reset user access.
-* Manage administrative user accounts
-* Update the SSH configuration on Linux computers that run as Azure Resource Manager virtual machines.
-
-If you need to manage Classic virtual machines, see [Using the VMAccess extension](/previous-versions/azure/virtual-machines/linux/classic/reset-access-classic).
-
-> [!NOTE]
-> If you use the VMAccess extension to reset the password of your VM after you install the Microsoft Entra Login extension, rerun the Microsoft Entra Login extension to re-enable Microsoft Entra Login for your VM.
-
-## Prerequisites
-
-The VMAccess extension can be run on these Linux distributions:
-
-### Linux Distro's Supported
-
-| **Linux Distro** | **x64** | **ARM64** |
-|:--|:--:|:--:|
-| Alma Linux | 9.x+ | 9.x+ |
-| CentOS | 7.x+, 8.x+ | 7.x+ |
-| Debian | 10+ | 11.x+ |
-| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
-| Azure Linux | 2.x | 2.x |
-| openSUSE | 12.3+ | Not Supported |
-| Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported |
-| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
-| Rocky Linux | 9.x+ | 9.x+ |
-| SLES | 12.x+, 15.x+ | 15.x SP4+ |
-| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
-
-## Ways to use the VMAccess extension
-
-You can use the VMAccess extension on your Linux VMs in two ways:
-
-* Use the Azure CLI and the required parameters.
-* [Use JSON files with the VMAccess extension](#use-json-files-and-the-vmaccess-extension).
-
-The following examples use [az vm user](/cli/azure/vm/user) commands. To perform these steps, you need to [install the latest Azure CLI](/cli/azure/install-az-cli2) and sign in to an Azure account by using [az login](/cli/azure/reference-index).
-
-## Update SSH key
-
-The following example updates the SSH key for the user `azureuser` on the VM named `myVM`:
-
-```azurecli-interactive
-az vm user update \
- --resource-group myResourceGroup \
- --name myVM \
- --username azureuser \
- --ssh-key-value ~/.ssh/id_rsa.pub
-```
-
-> [!NOTE]
-> The [`az vm user update` command](/cli/azure/vm) appends the new public key text to the `~/.ssh/authorized_keys` file for the admin user on the VM. This command doesn't replace or remove any existing SSH keys. This command doesn't remove prior keys set at deployment time or subsequent updates by using the VMAccess extension.
-
-## Reset password
-
-The following example resets the password for the user `azureuser` on the VM named `myVM`:
-
-```azurecli-interactive
-az vm user update \
- --resource-group myResourceGroup \
- --name myVM \
- --username azureuser \
- --password myNewPassword
-```
-
-## Restart SSH
-
-The following example restarts the SSH daemon and resets the SSH configuration to default values on a VM named `myVM`:
-
-```azurecli-interactive
-az vm user reset-ssh \
- --resource-group myResourceGroup \
- --name myVM
-```
-
-## Create an administrative/sudo user
-
-The following example creates a user named `myNewUser` with sudo permissions. The account uses an SSH key for authentication on the VM named `myVM`. This method helps you regain access to a VM when current credentials are lost or forgotten. As a best practice, accounts with sudo permissions should be limited.
-
-```azurecli-interactive
-az vm user update \
- --resource-group myResourceGroup \
- --name myVM \
- --username myNewUser \
- --ssh-key-value ~/.ssh/id_rsa.pub
-```
-
-## Delete a user
-
-The following example deletes a user named `myNewUser` on the VM named `myVM`:
-
-```azurecli-interactive
-az vm user delete \
- --resource-group myResourceGroup \
- --name myVM \
- --username myNewUser
-```
-
-## Use JSON files and the VMAccess extension
-
-The following examples use raw JSON files. Use the [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) command to then call your JSON files. Azure templates can also call these JSON files.
-
-### Reset user access
-
-If you've lost access to root on your Linux VM, you can launch a VMAccess script to update a user's SSH key or password.
-
-To update the SSH public key of a user, create a file named `update_ssh_key.json` and add settings in the following format. Replace `username` and `ssh_key` with your own information:
-
-```json
-{
- "username":"azureuser",
- "ssh_key":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCZ3S7gGp3rcbKmG2Y4vGZFMuMZCwoUzZNGxxxxxx2XV2x9FfAhy8iGD+lF8UdjFX3t5ebMm6BnnMh8fHwkTRdOt3LDQq8o8ElTBrZaKPxZN2thMZnODs5Hlemb2UX0oRIGRcvWqsd4oJmxsXHNF8UfCZ1ToE4r2SdwTmZv00T2i5faeYnHzxiLPA3Enub7xxxxxxwFArnqad7MO1SY1kLemhX9eFjLWN4mJe56Fu4NiWJkR9APSZQrYeKaqru4KUC68QpVasNJHbuxPSf/PcjF3cjO1+X+4x6L1H5HTPuqUkyZGgDO4ynUHbko4dhlanALcriF7tIfQR9i2r2xOyv5gxJEW/zztGqWma/d4rBoPjnf6tO7rLFHXMt/DVTkAfn5wxxtLDwkn5FMyvThRmex3BDf0gujoI1y6cOWLe9Y5geNX0oj+MXg/W0cXAtzSFocstV1PoVqy883hNoeQZ3mIGB3Q0rIUm5d9MA2bMMt31m1g3Sin6EQ== azureuser@myVM"
-}
-```
-
-Execute the VMAccess script by running this command:
-
-```azurecli-interactive
-az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM \
- --name VMAccessForLinux \
- --publisher Microsoft.OSTCExtensions \
- --version 1.4 \
- --protected-settings update_ssh_key.json
-```
-
-To reset a user password, create a file named `reset_user_password.json` and add settings in the following format. Replace `username` and `password` with your own information:
-
-```json
-{
- "username":"azureuser",
- "password":"myNewPassword"
-}
-```
-
-Execute the VMAccess script by running this command:
-
-```azurecli-interactive
-az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM \
- --name VMAccessForLinux \
- --publisher Microsoft.OSTCExtensions \
- --version 1.4 \
- --protected-settings reset_user_password.json
-```
-
-### Restart the SSH
-
-To restart the SSH daemon and reset the SSH configuration to default values, create a file named `reset_sshd.json`. Add the following text:
-
-```json
-{
- "reset_ssh": true
-}
-```
-
-Execute the VMAccess script with:
-
-```azurecli-interactive
-az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM \
- --name VMAccessForLinux \
- --publisher Microsoft.OSTCExtensions \
- --version 1.4 \
- --protected-settings reset_sshd.json
-```
-
-### Manage administrative users
-
-To create a user with sudo permissions that uses an SSH key for authentication, create a file named `create_new_user.json` and add settings in the following format. Substitute your own values for the `username` and `ssh_key` parameters. This method helps you regain access to a VM when current credentials are lost or forgotten. As a best practice, limit accounts with sudo permissions.
-
-```json
-{
- "username":"myNewUser",
- "ssh_key":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCZ3S7gGp3rcbKmG2Y4vGZFMuMZCwoUzZNG1vHY7P2XV2x9FfAhy8iGD+lF8UdjFX3t5ebMm6BnnMh8fHwkTRdOt3LDQq8o8ElTBrZaKPxZN2thMZnODs5Hlemb2UX0oRIGRcvWqsd4oJmxsXHNF8UfCZ1ToE4r2SdwTmZv00T2i5faeYnHzxiLPA3Enub7iUo5IdwFArnqad7MO1SY1kLemhX9eFjLWN4mJe56Fu4NiWJkR9APSZQrYeKaqru4KUC68QpVasNJHbuxPSf/PcjF3cjO1+X+4x6L1H5HTPuqUkyZGgDO4ynUHbko4dhlanALcriF7tIfQR9i2r2xOyv5gxJEW/zztGqWma/d4rBoPjnf6tO7rLFHXMt/DVTkAfn5woYtLDwkn5FMyvThRmex3BDf0gujoI1y6cOWLe9Y5geNX0oj+MXg/W0cXAtzSFocstV1PoVqy883hNoeQZ3mIGB3Q0rIUm5d9MA2bMMt31m1g3Sin6EQ== myNewUser@myVM",
- "password":"myNewUserPassword"
-}
-```
-
-Execute the VMAccess script with:
-
-```azurecli-interactive
-az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM \
- --name VMAccessForLinux \
- --publisher Microsoft.OSTCExtensions \
- --version 1.4 \
- --protected-settings create_new_user.json
-```
-
-To delete a user, create a file named `delete_user.json` and add the following content. Change the data for `remove_user` to the user you're trying to delete:
-
-```json
-{
- "remove_user":"myNewUser"
-}
-```
-
-Execute the VMAccess script with:
-
-```azurecli-interactive
-az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM \
- --name VMAccessForLinux \
- --publisher Microsoft.OSTCExtensions \
- --version 1.4 \
- --protected-settings delete_user.json
-```
-
-### Check or repair the disk
-
-By using VMAccess, you can check and repair a disk that you added to the Linux VM.
-
-To check and then repair the disk, create a file named `disk_check_repair.json` and add settings in the following format. Change the data for `repair_disk` to the disk you're trying to repair:
-
-```json
-{
- "check_disk": "true",
- "repair_disk": "true, mydiskname"
-}
-```
-
-Execute the VMAccess script with:
-
-```azurecli-interactive
-az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM \
- --name VMAccessForLinux \
- --publisher Microsoft.OSTCExtensions \
- --version 1.4 \
- --protected-settings disk_check_repair.json
-```
-
-## Troubleshoot and support
-
-Get data about the state of extension deployments from the Azure portal and by using the Azure CLI. To see the deployment state of extensions for a given VM, run the following command by using the Azure CLI.
-
-```azurecli
-az vm extension list --resource-group myResourceGroup --vm-name myVM -o table
-```
-
-For more help, you can contact the Azure experts at [Azure Community Support](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to [Azure support](https://azure.microsoft.com/support/options/) and select **Get support**. For more information about Azure Support, read the [Azure support plans FAQ](https://azure.microsoft.com/support/faq/).
virtual-machines Linux Vm Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux-vm-connect.md
Once the above prerequisites are met, you're ready to connect to your VM. Open y
ssh azureuser@20.51.230.13
```
- If you forgot your password or username see [Reset Access to an Azure VM](./extensions/vmaccess.md)
+ If you forgot your password or username, see [Reset Access to an Azure VM](./extensions/vmaccess-linux.md)
2. Validate the returned fingerprint.
Once the above prerequisites are met, you're ready to connect to your VM. Open y
ssh azureuser@20.51.230.13
```
- If you forgot your password or username see [Reset Access to an Azure VM](./extensions/vmaccess.md)
+ If you forgot your password or username, see [Reset Access to an Azure VM](./extensions/vmaccess-linux.md)
2. Validate the returned fingerprint.
virtual-wan Scenario Secured Hub App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-secured-hub-app-gateway.md
Currently, routes that are advertised from the Virtual WAN route table to spoke
1. Configure user-defined routes (UDRs) on the application gateway subnet. To ensure the application gateway is able to send traffic directly to the Internet, specify the following UDR:
- * **Address Prefix:** 0.0.0.0.0/0
+ * **Address Prefix:** 0.0.0.0/0
 * **Next Hop:** Internet
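
A rough CLI sketch of creating and associating such a route (all names are placeholders):

```azurecli
# Create a route table and a default route that sends 0.0.0.0/0 directly to the Internet
az network route-table create --resource-group myResourceGroup --name myAppGwRouteTable

az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name myAppGwRouteTable \
  --name default-internet \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type Internet

# Associate the route table with the application gateway subnet
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name mySpokeVNet \
  --name myAppGwSubnet \
  --route-table myAppGwRouteTable
```

## Next steps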
vpn-gateway About Gateway Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-gateway-skus.md
description: Learn about VPN Gateway SKUs.
Previously updated : 11/20/2023 Last updated : 11/28/2023
If you're using the old SKUs (legacy), the production SKU recommendations are St
## About legacy SKUs
-For information about working with the legacy gateway SKUs (Basic, Standard, and HighPerformance), see [Working with VPN gateway SKUs (legacy SKUs)](vpn-gateway-about-skus-legacy.md).
+For information about working with the legacy gateway SKUs (Basic, Standard, and High Performance), including SKU deprecation, see [Managing legacy gateway SKUs](vpn-gateway-about-skus-legacy.md).
## Specify a SKU
You specify the gateway SKU when you create your VPN Gateway. See the following
## <a name="resizechange"></a>Change or resize a SKU > [!NOTE]
-> If you are working with a legacy gateway SKU and are using the classic deployment model (Service Management), the SKU rules are different. See [Working with legacy classic deployment model SKUs](vpn-gateway-about-skus-legacy.md).
+> If you're working with a legacy gateway SKU (Basic, Standard, and High Performance), see [Managing Legacy gateway SKUs](vpn-gateway-about-skus-legacy.md).
[!INCLUDE [changing vs. resizing](../../includes/vpn-gateway-sku-about-change-resize.md)]
vpn-gateway Vpn Gateway About Skus Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-skus-legacy.md
Title: Legacy Azure virtual network VPN gateway SKUs
-description: How to work with the old virtual network gateway SKUs; Basic, Standard, and HighPerformance.
+description: How to work with the old virtual network gateway SKUs; Basic, Standard, and High Performance.
Previously updated : 02/13/2023 Last updated : 11/28/2023

# Working with virtual network gateway SKUs (legacy SKUs)
This article contains information about the legacy (old) virtual network gateway
You can view legacy gateway pricing in the **Virtual Network Gateways** section, which is located on the [ExpressRoute pricing page](https://azure.microsoft.com/pricing/details/expressroute).
+## SKU deprecation
+
+The Standard and High Performance SKUs will be deprecated on September 30, 2025. The product team will make a migration path available for these SKUs by November 30, 2024. **At this time, there's no action that you need to take.**
+
+There are no [price](https://azure.microsoft.com/pricing/details/vpn-gateway/) changes if you migrate: Standard gateways move to VpnGw1, and High Performance gateways move to VpnGw2. As a benefit, there's also a performance improvement after migrating:
+
+* **Standard**: 6.5x
+* **High Performance**: 5x
+
+If you don't migrate your gateway by September 30, 2025, it will be automatically upgraded to an AZ gateway: VpnGw1AZ (Standard) or VpnGw2AZ (High Performance).
+
+Important dates:
+
+* **December 1, 2023**: You can no longer create new gateways with the Standard or High Performance SKUs.
+* **November 30, 2024**: Migration of existing gateways to other SKUs begins.
+* **September 30, 2025**: The Standard and High Performance SKUs are retired, and remaining gateways are automatically migrated.
+ ## <a name="agg"></a>Estimated aggregate throughput by SKU

[!INCLUDE [Aggregated throughput by legacy SKU](../../includes/vpn-gateway-table-gwtype-legacy-aggtput-include.md)]
You can view legacy gateway pricing in the **Virtual Network Gateways** section,
## <a name="resize"></a>Resize a gateway
-With the exception of the Basic SKU, you can resize your gateway to a gateway SKU within the same SKU family. For example, if you have a Standard SKU, you can resize to a HighPerformance SKU. However, you can't resize your VPN gateway between the old SKUs and the new SKU families. For example, you can't go from a Standard SKU to a VpnGw2 SKU, or a Basic SKU to VpnGw1.
+Except for the Basic SKU, you can resize your gateway to a gateway SKU within the same SKU family. For example, if you have a Standard SKU, you can resize to a High Performance SKU. However, you can't resize your VPN gateway between the old SKUs and the new SKU families. For example, you can't go from a Standard SKU to a VpnGw2 SKU, or a Basic SKU to VpnGw1.
### Resource Manager
-To resize a gateway for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) using PowerShell, use the following command:
+You can resize a gateway for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) using the Azure portal or PowerShell. For PowerShell, use the following command:
```powershell
$gw = Get-AzVirtualNetworkGateway -Name vnetgw1 -ResourceGroupName testrg
Resize-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -GatewaySku HighPerformance
```
-You can also resize a gateway in the Azure portal.
- ### <a name="classicresize"></a>Classic
-To resize a gateway for the classic deployment model, you must use the Service Management PowerShell cmdlets. Use the following command:
+To resize a gateway for the [classic deployment model](../azure-resource-manager/management/deployment-models.md), you must use the Service Management PowerShell cmdlets. Use the following command:
```powershell
Resize-AzureVirtualNetworkGateway -GatewayId <Gateway ID> -GatewaySKU HighPerformance
```
Resize-AzureVirtualNetworkGateway -GatewayId <Gateway ID> -GatewaySKU HighPerfor
## <a name="change"></a>Change to the new gateway SKUs
+> [!NOTE]
+> Standard and High Performance SKUs will be deprecated September 30, 2025. While you can choose to change to the new gateway SKUs at any point, there is no requirement to do so at this time. The product team will make a migration path available for these SKUs by November 30, 2024. See [Legacy SKU deprecation](#sku-deprecation) for more information.
+ [!INCLUDE [Change to the new SKUs](../../includes/vpn-gateway-gwsku-change-legacy-sku-include.md)]

## Next steps
vpn-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/whats-new.md
+
+ Title: What's new in Azure VPN Gateway?
+description: Learn what's new with Azure VPN Gateway such as the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes.
+++ Last updated : 11/27/2023+++
+# What's new in Azure VPN Gateway?
+
+Azure VPN Gateway is updated regularly. Stay up to date with the latest announcements. This article provides you with information about:
+
+* Recent releases
+* Previews underway with known limitations (if applicable)
+* Known issues
+* Deprecated functionality (if applicable)
+
+You can also find the latest VPN Gateway updates and subscribe to the RSS feed [here](https://azure.microsoft.com/updates/?category=networking&query=azure%20vpn%20gateway).
+
+## Recent releases and announcements
+
+| Type | Area | Name | Description | Date added | Limitations |
+|---|---|---|---|---|---|
+| SKU deprecation | N/A | Standard/High Performance VPN gateway SKUs | Legacy SKUs (Standard and High Performance) will be deprecated on September 30, 2025. | Nov 2023 | N/A |
+| Feature | All | [Customer-controlled gateway maintenance](customer-controlled-gateway-maintenance.md) | Customers can schedule maintenance (Guest OS and Service updates) during a time of day that best suits their business needs. | Nov 2023 (Public preview) | See the [FAQ](vpn-gateway-vpn-faq.md#customer-controlled). |
+| Feature | All | [APIPA for VPN Gateway (General availability)](bgp-howto.md#2-create-testvnet1-gateway-with-bgp) | All SKUs of active-active VPN gateways now support multiple custom BGP APIPA addresses for each instance. | Jan 2022 | N/A |
+
+## Next steps
+
+* [What is Azure VPN Gateway?](vpn-gateway-about-vpngateways.md)
+* [VPN Gateway FAQ](vpn-gateway-vpn-faq.md)