Updates from: 06/20/2022 01:06:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Identity Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-identity-provider.md
You typically use only one identity provider in your applications, but you have
* [QQ](identity-provider-qq.md) * [Salesforce](identity-provider-salesforce.md) * [Salesforce (SAML protocol)](identity-provider-salesforce-saml.md)
-* [SwissID]( identity-provider-swissid.md)
+* [SwissID](identity-provider-swissid.md)
* [Twitter](identity-provider-twitter.md) * [WeChat](identity-provider-wechat.md) * [Weibo](identity-provider-weibo.md)
active-directory-b2c App Registrations Training Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/app-registrations-training-guide.md
You can also use this option to use Azure AD B2C as a SAML service provider. [L
## Applications for DevOps scenarios
-You can use the other account types to create an app to manage your DevOps scenarios, like using Microsoft Graph to upload Identity Experience Framework policies or provision users. Learn [how register a Microsoft Graph application to manage Azure AD B2C resources](microsoft-graph-get-started.md).
+You can use the other account types to create an app to manage your DevOps scenarios, like using Microsoft Graph to upload Identity Experience Framework policies or provision users. Learn [how to register a Microsoft Graph application to manage Azure AD B2C resources](microsoft-graph-get-started.md).
You might not see all Microsoft Graph permissions, because many of these permissions don't apply to Azure B2C consumer users. [Read more about managing users using Microsoft Graph](microsoft-graph-operations.md).
To get started with the new app registration experience:
* Learn [how to register a web application](tutorial-register-applications.md). * Learn [how to register a web API](add-web-api-application.md). * Learn [how to register a native client application](add-native-application.md).
-* Learn [how register a Microsoft Graph application to manage Azure AD B2C resources](microsoft-graph-get-started.md).
+* Learn [how to register a Microsoft Graph application to manage Azure AD B2C resources](microsoft-graph-get-started.md).
* Learn [how to use Azure AD B2C as a SAML Service Provider.](identity-provider-adfs.md) * Learn about [application types](application-types.md).
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
The following procedures give you example policy steps and a secure certificate
5. Select **Download** to get the client certificate.
-6. Follow [this tutorial](./secure-rest-api.md#https-client-certificate-authentication ) to add the client certificate into Azure AD B2C.
+6. Follow [this tutorial](./secure-rest-api.md#https-client-certificate-authentication) to add the client certificate into Azure AD B2C.
### Retrieve your custom policy examples
For more information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory Tutorial Enable Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-azure-mfa.md
Previously updated : 02/10/2022 Last updated : 06/10/2022
To complete this tutorial, you need the following resources and privileges:
* A working Azure AD tenant with at least an Azure AD Premium P1 or trial license enabled. * If you need to, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An account with *global administrator* privileges. Some MFA settings can also be managed by an Authentication Policy Administrator. For more information, see [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).
+* An account with *Conditional Access Administrator*, *Security Administrator*, or *Global Administrator* privileges. Some MFA settings can also be managed by an *Authentication Policy Administrator*. For more information, see [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).
* A non-administrator account with a password that you know. For this tutorial, we created such an account, named *testuser*. In this tutorial, you test the end-user experience of configuring and using Azure AD Multi-Factor Authentication. * If you need information about creating a user account, see [Add or delete users using Azure Active Directory](../fundamentals/add-users-azure-active-directory.md).
First, create a Conditional Access policy and assign your test group of users as
:::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify users and groups." source="media/tutorial-enable-azure-mfa/tutorial-enable-azure-mfa-conditional-access-menu-select-users-groups.png":::
- Since none are assigned yet, the list of users and groups (shown in the next step) opens automatically.
+ Since no one is assigned yet, the list of users and groups (shown in the next step) opens automatically.
1. Browse for and select your Azure AD group, such as *MFA-Test-Group*, then choose **Select**.
active-directory Javelo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/javelo-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, Upload the **Service Provider metadata file** which you can download from the [URL](https://api.javelo.io/omniauth/<CustomerSPIdentifier>_saml/metadata) and perform the following steps:
+1. On the **Basic SAML Configuration** section, upload the **Service Provider metadata file**, which you can download from `https://api.javelo.io/omniauth/<CustomerSPIdentifier>_saml/metadata`, and perform the following steps:
a. Click **Upload metadata file**.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Javelo you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Javelo you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Learn more about [Recovery Services vault - Optimize costs of database backup (U
Large classic log data is detected on your storage accounts. You are billed on the capacity of data stored in storage accounts, including classic logs. Check the retention policy of classic logs and shorten it to the period you actually need. Retaining less classic log data reduces stored capacity and lowers your bill.
-Learn more about [Storage Account - XstoreLargeClassicLog (Revisit retention policy for classic log data in storage accounts)]( /azure/storage/common/manage-storage-analytics-logs#modify-retention-policy).
+Learn more about [Storage Account - XstoreLargeClassicLog (Revisit retention policy for classic log data in storage accounts)](/azure/storage/common/manage-storage-analytics-logs#modify-retention-policy).
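As a rough sketch of acting on this recommendation from the command line (assuming a storage account named `mystorageaccount`, blob-service classic logs, and a 7-day retention; adjust the services and period to your needs):

```azurecli
# Shorten classic (Storage Analytics) log retention for the blob service.
# --log rwd covers read, write, and delete operations; --retention is in days.
az storage logging update \
    --services b \
    --log rwd \
    --retention 7 \
    --account-name mystorageaccount
```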
## Reserved Instances
advisor Advisor Tag Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-tag-filtering.md
You can now get Advisor recommendations and scores scoped to a workload, environ
1. Click **Apply**. Summary tiles will be updated to reflect the filter. 1. Click on any of the categories to review recommendations.
- [ ![Screenshot of the Azure Advisor dashboard that shows count of recommendations after tag filter is applied.](./media/tags/overview-tag-filters.png) ](./media/tags/overview-tag-filters.png#lightbox)
+ [![Screenshot of the Azure Advisor dashboard that shows count of recommendations after tag filter is applied.](./media/tags/overview-tag-filters.png)](./media/tags/overview-tag-filters.png#lightbox)
## How to calculate scores using resource tags
You can now get Advisor recommendations and scores scoped to a workload, environ
1. Click **Apply**. Advisor score will be updated to only include resources impacted by the filter. 1. Click on any of the categories to review recommendations.
- [ ![Screenshot of the Azure Advisor score dashboard that shows score and recommendations after tag filter is applied.](./media/tags/score-tag-filters.png) ](./media/tags/score-tag-filters.png#lightbox)
+ [![Screenshot of the Azure Advisor score dashboard that shows score and recommendations after tag filter is applied.](./media/tags/score-tag-filters.png)](./media/tags/score-tag-filters.png#lightbox)
> [!NOTE]
> Not all capabilities are available when tag filters are used. For example, tag filters are not supported for security score and score history.
azure-arc About Arcdata Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/about-arcdata-extension.md
+
+ Title: Reference for `az arcdata` extension
+
+description: Reference article for `az arcdata` commands.
+ Last updated : 06/17/2022
+# Azure (`az`) CLI `arcdata` extension
+
+The `arcdata` extension for Azure CLI provides tools for managing Azure Arc data services.
+
+## Install extension
+
+To install the extension, see [Install `arcdata` Azure CLI extension](install-arcdata-extension.md).
+
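For quick reference, installation is typically a single command (see the linked article for prerequisites):

```azurecli
# Install the arcdata extension; use `az extension update --name arcdata` to upgrade later.
az extension add --name arcdata
```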
+## Reference documentation
+
+To access the latest reference documentation:
+
+- [`az arcdata`](/cli/azure/arcdata)
+- [`az sql mi-arc`](/cli/azure/sql/mi-arc)
+- [`az sql midb-arc`](/cli/azure/sql/midb-arc)
+- [`az sql instance-failover-group-arc`](/cli/azure/sql/instance-failover-group-arc)
+- [`az postgres arc-server`](/cli/azure/postgres/arc-server)
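Once the extension is installed, the same reference is also available from the command line, for example:

```azurecli
az arcdata --help
az sql mi-arc --help
```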
+
+## Next steps
+
+[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md)
azure-arc Install Arcdata Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/install-arcdata-extension.md
Title: Install `arcdata` extension
-description: Install the `arcdata` extension for Azure (az) CLI
+description: Install the `arcdata` extension for Azure (`az`) CLI
azure-arc Install Client Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/install-client-tools.md
This article points you to resources to install the tools to manage Azure Arc-en
> > [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
-The [`arcdata` extension for Azure CLI (`az`)](reference/reference-az-arcdata-dc.md) replaces `azdata` for Azure Arc-enabled data services.
+The [`arcdata` extension for Azure CLI (`az`)](about-arcdata-extension.md) replaces `azdata` for Azure Arc-enabled data services.
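One way to confirm that `arcdata` (rather than `azdata`) is set up is to list installed extensions; a sketch, using an ordinary JMESPath filter:

```azurecli
az extension list --query "[?name=='arcdata']" --output table
```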
## Tools for creating and managing Azure Arc-enabled data services
The following table lists common tools required for creating and managing Azure
| Azure Arc extension for Azure Data Studio | Yes | Extension for Azure Data Studio that provides a management experience for Azure Arc-enabled data services.| Install from the extensions gallery in Azure Data Studio.| | PostgreSQL extension in Azure Data Studio | No | PostgreSQL extension for Azure Data Studio that provides management capabilities for PostgreSQL. | <!--{need link} [Install](../azure-data-studio/data-virtualization-extension.md) --> Install from extensions gallery in Azure Data Studio.| | Kubernetes CLI (kubectl)<sup>2</sup> | Yes | Command-line tool for managing the Kubernetes cluster ([More info](https://kubernetes.io/docs/tasks/tools/install-kubectl/)). | [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows) \| [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) \| [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/) |
-| curl <sup>3</sup> | Required for some sample scripts. | Command-line tool for transferring data with URLs. | [Windows](https://curl.haxx.se/windows/) \| Linux: install curl package |
-| oc | Required for Red Hat OpenShift and Azure Redhat OpenShift deployments. |`oc` is the Open Shift command line interface (CLI). | [Installing the CLI](https://docs.openshift.com/container-platform/4.6/cli_reference/openshift_cli/getting-started-cli.html#installing-the-cli)
+| `curl` <sup>3</sup> | Required for some sample scripts. | Command-line tool for transferring data with URLs. | [Windows](https://curl.haxx.se/windows/) \| Linux: install curl package |
+| `oc` | Required for Red Hat OpenShift and Azure Red Hat OpenShift deployments. | `oc` is the OpenShift command-line interface (CLI). | [Installing the CLI](https://docs.openshift.com/container-platform/4.6/cli_reference/openshift_cli/getting-started-cli.html#installing-the-cli)
The following table lists common tools required for creating and managing Azure
<sup>2</sup> You must use `kubectl` version 1.19 or later. Also, the version of `kubectl` should be within one minor version of your Kubernetes cluster. If you want to install a specific version of the `kubectl` client, see [Install `kubectl` binary via curl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl) (on Windows 10, use cmd.exe and not Windows PowerShell to run curl).
-<sup>3</sup> If you are using PowerShell, curl is an alias to the Invoke-WebRequest cmdlet.
+<sup>3</sup> For PowerShell, `curl` is an alias to the Invoke-WebRequest cmdlet.
## Next steps
azure-arc Reference Az Arcdata Ad Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-ad-connector.md
- Title: az arcdata ad-connector
-description: Reference article for az arcdata ad-connector.
- Previously updated : 05/02/2022
-# az arcdata ad-connector
-
-Manage Active Directory authentication for Azure Arc data services.
-## Commands
-| Command | Description|
-| | |
-[az arcdata ad-connector create](#az-arcdata-ad-connector-create) | Create a new Active Directory connector.
-[az arcdata ad-connector update](#az-arcdata-ad-connector-update) | Update the settings of an existing Active Directory connector.
-[az arcdata ad-connector delete](#az-arcdata-ad-connector-delete) | Delete an existing Active Directory connector.
-[az arcdata ad-connector show](#az-arcdata-ad-connector-show) | Get the details of an existing Active Directory connector.
-## az arcdata ad-connector create
-Create a new Active Directory connector.
-```azurecli
-az arcdata ad-connector create
-```
-### Examples
-Ex 1 - Deploy a new Active Directory connector in indirect mode.
-```azurecli
-az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --account-provisioning manual --primary-ad-dc-hostname azdc01.contoso.local --secondary-ad-dc-hostnames "azdc02.contoso.local, azdc03.contoso.local" --netbios-domain-name CONTOSO --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11,10.10.10.12,10.10.10.13 --dns-replicas 2 --prefer-k8s-dns false --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata ad-connector update
-Update the settings of an existing Active Directory connector.
-```azurecli
-az arcdata ad-connector update
-```
-### Examples
-Ex 1 - Update an existing Active Directory connector in indirect mode.
-```azurecli
-az arcdata ad-connector update --name arcadc --k8s-namespace arc --primary-ad-dc-hostname azdc01.contoso.local --secondary-ad-dc-hostnames "azdc02.contoso.local, azdc03.contoso.local" --nameserver-addresses 10.10.10.11,10.10.10.12,10.10.10.13 --dns-replicas 2 --prefer-k8s-dns false --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata ad-connector delete
-Delete an existing Active Directory connector.
-```azurecli
-az arcdata ad-connector delete
-```
-### Examples
-Ex 1 - Delete an existing Active Directory connector in indirect mode.
-```azurecli
-az arcdata ad-connector delete --name arcadc --k8s-namespace arc --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata ad-connector show
-Get the details of an existing Active Directory connector.
-```azurecli
-az arcdata ad-connector show
-```
-### Examples
-Ex 1 - Get an existing Active Directory connector in indirect mode.
-```azurecli
-az arcdata ad-connector show --name arcadc --k8s-namespace arc --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Arcdata Dc Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc-config.md
- Title: az arcdata dc config reference
-description: Reference article for az arcdata dc config commands.
- Previously updated : 11/04/2021
-# az arcdata dc config
-
-Configuration commands.
-
-## Commands
-| Command | Description|
-| | |
-[az arcdata dc config init](#az-arcdata-dc-config-init) | Initialize a data controller configuration profile that can be used with `az arcdata dc create`.
-[az arcdata dc config list](#az-arcdata-dc-config-list) | List available configuration profile choices.
-[az arcdata dc config add](#az-arcdata-dc-config-add) | Add a value for a json path in a config file.
-[az arcdata dc config remove](#az-arcdata-dc-config-remove) | Remove a value for a json path in a config file.
-[az arcdata dc config replace](#az-arcdata-dc-config-replace) | Replace a value for a json path in a config file.
-[az arcdata dc config patch](#az-arcdata-dc-config-patch) | Patch a config file based on a json patch file.
-## az arcdata dc config init
-Initialize a data controller configuration profile that can be used with `az arcdata dc create`. The specific source of the configuration profile can be specified in the arguments.
-```azurecli
-az arcdata dc config init
-```
-### Examples
-Guided data controller config init experience - you will receive prompts for needed values.
-```azurecli
-az arcdata dc config init
-```
-arcdata dc config init with arguments creates a configuration profile in ./custom from the specified source.
-```azurecli
-az arcdata dc config init --source azure-arc-kubeadm --path custom
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc config list
-List available configuration profile choices for use in `arcdata dc config init`.
-```azurecli
-az arcdata dc config list
-```
-### Examples
-Shows all available configuration profile names.
-```azurecli
-az arcdata dc config list
-```
-Shows json of a specific configuration profile.
-```azurecli
-az arcdata dc config list --config-profile aks-dev-test
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc config add
-Add the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
-```azurecli
-az arcdata dc config add
-```
-### Examples
-Add data controller storage.
-```azurecli
-az arcdata dc config add --path custom/control.json --json-values 'spec.storage={"accessMode":"ReadWriteOnce","className":"managed-premium","size":"10Gi"}'
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc config remove
-Remove the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
-```azurecli
-az arcdata dc config remove
-```
-### Examples
-Ex 1 - Remove data controller storage.
-```azurecli
-az arcdata dc config remove --path custom/control.json --json-path ".spec.storage"
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc config replace
-Replace the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
-```azurecli
-az arcdata dc config replace
-```
-### Examples
-Ex 1 - Replace the port of a single endpoint (Data Controller Endpoint).
-```azurecli
-az arcdata dc config replace --path custom/control.json --json-values "$.spec.endpoints[?(@.name=='Controller')].port=30080"
-```
-Ex 2 - Replace data controller storage.
-```azurecli
-az arcdata dc config replace --path custom/control.json --json-values 'spec.storage={"accessMode":"ReadWriteOnce","className":"managed-premium","size":"10Gi"}'
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc config patch
-Patch the config file according to the given patch file. Consult http://jsonpatch.com/ for a better understanding of how the paths should be composed. The replace operation can use conditionals in its path due to the jsonpath library https://jsonpath.com/. All patch json files must start with a key of "patch" that has an array of patches with their corresponding op (add, replace, remove), path, and value. The "remove" op does not require a value, just a path. See the examples below.
-```azurecli
-az arcdata dc config patch
-```
-### Examples
-Ex 1 - Replace the port of a single endpoint (Data Controller Endpoint) with patch file.
-```azurecli
-az arcdata dc config patch --path custom/control.json --patch ./patch.json
-```
-Patch File Example (patch.json):
-```json
-{"patch":[{"op":"replace","path":"$.spec.endpoints[?(@.name=="Controller")].port","value":30080}]}
-```
-Ex 2 - Replace data controller storage with patch file.
-```azurecli
-az arcdata dc config patch --path custom/control.json --patch ./patch.json
-```
-Patch File Example (patch.json):
-```json
-{"patch":[{"op":"replace","path":".spec.storage","value":{"accessMode":"ReadWriteMany","className":"managed-premium","size":"10Gi"}}]}
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Arcdata Dc Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc-debug.md
- Title: az arcdata dc debug reference
-description: Reference article for az arcdata dc debug commands.
- Previously updated : 11/04/2021
-# az arcdata dc debug
-
-Debug data controller.
-
-## Commands
-| Command | Description|
-| | |
-[az arcdata dc debug copy-logs](#az-arcdata-dc-debug-copy-logs) | Copy logs.
-[az arcdata dc debug dump](#az-arcdata-dc-debug-dump) | Trigger memory dump.
-## az arcdata dc debug copy-logs
-Copy the debug logs from the data controller - Kubernetes configuration is required on your system.
-```azurecli
-az arcdata dc debug copy-logs
-```
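The command is listed without arguments here; a minimal invocation, assuming the same `--k8s-namespace`/`--use-k8s` conventions as the other commands on this page, might look like:

```azurecli
# Copy the data controller debug logs from the Kubernetes namespace `arc`.
az arcdata dc debug copy-logs --k8s-namespace arc --use-k8s
```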
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc debug dump
-Trigger memory dump and copy it out from container - Kubernetes configuration is required on your system.
-```azurecli
-az arcdata dc debug dump
-```
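Again no example is given; a sketch under the same flag-name assumptions:

```azurecli
# Trigger a memory dump in the Kubernetes namespace `arc` and copy it out of the container.
az arcdata dc debug dump --k8s-namespace arc --use-k8s
```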
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Arcdata Dc Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc-endpoint.md
- Title: az arcdata dc endpoint reference
-description: Reference article for az arcdata dc endpoint commands.
- Previously updated : 11/04/2021
-# az arcdata dc endpoint
-
-Endpoint commands.
-
-## Commands
-| Command | Description|
-| | |
-[az arcdata dc endpoint list](#az-arcdata-dc-endpoint-list) | List the data controller endpoint.
-## az arcdata dc endpoint list
-List the data controller endpoint.
-```azurecli
-az arcdata dc endpoint list
-```
-### Examples
-Lists all available data controller endpoints.
-```azurecli
-az arcdata dc endpoint list --k8s-namespace namespace
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Arcdata Dc Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc-status.md
- Title: az arcdata dc status reference
-description: Reference article for az arcdata dc status commands.
- Previously updated : 11/04/2021
-# az arcdata dc status
-
-Status commands.
-## Commands
-| Command | Description|
-| | |
-[az arcdata dc status show](#az-arcdata-dc-status-show) | Show the status of the data controller.
-## az arcdata dc status show
-Show the status of the data controller.
-```azurecli
-az arcdata dc status show
-```
-### Examples
-Show the status of the data controller in a particular Kubernetes namespace.
-```azurecli
-az arcdata dc status show --k8s-namespace namespace --use-k8s
-```
-Show the status of a directly connected data controller in a particular resource group.
-```azurecli
-az arcdata dc status show --resource-group resource-group
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Arcdata Dc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc.md
- Title: az arcdata dc reference
-description: Reference article for az arcdata dc commands.
- Previously updated : 11/04/2021
-# az arcdata dc
-
-Create, delete, and manage data controllers.
-## Commands
-| Command | Description|
-| | |
-[az arcdata dc create](#az-arcdata-dc-create) | Create data controller.
-[az arcdata dc upgrade](#az-arcdata-dc-upgrade) | Upgrade data controller.
-[az arcdata dc update](#az-arcdata-dc-update) | Update data controller.
-[az arcdata dc list-upgrades](#az-arcdata-dc-list-upgrades) | List available upgrade versions.
-[az arcdata dc delete](#az-arcdata-dc-delete) | Delete data controller.
-[az arcdata dc endpoint](reference-az-arcdata-dc-endpoint.md) | Endpoint commands.
-[az arcdata dc status](reference-az-arcdata-dc-status.md) | Status commands.
-[az arcdata dc config](reference-az-arcdata-dc-config.md) | Configuration commands.
-[az arcdata dc debug](reference-az-arcdata-dc-debug.md) | Debug data controller.
-[az arcdata dc export](#az-arcdata-dc-export) | Export metrics, logs or usage.
-[az arcdata dc upload](#az-arcdata-dc-upload) | Upload exported data file.
-## az arcdata dc create
-Create a data controller. A kube config is required on your system, along with credentials for the monitoring dashboards supplied through the following environment variables: AZDATA_LOGSUI_USERNAME and AZDATA_LOGSUI_PASSWORD for the Logs Dashboard, and AZDATA_METRICSUI_USERNAME and AZDATA_METRICSUI_PASSWORD for the Metrics Dashboard. Alternatively, AZDATA_USERNAME and AZDATA_PASSWORD are used as a fallback if either set of environment variables is missing.
-```azurecli
-az arcdata dc create
-```
-### Examples
-Deploy an indirectly connected data controller.
-```azurecli
-az arcdata dc create --name name --k8s-namespace namespace --connectivity-mode indirect --resource-group group --location location --subscription subscription --use-k8s
-```
-Deploy a directly connected data controller.
-```azurecli
-az arcdata dc create --name name --connectivity-mode direct --resource-group group --location location --subscription subscription --custom-location custom-location
-```
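As a sketch of the credential setup the description calls for (Bash syntax; the values are placeholders):

```azurecli
# Monitoring dashboard credentials read by `az arcdata dc create`.
export AZDATA_LOGSUI_USERNAME=admin
export AZDATA_LOGSUI_PASSWORD='<logs-dashboard-password>'
export AZDATA_METRICSUI_USERNAME=admin
export AZDATA_METRICSUI_PASSWORD='<metrics-dashboard-password>'

# Or set only the fallback pair, used when either set above is missing.
export AZDATA_USERNAME=admin
export AZDATA_PASSWORD='<password>'
```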
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc upgrade
-Upgrade the data controller to the specified desired version. If a desired version is not specified, an attempt is made to upgrade to the latest version. If you are unsure of the desired version, use the list-upgrades command to view available versions, or use the --dry-run argument to show which version would be used.
-```azurecli
-az arcdata dc upgrade
-```
-### Examples
-Data controller upgrade.
-```azurecli
-az arcdata dc upgrade --k8s-namespace namespace --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc update
-Update the data controller to enable or disable automatic upload of logs and metrics.
-```azurecli
-az arcdata dc update
-```
-### Examples
-Enable automatic upload of logs and metrics.
-```azurecli
-az arcdata dc update --auto-upload-logs true --auto-upload-metrics true --name dc-name --resource-group resource-group
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc list-upgrades
-Attempts to list versions that are available in the Docker image registry for upgrade. A kube config is required on your system, along with the following environment variables: ['AZDATA_USERNAME', 'AZDATA_PASSWORD'].
-```azurecli
-az arcdata dc list-upgrades
-```
-### Examples
-List versions available for upgrade.
-```azurecli
-az arcdata dc list-upgrades --k8s-namespace namespace --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc delete
-Delete data controller - kube config is required on your system.
-```azurecli
-az arcdata dc delete
-```
-### Examples
-Delete an indirectly connected data controller.
-```azurecli
-az arcdata dc delete --name name --k8s-namespace namespace --use-k8s
-```
-Delete a directly connected data controller.
-```azurecli
-az arcdata dc delete --name name --resource-group resource-group
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc export
-Export metrics, logs or usage to a file.
-```azurecli
-az arcdata dc export -t logs --path logs.json --k8s-namespace namespace --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata dc upload
-Upload data file exported from a data controller to Azure.
-```azurecli
-az arcdata dc upload
-```
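No example accompanies the command; a minimal sketch, assuming `--path` points at a file produced by `az arcdata dc export`:

```azurecli
az arcdata dc upload --path logs.json
```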
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Arcdata Resource Kind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-resource-kind.md
- Title: az arcdata resource-kind reference
-description: Reference article for az arcdata resource-kind commands.
- Previously updated : 11/04/2021
-# az arcdata resource-kind
-
-Resource-kind commands to define and template custom resources on your cluster.
-## Commands
-| Command | Description|
-| | |
-[az arcdata resource-kind list](#az-arcdata-resource-kind-list) | List the available custom resource kinds for Arc that can be defined and created.
-[az arcdata resource-kind get](#az-arcdata-resource-kind-get) | Get the Arc resource-kind's template file.
-## az arcdata resource-kind list
-List the available custom resource kinds for Arc that can be defined and created. After listing, you can proceed to getting the template file needed to define or create that custom resource.
-```azurecli
-az arcdata resource-kind list
-```
-### Examples
-Example command for listing the available custom resource kinds for Arc.
-```azurecli
-az arcdata resource-kind list
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az arcdata resource-kind get
-Get the Arc resource-kind's template file.
-```azurecli
-az arcdata resource-kind get
-```
-### Examples
-Example command for getting an Arc resource-kind's CRD template file.
-```azurecli
-az arcdata resource-kind get --kind SqlManagedInstance
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Arcdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata.md
- Title: az arcdata reference
-description: Reference article for az arcdata commands.
- Previously updated : 11/04/2021
-# az arcdata
-## Commands
-| Command | Description|
-| | |
-|[az arcdata dc](reference-az-arcdata-dc.md) | Create, delete, and manage data controllers.
-|[az arcdata resource-kind](reference-az-arcdata-resource-kind.md) | Resource-kind commands to define and template custom resources on your cluster.
-|[az arcdata ad-connector](reference-az-arcdata-ad-connector.md) | Manage Active Directory authentication for Azure Arc data services.|
--
-## az sql mi-arc
-| Command | Description|
-| | |
-|[az sql mi-arc](reference-az-sql-mi-arc.md) | Manage Azure Arc-enabled SQL managed instances.
-
-## az sql midb-arc
-| Command | Description|
-| | |
-|[az sql midb-arc](reference-az-sql-midb-arc.md) | Manage databases for Azure Arc-enabled SQL managed instances.
-
-## sql instance-failover-group-arc
-| Command | Description|
-| | |
-|[az sql instance-failover-group-arc](reference-az-sql-instance-failover-group-arc.md) | Create or delete a failover group.|
--
-## az postgres arc-server
-| Command | Description|
-| | |
-|[az postgres arc-server](reference-az-postgres-arc-server.md) | Manage Azure Arc enabled PostgreSQL Hyperscale server groups.
azure-arc Reference Az Postgres Arc Server Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-postgres-arc-server-endpoint.md
- Title: az postgres arc-server endpoint reference
-description: Reference article for az postgres arc-server endpoint commands.
- Previously updated : 11/04/2021
-# az postgres arc-server endpoint
-
-Manage Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
-## Commands
-| Command | Description|
-| | |
-[az postgres arc-server endpoint list](#az-postgres-arc-server-endpoint-list) | List Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
-## az postgres arc-server endpoint list
-List Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
-```azurecli
-az postgres arc-server endpoint list
-```
-### Examples
-List Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
-```azurecli
-az postgres arc-server endpoint list --name postgres01 --k8s-namespace namespace --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Postgres Arc Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-postgres-arc-server.md
- Title: az postgres arc-server reference
-description: Reference article for az postgres arc-server commands.
- Previously updated : 11/04/2021
-# az postgres arc-server
-
-Manage Azure Arc enabled PostgreSQL Hyperscale server groups.
-## Commands
-| Command | Description|
-| | |
-[az postgres arc-server create](#az-postgres-arc-server-create) | Create an Azure Arc enabled PostgreSQL Hyperscale server group.
-[az postgres arc-server edit](#az-postgres-arc-server-edit) | Edit the configuration of an Azure Arc enabled PostgreSQL Hyperscale server group.
-[az postgres arc-server delete](#az-postgres-arc-server-delete) | Delete an Azure Arc enabled PostgreSQL Hyperscale server group.
-[az postgres arc-server show](#az-postgres-arc-server-show) | Show the details of an Azure Arc enabled PostgreSQL Hyperscale server group.
-[az postgres arc-server list](#az-postgres-arc-server-list) | List Azure Arc enabled PostgreSQL Hyperscale server groups.
-[az postgres arc-server endpoint](reference-az-postgres-arc-server-endpoint.md) | Manage Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
-## az postgres arc-server create
-Create an Azure Arc enabled PostgreSQL Hyperscale server group. To set the password of the server group, set the AZDATA_PASSWORD environment variable.
-```azurecli
-az postgres arc-server create
-```
-### Examples
-Create an Azure Arc enabled PostgreSQL Hyperscale server group.
-```azurecli
-az postgres arc-server create -n pg1 --k8s-namespace namespace --use-k8s
-```
-Create an Azure Arc enabled PostgreSQL Hyperscale server group with engine settings. Both of the following examples are valid.
-```azurecli
-az postgres arc-server create -n pg1 --engine-settings "key1=val1" --k8s-namespace namespace
-az postgres arc-server create -n pg1 --engine-settings "key2=val2" --k8s-namespace namespace --use-k8s
-```
-Create a PostgreSQL server group with volume claim mounts.
-```azurecli
-az postgres arc-server create -n pg1 --volume-claim-mounts backup-pvc:backup
-```
-Create a PostgreSQL server group with specific memory-limit for different node roles.
-```azurecli
-az postgres arc-server create -n pg1 --memory-limit "coordinator=2Gi,w=1Gi" --workers 1 --k8s-namespace namespace --use-k8s
-```
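A sketch of the password setup mentioned in the description (Bash syntax; the value is a placeholder):

```azurecli
export AZDATA_PASSWORD='<strong-password>'
az postgres arc-server create -n pg1 --k8s-namespace namespace --use-k8s
```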
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az postgres arc-server edit
-Edit the configuration of an Azure Arc enabled PostgreSQL Hyperscale server group.
-```azurecli
-az postgres arc-server edit
-```
-### Examples
-Edit the configuration of an Azure Arc enabled PostgreSQL Hyperscale server group.
-```azurecli
-az postgres arc-server edit --path ./spec.json -n pg1 --k8s-namespace namespace --use-k8s
-```
-Edit an Azure Arc enabled PostgreSQL Hyperscale server group with engine settings for the coordinator node.
-```azurecli
-az postgres arc-server edit -n pg1 --coordinator-settings "key2=val2" --k8s-namespace namespace
-```
-Edit an Azure Arc enabled PostgreSQL Hyperscale server group and replace the existing engine settings with the new setting key1=val1.
-```azurecli
-az postgres arc-server edit -n pg1 --engine-settings "key1=val1" --replace-settings --k8s-namespace namespace
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az postgres arc-server delete
-Delete an Azure Arc enabled PostgreSQL Hyperscale server group.
-```azurecli
-az postgres arc-server delete
-```
-### Examples
-Delete an Azure Arc enabled PostgreSQL Hyperscale server group.
-```azurecli
-az postgres arc-server delete -n pg1 --k8s-namespace namespace --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az postgres arc-server show
-Show the details of an Azure Arc enabled PostgreSQL Hyperscale server group.
-```azurecli
-az postgres arc-server show
-```
-### Examples
-Show the details of an Azure Arc enabled PostgreSQL Hyperscale server group.
-```azurecli
-az postgres arc-server show -n pg1 --k8s-namespace namespace --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az postgres arc-server list
-List Azure Arc enabled PostgreSQL Hyperscale server groups.
-```azurecli
-az postgres arc-server list
-```
-### Examples
-List Azure Arc enabled PostgreSQL Hyperscale server groups.
-```azurecli
-az postgres arc-server list --k8s-namespace namespace --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Sql Instance Failover Group Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-instance-failover-group-arc.md
- Title: az sql instance-failover-group-arc
-description: Reference article for az sql instance-failover-group-arc.
- Previously updated : 05/02/2022
-# az sql instance-failover-group-arc
-
-Create or delete a failover group.
-## Commands
-| Command | Description|
-| | |
-[az sql instance-failover-group-arc create](#az-sql-instance-failover-group-arc-create) | Create a failover group resource.
-[az sql instance-failover-group-arc update](#az-sql-instance-failover-group-arc-update) | Update a failover group resource.
-[az sql instance-failover-group-arc delete](#az-sql-instance-failover-group-arc-delete) | Delete a failover group resource on a SQL managed instance.
-[az sql instance-failover-group-arc show](#az-sql-instance-failover-group-arc-show) | Show a failover group resource.
-## az sql instance-failover-group-arc create
-Create a failover group resource to create a distributed availability group.
-```azurecli
-az sql instance-failover-group-arc create
-```
-### Examples
-Ex 1 - Create a failover group resource fogCr1 that creates a failover group with the shared name sharedName1 between the SQL managed instance sqlmi1 and the partner SQL managed instance sqlmi2. This requires the partner instance's primary mirroring endpoint partnerPrimary:5022 and its mirroring endpoint certificate file ./sqlmi2.cer.
-```azurecli
-az sql instance-failover-group-arc create --name fogCr1 --shared-name sharedName1 --mi sqlmi1 --role primary --partner-mi sqlmi2 --partner-mirroring-url partnerPrimary:5022 --partner-mirroring-cert-file ./sqlmi2.cer --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql instance-failover-group-arc update
-Update a failover group resource to change the role of the distributed availability group.
-```azurecli
-az sql instance-failover-group-arc update
-```
-### Examples
-Ex 1 - Update the failover group resource fogCr1 from the primary role to the secondary role.
-```azurecli
-az sql instance-failover-group-arc update --name fogCr1 --role secondary --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql instance-failover-group-arc delete
-Delete a failover group resource on a SQL managed instance.
-```azurecli
-az sql instance-failover-group-arc delete
-```
-### Examples
-Ex 1 - Delete the failover group resource named fogCr1.
-```azurecli
-az sql instance-failover-group-arc delete --name fogCr1 --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql instance-failover-group-arc show
-Show a failover group resource.
-```azurecli
-az sql instance-failover-group-arc show
-```
-### Examples
-Ex 1 - Show the failover group resource named fogCr1.
-```azurecli
-az sql instance-failover-group-arc show --name fogCr1 --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Sql Mi Arc Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-mi-arc-config.md
- Title: az sql mi-arc config reference
-description: Reference article for az sql mi-arc config commands.
- Previously updated : 11/04/2021
-# az sql mi-arc config
-
-Configuration commands.
-## Commands
-| Command | Description|
-| | |
-[az sql mi-arc config init](#az-sql-mi-arc-config-init) | Initialize the CRD and specification files for a SQL managed instance.
-[az sql mi-arc config add](#az-sql-mi-arc-config-add) | Add a value for a json path in a config file.
-[az sql mi-arc config remove](#az-sql-mi-arc-config-remove) | Remove a value for a json path in a config file.
-[az sql mi-arc config replace](#az-sql-mi-arc-config-replace) | Replace a value for a json path in a config file.
-[az sql mi-arc config patch](#az-sql-mi-arc-config-patch) | Patch a config file based on a json patch file.
-## az sql mi-arc config init
-Initialize the CRD and specification files for a SQL managed instance.
-```azurecli
-az sql mi-arc config init
-```
-### Examples
-Initialize the CRD and specification files for a SQL managed instance.
-```azurecli
-az sql mi-arc config init --path ./template
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc config add
-Add the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
-```azurecli
-az sql mi-arc config add
-```
-### Examples
-Ex 1 - Add storage.
-```azurecli
-az sql mi-arc config add --path custom/spec.json --json-values 'spec.storage={"accessMode":"ReadWriteOnce","className":"managed-premium","size":"10Gi"}'
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc config remove
-Remove the value at the specified JSON path in the config file. All examples below are given in Bash. If you're using another shell, you might need to escape quotation marks appropriately. Alternatively, you can use the patch file functionality.
-```azurecli
-az sql mi-arc config remove
-```
-### Examples
-Ex 1 - Remove storage.
-```azurecli
-az sql mi-arc config remove --path custom/spec.json --json-path ".spec.storage"
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc config replace
-Replace the value at the specified JSON path in the config file. All examples below are given in Bash. If you're using another shell, you might need to escape quotation marks appropriately. Alternatively, you can use the patch file functionality.
-```azurecli
-az sql mi-arc config replace
-```
-### Examples
-Ex 1 - Replace the port of a single endpoint.
-```azurecli
-az sql mi-arc config replace --path custom/spec.json --json-values '$.spec.endpoints[?(@.name=="Controller")].port=30080'
-```
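-The JSONPath filter selects the endpoints array element whose name is Controller. A sketch of the matching fragment in spec.json after the replace (surrounding keys omitted; the structure is illustrative):
-```json
-{
-  "spec": {
-    "endpoints": [
-      { "name": "Controller", "port": 30080 }
-    ]
-  }
-}
-```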
-Ex 2 - Replace storage.
-```azurecli
-az sql mi-arc config replace --path custom/spec.json --json-values 'spec.storage={"accessMode":"ReadWriteOnce","className":"managed-premium","size":"10Gi"}'
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc config patch
-Patch the config file according to the given patch file. Consult [jsonpatch.com](http://jsonpatch.com/) for a better understanding of how the paths should be composed. The replace operation can use conditionals in its path because of the JSONPath library ([jsonpath.com](https://jsonpath.com/)). All patch JSON files must start with a key of `patch` that has an array of patches with their corresponding op (add, replace, remove), path, and value. The `remove` op doesn't require a value, just a path. See the examples below.
-```azurecli
-az sql mi-arc config patch
-```
-### Examples
-Ex 1 - Replace the port of a single endpoint with patch file.
-```azurecli
-az sql mi-arc config patch --path custom/spec.json --patch ./patch.json
-```
-Patch File Example (patch.json):
-```json
-{"patch":[{"op":"replace","path":"$.spec.endpoints[?(@.name=="Controller")].port","value":30080}]}
-```
-Ex 2 - Replace storage with patch file.
-```azurecli
-az sql mi-arc config patch --path custom/spec.json --patch ./patch.json
-```
-Patch File Example (patch.json):
-```json
-{"patch":[{"op":"replace","path":".spec.storage","value":{"accessMode":"ReadWriteMany","className":"managed-premium","size":"10Gi"}}]}
-```
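-Because `patch` takes an array, a single patch file can combine several operations; the `remove` op carries no value. A hedged sketch (the paths and values below are illustrative, not taken from a real spec):
-```json
-{"patch":[
-  {"op":"replace","path":".spec.storage.size","value":"20Gi"},
-  {"op":"remove","path":".spec.storage.className"}
-]}
-```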
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Sql Mi Arc Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-mi-arc-endpoint.md
- Title: az sql mi-arc endpoint reference
-description: Reference article for az sql mi-arc endpoint commands.
- Previously updated: 11/04/2021
-# az sql mi-arc endpoint
-
-View and manage SQL endpoints.
-## Commands
-| Command | Description|
-| --- | --- |
-[az sql mi-arc endpoint list](#az-sql-mi-arc-endpoint-list) | List the SQL endpoints.
-## az sql mi-arc endpoint list
-List the SQL endpoints.
-```azurecli
-az sql mi-arc endpoint list
-```
-### Examples
-List the endpoints for a SQL managed instance.
-```azurecli
-az sql mi-arc endpoint list -n sqlmi1
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Sql Mi Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-mi-arc.md
- Title: az sql mi-arc reference
-description: Reference article for az sql mi-arc commands.
- Previously updated: 11/04/2021
-# az sql mi-arc
-
-Manage Azure Arc-enabled SQL managed instances.
-## Commands
-| Command | Description|
-| --- | --- |
-[az sql mi-arc endpoint](reference-az-sql-mi-arc-endpoint.md) | View and manage SQL endpoints.
-[az sql mi-arc create](#az-sql-mi-arc-create) | Create a SQL managed instance.
-[az sql mi-arc update](#az-sql-mi-arc-update) | Update the configuration of a SQL managed instance.
-[az sql mi-arc delete](#az-sql-mi-arc-delete) | Delete a SQL managed instance.
-[az sql mi-arc show](#az-sql-mi-arc-show) | Show the details of a SQL managed instance.
-[az sql mi-arc get-mirroring-cert](#az-sql-mi-arc-get-mirroring-cert) | Retrieve the certificate of the availability group mirroring endpoint from the SQL managed instance and store it in a file.
-[az sql mi-arc upgrade](#az-sql-mi-arc-upgrade) | Upgrade SQL managed instance.
-[az sql mi-arc list](#az-sql-mi-arc-list) | List SQL managed instances.
-[az sql mi-arc config](reference-az-sql-mi-arc-config.md) | Configuration commands.
-## az sql mi-arc create
-Create a SQL managed instance. To set the password of the SQL managed instance, set the AZDATA_PASSWORD environment variable before you run the command.
-```azurecli
-az sql mi-arc create
-```
-### Examples
-Create an indirectly connected SQL managed instance.
-```azurecli
-az sql mi-arc create -n sqlmi1 --k8s-namespace namespace --use-k8s
-```
-Create an indirectly connected SQL managed instance with 3 replicas in an HA scenario.
-```azurecli
-az sql mi-arc create -n sqlmi2 --replicas 3 --k8s-namespace namespace --use-k8s
-```
-Create a directly connected SQL managed instance.
-```azurecli
-az sql mi-arc create --name name --resource-group group --location location --subscription subscription --custom-location custom-location
-```
-Create an indirectly connected SQL managed instance with Active Directory authentication.
-```azurecli
-az sql mi-arc create --name contososqlmi --k8s-namespace arc --ad-connector-name arcadc --ad-connector-namespace arc --keytab-secret arcuser-keytab-secret --ad-account-name arcuser --primary-dns-name contososqlmi-primary.contoso.local --primary-port-number 81433 --use-k8s
-```
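-As noted above, the instance password is read from the AZDATA_PASSWORD environment variable rather than passed as an argument. A minimal Bash sketch (the password value is a placeholder):
-```azurecli
-# Set the SQL managed instance password before running create (Bash syntax).
-export AZDATA_PASSWORD='<strong password>'
-az sql mi-arc create -n sqlmi1 --k8s-namespace namespace --use-k8s
-```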
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc update
-Update the configuration of a SQL managed instance.
-```azurecli
-az sql mi-arc update
-```
-### Examples
-Update the configuration of a SQL managed instance.
-```azurecli
-az sql mi-arc update --path ./spec.json -n sqlmi1 --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc delete
-Delete a SQL managed instance.
-```azurecli
-az sql mi-arc delete
-```
-### Examples
-Delete a SQL managed instance using the provided namespace.
-```azurecli
-az sql mi-arc delete --name sqlmi1 --k8s-namespace namespace --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc show
-Show the details of a SQL managed instance.
-```azurecli
-az sql mi-arc show
-```
-### Examples
-Show the details of an indirectly connected SQL managed instance.
-```azurecli
-az sql mi-arc show --name sqlmi1 --k8s-namespace namespace --use-k8s
-```
-Show the details of a directly connected SQL managed instance.
-```azurecli
-az sql mi-arc show --name sqlmi1 --resource-group resource-group
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc get-mirroring-cert
-Retrieve the certificate of the availability group mirroring endpoint from the SQL managed instance and store it in a file.
-```azurecli
-az sql mi-arc get-mirroring-cert
-```
-### Examples
-Retrieve the certificate of the availability group mirroring endpoint from sqlmi1 and store it in the file fileName1.
-```azurecli
-az sql mi-arc get-mirroring-cert -n sqlmi1 --cert-file fileName1
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc upgrade
-Upgrade a SQL managed instance to the version specified by --desired-version. If --desired-version isn't specified, the data controller's version is used.
-```azurecli
-az sql mi-arc upgrade
-```
-### Examples
-Upgrade SQL managed instance.
-```azurecli
-az sql mi-arc upgrade -n sqlmi1 -k arc --desired-version v1.1.0 --use-k8s
-```
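-To fall back to the data controller's version through the behavior described above, omit `--desired-version`:
-```azurecli
-az sql mi-arc upgrade -n sqlmi1 -k arc --use-k8s
-```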
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc list
-List SQL managed instances.
-```azurecli
-az sql mi-arc list
-```
-### Examples
-List SQL managed instances.
-```azurecli
-az sql mi-arc list --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Sql Midb Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-midb-arc.md
- Title: az sql midb-arc
-description: Reference article for az sql midb-arc commands.
- Previously updated: 11/04/2021
-# az sql midb-arc
-
-Manage databases for Azure Arc-enabled SQL managed instances.
-## Commands
-| Command | Description|
-| --- | --- |
-[az sql midb-arc restore](#az-sql-midb-arc-restore) | Restore a database to an Azure Arc-enabled SQL managed instance.
-## az sql midb-arc restore
-
-Restore a database to an Azure Arc-enabled SQL managed instance.
-
-```azurecli
-az sql midb-arc restore
-```
-### Examples
-Ex 1 - Restore a database using point-in-time restore.
-```azurecli
-az sql midb-arc restore --managed-instance sqlmi1 --name mysourcedb \
-  --dest-name mynewdb --time "2021-10-20T05:34:22Z" --k8s-namespace arc \
-  --use-k8s --dry-run
-```
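-The `--dry-run` flag validates the restore request without performing it; drop the flag to run the actual restore. For example:
-```azurecli
-az sql midb-arc restore --managed-instance sqlmi1 --name mysourcedb \
-  --dest-name mynewdb --time "2021-10-20T05:34:22Z" --k8s-namespace arc --use-k8s
-```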
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Jacobs Technology Inc.](https://www.jacobs.com/)| |[Jadex Strategic Group](https://jadexstrategic.com)| |[Jasper Solutions Inc.](https://jaspersolutions.com/)|
-|[JHC Technology, Inc.](https://www.effectual.com/jhc-technology/)|
|[Quiet Professionals](https://quietprofessionalsllc.com)| |[Quzara LLC](https://www.quzara.com)| |[Karpel Solutions](https://www.karpel.com/)|
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Any alert instance describes the resource that was affected and the cause of the
] ] }
+ ],
+ "dataSources": [
+ {
+ "resourceId": "/subscriptions/a5ea55e2-7482-49ba-90b3-60e7496dd873/resourcegroups/test/providers/microsoft.operationalinsights/workspaces/test",
+ "tables": [
+ "Heartbeat"
+ ]
+ }
] },
- "dataSources": [
- {
- "resourceId": "/subscriptions/a5ea55e2-7482-49ba-90b3-60e7496dd873/resourcegroups/test/providers/microsoft.operationalinsights/workspaces/test",
- "tables": [
- "Heartbeat"
- ]
- }
- ],
- "IncludedSearchResults": "True",
- "AlertType": "Metric measurement"
+ "IncludedSearchResults": "True",
+ "AlertType": "Metric measurement"
} } ```
azure-monitor Best Practices Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md
Last updated 10/18/2021
-# Azure Monitor best practices - Analyze and visualize data
-This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It describes builtin features in Azure Monitor for analyzing collected data and options for creating custom visualizations to meet the requirements of different users in your organization. Visualizations such as charts and graphs can help you analyze your monitoring data to drill down on issues and identify patterns.
+# Azure Monitor best practices: Analyze and visualize data
+This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It describes built-in features in Azure Monitor for analyzing collected data. It also describes options for creating custom visualizations to meet the requirements of different users in your organization. Visualizations like charts and graphs can help you analyze your monitoring data to drill down on issues and identify patterns.
+
+## Built-in analysis features
-## Builtin analysis features
The following sections describe Azure Monitor features that provide analysis of collected data without any configuration.
+
### Overview page
-Most Azure services will have an **Overview** page in the Azure portal that includes a **Monitor** section with charts showing recent charts for critical metrics. This is intended for owners of individual services to quickly assess the performance of the resource. Since this page is based on platform metrics that are collected automatically, there's no configuration required for this feature.
-### Metrics explorer
-Metrics explorer allows users to interactively work with metric data and create metric alerts. Most users will be able to use metrics explorer with minimal training but must be familiar with the metrics they want to analyze. There's no configuration required for this feature once data collection has been configured. Platform metrics for Azure resources will automatically be available. Guest metrics for virtual machines will be available when Azure Monitor agent has been deployed to them, and application metrics will be available when Application Insights has been configured.
+Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. Because this page is based on platform metrics that are collected automatically, configuration isn't required for this feature.
+### Metrics Explorer
+
+You can use Metrics Explorer to interactively work with metric data and create metric alerts. Typically, you need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. Configuration isn't required for this feature after data collection is configured. Platform metrics for Azure resources are automatically available. Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to them. Application metrics are available after Application Insights is configured.
### Log Analytics
-Log Analytics allows users to create log queries to interactively work with log data and create log query alerts. There is some training required for users to become familiar with the query language, although they can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. This allows users who are familiar with the query language to build queries for others in the organization.
+With Log Analytics, you can create log queries to interactively work with log data and create log query alerts. Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization.
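As an illustration, you can also run a log query from the Azure CLI; a sketch (the workspace GUID is a placeholder):
```azurecli
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "Heartbeat | where TimeGenerated > ago(1h) | summarize count() by Computer"
```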
## Workbooks
-[Workbooks](./visualize/workbooks-overview.md) are the visualization platform of choice for Azure providing a flexible canvas for data analysis and creation of rich visual reports. Workbooks enable you to tap into multiple data sources from across Azure and combine them into unified interactive experiences. They are especially useful to prepare E2E monitoring views across multiple Azure resources.
-Insights use prebuilt workbooks to present users with critical health and performance information for a particular service. You can access a gallery of additional workbooks in the **Workbooks** tab of the Azure Monitor menu and create custom workbooks to meet requirements of your different users.
+[Workbooks](./visualize/workbooks-overview.md) are the visualization platform of choice for Azure. They provide a flexible canvas for data analysis and the creation of rich visual reports. You can use workbooks to tap into multiple data sources from across Azure and combine them into unified interactive experiences. They're especially useful to prepare end-to-end monitoring views across multiple Azure resources.
+
+Insights use prebuilt workbooks to present you with critical health and performance information for a particular service. You can access a gallery of workbooks on the **Workbooks** tab of the Azure Monitor menu and create custom workbooks to meet the requirements of your different users.
![Diagram that shows screenshots of three pages from a workbook, including Analysis of Page Views, Usage, and Time Spent on Page.](media/visualizations/workbook.png)
-Common scenarios for workbooks include the following:
+Common scenarios for workbooks:
-- Create an interactive report with parameters where selecting an element in a table will dynamically update associated charts and visualizations.
+- Create an interactive report with parameters where selecting an element in a table dynamically updates associated charts and visualizations.
- Share a report with other users in your organization.
-- Collaborate with other workbook authors in your organization using a public GitHub-based template gallery.
+- Collaborate with other workbook authors in your organization by using a public GitHub-based template gallery.
+## Azure dashboards
-
-## Azure Dashboards
-[Azure dashboards](../azure-portal/azure-portal-dashboards.md) are useful in providing a single pane of glass over your Azure infrastructure and services. While a workbook provides richer functionality, a dashboard can combine Azure Monitor data with data from other Azure services.
+[Azure dashboards](../azure-portal/azure-portal-dashboards.md) are useful in providing a "single pane of glass" over your Azure infrastructure and services. While a workbook provides richer functionality, a dashboard can combine Azure Monitor data with data from other Azure services.
![Screenshot that shows an example of an Azure dashboard with customizable information.](media/visualizations/dashboard.png)
-Here's a video walkthrough on creating dashboards:
+Here's a video walk-through on how to create dashboards:
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4AslH]
-Common scenarios for dashboards include the following:
+Common scenarios for dashboards:
-- Create a dashboard combining a metrics graph and the results of a log query with operational data for related services.
-- Share a dashboard with service owners through integration with [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md).
-
+- Create a dashboard that combines a metrics graph and the results of a log query with operational data for related services.
+- Share a dashboard with service owners through integration with [Azure role-based access control](../role-based-access-control/overview.md).
-See [Create and share dashboards of Log Analytics data](visualize/tutorial-logs-dashboards.md) for details on creating a dashboard that includes data from Azure Monitor Logs. See [Create custom KPI dashboards using Azure Application Insights](app/tutorial-app-dashboards.md) for details on creating a dashboard that includes data from Application Insights.
+For details on how to create a dashboard that includes data from Azure Monitor Logs, see [Create and share dashboards of Log Analytics data](visualize/tutorial-logs-dashboards.md). For details on how to create a dashboard that includes data from Application Insights, see [Create custom key performance indicator (KPI) dashboards using Application Insights](app/tutorial-app-dashboards.md).
## Grafana
-[Grafana](https://grafana.com/) is an open platform that excels in operational dashboards. It's useful for detecting, isolating, and triaging operational incidents, combining visualizations of Azure and non-Azure data sources including on-premises, third party tools, and data stores in other clouds. Grafana has popular plugins and dashboard templates for APM tools such as Dynatrace, New Relic, and App Dynamics which enables users to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plugins for multi-cloud monitoring in a single pane of glass.
+[Grafana](https://grafana.com/) is an open platform that excels in operational dashboards. It's useful for:
+- Detecting, isolating, and triaging operational incidents.
+- Combining visualizations of Azure and non-Azure data sources. These sources include on-premises, third-party tools, and data stores in other clouds.
+Grafana has popular plug-ins and dashboard templates for APM tools such as Dynatrace, New Relic, and AppDynamics. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multi-cloud monitoring in a single pane of glass.
All versions of Grafana include the [Azure Monitor datasource plug-in](visualize/grafana-plugin.md) to visualize your Azure Monitor metrics and logs.
-Additionally, [Azure Managed Grafana](../managed-grafan) to get started.
+[Azure Managed Grafana](../managed-grafan) to get started.
![Screenshot that shows Grafana visualizations.](media/visualizations/grafana.png)
-
-Common scenarios for Grafana include the following:
+Common scenarios for Grafana:
- Combine time-series and event data in a single visualization panel.
- Create a dynamic dashboard based on user selection of dynamic variables.
-- Create a dashboard from a community created and supported template.
-- Create a vendor agnostic BCDR scenario that runs on any cloud provider or on-premises.
+- Create a dashboard from a community-created and community-supported template.
+- Create a vendor-agnostic business continuity and disaster recovery (BCDR) scenario that runs on any cloud provider or on-premises.
## Power BI
-[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) is useful for creating business-centric dashboards and reports, along with reports that analyze long-term KPI trends. You can [import the results of a log query](./logs/log-powerbi.md) into a Power BI dataset and then take advantage of its features, such as combining data from different sources and sharing reports on the web and mobile devices.
+
+[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) is useful for creating business-centric dashboards and reports, along with reports that analyze long-term KPI trends. You can [import the results of a log query](./logs/log-powerbi.md) into a Power BI dataset. Then you can take advantage of its features, such as combining data from different sources and sharing reports on the web and mobile devices.
![Screenshot that shows an example Power B I report for I T operations.](media/visualizations/power-bi.png)
-Common scenarios for Power BI include the following:
+Common scenarios for Power BI:
-- Rich visualizations.
-- Extensive interactivity, including zoom-in and cross-filtering.
-- Ease of sharing throughout your organization.
-- Integration with other data from multiple data sources.
-- Better performance with results cached in a cube.
+- Create rich visualizations.
+- Benefit from extensive interactivity, including zoom-in and cross-filtering.
+- Share easily throughout your organization.
+- Integrate data from multiple data sources.
+- Experience better performance with results cached in a cube.
## Azure Monitor partners
-Some Azure Monitor partners provide visualization functionality. For a list of partners that Microsoft has evaluated, see [Azure Monitor partner integrations](./partners.md). An Azure Monitor partner might provide out-of-the-box visualizations to save you time, although these solutions may have an additional cost.
+Some Azure Monitor partners provide visualization functionality. For a list of partners that Microsoft has evaluated, see [Azure Monitor partner integrations](./partners.md). An Azure Monitor partner might provide out-of-the-box visualizations to save you time, although these solutions might have an extra cost.
## Custom application
-You can then build your own custom websites and applications using metric and log data in Azure Monitor accessed through a REST API. This gives you complete flexibility in UI, visualization, interactivity, and features.
+You can build your own custom websites and applications by using metric and log data in Azure Monitor accessed through a REST API. This approach gives you complete flexibility in UI, visualization, interactivity, and features.
## Next steps
-- See [Alerts and automated actions](best-practices-alerts.md) to define alerts and automated actions from Azure Monitor data.
+
+To define alerts and automated actions from Azure Monitor data, see [Alerts and automated actions](best-practices-alerts.md).
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
Title: Azure Monitor data platform | Microsoft Docs
-description: Monitoring data collected by Azure Monitor is separated into metrics that are lightweight and capable of supporting near real-time scenarios and logs that are used for advanced analysis.
+description: Monitoring data collected by Azure Monitor is separated into metrics that are lightweight and capable of supporting near-real-time scenarios and logs that are used for advanced analysis.
documentationcenter: ''
# Azure Monitor data platform
-Enabling observability across today's complex computing environments running distributed applications that rely on both cloud and on-premises services, requires collection of operational data from every layer and every component of the distributed system. You need to be able to perform deep insights on this data and consolidate it into a single pane of glass with different perspectives to support the multitude of stakeholders in your organization.
+Today's complex computing environments run distributed applications that rely on both cloud and on-premises services. To enable observability, operational data must be collected from every layer and component of the distributed system. You need to be able to perform deep insights on this data and consolidate it with different perspectives so that it supports the range of stakeholders in your organization.
-[Azure Monitor](overview.md) collects and aggregates data from a variety of sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources, which gives you deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor.
+[Azure Monitor](overview.md) collects and aggregates data from various sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources. You can gain deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor.
-
-![Azure Monitor overview](media/data-platform/overview.png)
+![Screenshot that shows Azure Monitor overview.](media/data-platform/overview.png)
## Observability data in Azure Monitor
-Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. These are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services.
-Azure resources generate a significant amount of monitoring data. Azure Monitor consolidates this data along with monitoring data from other sources into either a Metrics or Logs platform. Each is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost effective manner. Insights in Azure Monitor such as [Application Insights](app/app-insights-overview.md) or [VM insights](vm/vminsights-overview.md) have analysis tools that allow you to focus on the particular monitoring scenario without having to understand the differences between the two types of data.
+Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. A monitoring tool must collect and analyze these three different kinds of data to provide sufficient observability of a monitored system.
+
+Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed by using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services.
+
+Azure resources generate a significant amount of monitoring data. Azure Monitor consolidates this data along with monitoring data from other sources into either a Metrics or Logs platform. Each platform is optimized for particular monitoring scenarios, and each one supports different features in Azure Monitor.
+Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost-effective manner. Insights in Azure Monitor such as [Application Insights](app/app-insights-overview.md) or [VM insights](vm/vminsights-overview.md) have analysis tools that allow you to focus on the particular monitoring scenario without having to understand the differences between the two types of data.
### Metrics
-[Metrics](essentials/data-platform-metrics.md) are numerical values that describe some aspect of a system at a particular point in time. They are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using a variety of algorithms, compared to other metrics, and analyzed for trends over time.
-Metrics in Azure Monitor are stored in a time-series database which is optimized for analyzing time-stamped data. This makes metrics particularly suited for alerting and fast detection of issues. They can tell you how your system is performing but typically need to be combined with logs to identify the root cause of issues.
+[Metrics](essentials/data-platform-metrics.md) are numerical values that describe some aspect of a system at a particular point in time. They're collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated by using various algorithms. They can be compared to other metrics and analyzed for trends over time.
-Metrics are available for interactive analysis in the Azure portal with [Azure Metrics Explorer](essentials/metrics-getting-started.md). They can be added to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data and used for near-real time [alerting](alerts/alerts-metric.md).
+Metrics in Azure Monitor are stored in a time-series database that's optimized for analyzing time-stamped data. Time-stamping makes metrics well suited for alerting and fast detection of issues. Metrics can tell you how your system is performing but typically must be combined with logs to identify the root cause of issues.
-Read more about Azure Monitor Metrics including their sources of data in [Metrics in Azure Monitor](essentials/data-platform-metrics.md).
+Metrics are available for interactive analysis in the Azure portal with [Azure Metrics Explorer](essentials/metrics-getting-started.md). They can be added to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data and used for near-real-time [alerting](alerts/alerts-metric.md).
+
+To read more about Azure Monitor metrics, including their sources of data, see [Metrics in Azure Monitor](essentials/data-platform-metrics.md).
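For example, platform metrics can be retrieved from the CLI; a sketch (the resource ID is a placeholder):
```azurecli
az monitor metrics list \
  --resource "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>" \
  --metric "Percentage CPU" --interval PT5M
```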
### Logs
-[Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.
-Logs in Azure Monitor are stored in a Log Analytics workspace that's based on [Azure Data Explorer](/azure/data-explorer/) which provides a powerful analysis engine and [rich query language](/azure/kusto/query/). Logs typically provide enough information to provide complete context of the issue being identified and are valuable for identifying root case of issues.
+[Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and might be structured or freeform text with a timestamp. They might be created sporadically as events in the environment generate log entries. A system under heavy load typically generates more log volume.
-> [!NOTE]
-> It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription level events in Azure are written to an [activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations. Azure Monitor Logs is a log data platform that collects activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources.
+Logs in Azure Monitor are stored in a Log Analytics workspace that's based on [Azure Data Explorer](/azure/data-explorer/), which provides a powerful analysis engine and [rich query language](/azure/kusto/query/). Logs typically provide enough information to provide complete context of the issue being identified and are valuable for identifying the root cause of issues.
+> [!NOTE]
+> It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription-level events in Azure are written to an [Activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations.
+>
+>Azure Monitor Logs is a log data platform that collects Activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources.
- You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal or add the results to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data. You can also create [log alerts](alerts/alerts-log.md) which will trigger an alert based on the results of a schedule query.
+ You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal. You can also add the results to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data. You can create [log alerts](alerts/alerts-log.md), which will trigger an alert based on the results of a schedule query.
-Read more about Azure Monitor Logs including their sources of data in [Logs in Azure Monitor](logs/data-platform-logs.md).
+Read more about Azure Monitor logs including their sources of data in [Logs in Azure Monitor](logs/data-platform-logs.md).
### Distributed traces
-Traces are series of related events that follow a user request through a distributed system. They can be used to determine behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components.
-Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md), and trace data is stored with other application log data collected by Application Insights. This makes it available to the same analysis tools as other log data including log queries, dashboards, and alerts.
+Traces are series of related events that follow a user request through a distributed system. They can be used to determine the behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components.
-Read more about distributed tracing at [What is Distributed Tracing?](app/distributed-tracing.md).
+Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md). Trace data is stored with other application log data collected by Application Insights. This way it's available to the same analysis tools as other log data including log queries, dashboards, and alerts.
+Read more about distributed tracing at [What is distributed tracing?](app/distributed-tracing.md).
-## Compare Azure Monitor Metrics and Logs
+## Compare Azure Monitor metrics and logs
-The following table compares Metrics and Logs in Azure Monitor.
+The following table compares metrics and logs in Azure Monitor.
| Attribute | Metrics | Logs | |:|:|:|
-| Benefits | Lightweight and capable of near-real time scenarios such as alerting. Ideal for fast detection of issues. | Analyzed with rich query language. Ideal for deep analysis and identifying root cause. |
-| Data | Numerical values only | Text or numeric data |
-| Structure | Standard set of properties including sample time, resource being monitored, a numeric value. Some metrics include multiple dimensions for further definition. | Unique set of properties depending on the log type. |
-| Collection | Collected at regular intervals. | May be collected sporadically as events trigger a record to be created. |
-| View in Azure portal | Metrics Explorer | Log Analytics |
-| Data sources include | Platform metrics collected from Azure resources.<br>Applications monitored by Application Insights.<br>Custom defined by application or API. | Application and resource logs.<br>Monitoring solutions.<br>Agents and VM extensions.<br>Application requests and exceptions.<br>Microsoft Defender for Cloud.<br>Data Collector API. |
+| Benefits | Lightweight and capable of near-real-time scenarios such as alerting. Ideal for fast detection of issues. | Analyzed with rich query language. Ideal for deep analysis and identifying root cause. |
+| Data | Numerical values only. | Text or numeric data. |
+| Structure | Standard set of properties including sample time, resource being monitored, and numeric value. Some metrics include multiple dimensions for further definition. | Unique set of properties depending on the log type. |
+| Collection | Collected at regular intervals. | Might be collected sporadically as events trigger a record to be created. |
+| View in the Azure portal | Metrics Explorer. | Log Analytics. |
+| Data sources include | Platform metrics collected from Azure resources.<br>Applications monitored by Application Insights.<br>Custom defined by application or API. | Application and resource logs.<br>Monitoring solutions.<br>Agents and VM extensions.<br>Application requests and exceptions.<br>Microsoft Defender for Cloud.<br>Data Collector API. |
## Collect monitoring data
-Different [sources of data for Azure Monitor](agents/data-sources.md) will write to either a Log Analytics workspace (Logs) or the Azure Monitor metrics database (Metrics) or both. Some sources will write directly to these data stores, while others may write to another location such as Azure storage and require some configuration to populate logs or metrics.
-See [Metrics in Azure Monitor](essentials/data-platform-metrics.md) and [Logs in Azure Monitor](logs/data-platform-logs.md) for a listing of different data sources that populate each type.
+Different [sources of data for Azure Monitor](agents/data-sources.md) will write to either a Log Analytics workspace (Logs) or the Azure Monitor metrics database (Metrics) or both. Some sources will write directly to these data stores. Others might write to another location, such as Azure Storage, and require some configuration to populate logs or metrics.
+For a listing of different data sources that populate each type, see [Metrics in Azure Monitor](essentials/data-platform-metrics.md) and [Logs in Azure Monitor](logs/data-platform-logs.md).
## Stream data to external systems
-In addition to using the tools in Azure to analyze monitoring data, you may have a requirement to forward it to an external tool such as a security information and event management (SIEM) product. This forwarding is typically done directly from monitored resources through [Azure Event Hubs](../event-hubs/index.yml). Some sources can be configured to send data directly to an event hub while you can use another process such as a Logic App to retrieve the required data. See [Stream Azure monitoring data to an event hub for consumption by an external tool](essentials/stream-monitoring-data-event-hubs.md) for details.
+In addition to using the tools in Azure to analyze monitoring data, you might have a requirement to forward it to an external tool like a security information and event management product. This forwarding is typically done directly from monitored resources through [Azure Event Hubs](../event-hubs/index.yml).
+Some sources can be configured to send data directly to an event hub while you can use another process, such as a logic app, to retrieve the required data. For more information, see [Stream Azure monitoring data to an event hub for consumption by an external tool](essentials/stream-monitoring-data-event-hubs.md).
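As a sketch of this configuration (the resource and authorization-rule IDs are placeholders), a diagnostic setting can stream a resource's metrics to an event hub:
```azurecli
az monitor diagnostic-settings create \
  --name "stream-to-eventhub" \
  --resource "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<kv>" \
  --event-hub "myhub" \
  --event-hub-rule "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<ns>/authorizationRules/RootManageSharedAccessKey" \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```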
## Next steps
-- Read more about [Metrics in Azure Monitor](essentials/data-platform-metrics.md).
-- Read more about [Logs in Azure Monitor](logs/data-platform-logs.md).
+- Read more about [metrics in Azure Monitor](essentials/data-platform-metrics.md).
+- Read more about [logs in Azure Monitor](logs/data-platform-logs.md).
- Learn about the [monitoring data available](agents/data-sources.md) for different resources in Azure.
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-security.md
Title: Azure Monitor Logs data security | Microsoft Docs
description: Learn about how Azure Monitor Logs protects your privacy and secures your data.
Last updated 03/21/2022
# Azure Monitor Logs data security
-This document is intended to provide information specific to [Azure Monitor Logs](../logs/data-platform-logs.md) to supplement the information on [Azure Trust Center](https://www.microsoft.com/en-us/trust-center?rtc=1).
-This article explains how log data is collected, processed, and secured by Azure Monitor. You can use agents to connect to the web service, use System Center Operations Manager to collect operational data, or retrieve data from Azure diagnostics for use by Azure Monitor.
+This article explains how Azure Monitor collects, processes, and secures log data, and describes security features you can use to further secure your Azure Monitor environment. The information in this article is specific to [Azure Monitor Logs](../logs/data-platform-logs.md) and supplements the information on [Azure Trust Center](https://www.microsoft.com/en-us/trust-center?rtc=1).
-Azure Monitor Logs manages your cloud-based data securely by using the following methods:
+Azure Monitor Logs manages your cloud-based data securely using:
* Data segregation * Data retention
Azure Monitor Logs manages your cloud-based data securely by using the following
* Compliance * Security standards certifications
-You can also use additional security features built into Azure Monitor. These features require more administrator management.
-* Customer-managed (security) keys
-* Azure Private Storage
-* Private Link networking
-* Azure support access limits set by Azure Lockbox
- Contact us with any questions, suggestions, or issues about any of the following information, including our security policies at [Azure support options](https://azure.microsoft.com/support/options/).
## Sending data securely using TLS 1.2
To ensure the security of data in transit to Azure Monitor, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable, and while they still currently work to allow backward compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
-The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30th, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents cannot communicate over at least TLS 1.2 you would not be able to send data to Azure Monitor Logs.
+The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents can't communicate over at least TLS 1.2 you won't be able to send data to Azure Monitor Logs.
We recommend you do NOT explicitly set your agent to only use TLS 1.2 unless absolutely necessary. Allowing the agent to automatically detect, negotiate, and take advantage of future security standards is preferable. Otherwise, you may miss the added security of the newer standards and possibly experience problems if TLS 1.2 is ever deprecated in favor of those newer standards.
We recommend you do NOT explicitly set your agent to only use TLS 1.2 unless absol
|Platform/Language | Support | More Information |
| --- | --- | --- |
|Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.|
-| Windows 8.0 - 10 | Supported, and enabled by default. | To confirm that you are still using the [default settings](/windows-server/security/tls/tls-registry-settings). |
-| Windows Server 2012 - 2016 | Supported, and enabled by default. | To confirm that you are still using the [default settings](/windows-server/security/tls/tls-registry-settings) |
+| Windows 8.0 - 10 | Supported, and enabled by default. | Confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings). |
+| Windows Server 2012 - 2016 | Supported, and enabled by default. | Confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings). |
| Windows 7 SP1 and Windows Server 2008 R2 SP1 | Supported, but not enabled by default. | See the [Transport Layer Security (TLS) registry settings](/windows-server/security/tls/tls-registry-settings) page for details on how to enable. |
## Data segregation
-After your data is ingested by Azure Monitor, the data is kept logically separate on each component throughout the service. All data is tagged per workspace. This tagging persists throughout the data lifecycle, and it is enforced at each layer of the service. Your data is stored in a dedicated database in the storage cluster in the region you have selected.
+After your data is ingested by Azure Monitor, the data is kept logically separate on each component throughout the service. All data is tagged per workspace. This tagging persists throughout the data lifecycle, and it's enforced at each layer of the service. Your data is stored in a dedicated database in the storage cluster in the region you've selected.
## Data retention
Indexed log search data is stored and retained according to your pricing plan. For more information, see [Log Analytics Pricing](https://azure.microsoft.com/pricing/details/log-analytics/).
While very rare, Microsoft will notify each customer within one day if significa
For more information about how Microsoft responds to security incidents, see [Microsoft Azure Security Response in the Cloud](https://gallery.technet.microsoft.com/Azure-Security-Response-in-dd18c678/file/150826/4/Microsoft%20Azure%20Security%20Response%20in%20the%20cloud.pdf).
## Compliance
-The Azure Monitor software development and service team's information security and governance program supports its business requirements and adheres to laws and regulations as described at [Microsoft Azure Trust Center](https://azure.microsoft.com/support/trust-center/) and [Microsoft Trust Center Compliance](https://www.microsoft.com/en-us/trustcenter/compliance/default.aspx). How Azure Monitor Logs establishes security requirements, identifies security controls, manages, and monitors risks are also described there. Annually, we review polices, standards, procedures, and guidelines.
+The Azure Monitor software development and service team's information security and governance program supports its business requirements and adheres to laws and regulations as described at [Microsoft Azure Trust Center](https://azure.microsoft.com/support/trust-center/) and [Microsoft Trust Center Compliance](https://www.microsoft.com/en-us/trustcenter/compliance/default.aspx). How Azure Monitor Logs establishes security requirements, identifies security controls, manages, and monitors risks are also described there. Annually, we review policies, standards, procedures, and guidelines.
Each development team member receives formal application security training. Internally, we use a version control system for software development. Each software project is protected by the version control system.
-Microsoft has a security and compliance team that oversees and assesses all services in Microsoft. Information security officers make up the team and they are not associated with the engineering teams that develops Log Analytics. The security officers have their own management chain and conduct independent assessments of products and services to ensure security and compliance.
+Microsoft has a security and compliance team that oversees and assesses all services at Microsoft. Information security officers make up the team and they are not associated with the engineering teams that develop Log Analytics. The security officers have their own management chain and conduct independent assessments of products and services to ensure security and compliance.
Microsoft's board of directors is notified by an annual report about all information security programs at Microsoft.
Azure Log Analytics meets the following requirements:
## Cloud computing security data flow
-The following diagram shows a cloud security architecture as the flow of information from your company and how it is secured as is moves to Azure Monitor, ultimately seen by you in the Azure portal. More information about each step follows the diagram.
+The following diagram shows a cloud security architecture as the flow of information from your company and how it's secured as it moves to Azure Monitor, ultimately seen by you in the Azure portal. More information about each step follows the diagram.
![Image of Azure Monitor Logs data collection and security](./media/data-security/log-analytics-data-security-diagram.png)
-## 1. Sign up for Azure Monitor and collect data
+### 1. Sign up for Azure Monitor and collect data
For your organization to send data to Azure Monitor Logs, you configure a Windows or Linux agent running on Azure virtual machines, or on virtual or physical computers in your environment or other cloud provider. If you use Operations Manager, from the management group you configure the Operations Manager agent. Users (which might be you, other individual users, or a group of people) create one or more Log Analytics workspaces, and register agents by using one of the following accounts: * [Organizational ID](../../active-directory/fundamentals/sign-up-organization.md)
For your organization to send data to Azure Monitor Logs, you configure a Window
A Log Analytics workspace is where data is collected, aggregated, analyzed, and presented. A workspace is primarily used as a means to partition data, and each workspace is unique. For example, you might want to have your production data managed with one workspace and your test data managed with another workspace. Workspaces also help an administrator control user access to the data. Each workspace can have multiple user accounts associated with it, and each user account can access multiple Log Analytics workspaces. You create workspaces based on datacenter region.
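If you script workspace creation, a minimal PowerShell sketch with the Az.OperationalInsights module might look like the following example. The resource group, workspace name, region, and pricing tier here are placeholder values, not settings prescribed by this article.

```powershell
# Minimal sketch: create a Log Analytics workspace in a chosen region.
# "ContosoRG", "ContosoWorkspace", and "eastus" are placeholder values.
New-AzOperationalInsightsWorkspace `
    -ResourceGroupName "ContosoRG" `
    -Name "ContosoWorkspace" `
    -Location "eastus" `
    -Sku "PerGB2018"
```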
-For Operations Manager, the Operations Manager management group establishes a connection with the Azure Monitor service. You then configure which agent-managed systems in the management group are allowed to collect and send data to the service. Depending on the solution you have enabled, data from these solutions are either sent directly from an Operations Manager management server to the Azure Monitor service, or because of the volume of data collected by the agent-managed system, are sent directly from the agent to the service. For systems not monitored by Operations Manager, each connects securely to the Azure Monitorservice directly.
+For Operations Manager, the Operations Manager management group establishes a connection with the Azure Monitor service. You then configure which agent-managed systems in the management group are allowed to collect and send data to the service. Depending on the solution you've enabled, data from these solutions is either sent directly from an Operations Manager management server to the Azure Monitor service or, because of the volume of data collected by the agent-managed system, sent directly from the agent to the service. For systems not monitored by Operations Manager, each connects securely to the Azure Monitor service directly.
All communication between connected systems and the Azure Monitor service is encrypted. The TLS (HTTPS) protocol is used for encryption. The Microsoft SDL process is followed to ensure Log Analytics is up to date with the most recent advances in cryptographic protocols.
-Each type of agent collects log data for Azure Monitor. The type of data that is collected is depends on the configuration of your workspace and other features of Azure Monitor.
+Each type of agent collects log data for Azure Monitor. The type of data that is collected depends on the configuration of your workspace and other features of Azure Monitor.
-## 2. Send data from agents
+### 2. Send data from agents
You register all agent types with an enrollment key, and a secure connection is established between the agent and the Azure Monitor service by using certificate-based authentication and TLS with port 443. Azure Monitor uses a secret store to generate and maintain keys. Private keys are rotated every 90 days, are stored in Azure, and are managed by Azure operations staff, who follow strict regulatory and compliance practices. With Operations Manager, the management group registered with a Log Analytics workspace establishes a secure HTTPS connection with an Operations Manager management server.
With any agent reporting to an Operations Manager management group that is integ
Cached data from the Windows agent or management server agent is protected by the operating system's credential store. If the service can't process the data after two hours, the agents queue the data. If the queue becomes full, the agent starts dropping data types, starting with performance data. The agent queue limit is a registry key, so you can modify it if necessary. Collected data is compressed and sent to the service, bypassing the Operations Manager management group databases, so it doesn't add any load to them. After the collected data is sent, it's removed from the cache.
-As described above, data from the management server or direct-connected agents is sent over TLS to Microsoft Azure datacenters. Optionally, you can use ExpressRoute to provide additional security for the data. ExpressRoute is a way to directly connect to Azure from your existing WAN network, such as a multi-protocol label switching (MPLS) VPN, provided by a network service provider. For more information, see [ExpressRoute](https://azure.microsoft.com/services/expressroute/).
+As described above, data from the management server or direct-connected agents is sent over TLS to Microsoft Azure datacenters. Optionally, you can use ExpressRoute to provide extra security for the data. ExpressRoute is a way to directly connect to Azure from your existing WAN network, such as a multi-protocol label switching (MPLS) VPN, provided by a network service provider. For more information, see [ExpressRoute](https://azure.microsoft.com/services/expressroute/).
-## 3. The Azure Monitor service receives and processes data
+### 3. The Azure Monitor service receives and processes data
The Azure Monitor service ensures that incoming data is from a trusted source by validating certificates and the data integrity with Azure authentication. The unprocessed raw data is then stored in an Azure Event Hub in the region the data will eventually be stored at rest. The type of data that is stored depends on the types of solutions that were imported and used to collect data. Then, the Azure Monitor service processes the raw data and ingests it into the database. The retention period of collected data stored in the database depends on the selected pricing plan. For the *Free* tier, collected data is available for seven days. For the *Paid* tier, collected data is available for 31 days by default, but can be extended to 730 days. Data is stored encrypted at rest in Azure storage, to ensure data confidentiality, and the data is replicated within the local region using locally redundant storage (LRS). The last two weeks of data are also stored in SSD-based cache and this cache is encrypted. Data in database storage cannot be altered once ingested but can be deleted via [*purge* API path](personal-data-mgmt.md#delete). Although data cannot be altered, some certifications require that data is kept immutable and cannot be changed or deleted in storage. Data immutability can be achieved using [data export](logs-data-export.md) to a storage account that is configured as [immutable storage](../../storage/blobs/immutable-policy-configure-version-scope.md).
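For reference, a purge is a REST call against the workspace resource. The following sketch assumes placeholder subscription, resource group, and workspace names, a hypothetical filter on the Computer column, and an API version that might differ in your environment:

```powershell
# Sketch only: ask the service to purge Heartbeat rows where Computer == "host01".
# The subscription ID, resource group, workspace name, and filter are placeholders.
$body = @{
    table   = "Heartbeat"
    filters = @(@{ column = "Computer"; operator = "=="; value = "host01" })
} | ConvertTo-Json -Depth 4

Invoke-AzRestMethod -Method POST `
    -Path "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>/purge?api-version=2020-08-01" `
    -Payload $body
```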
-## 4. Use Azure Monitor to access the data
+### 4. Use Azure Monitor to access the data
To access your Log Analytics workspace, you sign in to the Azure portal by using the organizational account or Microsoft account that you set up previously. All traffic between the portal and the Azure Monitor service is sent over a secure HTTPS channel. When you use the portal, a session ID is generated on the user client (web browser), and data is stored in a local cache until the session is terminated. When the session ends, the cache is deleted. Client-side cookies, which don't contain personally identifiable information, aren't automatically removed. Session cookies are marked HTTPOnly and are secured. After a predetermined idle period, the Azure portal session is terminated.
-## Additional Security features
+## Additional security features
You can use these additional security features to further secure your Azure Monitor environment. These features require more administrator management. - [Customer-managed (security) keys](../logs/customer-managed-keys.md) - You can use customer-managed keys to encrypt data sent to your Log Analytics workspaces. It requires use of Azure Key Vault. -- [Private / customer-managed Storage](./private-storage.md) - Manage your personally encrypted storage account and tell Azure Monitor to use it to store monitoring data
+- [Private/customer-managed storage](./private-storage.md) - Manage your personally encrypted storage account and tell Azure Monitor to use it to store monitoring data.
- [Private Link networking](./private-link-security.md) - Azure Private Link allows you to securely link Azure PaaS services (including Azure Monitor) to your virtual network using private endpoints. - [Azure customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-preview) - Customer Lockbox for Microsoft Azure provides an interface for customers to review and approve or reject customer data access requests. It is used in cases where a Microsoft engineer needs to access customer data during a support request.
+## Tamper-proofing and immutability
+
+Azure Monitor is an append-only data platform that includes provisions to delete data for compliance purposes. [Set a lock on your Log Analytics workspace](../../azure-resource-manager/management/lock-resources.md) to block all activities that could delete data: purge, table delete, and table- or workspace-level data retention changes.
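As a minimal sketch of such a lock, assuming the Az.Resources module and placeholder resource names:

```powershell
# Sketch: place a delete lock on a Log Analytics workspace.
# "ContosoRG" and "ContosoWorkspace" are placeholders.
New-AzResourceLock -LockName "BlockDataDeletion" `
    -LockLevel CanNotDelete `
    -ResourceGroupName "ContosoRG" `
    -ResourceName "ContosoWorkspace" `
    -ResourceType "Microsoft.OperationalInsights/workspaces"
```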
+
+To tamper-proof your monitoring solution, we recommend you [export data to an immutable storage solution](../../storage/blobs/immutable-storage-overview.md).
## Next steps

* [See the different kinds of data that you can collect in Azure Monitor](../monitor-reference.md).
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Last updated 04/27/2022
# Azure Monitor overview
-Azure Monitor helps you maximize the availability and performance of your applications and services. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. This information helps you understand how your applications are performing and proactively identify issues affecting them and the resources they depend on.
+Azure Monitor helps you maximize the availability and performance of your applications and services. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. This information helps you understand how your applications are performing and proactively identify issues that affect them and the resources they depend on.
-Just a few examples of what you can do with Azure Monitor include:
+A few examples of what you can do with Azure Monitor include:
- Detect and diagnose issues across applications and dependencies with [Application Insights](app/app-insights-overview.md). - Correlate infrastructure issues with [VM insights](vm/vminsights-overview.md) and [Container insights](containers/container-insights-overview.md). - Drill into your monitoring data with [Log Analytics](logs/log-query-overview.md) for troubleshooting and deep diagnostics. - Support operations at scale with [automated actions](alerts/alerts-action-rules.md). - Create visualizations with Azure [dashboards](visualize/tutorial-logs-dashboards.md) and [workbooks](visualize/workbooks-overview.md).-- Collect data from [monitored resources](./monitor-reference.md) using [Azure Monitor Metrics](./essentials/data-platform-metrics.md).-- Investigate change data for routine monitoring or for triaging incidents using [Change Analysis](./change/change-analysis.md).
+- Collect data from [monitored resources](./monitor-reference.md) by using [Azure Monitor Metrics](./essentials/data-platform-metrics.md).
+- Investigate change data for routine monitoring or for triaging incidents by using [Change Analysis](./change/change-analysis.md).
[!INCLUDE [azure-lighthouse-supported-service](../../includes/azure-lighthouse-supported-service.md)]

## Overview
-The following diagram gives a high-level view of Azure Monitor. At the center of the diagram are the data stores for metrics and logs, which are the two fundamental types of data used by Azure Monitor. On the left are the [sources of monitoring data](agents/data-sources.md) that populate these [data stores](data-platform.md). On the right are the different functions that Azure Monitor performs with this collected data. This includes such actions as analysis, alerting, and streaming to external systems.
+The following diagram gives a high-level view of Azure Monitor. At the center of the diagram are the data stores for metrics and logs, which are the two fundamental types of data used by Azure Monitor. On the left are the [sources of monitoring data](agents/data-sources.md) that populate these [data stores](data-platform.md). On the right are the different functions that Azure Monitor performs with this collected data. Actions include analysis, alerting, and streaming to external systems.
:::image type="content" source="media/overview/azure-monitor-overview-optm.svg" alt-text="Diagram that shows an overview of Azure Monitor." border="false" lightbox="media/overview/azure-monitor-overview-optm.svg":::
-The video below uses an earlier version of the diagram above, but its explanations are still relevant.
+The following video uses an earlier version of the preceding diagram, but its explanations are still relevant.
+ > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4qXeL]
-## Monitoring data platform
+## Monitor data platform
-All data collected by Azure Monitor fits into one of two fundamental types, [metrics and logs](data-platform.md). [Metrics](essentials/data-platform-metrics.md) are numerical values that describe some aspect of a system at a particular point in time. They are lightweight and capable of supporting near real-time scenarios. [Logs](logs/data-platform-logs.md) contain different kinds of data organized into records with different sets of properties for each type. Telemetry such as events and traces are stored as logs in addition to performance data so that it can all be combined for analysis.
+All data collected by Azure Monitor fits into one of two fundamental types, [metrics and logs](data-platform.md). [Metrics](essentials/data-platform-metrics.md) are numerical values that describe some aspect of a system at a particular point in time. They're lightweight and capable of supporting near-real-time scenarios. [Logs](logs/data-platform-logs.md) contain different kinds of data organized into records with different sets of properties for each type. Telemetry such as events and traces is stored as logs in addition to performance data so that it can all be combined for analysis.
-For many Azure resources, you'll see data collected by Azure Monitor right in their Overview page in the Azure portal. Have a look at any virtual machine for example, and you'll see several charts displaying performance metrics. Click on any of the graphs to open the data in [metrics explorer](essentials/metrics-charts.md) in the Azure portal, which allows you to chart the values of multiple metrics over time. You can view the charts interactively or pin them to a dashboard to view them with other visualizations.
+For many Azure resources, you'll see data collected by Azure Monitor right in their overview page in the Azure portal. Look at any virtual machine (VM), for example, and you'll see several charts that display performance metrics. Select any of the graphs to open the data in [Metrics Explorer](essentials/metrics-charts.md) in the Azure portal. With Metrics Explorer, you can chart the values of multiple metrics over time. You can view the charts interactively or pin them to a dashboard to view them with other visualizations.
-![Diagram shows Metrics data flowing into the Metrics Explorer to use in visualizations.](media/overview/metrics.png)
+![Diagram that shows metrics data flowing into Metrics Explorer to use in visualizations.](media/overview/metrics.png)
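The same metric values are also available programmatically. As an illustrative sketch (the resource ID below is a placeholder), the Az.Monitor PowerShell module can retrieve them:

```powershell
# Sketch: read the last hour of the "Percentage CPU" metric for a VM
# at 5-minute granularity. The resource ID is a placeholder.
$vmId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"

Get-AzMetric -ResourceId $vmId `
    -MetricName "Percentage CPU" `
    -TimeGrain 00:05:00 `
    -StartTime (Get-Date).AddHours(-1) `
    -EndTime (Get-Date)
```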
-Log data collected by Azure Monitor can be analyzed with [queries](logs/log-query-overview.md) to quickly retrieve, consolidate, and analyze collected data. You can create and test queries using [Log Analytics](./logs/log-query-overview.md) in the Azure portal. You can then either directly analyze the data using different tools or save queries for use with [visualizations](best-practices-analysis.md) or [alert rules](alerts/alerts-overview.md).
+Log data collected by Azure Monitor can be analyzed with [queries](logs/log-query-overview.md) to quickly retrieve, consolidate, and analyze collected data. You can create and test queries by using [Log Analytics](./logs/log-query-overview.md) in the Azure portal. You can then either directly analyze the data by using different tools or save queries for use with [visualizations](best-practices-analysis.md) or [alert rules](alerts/alerts-overview.md).
-Azure Monitor uses a version of the [Kusto query language](/azure/kusto/query/) that is suitable for simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. You can quickly learn the query language using [multiple lessons](logs/get-started-queries.md). Particular guidance is provided to users who are already familiar with [SQL](/azure/data-explorer/kusto/query/sqlcheatsheet) and [Splunk](/azure/data-explorer/kusto/query/splunk-cheat-sheet).
+Azure Monitor uses a version of the [Kusto Query Language](/azure/kusto/query/) that's suitable for simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. You can quickly learn the query language by using [multiple lessons](logs/get-started-queries.md). Particular guidance is provided to users who are already familiar with [SQL](/azure/data-explorer/kusto/query/sqlcheatsheet) and [Splunk](/azure/data-explorer/kusto/query/splunk-cheat-sheet).
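You can also run the same queries outside the portal. A minimal sketch, assuming the Az.OperationalInsights module and a placeholder workspace ID (the workspace's customer ID GUID):

```powershell
# Sketch: count heartbeat records per computer over the last day.
# "<workspace-customer-id>" is a placeholder GUID.
$query = "Heartbeat | where TimeGenerated > ago(1d) | summarize count() by Computer"

$response = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-customer-id>" -Query $query
$response.Results
```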
-![Diagram shows Logs data flowing into Log Analytics for analysis.](media/overview/logs.png)
+![Diagram that shows logs data flowing into Log Analytics for analysis.](media/overview/logs.png)
-Change Analysis not only alerts you to live site issues, outages, component failures, or other change data, but it provides insights into those application changes, increases observability, and reduces the mean time to repair (MTTR). You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by navigating to the Change Analysis service via the Azure portal. For web app in-guest changes, you can enable Change Analysis using the [Diagnose and solve problems tool](./change/change-analysis-visualizations.md#diagnose-and-solve-problems-tool).
+Change Analysis alerts you to live site issues, outages, component failures, or other change data. It also provides insights into those application changes, increases observability, and reduces the mean time to repair. You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by going to Change Analysis via the Azure portal. For web app in-guest changes, you can enable Change Analysis by using the [Diagnose and solve problems tool](./change/change-analysis-visualizations.md#diagnose-and-solve-problems-tool).
-Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time, detecting managed identities, platform OS upgrades, and hostname changes. Change Analysis securely queries IP Configuration rules, TLS settings, and extension versions to provide more detailed change data.
+Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time. It detects managed identities, platform operating system upgrades, and hostname changes. Change Analysis securely queries IP configuration rules, TLS settings, and extension versions to provide more detailed change data.
## What data does Azure Monitor collect?
-Azure Monitor can collect data from a [variety of sources](monitor-reference.md). This ranges from your application, any operating system and services it relies on, down to the platform itself. Azure Monitor collects data from each of the following tiers:
-- **Application monitoring data**: Data about the performance and functionality of the code you have written, regardless of its platform.-- **Guest OS monitoring data**: Data about the operating system on which your application is running. This could be running in Azure, another cloud, or on-premises.
+Azure Monitor can collect data from [sources](monitor-reference.md) that range from your application to any operating system and services it relies on, down to the platform itself. Azure Monitor collects data from each of the following tiers:
+
+- **Application monitoring data**: Data about the performance and functionality of the code you've written, regardless of its platform.
+- **Guest operating system monitoring data**: Data about the operating system on which your application is running. The system could be running in Azure, another cloud, or on-premises.
- **Azure resource monitoring data**: Data about the operation of an Azure resource. For a complete list of the resources that have metrics or logs, see [What can you monitor with Azure Monitor?](monitor-reference.md#azure-supported-services).-- **Azure subscription monitoring data**: Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself.
+- **Azure subscription monitoring data**: Data about the operation and management of an Azure subscription, and data about the health and operation of Azure itself.
- **Azure tenant monitoring data**: Data about the operation of tenant-level Azure services, such as Azure Active Directory.-- **Azure resource change data**: Data about changes within your Azure resource(s) and how to address and triage incidents and issues.
+- **Azure resource change data**: Data about changes within your Azure resources and how to address and triage incidents and issues.
-As soon as you create an Azure subscription and start adding resources such as virtual machines and web apps, Azure Monitor starts collecting data. [Activity logs](essentials/platform-logs-overview.md) record when resources are created or modified. [Metrics](essentials/data-platform-metrics.md) tell you how the resource is performing and the resources that it's consuming.
+As soon as you create an Azure subscription and add resources such as VMs and web apps, Azure Monitor starts collecting data. [Activity logs](essentials/platform-logs-overview.md) record when resources are created or modified. [Metrics](essentials/data-platform-metrics.md) tell you how the resource is performing and the resources that it's consuming.
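For example, one way (a sketch, with a placeholder resource group) to review that automatically collected data is to pull recent Activity log entries with the Az.Monitor module:

```powershell
# Sketch: list Activity log events from the last 24 hours for one resource group.
# "ContosoRG" is a placeholder.
Get-AzActivityLog -ResourceGroupName "ContosoRG" -StartTime (Get-Date).AddDays(-1) |
    Select-Object EventTimestamp, OperationName, Status
```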
-[Enable diagnostics](essentials/platform-logs-overview.md) to extend the data you're collecting into the internal operation of the resources. [Add an agent](agents/agents-overview.md) to compute resources to collect telemetry from their guest operating systems.
+[Enable diagnostics](essentials/platform-logs-overview.md) to extend the data you're collecting into the internal operation of the resources. [Add an agent](agents/agents-overview.md) to compute resources to collect telemetry from their guest operating systems.
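A minimal sketch of enabling diagnostics follows. It assumes the classic Set-AzDiagnosticSetting cmdlet from older Az.Monitor releases (newer releases replace it with New-AzDiagnosticSetting and a different parameter set), and both resource IDs are placeholders.

```powershell
# Sketch: route a resource's platform logs and metrics to a Log Analytics workspace.
# Both resource IDs are placeholders.
$resourceId  = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

Set-AzDiagnosticSetting -ResourceId $resourceId -WorkspaceId $workspaceId -Enabled $true
```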
Enable monitoring for your application with [Application Insights](app/app-insights-overview.md) to collect detailed information including page views, application requests, and exceptions. Further verify the availability of your application by configuring an [availability test](app/monitor-web-app-availability.md) to simulate user traffic.

### Custom sources
-Azure Monitor can collect log data from any REST client using the [Data Collector API](logs/data-collector-api.md). This allows you to create custom monitoring scenarios and extend monitoring to resources that don't expose telemetry through other sources.
+
+Azure Monitor can collect log data from any REST client by using the [Data Collector API](logs/data-collector-api.md). You can create custom monitoring scenarios and extend monitoring to resources that don't expose telemetry through other sources.
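To make the call shape concrete, here's a hedged sketch of posting one JSON record with the HTTP Data Collector API. The workspace ID, shared key, and record contents are placeholders; the authorization header is an HMAC-SHA256 signature over the request metadata, following the API's documented scheme.

```powershell
# Sketch: send a custom JSON record to Azure Monitor Logs.
# $workspaceId and $sharedKey are placeholders for your workspace ID and primary key.
$workspaceId = "<workspace-id>"
$sharedKey   = "<workspace-primary-key>"
$logType     = "MyCustomLog"     # records land in the MyCustomLog_CL table
$body        = '[{"Computer":"host01","Status":"OK"}]'

$date          = [DateTime]::UtcNow.ToString("r")
$contentLength = [Text.Encoding]::UTF8.GetBytes($body).Length
$stringToSign  = "POST`n$contentLength`napplication/json`nx-ms-date:$date`n/api/logs"

$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($sharedKey)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

Invoke-RestMethod -Method Post `
    -Uri "https://$workspaceId.ods.opinsights.azure.com/api/logs?api-version=2016-04-01" `
    -ContentType "application/json" `
    -Headers @{
        "Authorization" = "SharedKey ${workspaceId}:${signature}"
        "Log-Type"      = $logType
        "x-ms-date"     = $date
    } `
    -Body $body
```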
## Insights and curated visualizations
-Monitoring data is only useful if it can increase your visibility into the operation of your computing environment. Some Azure resource providers have a "curated visualization" which gives you a customized monitoring experience for that particular service or set of services. They generally require minimal configuration. Larger scalable curated visualizations are known as "insights" and marked with that name in the documentation and Azure portal.
-For more information, see [List of insights and curated visualizations using Azure Monitor](monitor-reference.md#insights-and-curated-visualizations). Some of the larger insights are also described below.
+Monitoring data is only useful if it can increase your visibility into the operation of your computing environment. Some Azure resource providers have a "curated visualization," which gives you a customized monitoring experience for that particular service or set of services. They generally require minimal configuration. Larger, scalable, curated visualizations are known as "insights" and marked with that name in the documentation and the Azure portal.
+
+For more information, see [List of insights and curated visualizations using Azure Monitor](monitor-reference.md#insights-and-curated-visualizations). Some of the larger insights are also described here.
### Application Insights
-[Application Insights](app/app-insights-overview.md) monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes.
-![App Insights](media/overview/app-insights.png)
+[Application Insights](app/app-insights-overview.md) monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It takes advantage of the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. You can use it to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes.
+
+![Screenshot that shows Application Insights.](media/overview/app-insights.png)
### Container insights
-[Container insights](containers/container-insights-overview.md) monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux.
-![Container Health](media/overview/container-insights.png)
+[Container insights](containers/container-insights-overview.md) monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux.
-### VM insights
-[VM insights](vm/vminsights-overview.md) monitors your Azure virtual machines (VM) at scale. It analyzes the performance and health of your Windows and Linux VMs and identifies their different processes and interconnected dependencies on external processes. The solution includes support for monitoring performance and application dependencies for VMs hosted on-premises or another cloud provider.
+![Screenshot that shows container health.](media/overview/container-insights.png)
+### VM insights
-![VM Insights](media/overview/vm-insights.png)
+[VM insights](vm/vminsights-overview.md) monitors your Azure VMs at scale. It analyzes the performance and health of your Windows and Linux VMs and identifies their different processes and interconnected dependencies on external processes. The solution includes support for monitoring performance and application dependencies for VMs hosted on-premises or another cloud provider.
+![Screenshot that shows VM insights.](media/overview/vm-insights.png)
+## Respond to critical situations
-## Responding to critical situations
-In addition to allowing you to interactively analyze monitoring data, an effective monitoring solution must be able to proactively respond to critical conditions identified in the data that it collects. This could be sending a text or mail to an administrator responsible for investigating an issue. Or you could launch an automated process that attempts to correct an error condition.
+In addition to allowing you to interactively analyze monitoring data, an effective monitoring solution must be able to proactively respond to critical conditions identified in the data that it collects. The response could be sending a text or email to an administrator responsible for investigating an issue. Or you could launch an automated process that attempts to correct an error condition.
### Alerts
-[Alerts in Azure Monitor](alerts/alerts-overview.md) proactively notify you of critical conditions and potentially attempt to take corrective action. Alert rules based on metrics provide near real time alerts based on numeric values. Rules based on logs allow for complex logic across data from multiple sources.
-Alert rules in Azure Monitor use [action groups](alerts/action-groups.md), which contain unique sets of recipients and actions that can be shared across multiple rules. Based on your requirements, action groups can perform such actions as using webhooks to have alerts start external actions or to integrate with your ITSM tools.
+[Alerts in Azure Monitor](alerts/alerts-overview.md) proactively notify you of critical conditions and potentially attempt to take corrective action. Alert rules based on metrics provide near-real-time alerts based on numeric values. Rules based on logs allow for complex logic across data from multiple sources.
+
+Alert rules in Azure Monitor use [action groups](alerts/action-groups.md), which contain unique sets of recipients and actions that can be shared across multiple rules. Based on your requirements, action groups can perform such actions as using webhooks to have alerts start external actions or to integrate with your IT service management tools.
-![Screenshot shows alerts in Azure Monitor with severity, total alerts, and other information.](media/overview/alerts.png)
+![Screenshot that shows alerts in Azure Monitor with severity, total alerts, and other information.](media/overview/alerts.png)
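As one illustrative sketch of a metric alert rule (the resource IDs, names, and threshold here are placeholders), the Az.Monitor module can create a near-real-time CPU alert:

```powershell
# Sketch: alert when average CPU exceeds 80% over a 5-minute window.
# The VM and action group resource IDs are placeholders.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 80

Add-AzMetricAlertRuleV2 -Name "HighCpuAlert" `
    -ResourceGroupName "ContosoRG" `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/ContosoRG/providers/Microsoft.Compute/virtualMachines/<vm-name>" `
    -WindowSize (New-TimeSpan -Minutes 5) `
    -Frequency (New-TimeSpan -Minutes 5) `
    -Condition $criteria `
    -ActionGroupId "/subscriptions/<subscription-id>/resourceGroups/ContosoRG/providers/microsoft.insights/actionGroups/<action-group>" `
    -Severity 3
```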
### Autoscale

Autoscale allows you to have the right amount of resources running to handle the load on your application. Create rules that use metrics collected by Azure Monitor to determine when to automatically add resources when load increases. Save money by removing resources that are sitting idle. You specify a minimum and maximum number of instances and the logic for when to increase or decrease resources.

![Diagram shows autoscale, with several servers on a line labeled Processor Time > 80% and two servers marked as minimum, three servers as current capacity, and five as maximum.](media/overview/autoscale.png)
-## Visualizing monitoring data
-[Visualizations](best-practices-analysis.md) such as charts and tables are effective tools for summarizing monitoring data and presenting it to different audiences. Azure Monitor has its own features for visualizing monitoring data and leverages other Azure services for publishing it to different audiences.
+## Visualize monitoring data
+
+[Visualizations](best-practices-analysis.md) such as charts and tables are effective tools for summarizing monitoring data and presenting it to different audiences. Azure Monitor has its own features for visualizing monitoring data and uses other Azure services for publishing it to different audiences.
### Dashboards
-[Azure dashboards](../azure-portal/azure-portal-dashboards.md) allow you to combine different kinds of data into a single pane in the [Azure portal](https://portal.azure.com). You can optionally share the dashboard with other Azure users. Add the output of any log query or metrics chart to an Azure dashboard. For example, you could create a dashboard that combines tiles that show a graph of metrics, a table of activity logs, a usage chart from Application Insights, and the output of a log query.
-![Screenshot shows an Azure Dashboard, which includes Application and Security tiles, along with other customizable information.](media/overview/dashboard.png)
+[Azure dashboards](../azure-portal/azure-portal-dashboards.md) allow you to combine different kinds of data into a single pane in the [Azure portal](https://portal.azure.com). You can optionally share the dashboard with other Azure users. Add the output of any log query or metrics chart to an Azure dashboard. For example, you could create a dashboard that combines tiles that show a graph of metrics, a table of Activity logs, a usage chart from Application Insights, and the output of a log query.
+
+![Screenshot that shows an Azure dashboard, which includes Application and Security tiles and other customizable information.](media/overview/dashboard.png)
### Workbooks
-[Workbooks](visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports in the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences. Use workbooks provided with Insights or create your own from predefined templates.
+[Workbooks](visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports in the Azure portal. You can use them to tap into multiple data sources from across Azure and combine them into unified interactive experiences. Use workbooks provided with Insights or create your own from predefined templates.
-![Workbooks example](media/overview/workbooks.png)
+![Screenshot that shows workbook examples.](media/overview/workbooks.png)
### Power BI
-[Power BI](https://powerbi.microsoft.com) is a business analytics service that provides interactive visualizations across a variety of data sources. It's an effective means of making data available to others within and outside your organization. You can configure Power BI to [automatically import log data from Azure Monitor](./logs/log-powerbi.md) to take advantage of these additional visualizations.
-
-![Power BI](media/overview/power-bi.png)
+[Power BI](https://powerbi.microsoft.com) is a business analytics service that provides interactive visualizations across various data sources. It's an effective means of making data available to others within and outside your organization. You can configure Power BI to [automatically import log data from Azure Monitor](./logs/log-powerbi.md) to take advantage of these visualizations.
+![Screenshot that shows Power BI.](media/overview/power-bi.png)
## Integrate and export data

You'll often need to integrate Azure Monitor with other systems and to build custom solutions that use your monitoring data. Other Azure services work with Azure Monitor to provide this integration.

### Event Hubs
-[Azure Event Hubs](../event-hubs/index.yml) is a streaming platform and event ingestion service. It can transform and store data using any real-time analytics provider or batching/storage adapters. Use Event Hubs to [stream Azure Monitor data](essentials/stream-monitoring-data-event-hubs.md) to partner SIEM and monitoring tools.
+[Azure Event Hubs](../event-hubs/index.yml) is a streaming platform and event ingestion service. It can transform and store data by using any real-time analytics provider or batching/storage adapters. Use Event Hubs to [stream Azure Monitor data](essentials/stream-monitoring-data-event-hubs.md) to partner SIEM and monitoring tools.
### Logic Apps
-[Logic Apps](https://azure.microsoft.com/services/logic-apps) is a service that allows you to automate tasks and business processes using workflows that integrate with different systems and services. Activities are available that read and write metrics and logs in Azure Monitor. This allows you to build workflows integrating with a variety of other systems.
+[Azure Logic Apps](https://azure.microsoft.com/services/logic-apps) is a service you can use to automate tasks and business processes by using workflows that integrate with different systems and services. Activities are available that read and write metrics and logs in Azure Monitor.
### API
-Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. This provides you with essentially unlimited possibilities to build custom solutions that integrate with Azure Monitor.
+
+Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have essentially unlimited possibilities to build custom solutions that integrate with Azure Monitor.
## Next steps

Learn more about:
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-Many teams need to strictly regulate access to monitoring data and settings. For example, if you have team members who work exclusively on monitoring (support engineers, DevOps engineers) or if you use a managed service provider, you might want to grant them access to only monitoring data. You might want to restrict their ability to create, modify, or delete resources.
+Many teams need to strictly regulate access to monitoring data and settings. For example, if you have team members who work exclusively on monitoring (support engineers, DevOps engineers) or if you use a managed service provider, you might want to grant them access to only monitoring data. You might want to restrict their ability to create, modify, or delete resources.
This article shows how to quickly apply a built-in monitoring role to a user in Azure or build your own custom role for a user who needs limited monitoring permissions. The article then discusses security considerations for your Azure Monitor-related resources and how you can limit access to the data in those resources.

## Built-in monitoring roles
-Built-in roles in Azure Monitor help limit access to resources in a subscription while still enabling infrastructure-monitoring staff to obtain and configure the data that they need. Azure Monitor provides two out-of-the-box roles: Monitoring Reader and Monitoring Contributor.
+
+Built-in roles in Azure Monitor help limit access to resources in a subscription while still enabling staff who monitor infrastructure to obtain and configure the data they need. Azure Monitor provides two out-of-the-box roles: Monitoring Reader and Monitoring Contributor.
### Monitoring Reader
-People assigned the Monitoring Reader role can view all monitoring data in a subscription but can't modify any resource or edit any settings related to monitoring resources. This role is appropriate for users in an organization, such as support or operations engineers, who need to be able to:
+
+People assigned the Monitoring Reader role can view all monitoring data in a subscription but can't modify any resource or edit any settings related to monitoring resources. This role is appropriate for users in an organization, such as support or operations engineers, who need to:
* View monitoring dashboards in the Azure portal. * View alert rules defined in [Azure alerts](alerts/alerts-overview.md). * Query for metrics by using the [Azure Monitor REST API](/rest/api/monitor/metrics), [PowerShell cmdlets](powershell-samples.md), or [cross-platform CLI](cli-samples.md).
-* Query the Activity Log by using the portal, Azure Monitor REST API, PowerShell cmdlets, or cross-platform CLI.
+* Query the Activity log by using the portal, Azure Monitor REST API, PowerShell cmdlets, or cross-platform CLI.
* View the [diagnostic settings](essentials/diagnostic-settings.md) for a resource. * View the [log profile](essentials/activity-log.md#legacy-collection-methods) for a subscription. * View autoscale settings.
People assigned the Monitoring Reader role can view all monitoring data in a sub
* Retrieve the workspace storage configuration for Log Analytics. > [!NOTE]
-> This role does not give read access to log data that has been streamed to an event hub or stored in a storage account. For information on configuring access to these resources, see the [Security considerations for monitoring data](#security-considerations-for-monitoring-data) section later in this article.
+> This role doesn't give read access to log data that has been streamed to an event hub or stored in a storage account. For information on how to configure access to these resources, see the [Security considerations for monitoring data](#security-considerations-for-monitoring-data) section later in this article.
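Assigning the built-in role is a single operation. A minimal sketch with the Az.Resources module, where the sign-in name and subscription ID are placeholders:

```powershell
# Sketch: grant a user the Monitoring Reader role at subscription scope.
# The sign-in name and subscription ID are placeholders.
New-AzRoleAssignment -SignInName "support.engineer@contoso.com" `
    -RoleDefinitionName "Monitoring Reader" `
    -Scope "/subscriptions/<subscription-id>"
```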
### Monitoring Contributor
-People assigned the Monitoring Contributor role can view all monitoring data in a subscription. They can also create or modify monitoring settings, but they can't modify any other resources.
-This role is a superset of the Monitoring Reader role. It's appropriate for members of an organization's monitoring team or managed service providers who, in addition to the permissions mentioned earlier, need to be able to:
+People assigned the Monitoring Contributor role can view all monitoring data in a subscription. They can also create or modify monitoring settings, but they can't modify any other resources.
+
+This role is a superset of the Monitoring Reader role. It's appropriate for members of an organization's monitoring team or managed service providers who, in addition to the permissions mentioned earlier, need to:
* View monitoring dashboards in the portal and create their own private monitoring dashboards. * Set [diagnostic settings](essentials/diagnostic-settings.md) for a resource.\*
This role is a superset of the Monitoring Reader role. It's appropriate for memb
\*To set a log profile or a diagnostic setting, users must also separately be granted ListKeys permission on the target resource (storage account or event hub namespace). > [!NOTE]
-> This role does not give read access to log data that has been streamed to an event hub or stored in a storage account. For information on configuring access to these resources, see the [Security considerations for monitoring data](#security-considerations-for-monitoring-data) section later in this article.
+> This role doesn't give read access to log data that has been streamed to an event hub or stored in a storage account. For information on how to configure access to these resources, see the [Security considerations for monitoring data](#security-considerations-for-monitoring-data) section later in this article.
-## Monitoring permissions and Azure custom roles
-If the preceding built-in roles don't meet the exact needs of your team, you can [create an Azure custom role](../role-based-access-control/custom-roles.md) with more granular permissions. Here are the common Azure role-based access control (RBAC) operations for Azure Monitor:
+## Monitor permissions and Azure custom roles
+
+If the preceding built-in roles don't meet the exact needs of your team, you can [create an Azure custom role](../role-based-access-control/custom-roles.md) with more granular permissions. The common Azure role-based access control (RBAC) operations for Azure Monitor are listed here.
| Operation | Description |
| --- | --- |
| Microsoft.Insights/ActionGroups/[Read, Write, Delete] |Read, write, or delete action groups. |
-| Microsoft.Insights/ActivityLogAlerts/[Read, Write, Delete] |Read, write, or delete Activity Log alerts. |
+| Microsoft.Insights/ActivityLogAlerts/[Read, Write, Delete] |Read, write, or delete Activity log alerts. |
| Microsoft.Insights/AlertRules/[Read, Write, Delete] |Read, write, or delete alert rules (from classic alerts). |
| Microsoft.Insights/AlertRules/Incidents/Read |List incidents (history of the alert rule being triggered) for alert rules. This applies only to the portal. |
| Microsoft.Insights/AutoscaleSettings/[Read, Write, Delete] |Read, write, or delete autoscale settings. |
| Microsoft.Insights/DiagnosticSettings/[Read, Write, Delete] |Read, write, or delete diagnostic settings. |
-| Microsoft.Insights/EventCategories/Read |Enumerate all categories possible in the Activity Log. Used by the Azure portal. |
-| Microsoft.Insights/eventtypes/digestevents/Read |This permission is necessary for users who need access to the Activity Log via the portal. |
-| Microsoft.Insights/eventtypes/values/Read |List Activity Log events (management events) in a subscription. This permission applies to both programmatic and portal access to the Activity Log. |
+| Microsoft.Insights/EventCategories/Read |Enumerate all categories possible in the Activity log. Used by the Azure portal. |
+| Microsoft.Insights/eventtypes/digestevents/Read |This permission is necessary for users who need access to the Activity log via the portal. |
+| Microsoft.Insights/eventtypes/values/Read |List Activity log events (management events) in a subscription. This permission applies to both programmatic and portal access to the Activity log. |
| Microsoft.Insights/ExtendedDiagnosticSettings/[Read, Write, Delete] | Read, write, or delete diagnostic settings for network flow logs. |
-| Microsoft.Insights/LogDefinitions/Read |This permission is necessary for users who need access to the Activity Log via the portal. |
-| Microsoft.Insights/LogProfiles/[Read, Write, Delete] |Read, write, or delete log profiles (streaming the Activity Log to an event hub or storage account). |
+| Microsoft.Insights/LogDefinitions/Read |This permission is necessary for users who need access to the Activity log via the portal. |
+| Microsoft.Insights/LogProfiles/[Read, Write, Delete] |Read, write, or delete log profiles (streaming the Activity log to an event hub or storage account). |
| Microsoft.Insights/MetricAlerts/[Read, Write, Delete] |Read, write, or delete near-real-time metric alerts. |
| Microsoft.Insights/MetricDefinitions/Read |Read metric definitions (list of available metric types for a resource). |
| Microsoft.Insights/Metrics/Read |Read metrics for a resource. |
If the preceding built-in roles don't meet the exact needs of your team, you can
| Microsoft.Insights/ScheduledQueryRules/[Read, Write, Delete] |Read, write, or delete log alerts in Azure Monitor. |

> [!NOTE]
-> Access to alerts, diagnostic settings, and metrics for a resource requires that the user has read access to the resource type and scope of that resource. Creating (writing) a diagnostic setting or a log profile that archives to a storage account or streams to event hubs requires the user to also have ListKeys permission on the target resource.
+> Access to alerts, diagnostic settings, and metrics for a resource requires that the user has read access to the resource type and scope of that resource. Creating (writing) a diagnostic setting or a log profile that archives to a storage account or streams to event hubs requires the user to also have ListKeys permission on the target resource.
For example, you can use the preceding table to create an Azure custom role for an Activity Log Reader like this:
New-AzRoleDefinition -Role $role
```

## Security considerations for monitoring data

Monitoring data, particularly log files, can contain sensitive information, such as IP addresses or user names. Monitoring data from Azure comes in three basic forms: -- The Activity Log, which describes all control-plane actions on your Azure subscription-- Resource logs, which are logs emitted by a resource-- Metrics, which are emitted by resources
+- The Activity log describes all control-plane actions on your Azure subscription.
+- Resource logs are logs emitted by a resource.
+- Metrics are emitted by resources.
-All these data types can be stored in a storage account or streamed to an event hub, both of which are general-purpose Azure resources. Because these are general-purpose resources, creating, deleting, and accessing them is a privileged operation reserved for an administrator. We suggest that you use the following practices for monitoring-related resources to prevent misuse:
+All these data types can be stored in a storage account or streamed to an event hub, both of which are general-purpose Azure resources. Because these are general-purpose resources, creating, deleting, and accessing them is a privileged operation reserved for an administrator. Use the following practices for monitoring-related resources to prevent misuse:
-* Use a single, dedicated storage account for monitoring data. If you need to separate monitoring data into multiple storage accounts, never share usage of a storage account between monitoring and non-monitoring data. Sharing usage in that way might inadvertently give access to non-monitoring data to organizations that need access to only monitoring data. For example, a third-party organization for security information and event management (SIEM) should need only access to monitoring data.
+* Use a single, dedicated storage account for monitoring data. If you need to separate monitoring data into multiple storage accounts, never share usage of a storage account between monitoring and non-monitoring data. Sharing usage in that way might inadvertently give access to non-monitoring data to organizations that need access to only monitoring data. For example, a third-party organization for security information and event management should need only access to monitoring data.
* Use a single, dedicated service bus or event hub namespace across all diagnostic settings for the same reason described in the previous point. * Limit access to monitoring-related storage accounts or event hubs by keeping them in a separate resource group. [Use scope](../role-based-access-control/overview.md#scope) on your monitoring roles to limit access to only that resource group. * Never grant the ListKeys permission for either storage accounts or event hubs at subscription scope when a user needs only access to monitoring data. Instead, give these permissions to the user at a resource or resource group scope (if you have a dedicated monitoring resource group).
-### Limiting access to monitoring-related storage accounts
-When a user or application needs access to monitoring data in a storage account, you should [generate a shared access signature (SAS)](/rest/api/storageservices/create-account-sas) on the storage account that contains monitoring data with service-level read-only access to blob storage. In PowerShell, the account SAS might look like the following code:
+### Limit access to monitoring-related storage accounts
+
+When a user or application needs access to monitoring data in a storage account, [generate a shared access signature (SAS)](/rest/api/storageservices/create-account-sas) on the storage account that contains monitoring data with service-level read-only access to blob storage. In PowerShell, the account SAS might look like the following code:
```powershell $context = New-AzStorageContext -ConnectionString "[connection string for your monitoring Storage Account]"
$token = New-AzStorageAccountSASToken -ResourceType Service -Service Blob -Permi
You can then give the token to the entity that needs to read from that storage account. The entity can list and read from all blobs in that storage account.
-Alternatively, if you need to control this permission with Azure RBAC, you can grant that entity the `Microsoft.Storage/storageAccounts/listkeys/action` permission on that particular storage account. This permission is necessary for users who need to be able to set a diagnostic setting or a log profile to archive to a storage account. For example, you can create the following Azure custom role for a user or application that needs to read from only one storage account:
+Alternatively, if you need to control this permission with Azure RBAC, you can grant that entity the `Microsoft.Storage/storageAccounts/listkeys/action` permission on that particular storage account. This permission is necessary for users who need to set a diagnostic setting or a log profile to archive to a storage account. For example, you can create the following Azure custom role for a user or application that needs to read from only one storage account:
```powershell $role = Get-AzRoleDefinition "Reader"
New-AzRoleDefinition -Role $role
``` > [!WARNING]
-> The ListKeys permission enables the user to list the primary and secondary storage account keys. These keys grant the user all signed permissions (such as read, write, create blobs, and delete blobs) across all signed services (blob, queue, table, file) in that storage account. We recommend using an account SAS when possible.
+> The ListKeys permission enables the user to list the primary and secondary storage account keys. These keys grant the user all signed permissions (such as read, write, create blobs, and delete blobs) across all signed services (blob, queue, table, file) in that storage account. We recommend using an account SAS when possible.
+
+### Limit access to monitoring-related event hubs
-### Limiting access to monitoring-related event hubs
-You can follow a similar pattern with event hubs, but first you need to create a dedicated authorization rule for listening. If you want to grant access to an application that only needs to listen to monitoring-related event hubs, do the following:
+You can follow a similar pattern with event hubs, but first you need to create a dedicated authorization rule for listening. If you want to grant access to an application that only needs to listen to monitoring-related event hubs, follow these steps:
1. In the portal, create a shared access policy on the event hubs that were created for streaming monitoring data with only listening claims. For example, you might call it "monitoringReadOnly." If possible, give that key directly to the consumer and skip the next step.
-2. If the consumer needs to be able to get the key ad hoc, grant the user the ListKeys action for that event hub. This step is also necessary for users who need to be able to set a diagnostic setting or a log profile to stream to event hubs. For example, you might create an Azure RBAC rule:
+1. If the consumer needs to get the key ad hoc, grant the user the ListKeys action for that event hub. This step is also necessary for users who need to set a diagnostic setting or a log profile to stream to event hubs. For example, you might create an Azure RBAC rule:
```powershell $role = Get-AzRoleDefinition "Reader"
You can follow a similar pattern with event hubs, but first you need to create a
Azure Monitor needs access to your Azure resources to provide the services that you enable. If you want to monitor your Azure resources while still securing them from access to the public internet, you can use secured storage accounts.
-Monitoring data is often written to a storage account. You might want to make sure that unauthorized users can't access the data that's copied to a storage account. For additional security, you can lock down network access to give only your authorized resources and trusted Microsoft services access to a storage account by restricting a storage account to use selected networks.
+Monitoring data is often written to a storage account. You might want to make sure that unauthorized users can't access the data that's copied to a storage account. For extra security, you can lock down network access to give only your authorized resources and trusted Microsoft services access to a storage account by restricting a storage account to use selected networks.
![Screenshot that shows the settings for firewalls and virtual networks.](./media/roles-permissions-security/secured-storage-example.png)
-Azure Monitor is considered a trusted Microsoft service. If you select the **Allow trusted Microsoft services to access this storage account** checkbox, Azure monitor will have access to your secured storage account. You then enable writing Azure Monitor resource logs, Activity Log, and metrics to your storage account under these protected conditions. This setting will also enable Log Analytics to read logs from secured storage.
+Azure Monitor is considered a trusted Microsoft service. If you select the **Allow trusted Microsoft services to access this storage account** checkbox, Azure Monitor will have access to your secured storage account. You can then write Azure Monitor resource logs, Activity log data, and metrics to your storage account under these protected conditions. This setting also enables Log Analytics to read logs from secured storage.
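If you manage the account with PowerShell rather than the portal, the equivalent network configuration looks roughly like this sketch (the account and resource group names are placeholders):

```powershell
# Deny public network access by default, but let trusted Microsoft services
# (including Azure Monitor) bypass the storage firewall.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -DefaultAction Deny `
    -Bypass AzureServices
```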
For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).

## Next steps

* [Read about Azure RBAC and permissions in Azure Resource Manager](../role-based-access-control/overview.md)
* [Read the overview of monitoring in Azure](overview.md)
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Last updated 05/05/2022

# Azure Monitor cost and usage
-This **article** describes the different ways that Azure Monitor charges for usage, how to evaluate charges on your Azure bill, and how to estimate charges to monitor your entire environment.
+
+This article describes the different ways that Azure Monitor charges for usage. It also explains how to evaluate charges on your Azure bill and how to estimate charges to monitor your entire environment.
## Pricing model
-Azure Monitor uses a consumption-based pricing (pay-as-you-go) billing model where you only pay for what you use.
-Features of Azure Monitor that are enabled by default do not incur any charge. This includes collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
-Several other features don't have a direct cost, but you instead pay for the ingestion and retention of data that they collect. The following table describes the different types of usage that are charged in Azure Monitor. Detailed pricing for each is provided in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+Azure Monitor uses consumption-based pricing, which is also known as pay-as-you-go pricing. With this billing model, you only pay for what you use. Features of Azure Monitor that are enabled by default don't incur any charge. These features include collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
+Several other features don't have a direct cost, but you instead pay for the ingestion and retention of data that they collect. The following table describes the different types of usage that are charged in Azure Monitor. Detailed pricing for each type is provided in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
| Type | Description |
|:|:|
-| Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application insights resources. This will typically be the bulk of Azure Monitor charges for most customers. There is no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
-| Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. |
-| Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
-| Alerts | Charged based on the type and number of signals used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [Log alerts](alerts/alerts-types.md#log-alerts) configured for [at scale monitoring](alerts/alerts-types.md#splitting-by-dimensions-in-log-alert-rules), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
-| Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated.
+| Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application Insights resources. For most customers, this category typically incurs the bulk of Azure Monitor charges. There's no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for logs can vary significantly on the configuration that you choose. For information on how charges for logs data are calculated and the different pricing tiers available, see [Azure Monitor logs pricing details](logs/cost-logs.md). |
+| Platform logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there's a charge for the workspace data ingestion and collection. |
+| Metrics | There's no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There's a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
+| Alerts | Charges are based on the type and number of signals used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [Log alerts](alerts/alerts-types.md#log-alerts) configured for [at-scale monitoring](alerts/alerts-types.md#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
+| Web tests | There's a cost for [standard web tests](app/availability-standard-tests.md) and [multistep web tests](app/availability-multistep.md) in Application Insights. Multistep web tests have been deprecated. |
+
+## Data transfer charges
-## Data transfer charges
-Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions charged as outbound data transfer at the normal rate, although data sent to a different region via [Diagnostic Settings](essentials/diagnostic-settings.md) does not incur data transfer charges. Inbound data transfer is free. Data transfer charges are typically very small compared to the costs for data ingestion and retention. Controlling costs for Log Analytics should focus on your ingested data volume.
+Sending data to Azure Monitor can incur data bandwidth charges. As described in [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Data sent to a different region via [Diagnostic settings](essentials/diagnostic-settings.md) doesn't incur data transfer charges. Inbound data transfer is free.
+Data transfer charges are typically small compared to the costs for data ingestion and retention. Focus on your ingested data volume to control costs for Log Analytics.
## Estimate Azure Monitor usage and costs
-If you're new to Azure Monitor, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate your costs. In the **Search** box, enter *Azure Monitor*, and then select the **Azure Monitor** tile. The pricing calculator will help you estimate your likely costs based on your expected utilization.
-The bulk of your costs will typically be from data ingestion and retention for your Log Analytics workspaces and Application Insights resources. It's difficult to give accurate estimates for data volumes that you can expect since they'll vary significantly based on your configuration. A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
+If you're new to Azure Monitor, use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate your costs. In the **Search** box, enter **Azure Monitor**, and then select the **Azure Monitor** tile. The pricing calculator helps you estimate your likely costs based on your expected utilization.
+
+The bulk of your costs typically comes from data ingestion and retention for your Log Analytics workspaces and Application Insights resources. It's difficult to give accurate estimates for data volumes that you can expect because they'll vary significantly based on your configuration.
-Following is basic guidance that you can use for common resources.
+A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment.
-- **Virtual machines.** With typical monitoring enabled, a virtual machine will generate between 1 GB to 3 GB of data per month. This is highly dependent on the configuration of your agents.
-- **Application Insights.** See the following section for different methods to estimate data from your applications.
-- **Container insights.** See [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster) for guidance on estimating data for your AKS cluster.
+See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
+
+Use the following basic guidance for common resources:
+
+- **Virtual machines**: With typical monitoring enabled, a virtual machine generates from 1 GB to 3 GB of data per month. This range is highly dependent on the configuration of your agents. (See the cost sketch after this list.)
+- **Application Insights**: For different methods to estimate data from your applications, see the following section.
+- **Container insights**: For guidance on estimating data for your Azure Kubernetes Service (AKS) cluster, see [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster).
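To turn observed volumes into a rough monthly figure, you can use a back-of-the-envelope calculation like the following sketch. The VM count, per-VM volume, and per-GB rate are placeholders; substitute your own observations and the current rate from the pricing page:

```powershell
# Hypothetical inputs; replace with observed values and current pricing.
$vmCount     = 50
$gbPerVm     = 2      # within the 1-3 GB/month range cited above
$pricePerGb  = 2.30   # placeholder pay-as-you-go rate in USD
$monthlyCost = $vmCount * $gbPerVm * $pricePerGb
"Estimated monthly ingestion cost: {0:N2} USD" -f $monthlyCost
```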
The [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) includes data volume estimation calculators for these three cases.

>[!NOTE]
->The billable data volume is calculated using a customer friendly, cost-effective method. The billed data volume is defined as the size of the data that will be stored, excluding a set of standard columns and any JSON wrapper that was part of the data received for ingestion. This billable data volume is substantially smaller than the size of the entire JSON-packaged event, often less than 50%. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models. [Learn more](logs/cost-logs.md#data-size-calculation).
+>The billable data volume is calculated by using a customer-friendly, cost-effective method. The billed data volume is defined as the size of the data that will be stored, excluding a set of standard columns and any JSON wrapper that was part of the data received for ingestion. This billable data volume is substantially smaller than the size of the entire JSON-packaged event, often less than 50%.
+>
+>It's essential to understand this calculation of billed data size when you estimate costs and compare them with other pricing models. For more information on pricing, see [Azure Monitor Logs pricing details](logs/cost-logs.md#data-size-calculation).
## Estimate application usage
-There are two methods that you can use to estimate the amount of data from an application monitored with Application Insights.
+
+There are two methods you can use to estimate the amount of data from an application monitored with Application Insights.
### Learn from what similar applications collect
-In the Azure Monitoring Pricing calculator for Application Insights, enable **Estimate data volume based on application activity** which allows you to provide inputs about your application. The calculator will then tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configuration, so you can still use options such as [sampling]() to reduce the volume of data you ingest for your application below the median level.
-### Data collection when using sampling
-With the ASP.NET SDK's [adaptive sampling](app/sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as volume is below the configured events per second level. For a high volume application, with the default threshold of five events per second, adaptive sampling will limit the number of daily events to 432,000. Considering a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application since the sampling is done local to each node.
+In the Azure Monitor pricing calculator for Application Insights, enable **Estimate data volume based on application activity**. You use this option to provide inputs about your application. The calculator then tells you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configuration, so you can still use options such as [sampling](app/sampling.md) to reduce the volume of data you ingest for your application below the median level.
+
+### Data collection when you use sampling
+
+With the ASP.NET SDK's [adaptive sampling](app/sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring.
-For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](app/sampling.md#ingestion-sampling), which samples when the data is received by Application Insights based on a percentage of data to retain, or [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](app/sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers
+If the application produces a low amount of telemetry, such as when debugging or because of low usage, items won't be dropped by the sampling processor if the volume is below the configured events-per-second level.
+For a high-volume application, with the default threshold of five events per second, adaptive sampling limits the number of daily events to 432,000. If you consider a typical average event size of 1 KB, this size corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application because the sampling is done locally on each node.
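The arithmetic behind those figures, as a quick sketch (1 KB is treated as 1,000 bytes, which is how the 13.4 GB figure works out):

```powershell
# Reproduce the adaptive sampling volume estimate quoted above.
$eventsPerSecond = 5
$eventsPerDay    = $eventsPerSecond * 60 * 60 * 24    # 432,000 events/day
$gbPerMonth      = $eventsPerDay * 1000 * 31 / 1e9    # ~13.4 GB/month/node
"{0:N0} events/day; {1:N1} GB per node per 31-day month" -f $eventsPerDay, $gbPerMonth
```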
-## Viewing Azure Monitor usage and charges
-There are two primary tools to view and analyze your Azure Monitor billing and estimated charges.
+For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](app/sampling.md#ingestion-sampling). This technique samples when the data is received by Application Insights based on a percentage of data to retain. Or you can use [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](app/sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers.
-- [Azure Cost Management + Billing](#azure-cost-management--billing) is the primary tool that you'll use to analyze your usage and costs. It gives you multiple options to analyze your monthly charges for different Azure Monitor features and their projected cost over time.
-- [Usage and Estimated Costs](#usage-and-estimated-costs) provides a listing of monthly charges for different Azure Monitor features. This is particularly useful for Log Analytics workspaces where it helps you to select your pricing tier by showing how your cost would be different at different tiers.
+## View Azure Monitor usage and charges
+There are two primary tools to view and analyze your Azure Monitor billing and estimated charges:
+
+- [Azure Cost Management + Billing](#azure-cost-management--billing) is the primary tool you'll use to analyze your usage and costs. It gives you multiple options to analyze your monthly charges for different Azure Monitor features and their projected cost over time.
+- [Usage and estimated costs](#usage-and-estimated-costs) provides a listing of monthly charges for different Azure Monitor features. This information is useful for Log Analytics workspaces. It helps you to select your pricing tier by showing how your cost would be different at different tiers.
## Azure Cost Management + Billing
-Azure Cost Management + Billing includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. Select **Cost Management** and then **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
+
+Azure Cost Management + Billing includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. Select **Cost Management** > **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
>[!NOTE]
->You might need additional access to Cost Management data. See [Assign access to Cost Management data](../cost-management-billing/costs/assign-access-acm-data.md).
+>You might need additional access to cost management data. See [Assign access to cost management data](../cost-management-billing/costs/assign-access-acm-data.md).
-To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following **Service names**:
+To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following service names:
-- **Azure Monitor**
-- **Application Insights**
-- **Log Analytics**
-- **Insight and Analytics**
+- Azure Monitor
+- Application Insights
+- Log Analytics
+- Insight and Analytics
-Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you may want to add them to your filter. See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for details on using this view.
+Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you might want to add them to your filter. See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for information on how to use this view.
-![Screenshot that shows Azure Cost Management with cost information.](./media/usage-estimated-costs/010.png)
+![Screenshot that shows Cost Management with cost information.](./media/usage-estimated-costs/010.png)
>[!NOTE]
->Alternatively, you can go to the **Overview** page of a Log Analytics workspace or Application Insights resource and click **View Cost** in the upper right corner of the **Essentials** section. This will launch the **Cost Analysis** from Azure Cost Management + Billing already scoped to the workspace or application.
-> :::image type="content" source="logs/media/view-bill/view-cost-option.png" lightbox="logs/media/view-bill/view-cost-option.png" alt-text="Screenshot of option to view cost for Log Analytics workspace.":::
+>Alternatively, you can go to the overview page of a Log Analytics workspace or Application Insights resource and select **View Cost** in the upper-right corner of the **Essentials** section. This option opens **Cost Analysis** from Azure Cost Management + Billing already scoped to the workspace or application.
+>
+> :::image type="content" source="logs/media/view-bill/view-cost-option.png" lightbox="logs/media/view-bill/view-cost-option.png" alt-text="Screenshot of option to view cost for a Log Analytics workspace.":::
### Download usage
-To gain more understanding of your usage, you can download your usage from the Azure portal and see usage per Azure resource in the downloaded spreadsheet. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) for a tutorial, including how to automatically create a daily report that you can use for regular analysis.
-Usage from your Log Analytics workspaces can be found by first filtering on the **Meter Category** column to show *Log Analytics*, *Insight and Analytics* (used by some of the legacy pricing tiers), and *Azure Monitor* (used by commitment tier pricing tiers). Add a filter on the *Instance ID* column for *contains workspace* or *contains cluster*. The usage is shown in the **Consumed Quantity** column, and the unit for each entry is shown in the **Unit of Measure** column.
+To gain more understanding of your usage, download your usage from the Azure portal. You'll see your usage per Azure resource in the downloaded spreadsheet. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) to learn how to automatically create a daily report you can use for regular analysis.
+
+Usage from your Log Analytics workspaces can be found by first filtering on the **Meter Category** column to show **Log Analytics**, **Insight and Analytics** (used by some of the legacy pricing tiers), and **Azure Monitor** (used by commitment-tier pricing tiers). Add a filter on the **Instance ID** column for **contains workspace** or **contains cluster**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column.
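If you prefer to script that filtering, here's a minimal sketch that assumes the export was saved as usage.csv; the column names are assumptions and can vary by export format:

```powershell
# Filter exported usage to Log Analytics-related meters and workspace resources.
Import-Csv .\usage.csv |
    Where-Object { $_.'Meter Category' -in 'Log Analytics', 'Insight and Analytics', 'Azure Monitor' } |
    Where-Object { $_.'Instance ID' -match 'workspace|cluster' } |
    Select-Object 'Meter Category', 'Consumed Quantity', 'Unit of Measure'
```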
### Application Insights meters
-Most Application Insights usage for both classic and workspace-based resources is reported on meters with **Log Analytics** for **Meter Category**, because there's a single log back end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column, and the unit for each entry is shown in the **Unit of Measure** column. See [understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md) for more details.
-To separate costs from your Log Analytics or Application Insights usage, [create a filter](../cost-management-billing/costs/group-filter.md) on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**.
+Most Application Insights usage for both classic and workspace-based resources is reported on meters with **Log Analytics** for **Meter Category** because there's a single log back-end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column. For more information, see [Understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md).
+
+To separate costs from your Log Analytics or Application Insights usage, [create a filter](../cost-management-billing/costs/group-filter.md) on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**.
## Usage and estimated costs
-You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
+
+You can get more usage details about Log Analytics workspaces and Application Insights resources from the **Usage and estimated costs** option for each.
+
### Log Analytics workspace
-To learn about your usage trends and choose the most cost-effective [commitment tier](logs/cost-logs.md#commitment-tiers) for your Log Analytics workspace, select **Usage and Estimated Costs** from the **Log Analytics workspace** menu in the Azure portal.
+To learn about your usage trends and choose the most cost-effective [commitment tier](logs/cost-logs.md#commitment-tiers) for your Log Analytics workspace, select **Usage and estimated costs** from the **Log Analytics workspace** menu in the Azure portal.
-This view includes the following:
-A. Estimated monthly charges based on usage from the past 31 days using the current pricing tier.<br>
-B. Estimated monthly charges using different commitment tiers.<br>
-C. Billable data ingestion by solution from the past 31 days.
+This view includes:
-To explore the data in more detail, click on the icon in the upper-right corner of either chart to work with the query in Log Analytics.
+- Estimated monthly charges based on usage from the past 31 days by using the current pricing tier.
+- Estimated monthly charges by using different commitment tiers.
+- Billable data ingestion by solution from the past 31 days.
+To explore the data in more detail, select the icon in the upper-right corner of either chart to work with the query in Log Analytics.
-### Application insights
-To learn about your usage trends for your classic Application Insights resource, select **Usage and Estimated Costs** from the **Applications** menu in the Azure portal.
+### Application Insights
-This view includes the following:
+To learn about your usage trends for your classic Application Insights resource, select **Usage and estimated costs** from the **Applications** menu in the Azure portal.
-A. Estimated monthly charges based on usage from the past month.<br>
-B. Billable data ingestion by table from the past month.
-To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named *Data point volume*, and then select the *Apply splitting* option to split the data by "Telemetry item type".
+This view includes:
+- Estimated monthly charges based on usage from the past month.
+- Billable data ingestion by table from the past month.
-## Viewing data allocation benefits
+To investigate your Application Insights usage more deeply, open the **Metrics** page and add the metric named **Data point volume**. Then select the **Apply splitting** option to split the data by **Telemetry item type**.
-To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details. Open the exported usage spreadsheet and filter the "Instance ID" column to your workspace. (To select all of your workspaces in the spreadsheet, filter the Instance ID column to "contains /workspaces/".) Next, filter the ResourceRate column to show only rows where this is equal to zero. Now you will see the data allocations from these various sources.
+## View data allocation benefits
-> [!NOTE]
-> Data allocations from Defender for Servers 500 MB/server/day will appear in rows with the meter name "Data Included per Node" and the meter category to "Insight and Analytics" (the name of a legacy offer still used with this meter.) If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter will also include the data allocations from this Log Analytics pricing tier.
+To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details.
+Open the exported usage spreadsheet and filter the **Instance ID** column to your workspace. (To select all your workspaces in the spreadsheet, filter the **Instance ID** column to **contains /workspaces/**.) Next, filter the **ResourceRate** column to show only rows where this rate is equal to zero. Now you'll see the data allocations from these various sources.
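Scripted, that filter looks roughly like the following sketch; as before, the usage.csv file name and column names are assumptions about your export format:

```powershell
# Rows with a zero ResourceRate represent usage covered by an allocation.
Import-Csv .\usage.csv |
    Where-Object { $_.'Instance ID' -like '*/workspaces/*' -and [double]$_.ResourceRate -eq 0 } |
    Select-Object 'Meter Name', 'Meter Category', 'Consumed Quantity'
```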
-## Operations Management Suite subscription entitlements
+> [!NOTE]
+> Data allocations from Defender for Servers 500 MB/server/day will appear in rows with the meter name **Data Included per Node** and the meter category **Insight and Analytics**. (This name is for a legacy offer still used with this meter.) If the workspace is in the legacy Per-Node Log Analytics pricing tier, this meter also includes the data allocations from this Log Analytics pricing tier.
-Customers who purchased Microsoft Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
+## Operations Management Suite subscription entitlements
-To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must use the Per-Node (OMS) pricing tier. This entitlement isn't visible in the estimated costs shown in the Usage and estimated cost pane.
+Customers who purchased Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
-Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous, but this requires careful consideration.
+To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must use the Per Node (Operations Management Suite) pricing tier. This entitlement isn't visible in the estimated costs shown in the **Usage and estimated cost** pane.
+Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous. This move requires careful consideration.
Also, if you move a subscription to the new Azure monitoring pricing model in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.

> [!TIP]
-> If your organization has Microsoft Operations Management Suite E1 or E2, it's usually best to keep your Log Analytics workspaces in the Per-Node (OMS) pricing tier and your Application Insights resources in the Enterprise pricing tier.
->
+> If your organization has Operations Management Suite E1 or E2, it's usually best to keep your Log Analytics workspaces in the Per Node (Operations Management Suite) pricing tier and your Application Insights resources in the Enterprise pricing tier.
## Next steps

-- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
-- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine to source of any higher than expected usage and opportunities to reduce your amount of data collected.
-- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.
-- See [Azure Monitor best practices - Cost management](best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
+- For details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges, see [Azure Monitor Logs pricing details](logs/cost-logs.md).
+- For details on how to analyze the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected, see [Analyze usage in Log Analytics workspace](logs/analyze-usage.md).
+- To control your costs by setting a daily limit on the amount of data that can be ingested in a workspace, see [Set daily cap on Log Analytics workspace](logs/daily-cap.md).
+- For best practices on how to configure and manage Azure Monitor to minimize your charges, see [Azure Monitor best practices - Cost management](best-practices-cost.md).
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West US. Regional coverage will expand as the preview progresses.
-* [Azure Policy built-in definitions for Azure NetApp](azure-policy-definitions.md#built-in-policy-definitions)
+* [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions)
Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. Azure NetApp Files already supports Azure Policy via custom policy definitions. Azure NetApp Files now also provides built-in policy to enable organization admins to restrict creation of unsecure NFS volumes or audit existing volumes more easily.
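As an illustration, a built-in definition could be assigned with PowerShell along these lines; the display-name filter is a guess and the scope is a placeholder, so look up the exact definition name in the built-in policy list first:

```powershell
# Find a built-in Azure NetApp Files definition by display name (hypothetical
# filter), then assign it at subscription scope.
$definition = Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -like '*NetApp Files*' } |
    Select-Object -First 1
New-AzPolicyAssignment -Name "anf-volume-policy" `
    -Scope "/subscriptions/<subscription-id>" `
    -PolicyDefinition $definition
```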
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> | privateendpointredirectmaps | No | No | No |
> | privateendpoints | Yes | Yes | Yes |
> | privatelinkservices | No | No | No |
-> | publicipaddresses | Yes | Yes | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
+> | publicipaddresses | Yes | Yes - Basic SKU<br>No - Standard SKU | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
> | publicipprefixes | Yes | Yes | No |
> | routefilters | No | No | No |
> | routetables | Yes | Yes | No |
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
For example, the following rule is set to Match External Address, and this setti
:::image type="content" source="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity.png" alt-text="Screenshot Internet connectivity inbound Public IP." lightbox="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity-expanded.png":::

If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM.
-For more information on the NSX-T Gateway Firewall see the [NSX-T Gateway Firewall Administration Guide]( https://docs.vmware.com/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html)
+For more information on the NSX-T Gateway Firewall, see the [NSX-T Gateway Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html).
The Distributed Firewall could be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see the [NSX-T Distributed Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html).

## Next steps
The Distributed Firewall could be used to filter traffic to VMs. This feature is
[Enable Managed SNAT for Azure VMware Solution Workloads (Preview)](enable-managed-snat-for-workloads.md)
-[Disable Internet access or enable a default route](disable-internet-access.md)
+[Disable Internet access or enable a default route](disable-internet-access.md)
cognitive-services Choose Natural Language Processing Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/choose-natural-language-processing-service.md
Last updated 04/16/2020
## Next steps
-* Learn [how manage Azure resources](How-To/set-up-qnamaker-service-azure.md)
+* Learn [how to manage Azure resources](How-To/set-up-qnamaker-service-azure.md)
cognitive-services Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/intent-recognition.md
Sample code for intent recognition:
* [Quickstart: Use prebuilt Home automation app](../luis/luis-get-started-create-app.md)
* [Recognize intents from speech using the Speech SDK for C#](./how-to-recognize-intents-from-speech-csharp.md)
-* [Intent recognition and other Speech services using Unity in C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/unity/speechrecognizer)
+* [Intent recognition and other Speech services using Unity in C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/csharp/unity/speechrecognizer)
* [Recognize intents using Speech SDK for Python](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/python/console)
* [Intent recognition and other Speech services using the Speech SDK for C++ on Windows](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/cpp/windows/console)
* [Intent recognition and other Speech services using the Speech SDK for Java on Windows or Linux](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java/jre/console)
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Container | Features | Latest | Release status |
|--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.2.0 | Generally available |
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.2.0 | Generally available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.3.0 | Generally available |
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.3.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.1.0 | Generally available |
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)
-Release note for `3.2.0-amd64`:
+Release note for `3.3.0-amd64`:
**Features**
* Security upgrade.
-* HTTP Proxy support.
-* Improve custom speech model download output message.
* Speech components upgrade.
-Note that due to the phrase lists feature, the size of this container image has increased.
+* Bug Fixes.
+* Support for latest model versions.
| Image Tags | Notes | Digest |
|-|:|:-|
-| `latest` | | `sha256:90e5d8c1675571de926bfffd0e9844fabe8d763cc881bdbccb5bb4faa0258122`|
-| `3.1.0-amd64` | | `sha256:90e5d8c1675571de926bfffd0e9844fabe8d763cc881bdbccb5bb4faa0258122`|
+| `latest` | | `sha256:62cd792c52422adc0a0bf79f30d9204b73d3bd9b66a37d5719e3f93146214c33`|
+| `3.3.0-amd64` | | `sha256:62cd792c52422adc0a0bf79f30d9204b73d3bd9b66a37d5719e3f93146214c33`|
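To pin a deployment to this exact build rather than the mutable `latest` tag, you can pull by digest. A minimal sketch, where `<registry-path>` stands in for the repository path given earlier in this article:

```powershell
# Pull the pinned 3.3.0 build by its digest from the table above.
docker pull "<registry-path>@sha256:62cd792c52422adc0a0bf79f30d9204b73d3bd9b66a37d5719e3f93146214c33"
```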
# [Previous version](#tab/previous)
+Release note for `3.2.0-amd64`:
+
+**Features**
+* Security upgrade.
+* HTTP Proxy support.
+* Improve custom speech model download output message.
+* Speech components upgrade.
+
+Note that due to the phrase lists feature, the size of this container image has increased.
Release note for `3.1.0-amd64`:

**Features**
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia
# [Latest version](#tab/current)
-Release note for `3.2.0-amd64-<locale>`:
+Release note for `3.3.0-amd64-<locale>`:
**Features**
* Security upgrade.
* Speech components upgrade.
-* HTTP Proxy support.
+* Bug Fixes.
+* Upgraded speech models.
+* Support for latest model versions.
-Note that due to the phrase lists feature, the size of this container image has increased.
| Image Tags | Notes |
|-|:--|
| `latest` | Container image with the `en-US` locale. |
-| `3.2.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.2.0-amd64-en-us`. |
+| `3.3.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.3.0-amd64-en-us`. |
This container has the following locales available.

| Locale for v3.3.0 | Notes | Digest |
|--|:--|:--|
-| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:d63c13880627778742e42e00e25fd61aef1c4ee713e5def90642c68ddebc33c6` |
-| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:22e8d931602bb91a86a68caac2c81e323ee9039b0bbd6bb19db35997b7fbd359` |
-| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:80e8e9c6cfec8e53f9b04dcf20d1ee555bd89e72f8c0cdef88b1fbfbd83d6c6c` |
-| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:4a4810d2c3c0be63787994db412845909f10582d428620c70afb41bbeac1d034` |
-| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:acca38a880c5ef6575647d372769acaaa7f14de848b3d07daa05138a27b51d96` |
-| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:d63c13880627778742e42e00e25fd61aef1c4ee713e5def90642c68ddebc33c6` |
-| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:3522d12e3be5eff50b4b33f41c51a137f0efe86aa9a7070ce3b84c78057804c7` |
-| `ar-om` | Container image with the `ar-OM` locale. | `sha256:2a586ff0be16883091e4853f1a15ce0c412e75936db57ad8e6b1a1d8321e2a46` |
-| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:d63c13880627778742e42e00e25fd61aef1c4ee713e5def90642c68ddebc33c6` |
-| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:d63c13880627778742e42e00e25fd61aef1c4ee713e5def90642c68ddebc33c6` |
-| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:082ae364aaffd636676ed0cea755fcc3d0c2654ccd1a5659dada7dc135d43196` |
-| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:e6639b807c3988f39b8a9478d043b847cc6e6223892840323ddd81883a8519c1` |
-| `ca-es` | Container image with the `ca-ES` locale. | `sha256:56570074924735a5952e280aab4e369be4c16b5d8fef14d1f2a6aa7946018e7b` |
-| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:bc8661627ce501091f0754d7062686882c5084cfbd9cb76670cd4e83a04f1834` |
-| `da-dk` | Container image with the `da-DK` locale. | `sha256:2843edc7b98bc189ded859d17ce8f92a650c73caa4e3ebb6752ea06241ce3bdf` |
-| `de-at` | Container image with the `de-AT` locale. | `sha256:e6b55644ebf9e0e9cf5072725a331a688c65f5b0e3a4257aea11470110c4aa71` |
-| `de-ch` | Container image with the `de-CH` locale. | `sha256:57fe70566d02e2dee1078fd7b1effdf82ebad01aed9195051b3fc27f81239e09` |
-| `de-de` | Container image with the `de-DE` locale. | `sha256:f6d46be11e09e01ccf99de0c643fefb38ed2d705ec693d797bc7543202b34007` |
-| `el-gr` | Container image with the `el-GR` locale. | `sha256:e5182ee56f28f43dafe9d503a659bc80e569a7fd6fbc57bc47465feea516c4fc` |
-| `en-au` | Container image with the `en-AU` locale. | `sha256:1e400d993fe59757fc4cf3de67d6b27654a9f53e21cb274145432ad6b880a7c6` |
-| `en-ca` | Container image with the `en-CA` locale. | `sha256:49c5bf1b0d5c83ddd85574005fb56b6e13c11fbc0590af7e731d8b5a603adee1` |
-| `en-gb` | Container image with the `en-GB` locale. | `sha256:d716caaf2e3699ef48e68cc5835b4fbcc8da756076efa30db1ba931cf995bf72` |
-| `en-hk` | Container image with the `en-HK` locale. | `sha256:5a9973801f79fec0faa591de42bd33e32c5b06b157d70ae7a5131cdcc05ea4e8` |
-| `en-ie` | Container image with the `en-IE` locale. | `sha256:c370b14c5fc0121201f6a14c7b713970ae02e6fce1d94b6768f9e301d793cb6f` |
-| `en-in` | Container image with the `en-IN` locale. | `sha256:a3fb074743369368233e8028f3c36e335fc04facff9b8700d1a092aba1dff3ac` |
-| `en-nz` | Container image with the `en-NZ` locale. | `sha256:71ed81d050168774f89da32854a0be5dbe4d89636109fc604d11afba313517ea` |
-| `en-ph` | Container image with the `en-PH` locale. | `sha256:0bb215fc8ce0e669add37630f90e42c93d6936e159ccd1a3fb83e9039212c2a5` |
-| `en-sg` | Container image with the `en-SG` locale. | `sha256:7b827a8f75610bc7f4838857e94231d50e4cb2c3602010ad00b0e970da45d6ef` |
-| `en-us` | Container image with the `en-US` locale. | `sha256:8141da6240e78c7b81955ee91c71eb7dec17f345c9b4dbe577494ee9e159bf5a` |
-| `en-za` | Container image with the `en-ZA` locale. | `sha256:9fe15bf9e12fe4185f8f3295f7f2a6a8563f894754537c1d8cd403a60763010f` |
-| `es-ar` | Container image with the `es-AR` locale. | `sha256:a136551dc0d33791deb370ee7af0ce198110467639a56f4e17a926b56808b0a9` |
-| `es-bo` | Container image with the `es-BO` locale. | `sha256:937cbabebf4f03dc0031e6716d6148479f5bba4d5dd9b7563a4dfa154b6eeace` |
-| `es-cl` | Container image with the `es-CL` locale. | `sha256:b0b9a7cdf96ddda5b90067c4efe7d4e8fb11ab2e3df05f5727b4e8aa916ff102` |
-| `es-co` | Container image with the `es-CO` locale. | `sha256:d639e4db6e3c005f76d36079514e2771e2516ebde86ba6594a9c9691453d0667` |
-| `es-cr` | Container image with the `es-CR` locale. | `sha256:01e2de748bd3632b1a4e25ae9a53503dcfbfc6d4f90f1a4ca804ca3045b6d1df` |
-| `es-cu` | Container image with the `es-CU` locale. | `sha256:1c25106375dbc5846a5ff1c5f594f969efeeb81d6fdb520dd41fae52f23dcc27` |
-| `es-do` | Container image with the `es-DO` locale. | `sha256:2f403243e5b7e097708532903777cf65ea95ab901067fe406a6aba81fc54819c` |
-| `es-ec` | Container image with the `es-EC` locale. | `sha256:164a27ae9c122617197ed67f6df3ac8bb1b38347d2f1e6dcd72254ff5c4cc92b` |
-| `es-es` | Container image with the `es-ES` locale. | `sha256:a39b95ffb862a82345780210aa5e7a8991fe271d55bf8c36693ce81a7adb0739` |
-| `es-gt` | Container image with the `es-GT` locale. | `sha256:90c3c225666dc74f6fe74cbbc8bc8c339a469c36c51ee87a9921253366c6ce69` |
-| `es-hn` | Container image with the `es-HN` locale. | `sha256:fc9e3c9e9dfc0bfda67d11fc9dba3edc62d65187bc9f54de68f473305639d147` |
-| `es-mx` | Container image with the `es-MX` locale. | `sha256:f2e15fe3dca24366240f4f20fc683c45d3a59c1150481f9286dedeb1113ec4da` |
-| `es-ni` | Container image with the `es-NI` locale. | `sha256:bdd341e3180888c3e0bd8fe271d1dccb95be82eca4a0b96ccfeba8f849d5c2d7` |
-| `es-pa` | Container image with the `es-PA` locale. | `sha256:065c343fa668f6ecc580c6e0e0284c76921428a458b50a02f0e7c6a0f65ae155` |
-| `es-pe` | Container image with the `es-PE` locale. | `sha256:17e801c032d8259cdd969ae4de492b1bd57aee9fe1f7345b276e69e93e91d8df` |
-| `es-pr` | Container image with the `es-PR` locale. | `sha256:79d846898998a801f3306657b2ef7a03c083684dec4f59700d82a569a293d5bd` |
-| `es-py` | Container image with the `es-PY` locale. | `sha256:f8efedcf1e8a2c078ff6e1e9c8a958a8d87e1390b51d2bbfb8be13c472e1b82b` |
-| `es-sv` | Container image with the `es-SV` locale. | `sha256:e88890fe32e107175f256452301eced892d4db17b29413926540676471b8f870` |
-| `es-us` | Container image with the `es-US` locale. | `sha256:0a7dcd7199629e2c5e3c36dd5adaa1f91ef59d27fca7b3786a62206dea002d77` |
-| `es-uy` | Container image with the `es-UY` locale. | `sha256:dd79516dbaf642ad42d12be89345914b38130e42f16585dd14c53f1de1303b12` |
-| `es-ve` | Container image with the `es-VE` locale. | `sha256:6ad036af7238b80953947f6a50d7838e977bde43fdebed58a26fc88167110cd0` |
-| `et-ee` | Container image with the `et-EE` locale. | `sha256:e9bcb09b2a777e3034509bba624d7fdb9ce3fc2c10195d9373bd53e1ed09646e` |
-| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:2c28e8b15ae8f0797c66c0e3289637a325f573dd0c80f3e40053c57fd7b6ed79` |
-| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:77ea9f489d3e200be8e701a1c26aa1701569c923af3c64444383f7b7533206be` |
-| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:a3772449d4e7f4f720dc83dc4d29416c9e6e5f6b097cc5645cb1c0b3d42454fa` |
-| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:e4019ce8cb3707abf7bc6b04371822d6f524e6dc22144ca3dc6017899162c8da` |
-| `gu-in` | Container image with the `gu-IN` locale. | `sha256:dff44a4aed248f5b908d4dde61eab3b31ae0505b30f9a096f99569c5eba57e06` |
-| `hi-in` | Container image with the `hi-IN` locale. | `sha256:2b94843006eb3648a47f41b0164ad5d2b09c3169a0266895a91ef78e374b8ad3` |
-| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:4f9932479f7d1bf8b27feae12efa94ad850f7cee277007845629b0a3a81ce1b7` |
-| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:73c84a4a1e58e81b064c29c0e1671639b772eecbfb1b0091ce71a4ccde19b0f2` |
-| `it-it` | Container image with the `it-IT` locale. | `sha256:e8f5491a4bfed56ca5316d36ad216c8fd1a9947eb74c3062c3a4f2c22545e3f6` |
-| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:5ef963124f8c9764d2c7b4abf503a185d781ed200de21071cc7326d8e9360d15` |
-| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:07871a0eeab99e99a22c59a5b234f8ff7ade8c52e216d1770c5af4e6d0889304` |
-| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:bc5dc364b127bb0ef76547952f73e8efe6c9bde949566575ba180ecc143187a8` |
-| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:09bcdf3781771e6826cd50463c1cf16e5b05f68e431ace04dfad97bde1cbae3c` |
-| `mr-in` | Container image with the `mr-IN` locale. | `sha256:e30f8d0d222944699afddbf08e28cdfe59e2d6a291f7ee7adbc7322fca495720` |
-| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:69bd4ecc03ceb9beee1e0328ba2f03c0e68d0e5f66bfe0a7147f743dcba41f1b` |
-| `nb-no` | Container image with the `nb-NO` locale. | `sha256:2bce49324c5b50dfaaec6ac62627876130e0819a70c6f97792932eee4a866cd5` |
-| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:87c365f3aa880bf7161895b7e14766473557634f0b2b80cb1caac526c548e3fa` |
-| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:07710d4061204d09a271685464793474c53b7d641bd96ce2e23bdec9ce30102d` |
-| `pt-br` | Container image with the `pt-BR` locale. | `sha256:c41224ecaa09922773181d2af75efbcec670cce31c02bb72f5b83a6221b2707e` |
-| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:017119251b3de7c5bba3f6e94bc6f0ca1c61978816278bd915367776ed063641` |
-| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:a47954801c3a012fec68453200e35d3f90c5ddb501b8e2b2f57d4066bb9a801a` |
-| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:450c847a023508e0ce113363c90bdd68492fb0ebf3505ba25371682f42615ee0` |
-| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:938fc1b1dd4395ddfe3dda5aea74a6e69c1fda42170ac60110429751d93c2afa` |
-| `sl-si` | Container image with the `sl-SI` locale. | `sha256:31e428a55fbc3b09729a2e0ab52dd5e992d370ff16648c26ff4c36deefe9e7ee` |
-| `sv-se` | Container image with the `sv-SE` locale. | `sha256:bdec89e2f318831bc3caf7b2cf27de9f8b339b7482409902b45e363ad0e97d86` |
-| `ta-in` | Container image with the `ta-IN` locale. | `sha256:1b6e6abad2602708c29677c7c51f01f917da74eb46b4a0a0f304b99057b1a5d5` |
-| `te-in` | Container image with the `te-IN` locale. | `sha256:8d91b732895c4813fa1f14abcc45a0fbe6761a7012451fdc73101b06af708a52` |
-| `th-th` | Container image with the `th-TH` locale. | `sha256:6023049982f58bb4482b64ed68a37aacb09303c5769c8b15a85fb859fe091392` |
-| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:b1196fa5e04f0c597a988a34340fc53d8736fd401bfd8c00dcedfa43767a5de4` |
-| `uk-ua` | Container image with the `uk-UA` locale. | `sha256:8b1b3f06dd8061d336de03cdb8b6853f8299ac15b081675337a40a72c02f6b04` |
-| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:be65edec8ca921e4706de1d64d80ea59e7e0500dc7c21423fb7c756bc32b99a3` |
-| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:d43398f02b9a607ae4c2352d760e7cba4e6546badc904a06f8a855d27fd730e9` |
-| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:ef47403de6bc723682b316c1d97968c11cf28dedda869e689d4e5eecd8d024b2` |
+| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:403e61a814dc2a96709ee3eebc6a01f36e0fa046554f3c877c80396320757e15` |
+| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:49460c5ff58f9ac6374a033230b7ce2a5cf6c00c3861464cbb8068cea61e9b7f` |
+| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:963e26fd4d2f76424a6047821aeb7600e2f8f68001bce794731e6984e8a43daa` |
+| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:7ec2de4303bffbfb7ec5ea9f6aadaebd655202837428e8e42d8f22c681b558b2` |
+| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:2834c8eef1dd2334eaaa151170b1d32496e3222f0e838d71ac9c35a8e915dfe9` |
+| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:403e61a814dc2a96709ee3eebc6a01f36e0fa046554f3c877c80396320757e15` |
+| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:b53093078c86c46517cdff4ae97a75f674bf853e28b1e9e45bb627fc998aeef3` |
+| `ar-om` | Container image with the `ar-OM` locale. | `sha256:2f6ee448f90c1bcfda701ddd0869e748daf37cd57199b853ed781029b8b0eb86` |
+| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:403e61a814dc2a96709ee3eebc6a01f36e0fa046554f3c877c80396320757e15` |
+| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:403e61a814dc2a96709ee3eebc6a01f36e0fa046554f3c877c80396320757e15` |
+| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:266e2ff2b087a098c654aca7ab0cc072199b0ecb27ae9c6d631fb413942596c9` |
+| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:f7f2f448e5f3e3f58b4c185d1dea25cdbda8bc7701a704c064fbb2dfff0c3637` |
+| `ca-es` | Container image with the `ca-ES` locale. | `sha256:d41f54299a4a7106e889c769baa9dc837f161f794464ba768107b21a9c1dbda3` |
+| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:3f5fd98ca36f5061f2f02d78823a21cf54f370c90f55ac65df1ced1f03922b7a` |
+| `da-dk` | Container image with the `da-DK` locale. | `sha256:1e75e5d5a1e77b4b0c6c39adc8ed56c0d9a28176fcb2a10cc64fdd28b6e62f4b` |
+| `de-at` | Container image with the `de-AT` locale. | `sha256:2f922e2cf5a523ca125a71bbc9bc74f8b76ae6d80e5fbb5da28392543882e3ba` |
+| `de-ch` | Container image with the `de-CH` locale. | `sha256:b5e0e7e8dd60a5b6ac2e47506f24ade3189ad0490244dd9ecb8e0eac477ddd81` |
+| `de-de` | Container image with the `de-DE` locale. | `sha256:701ece750b87f6e43d7d2cde7022dcdc68c98ec6dd12672e06b12f0ab7f59cce` |
+| `el-gr` | Container image with the `el-GR` locale. | `sha256:4872e35fd9d302d5cfcc3482dd486599ccd4324be6947b04401a04d09ecc1700` |
+| `en-au` | Container image with the `en-AU` locale. | `sha256:533156d3d940b9751fd6594b1f3e587acab75f26d40291ec70257e3af8954e5f` |
+| `en-ca` | Container image with the `en-CA` locale. | `sha256:18d3bc50dfac664ae8a4fea78523c4911cb6ebd5b156c719bddc4faf479da69f` |
+| `en-gb` | Container image with the `en-GB` locale. | `sha256:c6e384b9d8fc0bdb680ec1792c374e527dfc18748b61fc3f6081512af749bef6` |
+| `en-hk` | Container image with the `en-HK` locale. | `sha256:7fbe9d56efdec8e239b3b9137705f17d3ec004922a81b8b1326b4dbc5b28df42` |
+| `en-ie` | Container image with the `en-IE` locale. | `sha256:e493aca4b94a1c6c9cb4136f734ab91ff373537f6f4f7c3e701a117c53b72c62` |
+| `en-in` | Container image with the `en-IN` locale. | `sha256:4250c134e98fefb1fc901ea7ddb0fc990a5ab799511fc03d5fe200f2a410c591` |
+| `en-nz` | Container image with the `en-NZ` locale. | `sha256:af43260751007e2fcee9387d8262e9bc8145b7d88bdb7b19ed438b51ee6e9e3f` |
+| `en-ph` | Container image with the `en-PH` locale. | `sha256:9bd5ca897c1a98a4235a4300379001f3ce89eb41eee793fe6ba1c8a89317c6a8` |
+| `en-sg` | Container image with the `en-SG` locale. | `sha256:555979ed3739be9337691d3ef874b8257ae038ab52cc4ab1d0a524aaf46beeef` |
+| `en-us` | Container image with the `en-US` locale. | `sha256:975fefa4d62634a9a5f3456411f1b622926daf59bca85502b44e27910cc219b6` |
+| `en-za` | Container image with the `en-ZA` locale. | `sha256:e15e869926ff4a8a1bf86e4e109cf67d92638beee4f33127056d40da46f0805d` |
+| `es-ar` | Container image with the `es-AR` locale. | `sha256:2ed88208af0888031b9d1399f9be6492efb462ba91c78adb1240115c9dad3952` |
+| `es-bo` | Container image with the `es-BO` locale. | `sha256:e74605892684037dcdc1f61630a0cff53789ab1fc64b21ef81ace4fefb5fe9ae` |
+| `es-cl` | Container image with the `es-CL` locale. | `sha256:19a8e17bb8a8e3781a84557433a16c47417695e4080a7fbd061e0270b05843aa` |
+| `es-co` | Container image with the `es-CO` locale. | `sha256:71bbc8895a84522f9d9241ddb63b9d37e8517b539a917b34ab9ac1fa916cf7e8` |
+| `es-cr` | Container image with the `es-CR` locale. | `sha256:3e9b55f3596581a148d89b263ebf2ed71a210c6532f351ab299f4e1a6f92a579` |
+| `es-cu` | Container image with the `es-CU` locale. | `sha256:56f18ea50909fc8ea580f83482c52d551c486955325e0feaf2fc438058421bd9` |
+| `es-do` | Container image with the `es-DO` locale. | `sha256:116c73de5c1139e20e8ebe429e7d3c21f3e8e1355b3e6139f1fa91a9ea4eb948` |
+| `es-ec` | Container image with the `es-EC` locale. | `sha256:3a1147b8f741c816d96a8d28e4877eeb0c32ed2113f8ea610da3528fb1afe18d` |
+| `es-es` | Container image with the `es-ES` locale. | `sha256:d23fd33bc01ce852a97f1e33606bec0ec205b1b11f58bb315c417b5388a7c61c` |
+| `es-gt` | Container image with the `es-GT` locale. | `sha256:752bc5a7c7ef5ac17f677ae09ae169fe818f674e91ae84167870869dd8a7c285` |
+| `es-hn` | Container image with the `es-HN` locale. | `sha256:a16a5fba252388fb53f1a447ee8506acac32a09914d371e5aba64286e4934f8d` |
+| `es-mx` | Container image with the `es-MX` locale. | `sha256:8adfda483264b9cd4a9b89e09aa3039975aa00a46cf910ea852830b9780af294` |
+| `es-ni` | Container image with the `es-NI` locale. | `sha256:77de73d2cfabfc4c0017e42b09fd9ac61c1c49a4e9c9a5f06e04f7ccbdec50c0` |
+| `es-pa` | Container image with the `es-PA` locale. | `sha256:91bc19b1314bee3e9938c78a16f45c1956fc7ccdd9ea7fdc0ff8dffe1c82502a` |
+| `es-pe` | Container image with the `es-PE` locale. | `sha256:cd54ebc1037e1ba5222f270ca4f23e3f90279f1354a94bb12e997bec3ccf9489` |
+| `es-pr` | Container image with the `es-PR` locale. | `sha256:137aa946129121d1648c6222616616a414313936cbc9fbb5a094377861e3deac` |
+| `es-py` | Container image with the `es-PY` locale. | `sha256:ace1dee02aed5e54e4fb569b1f8c4891f0e2637c4ecec17a7cc29f89b7c71a65` |
+| `es-sv` | Container image with the `es-SV` locale. | `sha256:f7f011fcbae4eafd0dccc04c94fda09f811dbbed583e570571a14ec683f17197` |
+| `es-us` | Container image with the `es-US` locale. | `sha256:22d3a9fd529ba566764a1c806723ee9c0aa4cf1741bfe9142898f6eb7cdfa849` |
+| `es-uy` | Container image with the `es-UY` locale. | `sha256:0698c2725f0210f03b1f9f818bbdfaefea17adbf758fa8853dcd751adab5500a` |
+| `es-ve` | Container image with the `es-VE` locale. | `sha256:575f85ec23c01e088fd6b2a651a7ece5a466b4c02410ee5be41947869e63041c` |
+| `et-ee` | Container image with the `et-EE` locale. | `sha256:1f4e4049118a0c4374640f6171604a3b276325057aad1160390292f34efc4af5` |
+| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:fcf1f84a043d8ae07d30639023013978ad560270d8899a268a2a25d0de811050` |
+| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:97d163b65c8c50660cb11d0adb4a0bd9af35e84d5c2b71d7a56747c25d890469` |
+| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:b2063c7cf561ec31d0d11660ec37c027cec9098f16e19a9127988f5d4e4946be` |
+| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:5ade2e9c7acd2d0d73e4879ae59820449804122d41643412a702d1bcbb052bf2` |
+| `gu-in` | Container image with the `gu-IN` locale. | `sha256:a72c096af128fc302d3a211ed401f0f31c1b0291a6651f8821b2bb35da22c4a2` |
+| `hi-in` | Container image with the `hi-IN` locale. | `sha256:a174875198e7003cc931deb532327cb28afa9bc5f6bfabb0c314159797f65004` |
+| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:8f59f20992393685007d6d43eaae6e02370af5442fe601bb4d65617b35214eac` |
+| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:930a0207ab0087f7337c0af704f7598aa2e7180a87bb1e6daa4ced918a21b965` |
+| `it-it` | Container image with the `it-IT` locale. | `sha256:510aab7a813fce7571ba467e1bc692f657e8097796b14a875bed0c9f7cfa3549` |
+| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:7bfd27592df4f6bf995cb877bd4e61158bf535055b5d39528fcc18e487c99f75` |
+| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:0b8c26b21f25aa85dc95f40870d59c1a6cf9fb49749d464f3a04a762df321314` |
+| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:2352365b4075d56c9f7b3bffa924c619bd0114e116113746c5032f62079a8cf1` |
+| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:10c9b63fda2cd21de71cfeceaf7d4bbe04255d4031592c838425c1fb76fbfa87` |
+| `mr-in` | Container image with the `mr-IN` locale. | `sha256:5bc7ab2c0f6b8ae0ae2614e456216223d5aa3b9eb354bc13977d765f40fd9c25` |
+| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:f2f22e330006e4d2698b226e86e9e83a1dde2cdbb7f7ee5af64949e33ee23794` |
+| `nb-no` | Container image with the `nb-NO` locale. | `sha256:819510c34fecbd6452c8b8818c0e583581c42e54b7f43d960009b0759c53ea71` |
+| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:ee1bfd505b4771a999b2e15a86d0c938948dc47697ca9367f8d8fd74695c89fb` |
+| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:56c7d75cc04656ae3c47e1598c043cb88942f726efad90fc248ccdeb553cdf5e` |
+| `pt-br` | Container image with the `pt-BR` locale. | `sha256:e471d73d22c7011bd423e4cdb08e69c2a4b3b809186abe5c9025d4949dd32b82` |
+| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:425c31d6419fd037644ccc1cef9b3632820345a7032204b1cc53004e854e8afb` |
+| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:bc31915b91a82590032a5b5675e1f462dcd0875b8e2822841cb2a278ba596be1` |
+| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:a6764c6b6e49aeae3ee33e4fec56ae3df1dfba24c4ecb26463eb267f76113af3` |
+| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:06e179fb2e2f9237046d349f749505ab2ea8e28e86126ac2265a380a0fdafc2e` |
+| `sl-si` | Container image with the `sl-SI` locale. | `sha256:c9ee93779388e334383600a49988a8fb78b9548d41ce363568413cd30f3dbb48` |
+| `sv-se` | Container image with the `sv-SE` locale. | `sha256:8242473d3f8e716d68602851629e4e5d770188b570e88d5f57344cdb709d4188` |
+| `ta-in` | Container image with the `ta-IN` locale. | `sha256:68e7e3de77446732745114ec5489e216c16851f0e81d47957d51d8f043bafe39` |
+| `te-in` | Container image with the `te-IN` locale. | `sha256:2b77606f1c058418e2e1dbb0108e6fb620f9247e8ed760935c58ada2b1a7e252` |
+| `th-th` | Container image with the `th-TH` locale. | `sha256:6131948599520c80fbfd91e82c859b1c7c00722202815ed6a8d89f36a6114846` |
+| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:757bfdec75e025dd26717801302d075356287bdb04138ad320bd292218dda3ee` |
+| `uk-ua` | Container image with the `uk-UA` locale. | `sha256:d0d53dafa8a8903ba1db754f08bdcf9381eed2fdac30b4f7c005bd11201fc01d` |
+| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:3234557e3267aeb97605035995b2b3c1ab58f37e3a9da7d8e2dd18588cdd2557` |
+| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:048fd6b57e0272f795015a1b77e6008d5f12a79c454987fccf391b3385228d16` |
+| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:c4310ec2c5912b07bbc4673aad57e50be1fe2a0627e73c66288f2bd6c4f50945` |
# [Previous version](#tab/previous)
+Release note for `3.2.0-amd64-<locale>`:
+
+**Features**
+* Security upgrade.
+* Speech components upgrade.
+* HTTP Proxy support.
+
+Note that due to the phrase lists feature, the size of this container image has increased.
+Release note for `3.1.0-amd64-<locale>`:
+
+**Features**
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Last updated 11/02/2021 -+ # Tutorial: Deploy a background processing application with Azure Container Apps
-Using Azure Container Apps allows you to deploy applications without requiring the exposure of public endpoints. By using Container Apps scale rules, the application can scale up and down based on the Azure Storage queue length. When there are no messages on the queue, the container app scales down to zero.
+Using Azure Container Apps allows you to deploy applications without requiring the exposure of public endpoints. By using Container Apps scale rules, the application can scale out and in based on the Azure Storage queue length. When there are no messages on the queue, the container app scales in to zero.
You learn how to:
New-AzResourceGroupDeployment `
This command deploys the demo application from the public container image called `mcr.microsoft.com/azuredocs/containerapps-queuereader` and sets secrets and environment variables used by the application.
-The application scales up to 10 replicas based on the queue length as defined in the `scale` section of the ARM template.
+The application scales out to 10 replicas based on the queue length as defined in the `scale` section of the ARM template.
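To see the scale rule in action, you can generate load by pushing a batch of messages onto the storage queue and watching the replica count climb. The following C# snippet is a minimal sketch for generating that load; the queue name `myqueue` and the connection string are placeholders, not values taken from this tutorial.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues; // Azure.Storage.Queues NuGet package

class QueueLoadGenerator
{
    static async Task Main()
    {
        // Placeholders: use the storage account and queue name that your
        // container app's scale rule is configured to watch.
        string connectionString = "<your-storage-connection-string>";
        var queueClient = new QueueClient(connectionString, "myqueue");

        await queueClient.CreateIfNotExistsAsync();

        // Enqueue enough messages to push the queue length past the scale
        // rule's threshold so that additional replicas are created.
        for (int i = 0; i < 100; i++)
        {
            await queueClient.SendMessageAsync($"Message {i}");
        }

        Console.WriteLine("Queued 100 messages; watch the replica count scale out.");
    }
}
```

Once the queue drains and stays empty, the app should scale back in to zero replicas.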
## Verify the result
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Create an ARM template to deploy a Container Apps environment that includes:
* the associated Log Analytics workspace * the Application Insights resource for distributed tracing * a dapr component for the state store
-* the two dapr-enabled container apps
+* the two dapr-enabled container apps: [hello-k8s-node](https://hub.docker.com/r/dapriosamples/hello-k8s-node) and [hello-k8s-python](https://hub.docker.com/r/dapriosamples/hello-k8s-python)
+
Save the following file as _hello-world.json_:
Create a bicep template to deploy a Container Apps environment that includes:
* the associated Log Analytics workspace * the Application Insights resource for distributed tracing * a dapr component for the state store
-* the two dapr-enabled container apps
+* the two dapr-enabled container apps: [hello-k8s-node](https://hub.docker.com/r/dapriosamples/hello-k8s-node) and [hello-k8s-python](https://hub.docker.com/r/dapriosamples/hello-k8s-python)
Save the following file as _hello-world.bicep_:
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Your state store is configured using the Dapr component described in *statestore
## Deploy the service application (HTTP web server) + # [Bash](#tab/bash) ```azurecli
az containerapp create `
+By default, the image is pulled from [Docker Hub](https://hub.docker.com/r/dapriosamples/hello-k8s-node).
+ This command deploys: * the service (Node) app server on `--target-port 3000` (the app port)
az containerapp create `
-This command deploys `pythonapp` that also runs with a Dapr sidecar that is used to look up and securely call the Dapr sidecar for `nodeapp`. As this app is headless there's no `--target-port` to start a server, nor is there a need to enable ingress.
+By default, the image is pulled from [Docker Hub](https://hub.docker.com/r/dapriosamples/hello-k8s-python).
+
+This command deploys `pythonapp`, which also runs with a Dapr sidecar that's used to look up and securely call the Dapr sidecar for `nodeapp`. As this app is headless, there's no `--target-port` to start a server, nor is there a need to enable ingress.
## Verify the result
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
To protect the Azure Resource Manager based registries in your subscription, ena
## Availability > [!IMPORTANT]
-> Microsoft Defender for container registries has been replaced with **Microsoft Defender for Containers**. If you've already enabled Defender for container registries on a subscription, you can continue to use it. However, you won't get Defender for Containers' improvements and new features.
+> Microsoft Defender for container registries has been replaced with [**Microsoft Defender for Containers**](defender-for-containers-introduction.md). If you've already enabled Defender for container registries on a subscription, you can continue to use it. However, you won't get Defender for Containers' improvements and new features.
> > This plan is no longer available for subscriptions where it isn't already enabled. >
To protect the Azure Resource Manager based registries in your subscription, ena
|Aspect|Details| |-|:-|
-|Release state:|Generally available (GA)|
+|Release state:|Deprecated (Use [**Microsoft Defender for Containers**](defender-for-containers-introduction.md))|
|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)| |Supported registries and images:|Linux images in ACR registries accessible from the public internet with shell access<br>[ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)| |Unsupported registries and images:|Windows images<br>'Private' registries (unless access is granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services))<br>Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images, or "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br>Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md)|
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Host-level threat detection for your Linux AKS nodes is available if you enable
|Aspect|Details| |-|:-|
-|Release state:|General availability (GA)|
+|Release state:|Deprecated (Use [**Microsoft Defender for Containers**](defender-for-containers-introduction.md))|
|Pricing:|**Microsoft Defender for Kubernetes** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).| |Required roles and permissions:|**Security admin** can dismiss alerts.<br>**Security reader** can view findings.| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
The following table displays roles and allowed actions in Defender for Cloud.
| Edit security policy | - | ✔ | - | ✔ | ✔ |
| Enable / disable Microsoft Defender plans | - | ✔ | - | ✔ | ✔ |
| Dismiss alerts | - | ✔ | - | ✔ | ✔ |
-| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | ✔ | ✔ | ✔ | ✔ |
+| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | - | ✔ | ✔ | ✔ |
| View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ |
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
The following image shows a view of the back panel of the HPE Edgeline EL300.
|CPU|Intel Core i7-8650U (1.9GHz/4-core/15W)| |Chipset|Intel® Q170 Platform Controller Hub| |Memory|8 GB DDR4 2133 MHz Wide Temperature SODIMM|
-|Storage|128 GB 3ME3 Wide Temperature mSATA SSD|
+|Storage|256-GB SATA 6G Read Intensive M.2 2242 3 year warranty wide temperature SSD|
|Network controller|6x Gigabit Ethernet ports by Intel® I219| |Device access|4 USBs: Two fronts; two rears; 1 internal| |Power Adapter|250V/10A|
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
This procedure describes how to update the HPE BIOS configuration for your OT de
1. In the **Create Array** form, select all the options. Three options are available for the **Enterprise** appliance.
+> [!NOTE]
+> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting Drives (SED).
+>
+ ### Install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus.
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
The following image shows a view of the HPE ProLiant Dl360 back panel:
|**Power** |Two HPE 500-W flex slot platinum hot plug low halogen power supply kit |**Rack support** | HPE 1U Gen10 SFF easy install rail kit | + ## HPE DL360 BOM |PN |Description |Quantity|
This procedure describes how to update the HPE BIOS configuration for your OT se
1. In the **Create Array** form, select all the options.
+> [!NOTE]
+> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting Drives (SED).
+>
### Install iLO remotely from a virtual drive
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
To update sign-out counting periods, adjust the `= <number>` value to the requir
## Track user activity
-You can track user activity in the event timeline on each sensor. The timeline displays the event or affected device, and the time and date that the user carried out the activity.
+Track user activity in a sensor's event timeline, or view audit logs generated on an on-premises management console.
-**To view user activity**:
+- **The timeline** displays the event or affected device, and the time and date that the user carried out the activity.
-1. Select **Event Timeline** from the sensor side menu.
+- **Audit logs** record key activity data at the time of occurrence. Use audit logs generated on the on-premises management console to understand which changes were made, when, and by whom.
-1. Verify that **User Operations** filter is set to **Show**.
+### View user activity on the sensor's Event Timeline
- :::image type="content" source="media/how-to-create-and-manage-users/track-user-activity.png" alt-text="Screenshot of the Event timeline showing a user that signed in to Defender for IoT.":::
+Select **Event Timeline** from the sensor side menu. If needed, verify that the **User Operations** filter is set to **Show**.
-1. Use the filters or Ctrl F option to find the information of interest to you.
+For example:
+
+Use the filters or search using CTRL+F to find the information of interest to you.
+
+### View audit log data on the on-premises management console
+
+In the on-premises management console, select **System Settings > System Statistics**, and then select **Audit log**.
+
+The dialog displays data from the currently active audit log.
+
+A new audit log is generated when the current log reaches 10 MB. One previous log is stored in addition to the currently active log file.
+
+Audit logs include the following data:
+
+| Action | Information logged |
+|--|--|
+| **Learn, and remediation of alerts** | Alert ID |
+| **Password changes** | User, User ID |
+| **Login** | User |
+| **User creation** | User, User role |
+| **Password reset** | User name |
+| **Exclusion rules-Creation**| Rule summary |
+| **Exclusion rules-Editing**| Rule ID, Rule Summary |
+| **Exclusion rules-Deletion** | Rule ID |
+| **Management Console Upgrade** | The upgrade file used |
+| **Sensor upgrade retry** | Sensor ID |
+| **Uploaded TI package** | No additional information recorded. |
++
+> [!TIP]
+> You may also want to export your audit logs to send them to the support team for further troubleshooting. For more information, see [Export audit logs for troubleshooting](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-audit-logs-for-troubleshooting).
+>
## Change a user's password
You can recover the password for the on-premises management console or the senso
1. Select **Next**, and your user, and a system-generated password for your management console will then appear. + ## Next steps - [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Title: Troubleshoot the sensor and on-premises management console description: Troubleshoot your sensor and on-premises management console to eliminate any problems you might be having. Previously updated : 05/22/2022 Last updated : 06/15/2022 # Troubleshoot the sensor and on-premises management console
All allowlists, policies, and configuration settings are cleared, and the sensor
## Troubleshoot an on-premises management console
-### Investigate a lack of expected alerts on the management console
-If an expected alert is not shown in the **Alerts** window, verify the following:
+### Investigate a lack of expected alerts
+
+If you don't see an expected alert on the on-premises **Alerts** page, do the following to troubleshoot:
-- Check if the same alert already appears in the **Alerts** window as a reaction to a different security instance. If yes, and this alert has not been handled yet, a new alert is not shown.
+- Verify whether the alert is already listed as a reaction to a different security instance. If it is, and that alert hasn't yet been handled, a new alert isn't shown elsewhere.
-- Verify that you did not exclude this alert by using the **Alert Exclusion** rules in the on-premises management console.
+- Verify that the alert isn't being excluded by **Alert Exclusion** rules. For more information, see [Create alert exclusion rules](how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules).
### Tweak the Quality of Service (QoS)
To limit the number of alerts, use the `notifications.max_number_to_report` prop
1. Save the changes. No restart is required.
+### Export audit logs for troubleshooting
+Audit logs record key activity data at the time of occurrence. Use audit logs generated on the on-premises management console to understand which changes were made, when, and by whom.
-### Export audit logs from the management console
+You may also want to export your audit logs to send them to the support team for further troubleshooting.
+
+> [!NOTE]
+> A new audit log is generated when the current log reaches 10 MB. One previous log is stored in addition to the currently active log file.
+>
-Audit logs record key information at the time of occurrence. Audit logs are useful when you are trying to figure out what changes were made, and by who. Audit logs can be exported in the management console, and contain the following information:
+**To export audit log data**:
-| Action | Information logged |
-|--|--|
-| **Learn, and remediation of alerts** | Alert ID |
-| **Password changes** | User, User ID |
-| **Login** | User |
-| **User creation** | User, User role |
-| **Password reset** | User name |
-| **Exclusion rules-Creation**| Rule summary |
-| **Exclusion rules-Editing**| Rule ID, Rule Summary |
-| **Exclusion rules-Deletion** | Rule ID |
-| **Management Console Upgrade** | The upgrade file used |
-| **Sensor upgrade retry** | Sensor ID |
-| **Uploaded TI package** | No additional information recorded. |
+1. In the on-premises management console, select **System Settings > Export**.
-**To export the audit log**:
+1. In the **Export Troubleshooting Information** dialog:
-1. In the management console, in the left pane, select **System Settings**.
+ 1. In the **File Name** field, enter a meaningful name for the exported log. The default filename uses the current time and date, such as **13:10-June-14-2022.tar.gz**.
-1. Select **Export**.
+ 1. Select **Audit Logs**.
-1. In the File Name field, enter the file name that you want to use for the exported log. If no name is entered, the default file name will be the current date.
+ 1. Select **Export**.
-1. Select **Audit Logs**.
+ The file is exported and is linked from the **Archived Files** list at the bottom of the **Export Troubleshooting Information** dialog. Select the link to download the file.
-1. Select **Export**.
+1. Exported audit logs are encrypted for your security, and require a password to open. In the **Archived Files** list, select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button for your exported log to view its password. If you're forwarding the audit logs to the support team, make sure to send the password to support separately from the exported logs.
-The exported log is added to the **Archived Logs** list. Select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view the OTP. Send the OTP string to the support team in a separate message from the exported logs. The support team will be able to extract exported logs only by using the unique OTP that's used to encrypt the logs.
+For more information, see [View audit log data on the on-premises management console](how-to-create-and-manage-users.md#view-audit-log-data-on-the-on-premises-management-console).
## Next steps
event-grid Custom Disaster Recovery Client Side https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-disaster-recovery-client-side.md
+
+ Title: Build your own client-side disaster recovery for Azure Event Grid topics
+description: This article describes how you can build your own client-side disaster recovery for Azure Event Grid topics.
+ Last updated : 06/14/2022
+ms.devlang: csharp
+++
+# Build your own client-side disaster recovery for Azure Event Grid topics
+
+Disaster recovery focuses on recovering from a severe loss of application functionality. This tutorial will walk you through how to set up your eventing architecture to recover if the Event Grid service becomes unhealthy in a particular region.
+
+In this tutorial, you'll learn how to create an active-passive failover architecture for custom topics in Event Grid. You'll accomplish failover by mirroring your topics and subscriptions across two regions and then managing a failover when a topic becomes unhealthy. The architecture in this tutorial fails over all new traffic. It's important to be aware that, with this setup, events already in flight won't be recovered until the compromised region is healthy again.
+
+> [!NOTE]
+> Event Grid supports automatic geo disaster recovery (GeoDR) on the server side now. You can still implement client-side disaster recovery logic if you want a greater control on the failover process. For details about automatic GeoDR, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md).
+
+## Create a message endpoint
+
+To test your failover configuration, you'll need an endpoint to receive your events. The endpoint isn't part of your failover infrastructure, but acts as the event handler to make testing easier.
+
+To simplify testing, deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+
+1. [Deploy the solution](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fazure-event-grid-viewer%2Fmaster%2Fazuredeploy.json) to your subscription. In the Azure portal, provide values for the parameters.
+1. The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
+`https://<your-site-name>.azurewebsites.net`
+Make sure to note this URL as you'll need it later.
+
+1. You see the site, but no events have been posted to it yet.
+
+ ![Screenshot showing your web site with no events.](./media/blob-event-quickstart-portal/view-site.png)
+++
+## Create primary and secondary topics
+
+First, create two Event Grid topics. These topics will act as your primary and secondary topics. By default, your events will flow through the primary topic. If there's a service outage in the primary region, your secondary topic will take over.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the upper left corner of the main Azure menu,
+ choose **All services** > search for **Event Grid** > select **Event Grid topics**.
+
+ ![Screenshot showing the Event Grid topics menu.](./media/custom-disaster-recovery/select-topics-menu.png)
+
+ Select the star next to Event Grid topics to add it to the resource menu for easier access in the future.
+
+1. In the Event Grid topics menu, select **+ADD** to create the primary topic.
+
+ * Give the topic a logical name and add "-primary" as a suffix to make it easy to track.
+ * This topic's region will be your primary region.
+
+ ![Screenshot showing the Create primary topic page.](./media/custom-disaster-recovery/create-primary-topic.png)
+
+1. Once the topic has been created, navigate to it and copy the **Topic Endpoint**. You'll need the URI later.
+
+ ![Screenshot showing the topic endpoint.](./media/custom-disaster-recovery/get-primary-topic-endpoint.png)
+
+1. Get the access key for the topic, which you'll also need later. Click on **Access keys** in the resource menu and copy Key 1.
+
+ ![Screenshot showing the topic's access key.](./media/custom-disaster-recovery/get-primary-access-key.png)
+
+1. On the **Topic** page, select **+Event Subscription** to create a subscription that connects the event receiver website you created earlier to your topic.
+
+ * Give the event subscription a logical name and add "-primary" as a suffix to make it easy to track.
+ * For **Endpoint Type**, select **Web Hook**.
+ * Set the endpoint to your event receiver's event URL, which should look something like: `https://<your-event-receiver>.azurewebsites.net/api/updates`
+
+ ![Screenshot that shows the "Create Event Subscription - Basic" page with the "Name", "Endpoint Type", and "Endpoint" values highlighted.](./media/custom-disaster-recovery/create-primary-es.png)
+
+1. Repeat the same flow to create your secondary topic and subscription. This time, replace the "-primary" suffix with "-secondary" for easier tracking. Finally, make sure you put it in a different Azure Region. While you can put it anywhere you want, it's recommended that you use the [Azure Paired Regions](../availability-zones/cross-region-replication-azure.md). Putting the secondary topic and subscription in a different region ensures that your new events will flow even if the primary region goes down.
+
+You should now have:
+
+ * An event receiver website for testing.
+ * A primary topic in your primary region.
+ * A primary event subscription connecting your primary topic to the event receiver website.
+ * A secondary topic in your secondary region.
+ * A secondary event subscription connecting your secondary topic to the event receiver website.
+
+## Implement client-side failover
+
+Now that you have a regionally redundant pair of topics and subscriptions set up, you're ready to implement client-side failover. There are several ways to accomplish it, but all failover implementations will have a common feature: if one topic is no longer healthy, traffic will redirect to the other topic.
+
+### Basic client-side implementation
+
+The following sample code is a simple .NET publisher that will always attempt to publish to your primary topic first. If it doesn't succeed, it will then fail over to the secondary topic. In either case, it also checks the health API of the other topic by doing a GET on `https://<topic-name>.<topic-region>.eventgrid.azure.net/api/health`. A healthy topic should always respond with **200 OK** when a GET is made on the **/api/health** endpoint.
+
+> [!NOTE]
+> The following sample code is only for demonstration purposes and is not intended for production use.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Azure;
+using Azure.Messaging.EventGrid;
+
+namespace EventGridFailoverPublisher
+{
+ // This captures the "Data" portion of an EventGridEvent on a custom topic
+ class FailoverEventData
+ {
+ public string TestStatus { get; set; }
+ }
+
+ class Program
+ {
+ static async Task Main(string[] args)
+ {
+ // TODO: Enter the endpoint for each topic. You can find this topic endpoint value
+ // in the "Overview" section in the "Event Grid topics" page in Azure Portal.
+ string primaryTopic = "https://<primary-topic-name>.<primary-topic-region>.eventgrid.azure.net/api/events";
+ string secondaryTopic = "https://<secondary-topic-name>.<secondary-topic-region>.eventgrid.azure.net/api/events";
+
+ // TODO: Enter topic key for each topic. You can find this in the "Access Keys" section in the
+ // "Event Grid topics" page in Azure Portal.
+ string primaryTopicKey = "<your-primary-topic-key>";
+ string secondaryTopicKey = "<your-secondary-topic-key>";
+
+ Uri primaryTopicUri = new Uri(primaryTopic);
+ Uri secondaryTopicUri = new Uri(secondaryTopic);
+
+ Uri primaryTopicHealthProbe = new Uri($"https://{primaryTopicUri.Host}/api/health");
+ Uri secondaryTopicHealthProbe = new Uri($"https://{secondaryTopicUri.Host}/api/health");
+
+ var httpClient = new HttpClient();
+
+ try
+ {
+ var client = new EventGridPublisherClient(primaryTopicUri, new AzureKeyCredential(primaryTopicKey));
+
+ await client.SendEventsAsync(GetEventsList());
+ Console.Write("Published events to primary Event Grid topic.");
+
+ HttpResponseMessage health = await httpClient.GetAsync(secondaryTopicHealthProbe);
+ Console.Write("\n\nSecondary Topic health " + health);
+ }
+ catch (RequestFailedException ex)
+ {
+ var client = new EventGridPublisherClient(secondaryTopicUri, new AzureKeyCredential(secondaryTopicKey));
+
+ await client.SendEventsAsync(GetEventsList());
+ Console.Write("Published events to secondary Event Grid topic. Reason for primary topic failure:\n\n" + ex);
+
+ HttpResponseMessage health = await httpClient.GetAsync(primaryTopicHealthProbe);
+ Console.WriteLine($"Primary Topic health {health}");
+ }
+
+ Console.ReadLine();
+ }
+
+ static IList<EventGridEvent> GetEventsList()
+ {
+ List<EventGridEvent> eventsList = new List<EventGridEvent>();
+
+ for (int i = 0; i < 5; i++)
+ {
+ eventsList.Add(new EventGridEvent(
+ subject: "test" + i,
+ eventType: "Contoso.Failover.Test",
+ dataVersion: "2.0",
+ data: new FailoverEventData
+ {
+ TestStatus = "success"
+ }));
+ }
+
+ return eventsList;
+ }
+ }
+}
+```
+
+### Try it out
+
+Now that you have all of your components in place, you can test out your failover implementation. Run the above sample in Visual Studio Code, or your favorite environment. Replace the following four values with the endpoints and keys from your topics:
+
+ * primaryTopic - the endpoint for your primary topic.
+ * secondaryTopic - the endpoint for your secondary topic.
+ * primaryTopicKey - the key for your primary topic.
+ * secondaryTopicKey - the key for your secondary topic.
+
+Try running the event publisher. You should see your test events land in your Event Grid viewer, as shown below.
+
+![Screenshot showing the Event Grid Viewer app.](./media/custom-disaster-recovery/event-grid-viewer.png)
+
+To make sure your failover is working, you can change a few characters in your primary topic key to make it no longer valid. Try running the publisher again. You should still see new events appear in your Event Grid viewer; however, when you look at your console, you'll see that they're now being published via the secondary topic.
+
+### Possible extensions
+
+There are many ways to extend this sample based on your needs. For high-volume scenarios, you may want to regularly check the topic's health API independently. That way, if a topic were to go down, you don't need to check it with every single publish. Once you know a topic isn't healthy, you can default to publishing to the secondary topic.
+
+Similarly, you may want to implement failback logic based on your specific needs. If publishing to the closest data center is critical for you to reduce latency, you can periodically probe the health API of a topic that has failed over. Once it's healthy again, you'll know it's safe to fail back to the closer data center.
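+
+As an illustration of the first extension, the following sketch polls the primary topic's health API on a timer and caches the result, so each publish reads a flag instead of probing inline. The class name, polling interval, and wiring are assumptions for demonstration only, not part of the sample above.
+
+```csharp
+using System;
+using System.Net;
+using System.Net.Http;
+using System.Threading;
+using System.Threading.Tasks;
+
+// Background monitor: probes /api/health periodically and caches the result.
+// A publisher checks PrimaryHealthy before each send, and "fails back"
+// automatically once the probe starts returning 200 OK again.
+class TopicHealthMonitor
+{
+    static readonly HttpClient httpClient = new HttpClient();
+    volatile bool primaryHealthy = true;
+
+    public bool PrimaryHealthy => primaryHealthy;
+
+    public async Task RunAsync(Uri primaryHealthProbe, TimeSpan interval, CancellationToken token)
+    {
+        while (!token.IsCancellationRequested)
+        {
+            try
+            {
+                // A healthy topic responds 200 OK on its /api/health endpoint.
+                HttpResponseMessage response = await httpClient.GetAsync(primaryHealthProbe, token);
+                primaryHealthy = response.StatusCode == HttpStatusCode.OK;
+            }
+            catch (HttpRequestException)
+            {
+                primaryHealthy = false;
+            }
+            catch (OperationCanceledException)
+            {
+                break;
+            }
+
+            try
+            {
+                await Task.Delay(interval, token);
+            }
+            catch (TaskCanceledException)
+            {
+                break;
+            }
+        }
+    }
+}
+```
+
+Before each publish, read `PrimaryHealthy` to decide which `EventGridPublisherClient` to use; because the flag flips back as soon as the probe succeeds again, failback to the closer data center happens on the next publish.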
+
+## Next steps
+
+- Learn how to [receive events at an http endpoint](./receive-events.md)
+- Discover how to [route events to Hybrid Connections](./custom-event-to-hybrid-connection.md)
+- Learn about [disaster recovery using Azure DNS and Traffic Manager](../networking/disaster-recovery-dns-traffic-manager.md)
event-grid Custom Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-disaster-recovery.md
Title: Disaster recovery for custom topics in Azure Event Grid
+ Title: Build your own disaster recovery plan for Azure Event Grid topics and domains
description: This tutorial will walk you through how to set up your eventing architecture to recover if the Event Grid service becomes unhealthy in a region. Previously updated : 04/22/2021 Last updated : 06/14/2022 ms.devlang: csharp
-# Build your own disaster recovery for custom topics in Event Grid
-Disaster recovery focuses on recovering from a severe loss of application functionality. This tutorial will walk you through how to set up your eventing architecture to recover if the Event Grid service becomes unhealthy in a particular region.
+# Build your own disaster recovery plan for Azure Event Grid topics and domains
-In this tutorial, you'll learn how to create an active-passive failover architecture for custom topics in Event Grid. You'll accomplish failover by mirroring your topics and subscriptions across two regions and then managing a failover when a topic becomes unhealthy. The architecture in this tutorial fails over all new traffic. it's important to be aware, with this setup, events already in flight won't be recovered until the compromised region is healthy again.
+If you've decided not to replicate any data to a paired region, you'll need to put some practices in place to build your own disaster recovery scenario and recover from a severe loss of application functionality.
-> [!NOTE]
-> Event Grid supports automatic geo disaster recovery (GeoDR) on the server side now. You can still implement client-side disaster recovery logic if you want a greater control on the failover process. For details about automatic GeoDR, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md).
+## Build your scripts for automation
-## Create a message endpoint
+Keep your deployment pipelines automated; handcrafted processes can cause delays when a failover occurs. Ensure all your Azure deployments are backed up in scripts or templates so that deployments can be easily replicated in one or more regions if needed. Don't try to reinvent the wheel: use what's already proven to work. Many automation tools, such as [Azure DevOps](/azure/devops/) or [GitHub Actions](https://docs.github.com/en/actions), can help with cloud deployment automation. Use the one you're most comfortable working with, and use this how-to guide as a checklist reference.
-To test your failover configuration, you'll need an endpoint to receive your events at. The endpoint isn't part of your failover infrastructure, but will act as our event handler to make it easier to test.
+## Define the regions in your plan
-To simplify testing, deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+To create a recovery plan, you'll need to choose which regions will be used in your plan. When you choose the regions, also consider the possible latency between your users and the cloud resources. Try to choose a secondary region that's close to your primary region.
-1. Select **Deploy to Azure** to deploy the solution to your subscription. In the Azure portal, provide values for the parameters.
+## Selecting a cross-region router
- <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fazure-event-grid-viewer%2Fmaster%2Fazuredeploy.json" target="_blank"><img src="../media/template-deployments/deploy-to-azure.svg" alt="Button to deploy to Azure."></a>
+Once you've defined the regions, you'll need to choose a cross-region router that can distribute traffic across the regions when needed. [Traffic Manager](../traffic-manager/traffic-manager-overview.md) is a DNS-based traffic load balancer that allows you to distribute traffic to your public-facing applications across the global Azure regions. Traffic Manager also provides your public endpoints with high availability and quick responsiveness. If you need additional features like cross-region redirection and availability, reverse proxy, static content caching, or WAF policies, consider [Front Door](../frontdoor/front-door-overview.md).
+
+## Deploy your Azure Event Grid resources
-1. The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
-`https://<your-site-name>.azurewebsites.net`
-Make sure to note this URL as you'll need it later.
+Now it's time to create your Azure Event Grid topic resources. Use the following [Bicep sample](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventgrid/event-grid) to create a topic with a webhook event subscription.
-1. You see the site but no events have been posted to it yet.
+Repeat the topic deployment process for the secondary region you have chosen.
- ![View new site](./media/blob-event-quickstart-portal/view-site.png)
+> [!NOTE]
+> Once you've deployed resources in Azure, make sure that any changes made to the configuration of the topic and event subscriptions are reflected back in the template, so you can keep creating and recreating from it.
+
+Save the topic endpoint URL for each resource you've created. You'll see something like this:
+Region 1: `https://my-primary-topic.my-region-1.eventgrid.azure.net/api/events`
-## Create your primary and secondary topics
+Region 2: `https://my-secondary-topic.my-region-2.eventgrid.azure.net/api/events`
-First, create two Event Grid topics. These topics will act as your primary and secondary. By default, your events will flow through your primary topic. If there is a service outage in the primary region, your secondary will take over.
+## Create a Traffic Manager for Azure Event Grid endpoints
-1. Sign in to the [Azure portal](https://portal.azure.com).
+The endpoints of the previously created Azure Event Grid resources are used when you create and configure the Traffic Manager profile in Azure. For more information, see [Quickstart: Create a Traffic Manager profile using the Azure portal](../traffic-manager/quickstart-create-traffic-manager-profile.md).
-1. From the upper left corner of the main Azure menu,
- choose **All services** > search for **Event Grid** > select **Event Grid Topics**.
+Traffic Manager is a global resource that provides a unique DNS name, like `https://myeventgridtopic.trafficmanager.net`. Once you configure both Azure Event Grid topic endpoints in Traffic Manager, it automatically redirects traffic to the secondary region when the primary region becomes unavailable.
- ![Event Grid Topics menu](./media/custom-disaster-recovery/select-topics-menu.png)
+At this point, your resources are deployed and running, and you can start sending events to your Traffic Manager endpoint (a sketch follows below). If you don't want to keep the secondary endpoint active in your Traffic Manager profile, you can [disable the endpoint](../traffic-manager/traffic-manager-manage-endpoints.md#to-disable-an-endpoint).
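+
+The following C# snippet is a minimal sketch of what publishing through the profile might look like; it targets the Traffic Manager DNS name rather than a regional topic endpoint. The profile name is the hypothetical one above, the keys are placeholders, and the retry logic assumes each topic keeps its own access key, so a failover surfaces as a failed request that you retry with the other key.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure;
+using Azure.Messaging.EventGrid;
+
+class Program
+{
+    static async Task Main()
+    {
+        // The Traffic Manager DNS name fronts both regional topic endpoints.
+        var endpoint = new Uri("https://myeventgridtopic.trafficmanager.net/api/events");
+
+        // Placeholder keys: each regional topic has its own access key.
+        var primaryKey = new AzureKeyCredential("<primary-topic-key>");
+        var secondaryKey = new AzureKeyCredential("<secondary-topic-key>");
+
+        var egEvent = new EventGridEvent(
+            subject: "test",
+            eventType: "Contoso.Failover.Test",
+            dataVersion: "1.0",
+            data: new { status = "success" });
+
+        try
+        {
+            // DNS normally resolves to the primary topic, so present its key.
+            var client = new EventGridPublisherClient(endpoint, primaryKey);
+            await client.SendEventAsync(egEvent);
+        }
+        catch (RequestFailedException)
+        {
+            // After a failover, DNS points at the secondary topic and the
+            // primary topic's key is rejected; retry with the secondary key.
+            var client = new EventGridPublisherClient(endpoint, secondaryKey);
+            await client.SendEventAsync(egEvent);
+        }
+    }
+}
+```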
- Select the star next to Event Grid Topics to add it to resource menu for easier access in the future.
+## Integrate deployment scripts in your CI/CD process
-1. In the Event Grid Topics Menu, select **+ADD** to create your primary topic.
+Now that you've verified your configuration works as expected and your events are delivered to the regions you defined, you'll need to integrate your template with an automation tool. For more information, see [Quickstart: Integrate Bicep with Azure Pipelines](../azure-resource-manager/bicep/add-template-to-azure-pipelines.md) or [Quickstart: Deploy Bicep files by using GitHub Actions](../azure-resource-manager/bicep/deploy-github-actions.md).
- * Give the topic a logical name and add "-primary" as a suffix to make it easy to track.
- * This topic's region will be your primary region.
-
- ![Event Grid Topic primary create dialogue](./media/custom-disaster-recovery/create-primary-topic.png)
-
-1. Once the Topic has been created, navigate to it and copy the **Topic Endpoint**. you'll need the URI later.
-
- ![Event Grid Primary Topic](./media/custom-disaster-recovery/get-primary-topic-endpoint.png)
-
-1. Get the access key for the topic, which you'll also need later. Click on **Access keys** in the resource menu and copy Key 1.
-
- ![Get Primary Topic Key](./media/custom-disaster-recovery/get-primary-access-key.png)
-
-1. In the Topic blade, click **+Event Subscription** to create a subscription connecting your subscribing the event receiver website you made in the pre-requisites to the tutorial.
-
- * Give the event subscription a logical name and add "-primary" as a suffix to make it easy to track.
- * Select Endpoint Type Web Hook.
- * Set the endpoint to your event receiver's event URL, which should look something like: `https://<your-event-reciever>.azurewebsites.net/api/updates`
-
- ![Screenshot that shows the "Create Event Subscription - Basic" page with the "Name", "Endpoint Type", and "Endpoint" values highlighted.](./media/custom-disaster-recovery/create-primary-es.png)
-
-1. Repeat the same flow to create your secondary topic and subscription. This time, replace the "-primary" suffix with "-secondary" for easier tracking. Finally, make sure you put it in a different Azure Region. While you can put it anywhere you want, it's recommended that you use the [Azure Paired Regions](../availability-zones/cross-region-replication-azure.md). Putting the secondary topic and subscription in a different region ensures that your new events will flow even if the primary region goes down.
-
-You should now have:
-
- * An event receiver website for testing.
- * A primary topic in your primary region.
- * A primary event subscription connecting your primary topic to the event receiver website.
- * A secondary topic in your secondary region.
- * A secondary event subscription connecting your primary topic to the event receiver website.
-
-## Implement client-side failover
-
-Now that you have a regionally redundant pair of topics and subscriptions setup, you're ready to implement client-side failover. There are several ways to accomplish it, but all failover implementations will have a common feature: if one topic is no longer healthy, traffic will redirect to the other topic.
-
-### Basic client-side implementation
-
-The following sample code is a simple .NET publisher that will always attempt to publish to your primary topic first. If it doesn't succeed, it will then failover the secondary topic. In either case, it also checks the health api of the other topic by doing a GET on `https://<topic-name>.<topic-region>.eventgrid.azure.net/api/health`. A healthy topic should always respond with **200 OK** when a GET is made on the **/api/health** endpoint.
-
-> [!NOTE]
-> The following sample code is only for demonstration purposes and is not intended for production use.
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Collections.Generic;
-using System.Threading.Tasks;
-using Azure;
-using Azure.Messaging.EventGrid;
-
-namespace EventGridFailoverPublisher
-{
- // This captures the "Data" portion of an EventGridEvent on a custom topic
- class FailoverEventData
- {
- public string TestStatus { get; set; }
- }
-
- class Program
- {
- static async Task Main(string[] args)
- {
- // TODO: Enter the endpoint each topic. You can find this topic endpoint value
- // in the "Overview" section in the "Event Grid Topics" blade in Azure Portal..
- string primaryTopic = "https://<primary-topic-name>.<primary-topic-region>.eventgrid.azure.net/api/events";
- string secondaryTopic = "https://<secondary-topic-name>.<secondary-topic-region>.eventgrid.azure.net/api/events";
-
- // TODO: Enter topic key for each topic. You can find this in the "Access Keys" section in the
- // "Event Grid Topics" blade in Azure Portal.
- string primaryTopicKey = "<your-primary-topic-key>";
- string secondaryTopicKey = "<your-secondary-topic-key>";
-
- Uri primaryTopicUri = new Uri(primaryTopic);
- Uri secondaryTopicUri = new Uri(secondaryTopic);
-
- Uri primaryTopicHealthProbe = new Uri($"https://{primaryTopicUri.Host}/api/health");
- Uri secondaryTopicHealthProbe = new Uri($"https://{secondaryTopicUri.Host}/api/health");
-
- var httpClient = new HttpClient();
-
- try
- {
- var client = new EventGridPublisherClient(primaryTopicUri, new AzureKeyCredential(primaryTopicKey));
-
- await client.SendEventsAsync(GetEventsList());
- Console.Write("Published events to primary Event Grid topic.");
-
- HttpResponseMessage health = httpClient.GetAsync(secondaryTopicHealthProbe).Result;
- Console.Write("\n\nSecondary Topic health " + health);
- }
- catch (RequestFailedException ex)
- {
- var client = new EventGridPublisherClient(secondaryTopicUri, new AzureKeyCredential(secondaryTopicKey));
-
- await client.SendEventsAsync(GetEventsList());
- Console.Write("Published events to secondary Event Grid topic. Reason for primary topic failure:\n\n" + ex);
-
- HttpResponseMessage health = await httpClient.GetAsync(primaryTopicHealthProbe);
- Console.WriteLine($"Primary Topic health {health}");
- }
-
- Console.ReadLine();
- }
-
- static IList<EventGridEvent> GetEventsList()
- {
- List<EventGridEvent> eventsList = new List<EventGridEvent>();
-
- for (int i = 0; i < 5; i++)
- {
- eventsList.Add(new EventGridEvent(
- subject: "test" + i,
- eventType: "Contoso.Failover.Test",
- dataVersion: "2.0",
- data: new FailoverEventData
- {
- TestStatus = "success"
- }));
- }
-
- return eventsList;
- }
- }
-}
-```
-
-### Try it out
-
-Now that you have all of your components in place, you can test out your failover implementation. Run the above sample in Visual Studio code, or your favorite environment. Replace the following four values with the endpoints and keys from your topics:
-
- * primaryTopic - the endpoint for your primary topic.
- * secondaryTopic - the endpoint for your secondary topic.
- * primaryTopicKey - the key for your primary topic.
- * secondaryTopicKey - the key for your secondary topic.
-
-Try running the event publisher. You should see your test events land in your Event Grid viewer like below.
-
-![Event Grid Primary Event Subscription](./media/custom-disaster-recovery/event-grid-viewer.png)
-
-To make sure your failover is working, you can change a few characters in your primary topic key to make it no longer valid. Try running the publisher again. You should still see new events appear in your Event Grid viewer, however when you look at your console, you'll see that they are now being published via the secondary topic.
-
-### Possible extensions
-
-There are many ways to extend this sample based on your needs. For high-volume scenarios, you may want to regularly check the topic's health api independently. That way, if a topic were to go down, you don't need to check it with every single publish. Once you know a topic isn't healthy, you can default to publishing to the secondary topic.
-
-Similarly, you may want to implement failback logic based on your specific needs. If publishing to the closest data center is critical for you to reduce latency, you can periodically probe the health api of a topic that has failed over. Once it's healthy again, you'll know it's safe to failback to the closer data center.
+Having a regularly tested, automated process provides confidence that the dependencies used in your scripts and tools aren't outdated, and that the recovery process can be triggered within a couple of minutes of any failure in the region.
## Next steps
iot-edge How To Deploy Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md
A deployment manifest is a JSON document that describes which modules to deploy,
Configure each property with an appropriate value, as indicated by the placeholders. If you are using the IoT Edge simulator, set the values to the related environment variables for these properties as described by [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties).
+ > [!TIP]
+ > The name of your `target` container is subject to naming restrictions; for example, a `$` prefix is unsupported. To see all restrictions, view [Container Names](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#container-names).
+ ```json { "deviceAutoDeleteProperties": {
A deployment manifest is a JSON document that describes which modules to deploy,
"cloudStorageConnectionString": "DefaultEndpointsProtocol=https;AccountName=<your Azure Storage Account Name>;AccountKey=<your Azure Storage Account Key>; EndpointSuffix=<your end point suffix>", "storageContainersForUpload": { "<source container name1>": {
- "target": "<target container name1>"
+ "target": "<your-target-container-name>"
} }, "deleteAfterUpload": <true,false>
machine-learning How To Deploy With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-rest.md
If you aren't going to use the deployment, you should delete it with the below comm
* Learn to [Troubleshoot online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md) * Learn how to [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md) * Learn how to [monitor online endpoints](how-to-monitor-online-endpoints.md).
-* Learn [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md).
+* Learn [safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md).
* [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md). * [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). * Learn about limits on managed online endpoints in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dump-restore.md
To step through this how-to guide, you need to have:
- [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool to do dump and restore commands. > [!TIP]
-> If you are looking to migrate large databases with database sizes more than 1 TBs, you may want to consider using community tools like **mydumper/myloader** which supports parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
+> If you are looking to migrate large databases, with sizes of more than 1 TB, you may want to consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [how to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
## Common use-cases for dump and restore
For known issues, tips and tricks, we recommend you to look at our [techcommunit
## Next steps - [Connect applications to Azure Database for MySQL](./how-to-connection-string.md). - For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).-- If you are looking to migrate large databases with database sizes more than 1 TBs, you may want to consider using community tools like **mydumper/myloader** which supports parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
+- If you are looking to migrate large databases, with sizes of more than 1 TB, you may want to consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [how to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-mydumper-myloader.md
After the database is restored, it's always recommended to validate the data c
## Next steps * Learn more about the [mydumper/myloader project in GitHub](https://github.com/maxbube/mydumper).
-* Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
+* Learn [how to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
* [Tutorial: Minimal Downtime Migration of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server](how-to-migrate-single-flexible-minimum-downtime.md) * Learn more about Data-in replication [Replicate data into Azure Database for MySQL Flexible Server](../flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](../flexible-server/how-to-data-in-replication.md) * Commonly encountered [migration errors](./how-to-troubleshoot-common-errors.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
Flexible servers that are configured with high availability, log data is replica
2. If you just want to restore an object, you can then export the object from the restored database server and import it to your production database server. 3. If you want to clone your database server for testing and development purposes, or you want to restore for any other purposes, you can perform point-in-time restore.
-## Zone redundant high availability - features
+## High availability - features
* Standby replica will be deployed with the same VM configuration as the primary server, including vCores, storage, network settings (VNET, Firewall), etc.
Flexible servers that are configured with high availability, log data is replica
* You can remove standby replica by disabling high availability.
-* You can only choose your availability zone for your primary database server. Standby zone is auto-selected.
+* For zone-redundant HA, you can choose your availability zones for your primary and standby database servers.
* Operations such as stop, start, and restart are performed on both primary and standby database servers at the same time.
Flexible servers that are configured with high availability, log data is replica
* Periodic maintenance activities such as minor version upgrades happen at the standby first and the service is failed over to reduce downtime.
-## Zone redundant high availability - limitations
+## High availability - limitations
* High availability is not supported with burstable compute tier. * High availability is supported only in regions where multiple zones are available.
-* Due to synchronous replication to another availability zone, applications can experience elevated write and commit latency.
+* Due to synchronous replication to the standby server, especially with zone-redundant HA, applications can experience elevated write and commit latency.
* Standby replica cannot be used for read queries.
Flexible servers that are configured with high availability, log data is replica
* If logical decoding or logical replication is configured with a HA configured flexible server, in the event of a failover to the standby server, the logical replication slots are not copied over to the standby server.
-## Availability without high availability
+## Availability for non-HA servers
For Flexible servers configured **without** high availability, the service still provides built-in availability, storage redundancy and resiliency to help to recover from any planned or unplanned downtime events.
Here are some failure scenarios that require user action to recover:
* **Can I choose the availability zones for my primary and standby servers?** <br> If you choose same zone HA, then you can only choose the primary server. If you choose zone redundant HA, then you can choose both primary and standby AZs.
-
+ * **Is zone redundant HA available in all regions?** <br> Zone-redundant HA is available in regions that support multiple availability zones. For the latest region support, see [this documentation](overview.md#azure-regions). We are continuously adding more regions and enabling multiple AZs. Same-zone HA is available in all regions.
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md
Last updated 11/30/2021
# Limits in Azure Database for PostgreSQL - Flexible Server The following sections describe capacity and functional limits in the database service. If you'd like to learn about resource (compute, memory, storage) tiers, see the [compute and storage](concepts-compute-storage.md) article.
When connections exceed the limit, you may receive the following error:
> FATAL: sorry, too many clients already. > [!IMPORTANT]
-> For best experience, we recommend that you use a connection pool manager like PgBouncer to efficiently manage connections. Azure Database for PostgreSQL - Flexible Server offers pgBouncer as [built-in connection pool management solution](concepts-pgbouncer.md).
+> For the best experience, it's recommended that you use a connection pool manager like PgBouncer to efficiently manage connections. Azure Database for PostgreSQL - Flexible Server offers PgBouncer as a [built-in connection pool management solution](concepts-pgbouncer.md).
A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, creating new connections takes time. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload leading to decreased performance. Connection pooling can be used to decrease idle connections and reuse existing connections. To learn more, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
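
As an illustration of connection pooling from application code, here's a minimal hedged sketch using the Npgsql driver; the host, credentials, database, and pool size are placeholder assumptions, and the built-in PgBouncer option is configured server-side instead:

```csharp
using Npgsql;

// Hypothetical connection string; host, user, password, and database are placeholders.
// Npgsql pools connections by default; "Maximum Pool Size" keeps the app well
// below the server's connection limit.
var connString =
    "Host=<server>.postgres.database.azure.com;Username=<user>;Password=<password>;" +
    "Database=<db>;Ssl Mode=Require;Maximum Pool Size=20";

await using var conn = new NpgsqlConnection(connString);
await conn.OpenAsync();   // taken from the pool, not a fresh TCP login
await using var cmd = new NpgsqlCommand("SELECT 1", conn);
Console.WriteLine(await cmd.ExecuteScalarAsync());
// Disposing returns the physical connection to the pool for reuse.
```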
A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, crea
### Storage -- Once configured, storage size cannot be reduced. You have to create a new server with desired storage size, perform manual [dump and restore](../howto-migrate-using-dump-and-restore.md) and migrate your database(s) to the new server.-- Currently, storage auto-grow feature is not available. Please monitor the usage and increase the storage to a higher size.
+- Once configured, storage size can't be reduced. You have to create a new server with the desired storage size, perform a manual [dump and restore](../howto-migrate-using-dump-and-restore.md), and migrate your database(s) to the new server.
+- Currently, the storage auto-grow feature isn't available. You can monitor the usage and increase the storage to a higher size.
- When the storage usage reaches 95% or if the available capacity is less than 5 GiB, the server is automatically switched to **read-only mode** to avoid errors associated with disk-full situations. - We recommend setting alert rules for `storage used` or `storage percent` when they exceed certain thresholds so that you can proactively take action such as increasing the storage size. For example, you can set an alert if the storage percent exceeds 80% usage.
A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, crea
- Moving in and out of VNET is currently not supported. - Combining public access with deployment within a VNET is currently not supported.-- Firewall rules are not supported on VNET, Network security groups can be used instead.-- Public access database servers can connect to public internet, for example through `postgres_fdw`, and this access cannot be restricted. VNET-based servers can have restricted outbound access using Network Security Groups.
+- Firewall rules aren't supported on VNET; network security groups can be used instead.
+- Public access database servers can connect to public internet, for example through `postgres_fdw`, and this access can't be restricted. VNET-based servers can have restricted outbound access using Network Security Groups.
### High availability (HA) -- Please see [Zone Redundant HA Limitations documentation](concepts-high-availability.md#zone-redundant-high-availabilitylimitations) page.
+- See [HA Limitations documentation](concepts-high-availability.md#high-availabilitylimitations).
### Availability zones - Manually moving servers to a different availability zone is currently not supported.-- The availability zone of the HA standby server cannot be manually configured. ### Postgres engine, extensions, and PgBouncer -- Postgres 10 and older are not supported. We recommend using the [Single Server](../overview-single-server.md) option if you require older Postgres versions.
+- Postgres 10 and older aren't supported. We recommend using the [Single Server](../overview-single-server.md) option if you require older Postgres versions.
- Extension support is currently limited to the Postgres `contrib` extensions. - Built-in PgBouncer connection pooler is currently not available for Burstable servers.-- SCRAM authentication is not supported with connectivity using built-in PgBouncer.
+- SCRAM authentication isn't supported with connectivity using built-in PgBouncer.
### Stop/start operation
A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, crea
### Scheduled maintenance -- Changing the maintenance window less than five days before an already planned upgrade, will not affect that upgrade. Changes only take effect with the next scheduled maintenance.
+- Changing the maintenance window less than five days before an already planned upgrade won't affect that upgrade. Changes only take effect with the next scheduled maintenance.
### Backing up a server - Backups are managed by the system, there is currently no way to run these backups manually. We recommend using `pg_dump` instead.-- Backups are always snapshot-based full backups (not differential backups), possibly leading to higher backup storage utilization. Note that transaction logs (write ahead logs - WAL) are separate from the full/differential backups, and are archived continuously.
+- Backups are always snapshot-based full backups (not differential backups), possibly leading to higher backup storage utilization. The transaction logs (write-ahead logs, or WAL) are separate from the full/differential backups, and are archived continuously.
### Restoring a server - When using the Point-in-time-Restore feature, the new server is created with the same compute and storage configurations as the server it is based on. - VNET based database servers are restored into the same VNET when you restore from a backup.-- The new server created during a restore does not have the firewall rules that existed on the original server. Firewall rules need to be created separately for the new server.-- Restoring a deleted server is not supported.-- Cross region restore is not supported.
+- The new server created during a restore doesn't have the firewall rules that existed on the original server. Firewall rules need to be created separately for the new server.
+- Restoring a deleted server isn't supported.
+- Cross region restore isn't supported.
### Other features
-* Azure AD authentication is not yet supported. We recommend using the [Single Server](../overview-single-server.md) option if you require Azure AD authentication.
-* Read replicas are not yet supported. We recommend using the [Single Server](../overview-single-server.md) option if you require read replicas.
-* Moving resources to another subscription is not supported.
+* Azure AD authentication isn't yet supported. We recommend using the [Single Server](../overview-single-server.md) option if you require Azure AD authentication.
+* Read replicas aren't yet supported. We recommend using the [Single Server](../overview-single-server.md) option if you require read replicas.
+* Moving resources to another subscription isn't supported.
## Next steps
purview Create Azure Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-azure-purview-dotnet.md
Title: 'Quickstart: Create Microsoft Purview Account using .NET SDK'
-description: Create a Microsoft Purview Account using .NET SDK.
+ Title: 'Quickstart: Create Microsoft Purview (formerly Azure Purview) account using .NET SDK'
+description: This article will guide you through creating a Microsoft Purview (formerly Azure Purview) account using .NET SDK.
ms.devlang: csharp Previously updated : 09/27/2021 Last updated : 06/17/2022
-# Quickstart: Create a Microsoft Purview account using .NET SDK
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using .NET SDK
-In this quickstart, you'll use the [.NET SDK](/dotnet/api/overview/azure/purviewresourceprovider) to create a Microsoft Purview account.
+In this quickstart, you'll use the [.NET SDK](/dotnet/api/overview/azure/purviewresourceprovider) to create a Microsoft Purview (formerly Azure Purview) account.
-Microsoft Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Microsoft Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
-For more information about Microsoft Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
[!INCLUDE [purview-quickstart-prerequisites](includes/purview-quickstart-prerequisites.md)]
Download and install [Azure .NET SDK](https://azure.microsoft.com/downloads/) on
## Create an application in Azure Active Directory
-1. In [Create an Azure Active Directory application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal), create an application that represents the .NET application you are creating in this tutorial. For the sign-on URL, you can provide a dummy URL as shown in the article (`https://contoso.org/exampleapp`).
+1. In [Create an Azure Active Directory application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal), create an application that represents the .NET application you're creating in this tutorial. For the sign-on URL, you can provide a dummy URL as shown in the article (`https://contoso.org/exampleapp`).
1. In [Get values for signing in](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in), get the **application ID** and **tenant ID**, and note down these values that you use later in this tutorial. 1. In [Certificates and secrets](../active-directory/develop/howto-create-service-principal-portal.md#authentication-two-options), get the **authentication key**, and note down this value that you use later in this tutorial. 1. In [Assign the application to a role](../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application), assign the application to the **Contributor** role at the subscription level so that the application can create data factories in the subscription.
Next, create a C# .NET console application in Visual Studio:
}; ```
-## Create a Microsoft Purview account
+## Create an account
-Add the following code to the **Main** method that creates a **Microsoft Purview Account**.
+Add the following code to the **Main** method that will create the **Microsoft Purview Account**.
```csharp // Create a purview Account
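// Hedged sketch of the creation call, following the SDK conventions shown
// elsewhere in this quickstart; exact type and property names may vary by
// Microsoft.Azure.Management.Purview SDK version.
Account account = new Account()
{
    Location = "<region>",
    Identity = new Identity(type: "SystemAssigned"),
    Sku = new AccountSku(name: "Standard", capacity: 4)
};
Console.WriteLine("Creating Microsoft Purview Account " + purviewAccountName + "...");
Account purviewAccount = client.Accounts.CreateOrUpdate(rgName, purviewAccountName, account);
```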
Go to the **Microsoft Purview accounts** page in the [Azure portal](https://port
## Delete Microsoft Purview account
-To programmatically delete a Microsoft Purview Account, add the following lines of code to the program:
+To programmatically delete a Microsoft Purview account, add the following lines of code to the program:
```csharp Console.WriteLine("Deleting the Microsoft Purview Account");
Console.WriteLine("Check Microsoft Purview account name");
Console.WriteLine(client.Accounts.CheckNameAvailability(checkNameAvailabilityRequest).NameAvailable); ```
-The above code with print 'True' if the name is available and 'False' if the name is not available.
+The above code will print 'True' if the name is available and 'False' if the name isn't available.
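
For reference, a hedged sketch of how the `checkNameAvailabilityRequest` used above might be constructed; the type and property names follow the management SDK conventions and may vary by SDK version:

```csharp
// Hypothetical construction of the name-availability request.
var checkNameAvailabilityRequest = new CheckNameAvailabilityRequest()
{
    Name = purviewAccountName,
    Type = "Microsoft.Purview/accounts"
};
```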
## Next steps
-The code in this tutorial creates a purview account, deletes a purview account and checks for name availability of purview account. You can now download the .NET SDK and learn about other resource provider actions you can perform for a Microsoft Purview account.
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account, delete the account, and check for name availability. You can now download the .NET SDK and learn about other resource provider actions you can perform for a Microsoft Purview account.
-Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview.
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to the Microsoft Purview governance portal.
* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md)
+* [Grant users permissions to the governance portal](catalog-permissions.md)
* [Create a collection](quickstart-create-collection.md)
-* [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Create Azure Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-azure-purview-python.md
Title: 'Quickstart: Create a Microsoft Purview account using Python'
-description: Create a Microsoft Purview account using Python.
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Python'
+description: This article will guide you through creating a Microsoft Purview (formerly Azure Purview) account using Python.
ms.devlang: python Previously updated : 09/27/2021 Last updated : 06/17/2022
-# Quickstart: Create a Microsoft Purview account using Python
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Python
-In this quickstart, youΓÇÖll create a Microsoft Purview account programatically using Python. [Python reference for Microsoft Purview](/python/api/azure-mgmt-purview/) is available, but this article will take you through all the steps needed to create an account with Python.
+In this quickstart, you'll create a Microsoft Purview (formerly Azure Purview) account programmatically using Python. [The Python reference for Microsoft Purview](/python/api/azure-mgmt-purview/) is available, but this article will take you through all the steps needed to create an account with Python.
-Microsoft Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Microsoft Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
-For more information about Microsoft Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
[!INCLUDE [purview-quickstart-prerequisites](includes/purview-quickstart-prerequisites.md)]
pa = purview_client.accounts.begin_delete(rg_name, purview_name).result()
## Next steps
-The code in this tutorial creates a purview account and deletes a purview account. You can now download the Python SDK and learn about other resource provider actions you can perform for a Microsoft Purview account.
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account, delete the account, and check for name availability. You can now download the Python SDK and learn about other resource provider actions you can perform for a Microsoft Purview account.
-Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview.
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to the Microsoft Purview governance portal.
* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md)
+* [Grant users permissions to the governance portal](catalog-permissions.md)
* [Create a collection](quickstart-create-collection.md)
-* [Add users to your Microsoft Purview account](catalog-permissions.md)
+
purview Create Catalog Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-catalog-portal.md
# Quickstart: Create an account in the Microsoft Purview governance portal
-This quickstart describes the steps to Create a Microsft Purview (formerly Azure Purview) account through the Azure portal. Then we'll get started on the process of classifying, securing, and discovering your data in the Microsoft Purview Data Map!
+This quickstart describes the steps to create a Microsoft Purview (formerly Azure Purview) account through the Azure portal. Then we'll get started on the process of classifying, securing, and discovering your data in the Microsoft Purview Data Map!
-The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog, that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your data estate. It identifies and classifies sensitive data, and provides end-to-end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your data estate. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
-For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview governance services across your organization, [see our deployment best practices](deployment-best-practices.md).
[!INCLUDE [purview-quickstart-prerequisites](includes/purview-quickstart-prerequisites.md)]
purview Create Catalog Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-catalog-powershell.md
Title: 'Quickstart: Create a Microsoft Purview account with PowerShell/Azure CLI'
-description: This Quickstart describes how to create a Microsoft Purview account using Azure PowerShell/Azure CLI.
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account with PowerShell/Azure CLI'
+description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account using Azure PowerShell/Azure CLI.
Previously updated : 10/28/2021 Last updated : 06/17/2022 ms.devlang: azurecli #Customer intent: As a data steward, I want create a new Microsoft Purview Account so that I can scan and classify my data.
-# Quickstart: Create a Microsoft Purview account using Azure PowerShell/Azure CLI
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Azure PowerShell/Azure CLI
In this Quickstart, you'll create a Microsoft Purview account using Azure PowerShell/Azure CLI. [PowerShell reference for Microsoft Purview](/powershell/module/az.purview/) is available, but this article will take you through all the steps needed to create an account with PowerShell.
-Microsoft Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Microsoft Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
-For more information about Microsoft Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview governance services across your organization, [see our deployment best practices](deployment-best-practices.md).
[!INCLUDE [purview-quickstart-prerequisites](includes/purview-quickstart-prerequisites.md)]
For more information about Microsoft Purview, [see our overview page](overview.m
Install either Azure PowerShell or Azure CLI in your client machine to deploy the template: [Command-line deployment](../azure-resource-manager/templates/template-tutorial-create-first-template.md?tabs=azure-cli#command-line-deployment)
-## Create a Microsoft Purview account
+## Create an account
1. Sign in with your Azure credential
For more information about Microsoft Purview, [see our overview page](overview.m
-1. Create a resource group for your Microsoft Purview account. You can skip this step if you already have one:
+1. Create a resource group for your account. You can skip this step if you already have one:
# [PowerShell](#tab/azure-powershell)
For more information about Microsoft Purview, [see our overview page](overview.m
-1. Create or Deploy the Microsoft Purview account
+1. Create or deploy the account:
# [PowerShell](#tab/azure-powershell)
For more information about Microsoft Purview, [see our overview page](overview.m
1. The deployment command returns results. Look for `ProvisioningState` to see whether the deployment succeeded.
-1. If you deployed the Microsoft Purview account using a service principal, instead of a user account, you will also need to run the below command in the Azure CLI:
+1. If you deployed the account using a service principal, instead of a user account, you'll also need to run the following command in the Azure CLI:
```azurecli az purview account add-root-collection-admin --account-name [Microsoft Purview Account Name] --resource-group [Resource Group Name] --object-id [User Object Id]
For more information about Microsoft Purview, [see our overview page](overview.m
## Next steps
-In this quickstart, you learned how to create a Microsoft Purview account.
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account.
-Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview.
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to the Microsoft Purview governance portal.
* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md)
-* [Add users to your Microsoft Purview account](catalog-permissions.md)
+* [Grant users permissions to the governance portal](catalog-permissions.md)
* [Create a collection](quickstart-create-collection.md)
purview Manage Kafka Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-kafka-dotnet.md
Title: Publish messages to and process messages from Microsoft Purview's Atlas Kafka topics via Event Hubs using .NET
-description: This article provides a walkthrough to create a .NET Core application that sends/receives events to/from Microsoft Purview's Apache Atlas Kafka topics by using the latest Azure.Messaging.EventHubs package.
+ Title: Publish and process Atlas Kafka topics messages via Event Hubs
+description: Get a walkthrough on how to use Event Hubs and a .NET Core application to send/receive events to/from Microsoft Purview's Apache Atlas Kafka topics. Try the Azure.Messaging.EventHubs package.
ms.devlang: csharp Previously updated : 09/27/2021 Last updated : 06/12/2022
-# Publish messages to and process messages from Microsoft Purview's Atlas Kafka topics via Event Hubs using .NET
-This quickstart shows how to send events to and receive events from Microsoft Purview's Atlas Kafka topics via event hub using the **Azure.Messaging.EventHubs** .NET library.
+# Use Event Hubs and .NET to send and receive Atlas Kafka topics messages
+This quickstart teaches you how to send and receive *Atlas Kafka* topic events. We'll use *Azure Event Hubs* and the **Azure.Messaging.EventHubs** .NET library.
> [!IMPORTANT]
-> A managed event hub is created as part of Microsoft Purview account creation, see [Microsoft Purview account creation](create-catalog-portal.md). You can publish messages to the event hub kafka topic ATLAS_HOOK and Microsoft Purview will consume and process it. Microsoft Purview will notify entity changes to event hub kafka topic ATLAS_ENTITIES and user can consume and process it.This quickstart uses the new **Azure.Messaging.EventHubs** library.
+> A managed event hub is created automatically when your *Microsoft Purview* account is created. See [Purview account creation](create-catalog-portal.md). You can publish messages to the Event Hubs Kafka topic ATLAS_HOOK. Purview will receive and process them, and notify the Kafka topic ATLAS_ENTITIES of entity changes. This quickstart uses the new **Azure.Messaging.EventHubs** library.
## Prerequisites
-If you're new to Azure Event Hubs, see [Event Hubs overview](../event-hubs/event-hubs-about.md) before you do this quickstart.
+If you're new to Event Hubs, see [Event Hubs overview](../event-hubs/event-hubs-about.md) before you complete this quickstart.
-To complete this quickstart, you need the following prerequisites:
+To follow this quickstart, you need certain prerequisites in place:
-- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).-- **Microsoft Visual Studio 2019**. The Azure Event Hubs client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# language versions, but the new syntax won't be available. To make use of the full syntax, it is recommended that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using Visual Studio, versions before Visual Studio 2019 aren't compatible with the tools needed to build C# 8.0 projects. Visual Studio 2019, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).
+- **A Microsoft Azure subscription**. To use Azure services, including Event Hubs, you need an Azure subscription. If you don't have an Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
+- **Microsoft Visual Studio 2022**. The Event Hubs client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# versions, but the new syntax won't be available. To make use of the full syntax, it's recommended that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and the [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using a Visual Studio version prior to Visual Studio 2019, it won't have the tools needed to build C# 8.0 projects. Visual Studio 2022, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).
-## Publish messages to Microsoft Purview
-This section shows you how to create a .NET Core console application to send events to a Microsoft Purview via event hub kafka topic **ATLAS_HOOK**.
+## Publish messages to Purview
+Let's create a .NET Core console application that sends events to Purview via the Event Hubs Kafka topic, **ATLAS_HOOK**.
## Create a Visual Studio project
-Next, create a C# .NET console application in Visual Studio:
+Next create a C# .NET console application in Visual Studio:
1. Launch **Visual Studio**. 2. In the Start window, select **Create a new project** > **Console App (.NET Framework)**. .NET version 4.5.2 or above is required.
Next, create a C# .NET console application in Visual Studio:
### Create a console application
-1. Start Visual Studio 2019.
-1. Select **Create a new project**.
-1. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**.
+1. Start Visual Studio 2022.
+1. Select **Create a new project**.
+1. On the **Create a new project** dialog box, do the following steps. If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**.
1. Select **C#** for the programming language.
- 1. Select **Console** for the type of the application.
- 1. Select **Console App (.NET Core)** from the results list.
- 1. Then, select **Next**.
+ 1. Select **Console** for the type of the application.
+ 1. Select **Console App (.NET Core)** from the results list.
+ 1. Then, select **Next**.
### Add the Event Hubs NuGet package
-1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
1. Run the following command to install the **Azure.Messaging.EventHubs** NuGet package and **Azure.Messaging.EventHubs.Producer** NuGet package: ```cmd Install-Package Azure.Messaging.EventHubs ```
-
+ ```cmd Install-Package Azure.Messaging.EventHubs.Producer
- ```
+ ```
-### Write code to send messages to the event hub
+### Write code that sends messages to the event hub
1. Add the following `using` statements to the top of the **Program.cs** file:
Next, create a C# .NET console application in Visual Studio:
using Azure.Messaging.EventHubs.Producer; ```
-2. Add constants to the `Program` class for the Event Hubs connection string and Event Hub name.
+2. Add constants to the `Program` class for the Event Hubs connection string and Event Hub name.
```csharp private const string connectionString = "<EVENT HUBS NAMESPACE - CONNECTION STRING>"; private const string eventHubName = "<EVENT HUB NAME>"; ```
- You can get event hub namespace associated with purview account by looking at Atlas kafka endpoint primary/secondary connection strings in properties tab of Microsoft Purview account.
+ You can get the Event Hub namespace associated with the Purview account by looking at the Atlas Kafka endpoint primary/secondary connection strings. These can be found in the **Properties** tab of your Purview account.
- :::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="Event Hub Namespace":::
+ :::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="A screenshot that shows an Event Hubs Namespace.":::
- The event hub name should be **ATLAS_HOOK** for sending messages to Microsoft Purview.
+ The event hub name for sending messages to Purview is **ATLAS_HOOK**.
-3. Replace the `Main` method with the following `async Main` method and add an `async ProduceMessage` to push messages into Microsoft Purview. See the code comments for details.
+3. Replace the `Main` method with the following `async Main` method and add an `async ProduceMessage` method to push messages into Purview. See the comments in the code for details.
```csharp static async Task Main()
Next, create a C# .NET console application in Visual Studio:
} ```
-5. Build the project, and ensure that there are no errors.
-6. Run the program and wait for the confirmation message.
+5. Build the project. Ensure that there are no errors.
+6. Run the program and wait for the confirmation message.
> [!NOTE]
- > For the complete source code with more informational comments, see [this file on the GitHub](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/samples/Sample04_PublishingEvents.md)
+ > For the complete source code with more informational comments, see [this file in GitHub](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/samples/Sample04_PublishingEvents.md)
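
   For reference, here's a minimal hedged sketch of what the `ProduceMessage` method might look like, following the batch pattern from the linked sample. The `message` parameter is a placeholder for a JSON payload like the one below, and `using System.Text;` is assumed:

   ```csharp
   static async Task ProduceMessage(string message)
   {
       // Create a producer client scoped to the ATLAS_HOOK event hub.
       await using var producer = new EventHubProducerClient(connectionString, eventHubName);

       // Add the JSON message to a batch and publish it.
       using EventDataBatch batch = await producer.CreateBatchAsync();
       if (!batch.TryAdd(new EventData(Encoding.UTF8.GetBytes(message))))
       {
           throw new Exception("The message is too large to fit in the batch.");
       }

       await producer.SendAsync(batch);
       Console.WriteLine("A batch of events has been published to ATLAS_HOOK.");
   }
   ```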
-### Sample Create Entity JSON message to create a sql table with two columns.
+### Sample code that creates a SQL table with two columns using a Create Entity JSON message
```json
Next, create a C# .NET console application in Visual Studio:
}
-```
+```
-## Consume messages from Microsoft Purview
-This section shows how to write a .NET Core console application that receives messages from an event hub using an event processor. You need to use ATLAS_ENTITIES event hub to receive messages from Microsoft Purview.The event processor simplifies receiving events from event hubs by managing persistent checkpoints and parallel receptions from those event hubs.
+## Receive Purview messages
+Next, learn how to write a .NET Core console application that receives messages from event hubs using an event processor. The event processor manages persistent checkpoints and parallel receptions from event hubs, which simplifies the process of receiving events. You need to use the ATLAS_ENTITIES event hub to receive messages from Purview.
> [!WARNING]
-> If you run this code on Azure Stack Hub, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than those typically available on Azure. If you are using Azure Blob Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code.
+> The Event Hubs SDK uses the most recent version of the Storage API available. That version may not necessarily be available on your Stack Hub platform. If you run this code on Azure Stack Hub, you will experience runtime errors unless you target the specific version you are using. If you're using Azure Blob Storage as a checkpoint store, review the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code.
>
-> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see [this sample on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/).
+> For example, on Azure Stack Hub version 2005, the highest available version of the Storage service is 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, in addition to following the steps in this section, you'll also need to add code that targets the Storage service API version 2019-02-02. To learn how to target a specific Storage API version, see [this sample in GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/).
### Create an Azure Storage and a blob container
-In this quickstart, you use Azure Storage as the checkpoint store. Follow these steps to create an Azure Storage account.
+We'll use Azure Storage as the checkpoint store. Use the following steps to create an Azure Storage account.
1. [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal) 2. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-3. [Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+3. [Get the connection string for the storage account](../storage/common/storage-configure-connection-string.md)
- Note down the connection string and the container name. You'll use them in the receive code.
+ Make note of the connection string and the container name. You'll use them in the receive code.
-### Create a project for the receiver
+### Create a Visual Studio project for the receiver
1. In the Solution Explorer window, select and hold (or right-click) the **EventHubQuickStart** solution, point to **Add**, and select **New Project**. 1. Select **Console App (.NET Core)**, and select **Next**.
-1. Enter **PurviewKafkaConsumer** for the **Project name**, and select **Create**.
+1. Enter **PurviewKafkaConsumer** for the **Project name**, and select **Create**.
### Add the Event Hubs NuGet package
-1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
1. Run the following command to install the **Azure.Messaging.EventHubs** NuGet package: ```cmd
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
```cmd Install-Package Azure.Messaging.EventHubs.Processor
- ```
+ ```
-### Update the Main method
+### Update the Main method
1. Add the following `using` statements at the top of the **Program.cs** file.
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
using Azure.Messaging.EventHubs.Consumer; using Azure.Messaging.EventHubs.Processor; ```
-1. Add constants to the `Program` class for the Event Hubs connection string and the event hub name. Replace placeholders in brackets with the proper values that you got when creating the event hub. Replace placeholders in brackets with the proper values that you got when creating the event hub and the storage account (access keys - primary connection string). Make sure that the `{Event Hubs namespace connection string}` is the namespace-level connection string, and not the event hub string.
+1. Add constants to the `Program` class for the Event Hubs connection string and the event hub name. Replace placeholders in brackets with the real values that you got when you created the event hub and the storage account (access keys - primary connection string). Make sure that the `{Event Hubs namespace connection string}` is the namespace-level connection string, and not the event hub string.
```csharp private const string ehubNamespaceConnectionString = "<EVENT HUBS NAMESPACE - CONNECTION STRING>";
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
private const string blobStorageConnectionString = "<AZURE STORAGE CONNECTION STRING>"; private const string blobContainerName = "<BLOB CONTAINER NAME>"; ```
-
- You can get event hub namespace associated with purview account by looking at Atlas kafka endpoint primary/secondary connection strings in properties tab of Microsoft Purview account.
- :::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="Event Hub Namespace":::
+ You can get the event hub namespace associated with your Purview account by looking at your Atlas Kafka endpoint primary/secondary connection strings. This can be found in the **Properties** tab of your Purview account.
+
+ :::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="A screenshot that shows an Event Hubs Namespace.":::
- The event hub name should be **ATLAS_ENTITIES** for sending messages to Microsoft Purview.
+ Use **ATLAS_ENTITIES** as the event hub name when receiving messages from Purview.
-3. Replace the `Main` method with the following `async Main` method. See the code comments for details.
+3. Replace the `Main` method with the following `async Main` method. See the comments in the code for details.
```csharp static async Task Main()
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
await processor.StopProcessingAsync(); } ```
-1. Now, add the following event and error handler methods to the class.
+1. Now add the following event and error handler methods to the class.
```csharp static async Task ProcessEventHandler(ProcessEventArgs eventArgs)
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
return Task.CompletedTask; } ```
-1. Build the project, and ensure that there are no errors.
+1. Build the project. Ensure that there are no errors.
> [!NOTE] > For the complete source code with more informational comments, see [this file on the GitHub](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/Sample01_HelloWorld.md).
-6. Run the receiver application.
+6. Run the receiver application.
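
For reference, here's a hedged sketch of the processor wiring and handler methods described in the steps above. It mirrors the linked Hello World sample, uses the constants defined earlier, and checkpoints after every event for simplicity; production code would checkpoint less often:

```csharp
// Requires: using System.Text; using Azure.Storage.Blobs;
static async Task Main()
{
    // Use the default consumer group and a blob container for checkpoints.
    string consumerGroup = EventHubConsumerClient.DefaultConsumerGroupName;
    var storageClient = new BlobContainerClient(blobStorageConnectionString, blobContainerName);
    var processor = new EventProcessorClient(
        storageClient, consumerGroup, ehubNamespaceConnectionString, eventHubName);

    // Register the handlers, then process ATLAS_ENTITIES events for a while.
    processor.ProcessEventAsync += ProcessEventHandler;
    processor.ProcessErrorAsync += ProcessErrorHandler;

    await processor.StartProcessingAsync();
    await Task.Delay(TimeSpan.FromSeconds(30));
    await processor.StopProcessingAsync();
}

static async Task ProcessEventHandler(ProcessEventArgs eventArgs)
{
    // Print the body of the received entity-change notification.
    Console.WriteLine("Received: {0}", Encoding.UTF8.GetString(eventArgs.Data.Body.ToArray()));

    // Persist a checkpoint so a restart resumes from this position.
    await eventArgs.UpdateCheckpointAsync(eventArgs.CancellationToken);
}

static Task ProcessErrorHandler(ProcessErrorEventArgs eventArgs)
{
    Console.WriteLine($"Partition '{eventArgs.PartitionId}' error: {eventArgs.Exception.Message}");
    return Task.CompletedTask;
}
```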
-### Sample Message received from Microsoft Purview
+### An example of a message received from Purview
```json {
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
``` > [!IMPORTANT]
-> Atlas currently supports the following operation types: **ENTITY_CREATE_V2**, **ENTITY_PARTIAL_UPDATE_V2**, **ENTITY_FULL_UPDATE_V2**, **ENTITY_DELETE_V2**. Pushing messages to Microsoft Purview is currently enabled by default. If the scenario involves reading from Microsoft Purview contact us as it needs to be allow-listed. (provide subscription id and name of Microsoft Purview account).
+> Atlas currently supports the following operation types: **ENTITY_CREATE_V2**, **ENTITY_PARTIAL_UPDATE_V2**, **ENTITY_FULL_UPDATE_V2**, **ENTITY_DELETE_V2**. Pushing messages to Purview is currently enabled by default. If your scenario involves reading from Purview, contact us, as it needs to be allow-listed. You'll need to provide your subscription ID and the name of your Purview account.
## Next steps
-Check out the samples on GitHub.
+Check out more examples on GitHub.
-- [Event Hubs samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples)-- [Event processor samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples)-- [Atlas introduction to notifications](https://atlas.apache.org/2.0.0/Notifications.html)
+- [Event Hubs samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples)
+- [Event processor samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples)
+- [An introduction to Atlas notifications](https://atlas.apache.org/2.0.0/Notifications.html)
purview Quickstart ARM Create Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-ARM-create-azure-purview.md
Title: 'Quickstart: Create a Microsoft Purview account using an ARM Template'
-description: This Quickstart describes how to create a Microsoft Purview account using an ARM Template.
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using an ARM Template'
+description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account using an ARM Template.
Last updated 04/05/2022
-# Quickstart: Create a Microsoft Purview account using an ARM template
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using an ARM template
-This quickstart describes the steps to deploy a Microsoft Purview account using an Azure Resource Manager (ARM) template.
+This quickstart describes the steps to deploy a Microsoft Purview (formerly Azure Purview) account using an Azure Resource Manager (ARM) template.
-After you have created a Microsoft Purview account you can begin registering your data sources and using Microsoft Purview to understand and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Microsoft Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end data linage. Data consumers are able to discover data across your organization and data administrators are able to audit, secure, and ensure right use of your data.
+After you've created the account, you can begin registering your data sources and using the Microsoft Purview governance portal to understand and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end data lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure the right use of your data.
-For more information about Microsoft Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
To deploy a Microsoft Purview account to your subscription using an ARM template, follow the guide below.
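For orientation, the core of such a template is a single `Microsoft.Purview/accounts` resource. The following is a minimal sketch, not the quickstart's full template; the parameter name and API version are assumptions to check against the published template.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "purviewName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Purview/accounts",
      "apiVersion": "2021-07-01",
      "name": "[parameters('purviewName')]",
      "location": "[resourceGroup().location]",
      "identity": { "type": "SystemAssigned" },
      "properties": {}
    }
  ]
}
```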
Write-Host "Press [ENTER] to continue..."
## Next steps
-In this quickstart, you learned how to create a Microsoft Purview account and how to access it through the Microsoft Purview governance portal.
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account and how to access the Microsoft Purview governance portal.
Next, you can create a user-assigned managed identity (UAMI) that will enable your new Microsoft Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication.
purview Quickstart Create Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-create-collection.md
Title: 'Quickstart: Create a collection'
-description: Collections are used for access control, and asset organization in Microsoft Purview. This article describes how to create a collection and add permissions, register sources, and register assets to collections.
+description: Collections are used for access control and asset organization in the Microsoft Purview Data Map. This article describes how to create a collection and add permissions, register sources, and register assets to collections.
Previously updated : 11/04/2021 Last updated : 06/17/2022
-# Quickstart: Create a collection and assign permissions in Microsoft Purview
+# Quickstart: Create a collection and assign permissions in the Microsoft Purview Data Map
-Collections are Microsoft Purview's tool to manage ownership and access control across assets, sources, and information. They also organize your sources and assets into categories that are customized to match your management experience with your data. This guide will take you through setting up your first collection and collection admin to prepare your Microsoft Purview environment for your organization.
+Collections are the Microsoft Purview Data Map's tool to manage ownership and access control across assets, sources, and information. They also organize your sources and assets into categories that are customized to match your management experience with your data. This guide will take you through setting up your first collection and collection admin to prepare your Microsoft Purview environment for your organization.
## Prerequisites
Collections are Microsoft Purview's tool to manage ownership and access control
## Check permissions
-In order to create and manage collections in Microsoft Purview, you will need to be a **Collection Admin** within Microsoft Purview. We can check these permissions in the [Microsoft Purview governance portal](use-azure-purview-studio.md). You can find the studio by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
+In order to create and manage collections in the Microsoft Purview Data Map, you'll need to be a **Collection Admin** within the Microsoft Purview governance portal. We can check these permissions in the [Microsoft Purview governance portal](use-azure-purview-studio.md). You can find the portal by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
1. Select Data Map > Collections from the left pane to open collection management page. :::image type="content" source="./media/quickstart-create-collection/find-collections.png" alt-text="Screenshot of the Microsoft Purview governance portal opened to the Data Map, with the Collections tab selected." border="true":::
-1. Select your root collection. This is the top collection in your collection list and will have the same name as your Microsoft Purview account. In our example below, it's called Contoso Microsoft Purview.
+1. Select your root collection. This is the top collection in your collection list and will have the same name as your Microsoft Purview account. In our example below, it's called ContosoPurview.
:::image type="content" source="./media/quickstart-create-collection/select-root-collection.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the root collection highlighted." border="true":::
In order to create and manage collections in Microsoft Purview, you will need to
:::image type="content" source="./media/quickstart-create-collection/role-assignments.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you will need to be in the collection admin list under role assignments. If you created the Microsoft Purview account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
+1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
:::image type="content" source="./media/quickstart-create-collection/collection-admins.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the collection admin section highlighted." border="true":::

## Create a collection in the portal
-To create your collection, we'll start in the [Microsoft Purview governance portal](use-azure-purview-studio.md). You can find the studio by going to your Microsoft Purview account in the Azure portal and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
+To create your collection, we'll start in the [Microsoft Purview governance portal](use-azure-purview-studio.md). You can find the portal by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com) and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
1. Select Data Map > Collections from the left pane to open collection management page.
To create your collection, we'll start in the [Microsoft Purview governance port
## Assign permissions to collection
-Now that you have a collection, you can assign permissions to this collection to manage your users access to Microsoft Purview.
+Now that you have a collection, you can assign permissions to this collection to manage your users' access to the Microsoft Purview governance portal.
### Roles
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-output-field-mapping.md
Last updated 08/10/2021
![Indexer Stages](./media/cognitive-search-output-field-mapping/indexer-stages-output-field-mapping.png "indexer stages")
-In this article, you learn how to map enriched input fields to output fields in a searchable index. Once you have [defined a skillset](cognitive-search-defining-skillset.md), you must map the output fields of any skill that directly contributes values to a given field in your search index.
+In this article, you learn how to map enriched input fields to output fields in a searchable index. Once you've [defined a skillset](cognitive-search-defining-skillset.md), you must map the output fields of any skill that directly contributes values to a given field in your search index.
Output field mappings are required for moving content from enriched documents into the index. The enriched document is really a tree of information, and even though there is support for complex types in the index, sometimes you may want to transform the information from the enriched tree into a simpler type (for instance, an array of strings). Output field mappings allow you to perform data shape transformations by flattening information. Output field mappings always occur after skillset execution, although it's possible for this stage to run even if no skillset is defined.
The body of the request is structured as follows:
} ```
-For each output field mapping, set the location of the data in the enriched document tree (sourceFieldName), and the name of the field as referenced in the index (targetFieldName). Assign any [mapping functions](search-indexer-field-mappings.md#field-mapping-functions) that you require to transform the content of a field before it's stored in the index.
+For each output field mapping, set the location of the data in the enriched document tree (sourceFieldName), and the name of the field as referenced in the index (targetFieldName). Assign any [mapping functions](search-indexer-field-mappings.md#field-mapping-functions-and-examples) that you require to transform the content of a field before it's stored in the index.
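For instance, here's a minimal sketch of an output field mapping that flattens enriched entity names into a string collection field; the field names and enrichment path are illustrative:

```json
"outputFieldMappings" : [
  {
    "sourceFieldName" : "/document/content/organizations/*/name",
    "targetFieldName" : "orgNames"
  }
]
```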
## Flattening Information from Complex Types
This operation will simply "flatten" each of the names of the customEntities
"diseases" : ["heart failure","morquio"] ```
-## Next steps
+## See also
-Once you have mapped your enriched fields to searchable fields, you can set the field attributes for each of the searchable fields [as part of the index definition](search-what-is-an-index.md).
+* [Search indexes in Azure Cognitive Search](search-what-is-an-index.md).
-For more information about field mapping, see [Field mappings in Azure Cognitive Search indexers](search-indexer-field-mappings.md).
+* [Define field mappings in a search indexer](search-indexer-field-mappings.md).
search Search Indexer Field Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-field-mappings.md
Previously updated : 01/19/2022 Last updated : 06/17/2022

# Field mappings and transformations using Azure Cognitive Search indexers

![Indexer Stages](./media/search-indexer-field-mappings/indexer-stages-field-mappings.png "indexer stages")
-When using Azure Cognitive Search indexers, the indexer will automatically map fields in a data source to fields in a target index, assuming field names and types are compatible. When input data doesn't quite match the schema of your target index, you can define *field mappings* to specifically set the data path.
+When using an Azure Cognitive Search indexer to push content into a search index, the indexer automatically assigns the source-to-destination field mappings. Implicit field mappings occur when field names and data types are compatible. If inputs and outputs don't match, you can define explicit *field mappings* to set up the data path, as described in this article.
-Field mappings address the following scenarios:
+Field mappings also provide light-weight data conversion through mapping functions. If more processing is required, consider [Azure Data Factory](../data-factory/index.yml) to bridge the gap.
-+ Mismatched field names. Suppose your data source has a field named `_id`. Given that Azure Cognitive Search doesn't allow field names that start with an underscore, a field mapping lets you effectively rename a field.
+## Scenarios and limitations
-+ One field to many fields. You can populate several fields in the index from the same data source data. For example, you might want to apply different analyzers to each field.
+Field mappings enable the following scenarios:
-+ Many fields to one field. You want to populate an index field with data from more than one data source, and the data sources each use different field names.
++ Rename fields or handle name discrepancies. Suppose your data source has a field named `_id`. Given that Azure Cognitive Search doesn't allow field names that start with an underscore, a field mapping lets you effectively rename a field.
+
++ Data type discrepancies. Cognitive Search has a smaller set of [supported data types](/rest/api/searchservice/supported-data-types) than many data sources. If you're importing SQL data, a field mapping allows you to [map the SQL data type](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#mapping-data-types) you want in a search index.
+
++ One-to-many data paths. You can populate multiple fields in the index with content from the same field. For example, you might want to apply different analyzers to each field.
+
++ Multiple data sources with different field names where you want to populate a search field with documents from more than one data source. If the field names vary between the data sources, you can use a field mapping to clarify the path.

+ Base64 encoding or decoding of data. Field mappings support several [**mapping functions**](#mappingFunctions), including functions for Base64 encoding and decoding.
-+ Splitting strings or recasting a JSON array into a string collection. Field mapping functions provide this capability.
++ Splitting strings or recasting a JSON array into a string collection. [Field mapping functions](#mappingFunctions) provide this capability.
-Field mappings in indexers are a simple way to map data fields to index fields, with some ability for light-weight data conversion. More complex data might require pre-processing to reshape it into a form that's conducive to indexing. One option you might consider is [Azure Data Factory](../data-factory/index.yml).
+### Limitations
-> [!NOTE]
-> Field mappings apply to search indexes only. For indexers that also create [knowledge stores](knowledge-store-concept-intro.md), data shapes and projections determine field associations, and any field mappings and output field mappings in the indexer are ignored.
+Before you start mapping fields, make sure the following limitations won't block you:
+
++ The "targetFieldName" must be set to a single field name, either a simple field or a collection. You can't define a field path to a subfield in a complex field (such as `address/city`) at this time. A workaround is to add a skillset and use a [Shaper skill](cognitive-search-skill-shaper.md).
+
++ Field mappings only work for search indexes. For indexers that also create [knowledge stores](knowledge-store-concept-intro.md), [data shapes](knowledge-store-projection-shape.md) and [projections](knowledge-store-projections-examples.md) determine field associations, and any field mappings and output field mappings in the indexer are ignored.

## Set up field mappings
-A field mapping consists of three parts:
+Field mappings are added to the "fieldMappings" array of the indexer definition. A field mapping consists of three parts.
-+ "sourceFieldName", which represents a field in your data source. This property is required.
-+ An optional "targetFieldName", representing a field in your search index. If omitted, the value of "sourceFieldName" is used for the target.
-+ An optional "mappingFunction", which can transform your data using one of several [predefined functions](#mappingFunctions). This can be applied on both input and output field mappings.
+| Property | Description |
+|-|-|
+| "sourceFieldName" | Required. Represents a field in your data source. |
+| "targetFieldName" | Optional. Represents a field in your search index. If omitted, the value of "sourceFieldName" is assumed for the target. |
+| "mappingFunction" | Optional. Consists of [predefined functions](#mappingFunctions) that transform data. You can apply functions to both source and target field mappings. |
-Field mappings are added to the "fieldMappings" array of the indexer definition.
+Azure Cognitive Search uses case-insensitive comparison to resolve the field and function names in field mappings. This is convenient (you don't have to get all the casing right), but it means that your data source or index can't have fields that differ only by case.
> [!NOTE] > If no field mappings are present, indexers assume data source fields should be mapped to index fields with the same name. Adding a field mapping overrides these default field mappings for the source and target field. Some indexers, such as the [blob storage indexer](search-howto-indexing-azure-blob-storage.md), add default field mappings for the index key field.
-## Map fields using REST
+You can use the portal, REST API, or an Azure SDK to define field mappings.
+
+### [**Azure portal**](#tab/portal)
+
+If you're using the [Import data wizard](search-import-data-portal.md), field mappings aren't supported because the wizard creates target search fields that mirror the origin source fields.
+
+In the portal, you can set field mappings in an indexer after the indexer already exists:
+
+1. Open the JSON definition of an existing indexer.
+
+1. Under the "fieldMappings" section, add the source and destination fields. Destination fields must exist in the search index and conform to [field naming conventions](/rest/api/searchservice/naming-rules). Refer to the REST API tab for more JSON syntax details.
+
+1. Save your changes.
+
+1. If the search field is empty, run the indexer to import data from the source field to the newly mapped search field. If the search field was previously populated, reset the indexer before running it to drop and add the content.
+
+### [**REST APIs**](#tab/rest)
-You can add field mappings when creating a new indexer using the [Create Indexer](/rest/api/searchservice/create-Indexer) API request. You can manage the field mappings of an existing indexer using the [Update Indexer](/rest/api/searchservice/update-indexer) API request.
+Add field mappings when creating a new indexer using the [Create Indexer](/rest/api/searchservice/create-Indexer) API request. Manage the field mappings of an existing indexer using the [Update Indexer](/rest/api/searchservice/update-indexer) API request.
-For example, here's how to map a source field to a target field with a different name:
+This example maps a source field to a target field with a different name:
```JSON PUT https://[service name].search.windows.net/indexers/myindexer?api-version=[api-version]
A source field can be referenced in multiple field mappings. The following examp
] ```
-> [!NOTE]
-> Azure Cognitive Search uses case-insensitive comparison to resolve the field and function names in field mappings. This is convenient (you don't have to get all the casing right), but it means that your data source or index cannot have fields that differ only by case.
->
-
-## Map fields using .NET
+### [**.NET SDK (C#)**](#tab/csharp)
-You can define field mappings in the .NET SDK using the [FieldMapping](/dotnet/api/azure.search.documents.indexes.models.fieldmapping) class, which has the properties "SourceFieldName" and "TargetFieldName", and an optional "MappingFunction" reference.
+In the Azure SDK for .NET, use the [FieldMapping](/dotnet/api/azure.search.documents.indexes.models.fieldmapping) class that provides "SourceFieldName" and "TargetFieldName" properties and an optional "MappingFunction" reference.
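As a rough sketch of the end-to-end usage, ahead of the article's own example below (the endpoint, key, and field names are placeholders, and the `FieldMappingFunction` usage should be checked against the current SDK surface):

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// Placeholder endpoint and admin key; substitute your own service values.
var indexerClient = new SearchIndexerClient(
    new Uri("https://<service-name>.search.windows.net"),
    new AzureKeyCredential("<admin-api-key>"));

var indexer = new SearchIndexer("hotels-sql-idxr", "hotels-sql-ds", "hotels-index");

// Rename a source field whose name starts with an underscore.
indexer.FieldMappings.Add(new FieldMapping("_id") { TargetFieldName = "id" });

// Encode a path into a URL-safe document key with a mapping function.
indexer.FieldMappings.Add(new FieldMapping("metadata_storage_path")
{
    TargetFieldName = "key",
    MappingFunction = new FieldMappingFunction("base64Encode")
});

await indexerClient.CreateOrUpdateIndexerAsync(indexer);
```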
-You can specify field mappings when constructing the indexer, or later by directly setting [SearchIndexer.FieldMappings](/dotnet/api/azure.search.documents.indexes.models.searchindexer.fieldmappings).
-
-The following C# example sets the field mappings when constructing an indexer.
+Specify field mappings when constructing the indexer, or later by directly setting [SearchIndexer.FieldMappings](/dotnet/api/azure.search.documents.indexes.models.searchindexer.fieldmappings). The following C# example sets the field mappings when constructing an indexer.
```csharp var indexer = new SearchIndexer("hotels-sql-idxr", dataSource.Name, searchIndex.Name)
var indexer = new SearchIndexer("hotels-sql-idxr", dataSource.Name, searchIndex.
await indexerClient.CreateOrUpdateIndexerAsync(indexer); ``` + <a name="mappingFunctions"></a>
-## Field mapping functions
+## Field mapping functions and examples
A field mapping function transforms the contents of a field before it's stored in the index. The following mapping functions are currently supported:
If the `useHttpServerUtilityUrlTokenEncode` or `useHttpServerUtilityUrlTokenDeco
> [!WARNING] > If `base64Encode` is used to produce key values, `useHttpServerUtilityUrlTokenEncode` must be set to true. Only URL-safe base64 encoding can be used for key values. See [Naming rules](/rest/api/searchservice/naming-rules) for the full set of restrictions on characters in key values.
-The .NET libraries in Azure Cognitive Search assume the full .NET Framework, which provides built-in encoding. The `useHttpServerUtilityUrlTokenEncode` and `useHttpServerUtilityUrlTokenDecode` options leverage this built-in functionality. If you are using .NET Core or another framework, we recommend setting those options to `false` and calling your framework's encoding and decoding functions directly.
+The .NET libraries in Azure Cognitive Search assume the full .NET Framework, which provides built-in encoding. The `useHttpServerUtilityUrlTokenEncode` and `useHttpServerUtilityUrlTokenDecode` options leverage this built-in functionality. If you're using .NET Core or another framework, we recommend setting those options to `false` and calling your framework's encoding and decoding functions directly.
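For example, here's a minimal sketch of URL-safe, unpadded base64 encoding using .NET's built-in `Convert` and `Encoding` APIs; the helper name is illustrative:

```csharp
using System;
using System.Text;

static string UrlSafeBase64(string value)
{
    // Standard base64, then substitute the URL-unsafe characters and drop padding.
    return Convert.ToBase64String(Encoding.UTF8.GetBytes(value))
        .Replace('+', '-')
        .Replace('/', '_')
        .TrimEnd('=');
}

Console.WriteLine(UrlSafeBase64("00>00?00"));   // prints: MDA-MDA_MDA
```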
-The following table compares different base64 encodings of the string `00>00?00`. To determine the required additional processing (if any) for your base64 functions, apply your library encode function on the string `00>00?00` and compare the output with the expected output `MDA-MDA_MDA`.
+The following table compares different base64 encodings of the string `00>00?00`. To determine the required processing (if any) for your base64 functions, apply your library encode function on the string `00>00?00` and compare the output with the expected output `MDA-MDA_MDA`.
| Encoding | Base64 encode output | Additional processing after library encoding | Additional processing before library decoding |
| | | | |
Your data source contains a `PersonName` field, and you want to index it as two
Transforms a string formatted as a JSON array of strings into a string array that can be used to populate a `Collection(Edm.String)` field in the index.
-For example, if the input string is `["red", "white", "blue"]`, then the target field of type `Collection(Edm.String)` will be populated with the three values `red`, `white`, and `blue`. For input values that cannot be parsed as JSON string arrays, an error is returned.
+For example, if the input string is `["red", "white", "blue"]`, then the target field of type `Collection(Edm.String)` will be populated with the three values `red`, `white`, and `blue`. For input values that can't be parsed as JSON string arrays, an error is returned.
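In an indexer definition, a sketch of this transformation might look like the following; the field names are illustrative:

```json
"fieldMappings" : [
  {
    "sourceFieldName" : "tags",
    "targetFieldName" : "tags",
    "mappingFunction" : { "name" : "jsonArrayToStringCollection" }
  }
]
```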
#### Example - populate collection from relational data
Azure SQL Database doesn't have a built-in data type that naturally maps to `Col
### urlEncode function
-This function can be used to encode a string so that it is "URL safe". When used with a string that contains characters that are not allowed in a URL, this function will convert those "unsafe" characters into character-entity equivalents. This function uses the UTF-8 encoding format.
+This function can be used to encode a string so that it is "URL safe". When used with a string that contains characters that aren't allowed in a URL, this function will convert those "unsafe" characters into character-entity equivalents. This function uses the UTF-8 encoding format.
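A sketch of the function in a field mapping follows; the source field shown is illustrative:

```json
"fieldMappings" : [
  {
    "sourceFieldName" : "metadata_storage_path",
    "targetFieldName" : "key",
    "mappingFunction" : { "name" : "urlEncode" }
  }
]
```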
#### Example - document key lookup
When you retrieve the encoded key at search time, you can then use the `urlDecod
``` <a name="fixedLengthEncodeFunction"></a>
-
+ ### fixedLengthEncode function This function converts a string of any length to a fixed-length string.
-
+ ### Example - map document keys that are too long
-
-When facing errors complaining about document key being longer than 1024 characters, this function can be applied to reduce the length of the document key.
+
+When errors occur because a document key is longer than 1024 characters, you can apply this function to reduce the length of the document key.
```JSON
When facing errors complaining about document key being longer than 1024 charact
"name" : "fixedLengthEncode" } }]
- ```
+ ```
+
+## See also
+
++ [Supported data types in Cognitive Search](/rest/api/searchservice/supported-data-types)
++ [SQL data type map](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#mapping-data-types)
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
If you're billed at Pay-As-You-Go rate, the following table shows how Microsoft
#### [Free data meters](#tab/free-data-meters)
-The following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services. For more information, see [Viewing Data Allocation Benefits](../azure-monitor/usage-estimated-costs.md#viewing-data-allocation-benefits).
+The following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services. For more information, see [View Data Allocation Benefits](../azure-monitor/usage-estimated-costs.md#view-data-allocation-benefits).
| Cost description | Service name | Meter |
|--|--|--|
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
The allowed values for a device ID type are:
| **VectraId** | A Vectra AI assigned resource ID.| | **Other** | An ID type not listed above.|
-For example, the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-log-search.md) provides network sessions information in the `VMConnection`. The table provides an Azure Resource ID in the `_ResourceId` field and a VM insights specific device ID in the `Machine` field. Use the following mapping to represent those IDs:
+For example, the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-log-query.md) provides network session information in the `VMConnection` table. The table provides an Azure Resource ID in the `_ResourceId` field and a VM Insights-specific device ID in the `Machine` field. Use the following mapping to represent those IDs:
| Field | Map to | | -- | -- |
For more information, see:
- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
- [Advanced Security Information Model (ASIM) overview](normalization.md)
- [Advanced Security Information Model (ASIM) parsers](normalization-parsers-overview.md)
-- [Advanced Security Information Model (ASIM) content](normalization-content.md)
+- [Advanced Security Information Model (ASIM) content](normalization-content.md)
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
| **Microsoft Sentinel Deception** | [Workbooks, analytics rules, watchlists](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft | |**Zero Trust** (TIC3.0) |[Analytics rules, playbook, workbooks](/security/zero-trust/integrate/sentinel-solution) |Identity, Security - Others |Microsoft |
+## Akamai
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Akamai Security** |[Data connector](data-connectors-reference.md#akamai-security-events-preview), parser | Security - Cloud Security |Microsoft |
+
+## Amazon Web Services
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Amazon Web Services** |[Data connector](connect-aws.md), analytics rules, hunting queries, workbooks | Security - Cloud Security |Microsoft |
+
## Apache

|Name |Includes |Categories |Supported by |
|||||
-|**Tomcat** |Data connector, parser | DevOps, application |[Microsoft |
+|**Tomcat** |Data connector, parser | DevOps, application |Microsoft |
## Arista Networks
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Arista Networks** (Awake Security) |Data connector, workbooks, analytics rules | Security - Network |[Arista - Awake Security](https://awakesecurity.com/) |
+## Armorblox
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Armorblox - Sentinel** |Data connector | Security - Threat protection |[Armorblox](https://www.armorblox.com/contact/) |
## Atlassian |Name |Includes |Categories |Supported by | |||||
-|**Atlassian Confluence Audit** |Data connector |IT operations, application |Microsoft|
+|**Atlassian Confluence Audit** |[Data connector](data-connectors-reference.md#atlassian-confluence-audit-preview) |IT operations, application |Microsoft|
|**Atlassian Jira Audit** |Workbook, analytics rules, hunting queries |DevOps |Microsoft|
-## Armorblox
+## Aruba
|Name |Includes |Categories |Supported by | |||||
-|**Armorblox - Sentinel** |Data connector | Security - Threat protection |[Armorblox](https://www.armorblox.com/contact/) |
-
+|**Aruba ClearPass** |[Data connector](data-connectors-reference.md#aruba-clearpass-preview), parser |Security - Threat Protection |Microsoft|
## Azure |Name |Includes |Categories |Supported by | |||||
-|**Azure Firewall Solution for Sentinel**| [Data connector](data-connectors-reference.md#azure-firewall), workbook, analytics rules, playbooks, hunting queries, custom Logic App connector |Security - Network Security, Networking | Community|
-| **Microsoft Purview** | [Data connector](data-connectors-reference.md#microsoft-purview), workbook, analytics rules <br><br>For more information, see [Tutorial: Integrate Microsoft Sentinel and Microsoft Purview](purview-solution.md). | Compliance, Security- Cloud Security, and Security- Information Protection | Microsoft |
-|**Microsoft Sentinel for SQL PaaS** | [Data connector](data-connectors-reference.md#azure-sql-databases), workbook, analytics rules, playbooks, hunting queries | Application | Community |
-|**Microsoft Sentinel Training Lab** |Workbook, analytics rules, playbooks, hunting queries | Training and tutorials |Microsoft |
-|**Azure SQL** | [Data connector](data-connectors-reference.md#azure-sql-databases), workbook, analytics, playbooks, hunting queries | Application |Microsoft |
+|**Azure Active Directory**|[Data connector](data-connectors-reference.md#azure-active-directory), workbooks, analytic rules |Identity|Microsoft|
+|**Azure Active Directory Identity Protection**|[Data connector](data-connectors-reference.md#azure-active-directory-identity-protection), analytic rules |Security - Threat Protection|Microsoft|
+|**Azure Activity**|[Data connector](data-connectors-reference.md#azure-activity), workbooks, analytic rules |IT Operations|Microsoft|
+|**Azure DDoS Protection**| [Data connector](data-connectors-reference.md#azure-ddos-protection), workbook |Cloud Provider, Security - Network | Microsoft|
+|**Azure Firewall Solution for Sentinel**| [Data connector](data-connectors-reference.md#azure-firewall), workbook, analytics rules, hunting queries, workbook |Security - Network Security, Networking | Community|
+|**Azure Information Protection** | [Data connector](data-connectors-reference.md#azure-information-protection-preview), workbook | Cloud Provider, Security - Others|Microsoft |
+|**Azure Key Vault** | [Data connector](data-connectors-reference.md#azure-key-vault), analytics rules | Application |Microsoft |
+|**Azure Kubernetes Service (AKS)** | [Data connector](data-connectors-reference.md#azure-kubernetes-service-aks), workbook | DevOps |Microsoft |
+|**Azure SQL Database** | [Data connector](data-connectors-reference.md#azure-sql-databases) | Cloud Provider, IT Operations |Microsoft |
+|**Azure Storage** | [Data connector](data-connectors-reference.md#azure-storage-account) | Cloud Provider, IT Operations, Storage|Microsoft |
+|**Azure Web Application Firewall (WAF)** | [Data connector](data-connectors-reference.md#azure-web-application-firewall-waf), analytics rules, workbooks | Security - Network|Microsoft |
+
+## Barracuda
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Barracuda WAF**| [Data connector](data-connectors-reference.md#barracuda-waf) |Security - Network |[Barracuda](https://www.barracuda.com/support) |
+
+## Blackberry
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Blackberry CylancePROTECT**| [Data connector](data-connectors-reference.md#blackberry-cylanceprotect-preview), parser |Security - Threat Protection |Microsoft |
## Bosch
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Box Solution**| Data connector, workbook, analytics rules, hunting queries, parser | Storage, application | Microsoft|
+## Broadcom
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Broadcom SymantecDLP**| [Data connector](data-connectors-reference.md#broadcom-symantec-data-loss-prevention-dlp-preview), parser | Security - Information Protection | Microsoft|
+ ## Check Point |Name |Includes |Categories |Supported by |
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Cisco Meraki** |[Data connector](data-connectors-reference.md#cisco-meraki-preview), playbooks, custom Logic App connector |Security - Network |Microsoft | |**Cisco Secure Email Gateway / ESA** |Data connector, parser |Security - Threat Protection |Microsoft | |**Cisco StealthWatch** |Data connector, parser |Security - Network | Microsoft|
+|**Cisco UCS** |[Data connector](data-connectors-reference.md#cisco-unified-computing-system-ucs-preview), parser |Platform |Microsoft |
|**Cisco Umbrella** |[Data connector](data-connectors-reference.md#cisco-umbrella-preview), workbooks, analytics rules, playbooks, hunting queries, parser, custom Logic App connector |Security - Cloud Security |Microsoft | |**Cisco Web Security Appliance (WSA)** | Data connector, parser|Security - Network |Microsoft |
+## Citrix ADC
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Citrix ADC**|Data connector, parser| Networking |Microsoft |
+ ## Cloudflare |Name |Includes |Categories |Supported by |
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|||||
|**Contrast Protect Microsoft Sentinel Solution**|Data connector, workbooks, analytics rules |Security - Threat protection |Microsoft |
-
## Crowdstrike
-
|Name |Includes |Categories |Supported by |
|||||
|**CrowdStrike Falcon Endpoint Protection Solution**| Data connector, workbooks, analytics rules, playbooks, parser| Security - Threat protection| Microsoft|
+## CyberArk
-## Digital Guardian
+|Name |Includes |Categories |Supported by |
+|||||
+|**CyberArk Enterprise Password Vault (EPV)**| [Data connector](data-connectors-reference.md#cyberark-enterprise-password-vault-epv-events-preview), workbooks| Identity| [CyberArk](https://www.cyberark.com/customer-support/)|
+|**CyberArk EPM Integration**| Data connector, parser| Identity, Security - Threat Protection| [CyberArk](https://www.cyberark.com/customer-support/)|
+
+## Cyberpion
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Cyberpion Security Logs**| [Data connector](data-connectors-reference.md#cyberpion-security-logs-preview), analytics rule, workbook| Security - Threat Protection| [Cyberpion](https://www.cyberpion.com/contact/)|
+## Digital Guardian
|Name |Includes |Categories |Supported by | ||||| |**Digital Guardian** |Data connector, parser |Security - Information Protection |Microsoft |
+## Exabeam
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Exabeam Advanced Analytics** |[Data connector](data-connectors-reference.md#exabeam-advanced-analytics-preview), parser |Security - Others |Microsoft |
+
+## Facebook
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Workplace from Facebook** |[Data connector](data-connectors-reference.md#workplace-from-facebook-preview), parser |Application | Microsoft|
## FalconForce
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by |
|||||
|**Juniper IDP** |Data connector, parser|Security - Network |Microsoft |
-
+|**Juniper SRX** |[Data connector](data-connectors-reference.md#juniper-srx-preview), parser|Networking |Microsoft |
## Kaspersky
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**McAfee ePolicy Orchestrator Solution**| Data connector, workbook, analytics rules, playbooks, hunting queries, parser, custom Logic App connector| Security - Threat protection| Microsoft |
|**McAfee Network Security Platform Solution** (Intrushield) + AntiVirus Information (T1 minus Logic apps) |Data connector, workbooks, analytics rules, hunting queries, parser |Security - Threat protection | Microsoft|
-
## Microsoft

|Name |Includes |Categories |Supported by |
|||||
+|**DNS**| [Data connector](data-connectors-reference.md#windows-dns-server-preview), workbook, analytics rules, hunting queries|Networking|Microsoft|
+|**Microsoft Defender for Cloud** | [Data connector](data-connectors-reference.md#microsoft-defender-for-cloud), analytics rule| Security - Threat Protection |Microsoft |
+|**Microsoft Defender for Cloud Apps** | [Data connector](data-connectors-reference.md#microsoft-defender-for-cloud-apps), analytics rule| Security - Cloud Security |Microsoft |
|**Microsoft Defender for Endpoint** | Hunting queries, parsers | Security - Threat Protection |Microsoft |
+|**Microsoft Defender for Identity** | [Data connector](data-connectors-reference.md#microsoft-defender-for-identity) | Security - Threat Protection |Microsoft |
+|**Microsoft Defender for Office 365** | [Data connector](data-connectors-reference.md#microsoft-defender-for-office-365), workbook | Security - Threat Protection |Microsoft |
+|**Microsoft PowerBI** | [Data connector](data-connectors-reference.md#microsoft-power-bi-preview) | Application |Microsoft |
+|**Microsoft Project** | [Data connector](data-connectors-reference.md#microsoft-project-preview) | Application |Microsoft |
+| **Microsoft Purview** | [Data connector](data-connectors-reference.md#microsoft-purview), workbook, analytics rules <br><br>For more information, see [Tutorial: Integrate Microsoft Sentinel and Microsoft Purview](purview-solution.md). | Compliance, Security- Cloud Security, and Security- Information Protection | Microsoft |
|**Microsoft Sentinel for Microsoft Dynamics 365** | [Data connector](data-connectors-reference.md#dynamics-365), workbooks, analytics rules, and hunting queries | Application |Microsoft | |**Microsoft Sentinel for Teams** | Analytics rules, playbooks, hunting queries | Application | Microsoft |
+|**Microsoft Sentinel for SQL PaaS** | [Data connector](data-connectors-reference.md#azure-sql-databases), workbook, analytics rules, playbooks, hunting queries | Application | Community |
+|**Microsoft Sentinel Training Lab** |Workbook, analytics rules, playbooks, hunting queries | Training and tutorials |Microsoft |
| **Microsoft Sysmon for Linux** | [Data connector](data-connectors-reference.md#microsoft-sysmon-for-linux-preview) | Platform | Microsoft |
+| **Network Security Groups** | Data connector | Security - Network| Microsoft |
+|**Threat Intelligence** | [Data connector](threat-intelligence-integration.md), analytics rules, hunting queries, workbooks| Security - Threat Intelligence |Microsoft |
+| **Windows Firewall** | [Data connector](data-connectors-reference.md#windows-firewall), workbook | Security - Network| Microsoft |
+| **Windows Forwarded Events** | [Data connector](data-connectors-reference.md#windows-forwarded-events-preview), analytics rules | IT Operations| Microsoft |
+| **Windows Security Events** | [Data connector](data-connectors-reference.md#windows-security-events-via-ama), analytics rules, hunting queries, workbooks | Security - Threat Protection| Microsoft |
+|**Syslog**|Data connector, analytics rules, hunting queries, workbook|IT Operations|Microsoft|
+
+## NetSkope
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**NetSkope** | [Data connector](data-connectors-reference.md#netskope-preview), parser | Security - Network |[NetSkope](https://www.netskope.com/services#support) |
## NGINX
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | |||||
-|**NXLog AIX Audit** | Data connector, parser | IT operations |NXLog |
-|**NXLog DNS Logs** | Data connector | Networking |NXLog |
+|**NXLog AIX Audit** | Data connector, parser | IT Operations, Security - Network |[NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
+|**NXLog BSM macOS** | [Data connector](data-connectors-reference.md#nxlog-basic-security-module-bsm-macos-preview) | IT Operations, Security - Others |[NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
+|**NXLog DNS Logs** | [Data connector](data-connectors-reference.md#nxlog-dns-logs-preview), parser | IT Operations, Security - Network |[NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
+|**NXLog LinuxAudit** | [Data connector](data-connectors-reference.md#nxlog-linuxaudit-preview) | IT Operations, Security - Network |[NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
## Oracle
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Oracle Database Audit** | Data connector, workbook, analytics rules, hunting queries, parser| Application|Microsoft | |**Oracle WebLogic Server** | Data connector, workbook, analytics rules, hunting queries, parser| IT Operations|Microsoft |
+## OSSEC
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**OSSEC** |[Data connector](data-connectors-reference.md#ossec-preview), parser | Security - Threat Protection | Microsoft|
+ ## Palo Alto |Name |Includes |Categories |Supported by |
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Palo Alto PAN-OS**|[Data connector](#palo-alto), playbooks, custom Logic App connector |Security - Automation (SOAR), Security - Network |Microsoft | |**Palo Alto Prisma Solution**|[Data connector](#palo-alto), workbooks, analytics rules, hunting queries, parser |Security - Cloud security |Microsoft |
+## Perimeter 81
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Perimeter 81** |[Data connector](data-connectors-reference.md#perimeter-81-activity-logs-preview), workbook| Security - Network |[Perimeter 81](https://support.perimeter81.com/docs) |
## Ping Identity
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|||||
|**PingFederate Solution** |Data connector, workbooks, analytics rules, hunting queries, parser| Identity|Microsoft |
-
## Proofpoint
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Proofpoint POD Solution** |[Data connector](data-connectors-reference.md#proofpoint-on-demand-pod-email-security-preview), workbook, analytics rules, hunting queries, parser| Security - Threat protection|Microsoft | |**Proofpoint TAP Solution** | Workbooks, analytics rules, playbooks, custom Logic App connector|Security - Automation (SOAR), Security - Threat protection |Microsoft |
+## Pulse Secure
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Pulse Connect Secure** |[Data connector](data-connectors-reference.md#pulse-connect-secure-preview), workbook, analytics rules, parser |Security - Threat Protection |Microsoft |
## Qualys |Name |Includes |Categories |Supported by | |||||
-|**Qualys VM Solution** |Workbooks, analytics rules |Security - Vulnerability Management |Microsoft |
-
+|**Qualys VM** |Workbook, analytics rules |Compliance, Security - Vulnerability Management |Microsoft |
+|**Qualys VM Knowledgebase** |[Data connector](data-connectors-reference.md#qualys-vm-knowledgebase-kb-preview), parser |Security - Vulnerability Management |Microsoft |
## Rapid7
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|||||
|**Rapid7 InsightVM CloudAPI Solution** |Data connector, parser|Security - Vulnerability Management |Microsoft |
-
## ReversingLabs
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**RSA SecurID** |Data connector, parser |Security - Others, Identity |Microsoft |
+## Salesforce
+
+|Name |Includes |Categories |Supported by |
+|||||
|**Salesforce Service Cloud** |[Data connector](data-connectors-reference.md#salesforce-service-cloud-preview), parser |Cloud Provider |Microsoft |
## SAP
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**SIGNL4 Mobile Alerting** |Data connector, playbook |DevOps, IT Operations |[SIGNL4](https://www.signl4.com) |
+## SonicWall
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**SonicWall Network Security** |Data connector |Security - Network |[SonicWall](https://www.sonicwall.com/support/) |
+ ## Sonrai Security |Name |Includes |Categories |Supported by | ||||| |**Sonrai Security - Microsoft Sentinel** |Data connector, workbooks, analytics rules | Compliance|Sonrai Security |
+## Squid
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**SquidProxy** |[Data connector](data-connectors-reference.md#squid-proxy-preview), parser | Networking| Microsoft |
## Slack
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Sophos Endpoint Protection Solution** |Data connector, parser| Security - Threat protection |Microsoft | |**Sophos XG Firewall Solution**| Workbooks, analytics rules, parser |Security - Network |Microsoft |
+## Squadra Technologies
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Squadra Technologies secRMM** |[Data connector](data-connectors-reference.md#squadra-technologies-secrmm), workbook| Security - Information Protection, Security - Threat Protection |[Squadra Technologies](https://www.squadratechnologies.com/Contact.aspx) |
## Symantec |Name |Includes |Categories |Supported by | |||||
-|**Symantec Endpoint**|Data connector, workbook, analytics rules, playbooks, hunting queries, parser| Security - Threat protection|Microsoft |
-|**Symantec ProxySG Solution**|Workbooks, analytics rules |Security - Network |Symantec |
+|**Symantec Endpoint Protection**|Data connector, workbook, analytics rules, playbooks, hunting queries, parser| Security - Threat protection|Microsoft |
+|**Symantec ProxySG**|Workbooks, analytics rules |Security - Network |Microsoft |
+|**Symantec VIP**|[Data connector](data-connectors-reference.md#symantec-vip-preview), analytics rules, parser, workbooks |Security - Network |Microsoft |
## Tenable
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Trend Micro Apex One Solution** | Data connector, hunting queries, parser| Security - Threat protection|Microsoft |
+|**Trend Micro Cloud App Security** | Data connector, analytics rules, hunting queries, parser| Security - Threat protection|Microsoft |
## Ubiquiti
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**VMware Carbon Black Solution**|Workbooks, analytics rules| Security - Threat protection| Microsoft| |**VMware ESXi**|Workbooks, analytics rules, data connectors, hunting queries, parser| IT Operations| Microsoft|
+## WatchGuard
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**WatchGuard Firebox**|[Data connector](data-connectors-reference.md#watchguard-firebox-preview), parser| Security - Network|[WatchGuard](https://www.watchguard.com/wgrd-support/contact-support)|
+ ## Zeek Network |Name |Includes |Categories |Supported by | ||||| |**Corelight for Microsoft Sentinel**|Data connector, workbooks, analytics rules, hunting queries, parser | IT Operations, Security - Network | [Zeek Network](https://support.corelight.com/)|
+## Zimperium
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Zimperium Mobile Threat Defense**|[Data connector](data-connectors-reference.md#zimperium-mobile-thread-defense-preview), workbook| Security - Threat Protection | [Zimperium](https://support.zimperium.com)|
+
+## Zoom
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Zoom Reports**|[Data connector](data-connectors-reference.md#zoom-reports-preview), parser | Application| Microsoft|
+ ## Zscaler |Name |Includes |Categories |Supported by | |||||
-|**Zscaler Private Access**|Data connector, workbook, analytics rules, hunting queries, parser | Security - Network | Microsoft|
+|**Zscaler Private Access**|[Data connector](data-connectors-reference.md#zscaler-private-access-zpa-preview), workbook, analytics rules, hunting queries, parser | Security - Network | Microsoft|
## Next steps
spring-cloud How To Enterprise Service Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-service-registry.md
- Title: How to Use Tanzu Service Registry with Azure Spring Apps Enterprise Tier
-description: How to use Tanzu Service Registry with Azure Spring Apps Enterprise Tier.
+ Title: How to Use Tanzu Service Registry with Azure Spring Apps Enterprise tier
+description: How to use Tanzu Service Registry with Azure Spring Apps Enterprise tier.
-+ Previously updated : 02/09/2022 Last updated : 06/17/2022
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to use VMware Tanzu® Service Registry with Azure Spring Apps Enterprise Tier.
+This article shows you how to use VMware Tanzu® Service Registry with Azure Spring Apps Enterprise tier.
-[Tanzu Service Registry](https://docs.vmware.com/en/Spring-Cloud-Services-for-VMware-Tanzu/2.1/spring-cloud-services/GUID-service-registry-index.html) is one of the commercial VMware Tanzu components. It provides your apps with an implementation of the Service Discovery pattern, one of the key tenets of a Spring-based architecture. It can be difficult, and brittle in production, to hand-configure each client of a service or adopt some form of access convention. Instead, your apps can use Tanzu Service Registry to dynamically discover and call registered services.
+The [Tanzu Service Registry](https://docs.vmware.com/en/Spring-Cloud-Services-for-VMware-Tanzu/2.1/spring-cloud-services/GUID-service-registry-index.html) is one of the commercial VMware Tanzu components. This component helps you apply the *service discovery* design pattern to your applications.
+
+Service discovery is one of the main ideas of the microservices architecture. Without service discovery, you'd have to hand-configure each client of a service or adopt some form of access convention. This process can be difficult, and the configurations and conventions can be brittle in production. Instead, you can use the Tanzu Service Registry to dynamically discover and invoke registered services in your application.
+
+With Azure Spring Apps Enterprise tier, you don't have to create or start the Service Registry yourself. You can use the Tanzu Service Registry by selecting it when you create your Azure Spring Apps Enterprise tier instance.
## Prerequisites
This article shows you how to use VMware Tanzu® Service Registry with Azure Spr
> [!NOTE] > To use Tanzu Service Registry, you must enable it when you provision your Azure Spring Apps service instance. You cannot enable it after provisioning at this time.
-## Use Service Registry with apps
+- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+
+## Create applications that use Service Registry
+
+In this article, you'll create two services and register them with Azure Spring Apps Service Registry. After registration, one service will be able to use Service Registry to discover and invoke the other service. The following diagram summarizes the required steps:
++
+These steps are described in more detail in the following sections.
+
+1. Create Service A.
+2. Deploy Service A to Azure Spring Apps and register it with Service Registry.
+3. Create Service B and implement it to call Service A.
+4. Deploy Service B and register it with Service Registry.
+5. Invoke Service A through Service B.
+
+## Create environment variables
+
+This article uses the following environment variables. Set these variables to the values you used when you created your Azure Spring Apps Enterprise tier instance.
+
+| Variable | Description |
+|--|--|
+| $RESOURCE_GROUP | Resource group name. |
+| $AZURE_SPRING_APPS_NAME | Azure Spring Apps instance name. |
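For example, in a bash shell you might set them as follows; the values shown are placeholders:

```bash
# Placeholder values: use the names from your own deployment.
export RESOURCE_GROUP="my-resource-group"
export AZURE_SPRING_APPS_NAME="my-azure-spring-apps"
```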
+
+## Create Service A with Spring Boot
+
+Navigate to [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.4&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=Sample%20Service%20A&name=Sample%20Service%20A&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20A&dependencies=web,cloud-eureka) to create sample Service A. This link uses the following URL to initialize the settings.
+
+```URL
+https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.4&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=Sample%20Service%20A&name=Sample%20Service%20A&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20A&dependencies=web,cloud-eureka
+```
+
+The following screenshot shows Spring Initializr with the required settings.
++
+Next, select **GENERATE** to get a sample project for Spring Boot with the following directory structure.
+
+```text
+├── HELP.md
+├── mvnw
+├── mvnw.cmd
+├── pom.xml
+└── src
+    ├── main
+    │   ├── java
+    │   │   └── com
+    │   │       └── example
+    │   │           └── Sample
+    │   │               └── Service
+    │   │                   └── A
+    │   │                       └── SampleServiceAApplication.java
+    │   └── resources
+    │       ├── application.properties
+    │       ├── static
+    │       └── templates
+    └── test
+        └── java
+            └── com
+                └── example
+                    └── Sample
+                        └── Service
+                            └── A
+                                └── SampleServiceAApplicationTests.java
+```
+
+### Confirm the configuration of dependent libraries for the Service Registry client (Eureka client)
-Before your application can manage service registration and discovery using Tanzu Service Registry, you must include the following dependency in your application's *pom.xml* file:
+Next, confirm that the *pom.xml* file for the project contains the following dependency. Add the dependency if it's missing.
```xml <dependency>
Before your application can manage service registration and discovery using Tanz
</dependency> ```
-Additionally, add an annotation to the top level class of your application as shown in the following example:
+### Implement the Service Registry client
+
+Add the `@EnableEurekaClient` annotation to the *SampleServiceAApplication.java* file to configure the application as a Eureka client.
+
+```java
+package com.example.Sample.Service.A;
+
+import org.springframework.boot.SpringApplication;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
+
+@SpringBootApplication
+@EnableEurekaClient
+public class SampleServiceAApplication {
+
+ public static void main(String[] args) {
+ SpringApplication.run(SampleServiceAApplication.class, args);
+ }
+}
+```
+
+### Create a REST endpoint for testing
+
+The service can now register with Service Registry, but you can't verify the registration until you implement a service endpoint. To create RESTful endpoints that external services can call, add a *ServiceAEndpoint.java* file to your project with the following code.
+
+```java
+package com.example.Sample.Service.A;
+import java.util.Map;
+
+import org.springframework.web.bind.annotation.GetMapping;
+import org.springframework.web.bind.annotation.RestController;
+
+@RestController
+public class ServiceAEndpoint {
+
+ @GetMapping("/serviceA")
+ public String getServiceA(){
+ return "This is a result of Service A";
+ }
+
+ @GetMapping("/env")
+ public Map<String, String> getEnv(){
+ Map<String, String> env = System.getenv();
+ return env;
+ }
+}
+```
+
+### Build a Spring Boot application
+
+Now that you have a simple service, compile and build the source code by running the following command:
+
+```bash
+mvn clean package
+```
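
Optionally, you can smoke-test the service locally before deploying it. The following sketch assumes the app listens on Spring Boot's default port 8080; no Service Registry is reachable locally, so the Eureka client logs connection errors, which you can ignore here:

```bash
# Run the built jar in the background. The Eureka client will log connection
# errors because no registry is reachable locally - that's expected.
java -jar target/Sample-Service-A-0.0.1-SNAPSHOT.jar &

# The endpoint should return "This is a result of Service A".
curl http://localhost:8080/serviceA

# Stop the background app when you're done.
kill %1
```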
+
+## Deploy Service A and register with Service Registry
+
+This section explains how to deploy Service A to Azure Spring Apps Enterprise tier and register it with Service Registry.
+
+### Create an Azure Spring Apps application
+
+First, create an application in Azure Spring Apps by using the following command:
+
+```azurecli
+az spring app create \
+ --resource-group $RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_NAME \
+ --name serviceA \
+ --instance-count 1 \
+ --memory 2Gi \
+ --assign-endpoint
+```
+
+The `--assign-endpoint` argument assigns a public endpoint to the app so that you can validate it and access it from an external network.
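
If you need to retrieve the assigned public URL later, a query along the following lines should return it. This is a sketch; the `properties.url` query path is an assumption based on the shape of the app resource:

```azurecli
az spring app show \
    --resource-group $RESOURCE_GROUP \
    --service $AZURE_SPRING_APPS_NAME \
    --name serviceA \
    --query properties.url \
    --output tsv
```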
+
+### Connect to the Service Registry from the app
+
+You've now implemented the service with Spring Boot and created an application for it in Azure Spring Apps. The next task is to deploy the application and confirm that it works. Before that, however, you must bind your application to the Service Registry so that it can get connection information from the registry.
+
+Typically, a Eureka client requires the following connection settings in a Spring Boot application's *application.properties* file so that it can connect to the server:
+
+```properties
+eureka.client.service-url.defaultZone=http://eureka:8761/eureka/
+```
+
+However, if you write these settings directly in your application, you'll need to edit and rebuild the project each time the Service Registry server changes. To avoid this effort, Azure Spring Apps enables your applications to get connection information from the service registry by binding to it. Specifically, after you bind the application to the Service Registry, the connection information (`eureka.client.service-url.defaultZone`) is exposed as a Java environment variable, and the application reads it at startup to connect to the registry.
+
+In practice, the following setting is added to the `JAVA_TOOL_OPTIONS` environment variable:
+
+```options
+-Deureka.client.service-url.defaultZone=https://$AZURE_SPRING_APPS_NAME.svc.azuremicroservices.io/eureka/default/eureka
+```
+
+### Bind a service to the Service Registry
+
+Use the following command to bind the application to the Service Registry so that it can connect to the server.
+
+```azurecli
+az spring service-registry bind \
+ --resource-group $RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_NAME \
+ --app serviceA
+```
+
+You can also set up the application binding from the Azure portal.
+> [!NOTE]
+> When the service registry status changes, it takes a few minutes for the changes to propagate to all applications.
+>
+> If you change the bind/unbind status, you'll need to restart or redeploy the application for the change to take effect.
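
For example, if you later change the binding of an app that's already deployed, a restart like the following applies the change. This sketch uses the same variables defined earlier:

```azurecli
az spring app restart \
    --resource-group $RESOURCE_GROUP \
    --service $AZURE_SPRING_APPS_NAME \
    --name serviceA
```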
+
+### Deploy an application to Azure Spring Apps
+
+Now that you've bound your application, deploy the Spring Boot artifact file *Sample-Service-A-0.0.1-SNAPSHOT.jar* to Azure Spring Apps by using the following command:
+
+```azurecli
+az spring app deploy \
+ --resource-group $RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_NAME \
+ --name serviceA \
+ --artifact-path ./target/Sample-Service-A-0.0.1-SNAPSHOT.jar \
+ --jvm-options="-Xms1024m -Xmx1024m"
+```
+
+Use the following command to see if your deployment is successful.
+
+```azurecli
+az spring app list \
+ --resource-group $RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_NAME \
+ --output table
+```
+
+This command produces output similar to the following example.
+
+```output
+Name Location ResourceGroup Public Url Production Deployment Provisioning State CPU Memory Running Instance Registered Instance Persistent Storage Bind Service Registry Bind Application Configuration Service
+--------  -------------  ---------------  --------------------------------------------------------------  -----------------------  --------------------  -----  --------  ------------------  ---------------------  --------------------  -----------------------  ----------------------------------------
+servicea southeastasia $RESOURCE_GROUP https://$AZURE_SPRING_APPS_NAME-servicea.azuremicroservices.io default Succeeded 1 2Gi 1/1 N/A - default -
+```
+
+### Confirm that the Service A application is running
+
+The output of the previous command includes the public URL for the service. To access the RESTful endpoint, append `/serviceA` to the URL, as shown in the following command:
+
+```bash
+curl https://$AZURE_SPRING_APPS_NAME-servicea.azuremicroservices.io/serviceA
+```
+
+This command produces the following output.
+
+```output
+This is a result of Service A
+```
+
+Service A includes a RESTful endpoint that displays a list of environment variables. Access the endpoint with `/env` to see the environment variables, as shown in the following command:
+
+```bash
+curl https://$AZURE_SPRING_APPS_NAME-servicea.azuremicroservices.io/env
+```
+
+This command produces the following output.
+
+```output
+"JAVA_TOOL_OPTIONS":"-Deureka.client.service-url.defaultZone=https://$AZURE_SPRING_APPS_NAME.svc.azuremicroservices.io/eureka/default/eureka
+```
+
+As you can see, `eureka.client.service-url.defaultZone` has been added to `JAVA_TOOL_OPTIONS`. The application can now register itself with the Service Registry (Eureka server) in Azure Spring Apps, and other services can discover and invoke it through the registry.
+
+## Implement a new Service B that accesses Service A through Service Registry
+
+### Implement Service B with Spring Boot
+
+Navigate to [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.4&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=Sample%20Service%20B&name=Sample%20Service%20B&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20B&dependencies=web,cloud-eureka) to create a new project for Service B. This link uses the following URL to initialize the settings:
+
+```URL
+https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.4&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=Sample%20Service%20B&name=Sample%20Service%20B&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20B&dependencies=web,cloud-eureka
+```
+
+Then, select **GENERATE** to get the new project.
+
+### Implement Service B as a Service Registry client (Eureka client)
+
+As with Service A, add the `@EnableEurekaClient` annotation to Service B to configure it as a Eureka client.
```java
+package com.example.Sample.Service.B;
+
+import org.springframework.boot.SpringApplication;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
+
+@SpringBootApplication
+@EnableEurekaClient
-public class DemoApplication {
+public class SampleServiceBApplication {
+ public static void main(String[] args) {
- SpringApplication.run(DemoApplication.class, args);
+ SpringApplication.run(SampleServiceBApplication.class, args);
+ }
+}
+```
+
+### Implement service endpoints in Service B
+
+Next, implement a new service endpoint (`/invoke-serviceA`) that invokes Service A. Add a *ServiceBEndpoint.java* file to your project with the following code.
+
+```java
+package com.example.Sample.Service.B;
+import java.util.List;
+import java.util.stream.Collectors;
+import com.netflix.discovery.EurekaClient;
+import com.netflix.discovery.shared.Application;
+import com.netflix.discovery.shared.Applications;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.web.bind.annotation.GetMapping;
+import org.springframework.web.bind.annotation.RestController;
+import org.springframework.web.client.RestTemplate;
+
+@RestController
+public class ServiceBEndpoint {
+ @Autowired
+ private EurekaClient discoveryClient;
+
+ @GetMapping(value = "/invoke-serviceA")
+ public String invokeServiceA()
+ {
+ RestTemplate restTemplate = new RestTemplate();
+ String response = restTemplate.getForObject("http://servicea/serviceA",String.class);
+ return "INVOKE SERVICE A FROM SERVICE B: " + response;
+ }
+
+ @GetMapping(value = "/list-all")
+ public List<String> listsAllServices() {
+ Applications applications = discoveryClient.getApplications();
+ List<Application> registeredApplications = applications.getRegisteredApplications();
+ List<String> appNames = registeredApplications.stream().map(app -> app.getName()).collect(Collectors.toList());
+ return appNames;
    }
}
```
-Use the following steps to bind an application to Tanzu Service Registry.
+This example uses `RestTemplate` for simplicity. The endpoint returns the response from Service A prefixed with the string `INVOKE SERVICE A FROM SERVICE B: ` to indicate that the call was made through Service B.
+
+This example also implements another endpoint (`/list-all`) for validation, which lets you confirm that the service is communicating correctly with the Service Registry. You can call this endpoint to get the list of applications registered in the Service Registry.
+
+This example invokes Service A as `http://servicea`. The service name is the name you specified when you created the Azure Spring Apps application (for example, `az spring app create --name ServiceA`). Because the application name matches the name registered with the Service Registry, you can address the service by the same name, which simplifies service-name management. If you prefer an explicit lookup instead, see the sketch that follows.
+
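The following method is a hypothetical addition to the `ServiceBEndpoint` class shown above: it resolves Service A explicitly through the injected `EurekaClient` rather than relying on name-based resolution. The `/invoke-serviceA-explicit` route name is illustrative, not part of the sample:

```java
// Hypothetical addition to ServiceBEndpoint; discoveryClient is the EurekaClient
// field already injected in the class above.
// Requires one extra import: com.netflix.appinfo.InstanceInfo
@GetMapping(value = "/invoke-serviceA-explicit")
public String invokeServiceAExplicitly() {
    // Ask Eureka for the next available instance registered as "servicea".
    InstanceInfo instance = discoveryClient.getNextServerFromEureka("servicea", false);
    // getHomePageUrl() returns a base URL ending in "/", so append the path directly.
    String response = new RestTemplate()
        .getForObject(instance.getHomePageUrl() + "serviceA", String.class);
    return "INVOKE SERVICE A (EXPLICIT LOOKUP): " + response;
}
```
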
+### Build Service B
+
+Use the following command to build your project.
+
+```bash
+mvn clean package
+```
+
+## Deploy Service B to Azure Spring Apps
+
+Use the following command to create an application in Azure Spring Apps to deploy Service B.
+
+```azurecli
+az spring app create \
+ --resource-group $RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_NAME \
+ --name serviceB \
+ --instance-count 1 \
+ --memory 2Gi \
+ --assign-endpoint
+```
+
+Next, use the following command to bind the application to the Service Registry.
+
+```azurecli
+az spring service-registry bind \
+ --resource-group $RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_NAME \
+ --app serviceB
+```
+
+Next, use the following command to deploy the service.
-1. Open the **App binding** tab.
+```azurecli
+az spring app deploy \
+ --resource-group $RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_NAME \
+ --name serviceB \
+ --artifact-path ./target/Sample-Service-B-0.0.1-SNAPSHOT.jar \
+ --jvm-options="-Xms1024m -Xmx1024m"
+```
+
+Next, use the following command to check the status of the application.
+
+```azurecli
+az spring app list \
+ --resource-group $RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_NAME \
+ --output table
+```
+
+If Service A and Service B are deployed correctly, this command will produce output similar to the following example.
-1. Select **Bind app** and choose one app in the dropdown, then select **Apply** to bind.
+```output
+Name Location ResourceGroup Public Url Production Deployment Provisioning State CPU Memory Running Instance Registered Instance Persistent Storage Bind Service Registry Bind Application Configuration Service
+--------  -------------  ----------------------  --------------------------------------------------------------  -----------------------  --------------------  -----  --------  ------------------  ---------------------  --------------------  -----------------------  ----------------------------------------
+servicea southeastasia SpringCloud-Enterprise https://$AZURE_SPRING_APPS_NAME-servicea.azuremicroservices.io default Succeeded 1 2Gi 1/1 1/1 - default -
+serviceb southeastasia SpringCloud-Enterprise https://$AZURE_SPRING_APPS_NAME-serviceb.azuremicroservices.io default Succeeded 1 2Gi 1/1 1/1 - default -
+```
+
+## Invoke Service A from Service B
+
+The output of the previous command includes the public URL for the service. To access the RESTful endpoint, append `/invoke-serviceA` to the URL, as shown in the following command:
+
+```bash
+curl https://$AZURE_SPRING_APPS_NAME-serviceb.azuremicroservices.io/invoke-serviceA
+```
- :::image type="content" source="media/enterprise/how-to-enterprise-service-registry/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Service Registry page and 'App binding' section with 'Bind app' dropdown showing.":::
+This command produces the following output:
+
+```output
+INVOKE SERVICE A FROM SERVICE B: This is a result of Service A
+```
+
+### Get some information from Service Registry
+
+Finally, access the `/list-all` endpoint to retrieve information from the Service Registry. The following command retrieves the list of services registered there.
+
+```bash
+curl https://$AZURE_SPRING_APPS_NAME-serviceb.azuremicroservices.io/list-all
+```
+
+This command produces the following output.
+
+```output
+["SERVICEA","EUREKA-SERVER","SERVICEB"]
+```
- > [!NOTE]
- > When you change the bind/unbind status, you must restart or redeploy the app to make the change take effect.
+In this way, your application can obtain detailed information about registered services from the Service Registry as needed.
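
As a sketch of that idea, the following hypothetical endpoint could be added to the `ServiceBEndpoint` class to inspect the registered instances of Service A. The `/instances` route name is illustrative:

```java
// Hypothetical addition to ServiceBEndpoint; all needed imports (Application,
// List, Collectors) are already present in the class shown earlier.
@GetMapping(value = "/instances")
public List<String> listServiceAInstances() {
    // Look up the application by the name it's registered under in /list-all.
    Application app = discoveryClient.getApplication("SERVICEA");
    if (app == null) {
        return List.of("SERVICEA is not registered");
    }
    // Report host, port, and status for each registered instance.
    return app.getInstances().stream()
        .map(i -> i.getHostName() + ":" + i.getPort() + " (" + i.getStatus() + ")")
        .collect(Collectors.toList());
}
```
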
## Next steps
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png)| ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png)| ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
### Metrics in Azure Monitor
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The items that appear in these tables will change over time as support continues
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
The items that appear in these tables will change over time as support continues
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
| [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
synapse-analytics Synapse Workspace Access Control Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-access-control-overview.md
If a feature is disabled in Synapse Studio, a tooltip will indicate the required
- Learn more about [Synapse RBAC](./synapse-workspace-synapse-rbac.md)
- Learn more about [Synapse RBAC roles](./synapse-workspace-synapse-rbac-roles.md)
-- Learn [How to set up access control](./how-to-set-up-access-control.md) for a Synapse Workspace using security groups.
-- Learn [How to review Synapse RBAC role assignments](./how-to-review-synapse-rbac-role-assignments.md)
-- Learn [How to manage Synapse RBAC role assignments](./how-to-manage-synapse-rbac-role-assignments.md)
+- Learn [how to set up access control](./how-to-set-up-access-control.md) for a Synapse Workspace using security groups.
+- Learn [how to review Synapse RBAC role assignments](./how-to-review-synapse-rbac-role-assignments.md)
+- Learn [how to manage Synapse RBAC role assignments](./how-to-manage-synapse-rbac-role-assignments.md)
virtual-wan How To Nva Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-nva-hub.md
Title: 'Azure Virtual WAN: Create a Network Virtual Appliance (NVA) in the hub'
description: Learn how to deploy a Network Virtual Appliance in the Virtual WAN hub.
- Previously updated : 07/29/2021
+ Last updated : 06/17/2022
# Customer intent: As someone with a networking background, I want to create a Network Virtual Appliance (NVA) in my Virtual WAN hub.
This article shows you how to use Virtual WAN to connect to your resources in Az
The steps in this article help you create a **Barracuda CloudGen WAN** Network Virtual Appliance in the Virtual WAN hub. To complete this exercise, you must have a Barracuda Cloud Premise Device (CPE) and a license for the Barracuda CloudGen WAN appliance that you deploy into the hub before you begin.
-For deployment documentation of **Cisco SD-WAN** within Azure Virtual WAN, see [Cisco Cloud OnRamp for Multi-Cloud](https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/cloudonramp/ios-xe-17/cloud-onramp-book-xe/cloud-onramp-multi-cloud.html#Cisco_Concept.dita_c61e0e7a-fff8-4080-afee-47b81e8df701).
+For deployment documentation of **Cisco SD-WAN** within Azure Virtual WAN, see [Cisco Cloud OnRamp for Multi-Cloud](https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/cloudonramp/ios-xe-17/cloud-onramp-book-xe/cloud-onramp-multi-cloud.html#Cisco_Concept.dita_c61e0e7a-fff8-4080-afee-47b81e8df701).
For deployment documentation of **VMware SD-WAN** within Azure Virtual WAN, see [Deployment Guide for VMware SD-WAN in Virtual WAN Hub](https://kb.vmware.com/s/article/82746).

## Prerequisites
-Verify that you have met the following criteria before beginning your configuration:
+Verify that you've met the following criteria before beginning your configuration:
* Obtain a license for your Barracuda CloudGen WAN gateway. To learn more about how to do this, see the [Barracuda CloudGen WAN Documentation](https://www.barracuda.com/products/cloudgenwan).
* You have a virtual network that you want to connect to. Verify that none of the subnets of your on-premises networks overlap with the virtual networks that you want to connect to. To create a virtual network in the Azure portal, see the [Quickstart](../virtual-network/quick-create-portal.md).
-* Your virtual network does not have any virtual network gateways. If your virtual network has a gateway (either VPN or ExpressRoute), you must remove all gateways. This configuration requires that virtual networks are connected instead, to the Virtual WAN hub gateway.
+* Your virtual network doesn't have any virtual network gateways. If your virtual network has a gateway (either VPN or ExpressRoute), you must remove all gateways. This configuration requires that virtual networks are connected instead, to the Virtual WAN hub gateway.
-* Obtain an IP address range for your hub region. The hub is a virtual network that is created and used by Virtual WAN. The address range that you specify for the hub cannot overlap with any of your existing virtual networks that you connect to. It also cannot overlap with your address ranges that you connect to your on-premises sites. If you are unfamiliar with the IP address ranges located in your on-premises network configuration, coordinate with someone who can provide those details for you.
+* Obtain an IP address range for your hub region. The hub is a virtual network that is created and used by Virtual WAN. The address range that you specify for the hub can't overlap with any of your existing virtual networks that you connect to. It also can't overlap with your address ranges that you connect to your on-premises sites. If you're unfamiliar with the IP address ranges located in your on-premises network configuration, coordinate with someone who can provide those details for you.
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Verify that you have met the following criteria before beginning your configurat
## <a name="hub"></a>Create a hub
-A hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, point-to-site, or Network Virtual Appliance functionality. Once the hub is created, you'll be charged for the hub, even if you don't attach any sites. If you choose to create a site-to-site VPN gateway, it takes 30 minutes to create the site-to-site VPN gateway in the virtual hub. Unlike site-to-site, ExpressRoute, or point-to-site, the hub must be created first before you can deploy a Network Virtual Appliance into the hub VNet.
+Create a virtual hub by filling out the **Basics** tab to create an empty virtual hub (a virtual hub that doesn't contain any gateways).
-1. Locate the Virtual WAN that you created. On the **Virtual WAN** page, under the **Connectivity** section, select **Hubs**.
-1. On the **Hubs** page, select +New Hub to open the **Create virtual hub** page.
-
- :::image type="content" source="./media/how-to-nva-hub/vwan-hub.png" alt-text="Basics":::
-1. On the **Create virtual hub** page **Basics** tab, complete the following fields:
-
- **Project details**
-
- * Region (previously referred to as Location)
- * Name
- * Hub private address space. The minimum address space is /24 to create a hub, which implies anything range from /25 to /32 will produce an error during creation. Azure Virtual WAN, being a managed service by Microsoft, creates the appropriate subnets in the virtual hub for the different gateways/services. (For example: Network Virtual Appliances, VPN gateways, ExpressRoute gateways, User VPN/Point-to-site gateways, Firewall, Routing, etc.). There is no need for the user to explicitly plan for subnet address space for the services in the Virtual hub because Microsoft does this as a part of the service.
-1. Select **Review + Create** to validate.
-1. Select **Create** to create the hub.
## Create the Network Virtual Appliance in the hub
-In this step, you will create a Network Virtual Appliance in the hub. The procedure for each NVA will be different for each NVA partner's product. For this example, we are creating a Barracuda CloudGen WAN Gateway.
+In this step, you'll create a Network Virtual Appliance in the hub. The procedure for each NVA will be different for each NVA partner's product. For this example, we're creating a Barracuda CloudGen WAN gateway.
1. Locate the Virtual WAN hub you created in the previous step and open it.
- :::image type="content" source="./media/how-to-nva-hub/nva-hub.png" alt-text="Virtual hub":::
-1. Find the Network Virtual Appliances tile and select the **Create** link.
-1. On the **Network Virtual Appliance** blade, select **Barracuda CloudGen WAN**, then select the **Create** button.
+ :::image type="content" source="./media/how-to-nva-hub/nva-hub.png" alt-text="Screenshot of the Network Virtual Appliance tile." lightbox="./media/how-to-nva-hub/nva-hub.png":::
- :::image type="content" source="./media/how-to-nva-hub/select-nva.png" alt-text="Select NVA":::
-1. This will take you to the Azure Marketplace offer for the Barracuda CloudGen WAN gateway. Read the terms, then select the **Create** button when you're ready.
+1. Find the **Network Virtual Appliance** tile and select the **Create** link.
+1. On the **Network Virtual Appliance** page, from the dropdown, select **Barracuda CloudGen WAN**, then select the **Create** button and **Leave**. This takes you to the Azure Marketplace offer for the Barracuda CloudGen WAN gateway.
+1. Read the terms, select **Get it now**, then select **Continue** when you're ready. The page automatically changes to the page for the **Barracuda CloudGen WAN Gateway**. Select **Create** to open the **Basics** page for gateway settings.
- :::image type="content" source="./media/how-to-nva-hub/barracuda-create-basics.png" alt-text="Barracuda NVA basics":::
-1. On the **Basics** page you will need to provide the following information:
+   :::image type="content" source="./media/how-to-nva-hub/barracuda-create-basics.png" alt-text="Screenshot of the Basics page." lightbox="./media/how-to-nva-hub/barracuda-create-basics.png":::
+1. On the Create Barracuda CloudGen WAN Gateway **Basics** page, provide the following information:
    * **Subscription** - Choose the subscription you used to deploy the Virtual WAN and hub.
    * **Resource Group** - Choose the same Resource Group you used to deploy the Virtual WAN and hub.
    * **Region** - Choose the same Region in which your Virtual hub resource is located.
    * **Application Name** - The Barracuda NextGen WAN is a Managed Application. Choose a name that makes it easy to identify this resource, as this is what it will be called when it appears in your subscription.
    * **Managed Resource Group** - This is the name of the Managed Resource Group in which Barracuda will deploy resources that are managed by them. The name should be pre-populated for this.
-1. Select the **Next: CloudGen WAN gateway** button.
+1. Select **Next: CloudGen WAN gateway** to open the **Create Barracuda CloudGen WAN Gateway** page.
- :::image type="content" source="./media/how-to-nva-hub/barracuda-cloudgen-wan.png" alt-text="CloudGen WAN Gateway":::
-1. Provide the following information here:
+   :::image type="content" source="./media/how-to-nva-hub/barracuda-cloudgen-wan.png" alt-text="Screenshot of the Create Barracuda CloudGen WAN Gateway page." lightbox="./media/how-to-nva-hub/barracuda-cloudgen-wan.png":::
+1. On the **Create Barracuda CloudGen WAN Gateway** page, provide the following information:
    * **Virtual WAN Hub** - The Virtual WAN hub you want to deploy this NVA into.
    * **NVA Infrastructure Units** - Indicate the number of NVA Infrastructure Units you want to deploy this NVA with. Choose the amount of aggregate bandwidth capacity you want to provide across all of the branch sites that will be connecting to this hub through this NVA.
    * **Token** - Barracuda requires that you provide an authentication token here in order to identify yourself as a registered user of this product. You'll need to obtain this from Barracuda.
1. Select the **Review and Create** button to proceed.
-1. On this page, you will be asked to accept the terms of the Co-Admin Access agreement. This is standard with Managed Applications where the Publisher will have access to some resources in this deployment. Check the **I agree to the terms and conditions above** box, and then select **Create**.
+1. On this page, you'll be asked to accept the terms of the Co-Admin Access agreement. This is standard with Managed Applications where the Publisher will have access to some resources in this deployment. Check the **I agree to the terms and conditions above** box, and then select **Create**.
## <a name="vnet"></a>Connect the VNet to the hub