Updates from: 06/21/2021 03:06:46
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
The following samples illustrate web applications that sign in users. Some sampl
> | ASP.NET |[GitHub repo](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [MSAL.NET](https://aka.ms/msal-net) | | > | ASP.NET |[GitHub repo](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) | [Admin Restricted Scopes <br/> &#8226; Sign in users <br/> &#8226; call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) | [MSAL.NET](https://aka.ms/msal-net) | | > | ASP.NET |[GitHub repo](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp) | Microsoft Graph Training Sample | [MSAL.NET](https://aka.ms/msal-net) | |
-> | Java </p> Spring |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial) | Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Uses App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/roles) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java <br/> AAD Boot Starter | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
+> | Java </p> Spring |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial) | Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java <br/> AAD Boot Starter | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
> | Java </p> Servlets |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication) | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) | > | Java |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-webapp) | Sign in users, call Microsoft Graph | MSAL Java | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) | > | Java </p> Spring|[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-webapi) | Sign in users & call Microsoft Graph via OBO </p> &#8226; web API | MSAL Java | &#8226; [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) <br/> &#8226; [On-Behalf-Of (OBO) flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow) | > | Node.js </p> Express |[GitHub repo](https://github.com/Azure-Samples/ms-identity-node) | Express web app sample <br/> &#8226; Sign in users | MSAL Node | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
-> | Node.js </p> Express |[GitHub repo](https://github.com/Azure-Samples/ms-identity-node) | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md) | MSAL Node | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
+> | Node.js </p> Express |[GitHub repo](https://github.com/Azure-Samples/ms-identity-node) | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) | MSAL Node | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
> | Python </p> Flask |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | Flask Series <br/> &#8226; Sign in users <br/> &#8226; Sign in users (B2C) <br/> &#8226; Call Microsoft Graph <br/> &#8226; Deploy to Azure App Service | MSAL Python | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) | > | Python </p> Django |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-django-tutorial) | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| MSAL Python | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) | > | Python </p> Flask |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-webapp) | Flask standalone sample <br/> [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-webapp) | MSAL Python | [Auth code flow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) |
To learn about [samples](https://github.com/microsoftgraph/msgraph-community-sam
## See also
-[Microsoft Graph API conceptual and reference](/graph/use-the-api?context=graph%2fapi%2fbeta&view=graph-rest-beta&preserve-view=true)
+[Microsoft Graph API conceptual and reference](/graph/use-the-api?context=graph%2fapi%2fbeta&view=graph-rest-beta&preserve-view=true)
active-directory Active Directory How Subscriptions Associated Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
When a user signs up for a Microsoft cloud service, a new Azure AD tenant is
All of your users have a single *home* directory for authentication. Your users can also be guests in other directories. You can see both the home and guest directories for each user in Azure AD. + > [!Important] > When you associate a subscription with a different directory, users that have roles assigned using [Azure role-based access control](../../role-based-access-control/role-assignments-portal.md) lose their access. Classic subscription administrators, including Service Administrator and Co-Administrators, also lose access. >
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
The last successful sign-in provides potential insights into a user's continued
## How to detect inactive user accounts
-You detect inactive accounts by evaluating the **lastSignInDateTime** property exposed by the **signInActivity** resource type of the **Microsoft Graph** API. Using this property, you can implement a solution for the following scenarios:
+You detect inactive accounts by evaluating the **lastSignInDateTime** property exposed by the **signInActivity** resource type of the **Microsoft Graph** API. The **lastSignInDateTime** property shows the last time a user made a successful interactive sign-in to Azure AD. Using this property, you can implement a solution for the following scenarios:
- **Users by name**: In this scenario, you search for a specific user by name, which enables you to evaluate the lastSignInDateTime: `https://graph.microsoft.com/beta/users?$filter=startswith(displayName,'markvi')&$select=displayName,signInActivity`
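A minimal sketch of issuing the same query from the Azure CLI is shown below; it assumes you're signed in with an account that has permission to read users' sign-in activity, and the display name filter is a placeholder.
```azurecli-interactive
# Sketch: query Microsoft Graph (beta) for a user's last interactive sign-in.
# The display name filter is a placeholder; adjust it to the user you're checking.
az rest --method get \
  --url "https://graph.microsoft.com/beta/users?\$filter=startswith(displayName,'markvi')&\$select=displayName,signInActivity"
```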
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/create-from-template.md
To create an ASEv1 by using a Resource Manager template, see [Create an ILB ASE
<!--Links--> [quickstartilbasecreate]: https://azure.microsoft.com/resources/templates/web-app-asev2-ilb-create [quickstartasev2create]: https://azure.microsoft.com/resources/templates/web-app-asev2-create
-[quickstartconfiguressl]: https://azure.microsoft.com/resources/templates/201-web-app-ase-ilb-configure-default-ssl
+[quickstartconfiguressl]: https://azure.microsoft.com/resources/templates/web-app-ase-ilb-configure-default-ssl
[quickstartwebapponasev2create]: https://azure.microsoft.com/resources/templates/web-app-asp-app-on-asev2-create [examplebase64encoding]: https://powershellscripts.blogspot.com/2007/02/base64-encode-file.html
-[configuringDefaultSSLCertificate]: https://azure.microsoft.com/documentation/templates/201-web-app-ase-ilb-configure-default-ssl/
+[configuringDefaultSSLCertificate]: https://azure.microsoft.com/resources/templates/web-app-ase-ilb-configure-default-ssl/
[Intro]: ./intro.md [MakeExternalASE]: ./create-external-ase.md [MakeASEfromTemplate]: ./create-from-template.md
app-service Quickstart Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-arc.md
The following example creates a Node.js app. Replace `<app-name>` with a name th
```azurecli-interactive az webapp create \
- --plan myPlan
+ --plan myPlan \
--resource-group myResourceGroup \ --name <app-name> \ --custom-location $customLocationId \
az webapp deployment source config-zip --resource-group myResourceGroup --name <
> [!NOTE] > To use Log Analytics, you should've previously enabled it when [installing the App Service extension](manage-create-arc-environment.md#install-the-app-service-extension). If you installed the extension without Log Analytics, skip this step.
-Navigate to the [Log Analytics workspace that's configured with your App Service extension](manage-create-arc-environment.md#install-the-app-service-extension), then click Logs in the left navigation. Run the following sample query to show logs over the past 72 hours. Replace `<app-name>` with your web app name.
+Navigate to the [Log Analytics workspace that's configured with your App Service extension](manage-create-arc-environment.md#install-the-app-service-extension), then click Logs in the left navigation. Run the following sample query to show logs over the past 72 hours. Replace `<app-name>` with your web app name. If there's an error when running a query, try again in 10-15 minutes (there may be a delay for Log Analytics to start receiving logs from your application).
```kusto let StartTime = ago(72h);
To create a custom container app, run [az webapp create](/cli/azure/webapp#az_we
For example, try: ```azurecli-interactive
-az webapp create
+az webapp create \
+ --plan myPlan \
--resource-group myResourceGroup \ --name <app-name> \ --custom-location $customLocationId \
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/multiple-site-overview.md
In [Azure CLI](tutorial-multiple-sites-cli.md), you must use `--host-names` inst
* `(A-Z,a-z,0-9)` - alphanumeric characters * `-` - hyphen or minus * `.` - period as a delimiter
-* `*` - can match with multiple characters in the allowed range
-* `?` - can match with a single character in the allowed range
+* `*` - can match with multiple characters in the allowed range
+* `?` - can match with a single character in the allowed range
### Conditions for using wildcard characters and multiple host names in a listener:
-* You can only mention up to 5 host names in a single listener
-* Asterisk `*` can be mentioned only once in a component of a domain style name or host name. For example, component1*.component2*.component3. `(*.contoso-*.com)` is valid.
-* There can only be up to two asterisks `*` in a host name. For example, `*.contoso.*` is valid and `*.contoso.*.*.com` is invalid.
-* There can only be a maximum of 4 wildcard characters in a host name. For example, `????.contoso.com`, `w??.contoso*.edu.*` are valid, but `????.contoso.*` is invalid.
-* Using asterisk `*` and question mark `?` together in a component of a host name (`*?` or `?*` or `**`) is invalid. For example, `*?.contoso.com` and `**.contoso.com` are invalid.
+* You can only mention up to 5 host names in a single listener
+* Asterisk `*` can be mentioned only once in a component of a domain style name or host name. For example, component1*.component2*.component3. `(*.contoso-*.com)` is valid.
+* There can only be up to two asterisks `*` in a host name. For example, `*.contoso.*` is valid and `*.contoso.*.*.com` is invalid.
+* There can only be a maximum of 4 wildcard characters in a host name. For example, `????.contoso.com`, `w??.contoso*.edu.*` are valid, but `????.contoso.*` is invalid.
+* Using asterisk `*` and question mark `?` together in a component of a host name (`*?` or `?*` or `**`) is invalid. For example, `*?.contoso.com` and `**.contoso.com` are invalid.
### Considerations and limitations of using wildcard or multiple host names in a listener:
-* [SSL termination and End-to-End SSL](ssl-overview.md) requires you to configure the protocol as HTTPS and upload a certificate to be used in the listener configuration. If it is a multi-site listener, you can input the host name as well, usually this is the CN of the SSL certificate. When you are specifying multiple host names in the listener or use wildcard characters, you must consider the following:
- * If it is a wildcard hostname like *.contoso.com, you must upload a wildcard certificate with CN like *.contoso.com
- * If multiple host names are mentioned in the same listener, you must upload a SAN certificate (Subject Alternative Names) with the CNs matching the host names mentioned.
-* You cannot use a regular expression to mention the host name. You can only use wildcard characters like asterisk (*) and question mark (?) to form the host name pattern.
-* For backend health check, you cannot associate multiple [custom probes](application-gateway-probe-overview.md) per HTTP settings. Instead, you can probe one of the websites at the backend or use "127.0.0.1" to probe the localhost of the backend server. However, when you are using wildcard or multiple host names in a listener, the requests for all the specified domain patterns will be routed to the backend pool depending on the rule type (basic or path-based).
-* The properties "hostname" takes one string as input, where you can mention only one non-wildcard domain name and "hostnames" takes an array of strings as input, where you can mention up to 5 wildcard domain names. But both the properties cannot be used at once.
-* You cannot create a [redirection](redirect-overview.md) rule with a target listener which uses wildcard or multiple host names.
+* [SSL termination and End-to-End SSL](ssl-overview.md) requires you to configure the protocol as HTTPS and upload a certificate to be used in the listener configuration. If it is a multi-site listener, you can input the host name as well, usually this is the CN of the SSL certificate. When you are specifying multiple host names in the listener or use wildcard characters, you must consider the following:
+ * If it is a wildcard hostname like *.contoso.com, you must upload a wildcard certificate with CN like *.contoso.com
+ * If multiple host names are mentioned in the same listener, you must upload a SAN certificate (Subject Alternative Names) with the CNs matching the host names mentioned.
+* You cannot use a regular expression to mention the host name. You can only use wildcard characters like asterisk (*) and question mark (?) to form the host name pattern.
+* For backend health check, you cannot associate multiple [custom probes](application-gateway-probe-overview.md) per HTTP settings. Instead, you can probe one of the websites at the backend or use "127.0.0.1" to probe the localhost of the backend server. However, when you are using wildcard or multiple host names in a listener, the requests for all the specified domain patterns will be routed to the backend pool depending on the rule type (basic or path-based).
+* The "hostname" property takes one string as input, where you can mention only one non-wildcard domain name, and "hostnames" takes an array of strings as input, where you can mention up to 5 wildcard domain names. The two properties cannot be used at once.
+* You cannot create a [redirection](redirect-overview.md) rule with a target listener which uses wildcard or multiple host names.
See [create multi-site using Azure PowerShell](tutorial-multiple-sites-powershell.md) or [using Azure CLI](tutorial-multiple-sites-cli.md) for the step-by-step guide on how to configure wildcard host names in a multi-site listener.
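As a hedged illustration of the CLI shape (the gateway, frontend port, and host names below are placeholders, not values from this article), a multi-site listener with a wildcard and a specific host name might be created like this:
```azurecli-interactive
# Sketch: create a multi-site HTTP listener that matches a wildcard host name
# and a specific host name on an existing application gateway.
az network application-gateway http-listener create \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name wildcardListener \
  --frontend-port appGatewayFrontendPort \
  --host-names "*.contoso.com" "www.fabrikam.com"
```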
Learn how to configure multiple site hosting in Application Gateway
* [Using Azure PowerShell](tutorial-multiple-sites-powershell.md) * [Using Azure CLI](tutorial-multiple-sites-cli.md)
-You can visit [Resource Manager template using multiple site hosting](https://github.com/Azure/azure-quickstart-templates/blob/master/201-application-gateway-multihosting) for an end to end template-based deployment.
+You can visit [Resource Manager template using multiple site hosting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/application-gateway-multihosting) for an end to end template-based deployment.
attestation Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/basic-concepts.md
Example of JWT generated for an SGX enclave:
"x-ms-sgx-mrsigner": <SGX enclave msrigner value>, "x-ms-sgx-product-id": 1, "x-ms-sgx-svn": 1,
- "x-ms-ver": "1.0"
+ "x-ms-ver": "1.0",
+ "x-ms-sgx-config-id": "000102030405060708090a0b0c0d8f99000102030405060708090a0b0c860e9a000102030405060708090a0b7d0d0e9b000102030405060708090a740c0d0e9c",
+ "x-ms-sgx-config-svn": 3451,
+ "x-ms-sgx-isv-extended-product-id": "8765432143211234abcdabcdef123456",
+ "x-ms-sgx-isv-family-id": "1234567812344321abcd1234567890ab"
}.[Signature] ```+ Some of the claims used above are considered deprecated but are fully supported. It is recommended that all future code and tooling use the non-deprecated claim names. See [claims issued by Azure Attestation](claim-sets.md) for more information.
+The claims below will appear only in the attestation token generated for Intel® Xeon® Scalable processor-based server platforms. The claims will not appear if the SGX enclave is not configured with [Key Separation and Sharing Support](https://github.com/openenclave/openenclave/issues/3054).
+
+**x-ms-sgx-config-id**
+
+**x-ms-sgx-config-svn**
+
+**x-ms-sgx-isv-extended-product-id**
+
+**x-ms-sgx-isv-family-id**
+ ## Encryption of data at rest To safeguard customer data, Azure Attestation persists its data in Azure Storage. Azure storage provides encryption of data at rest as it's written into data centers, and decrypts it for customers to access it. This encryption occurs using a Microsoft managed encryption key.
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/overview.md
SGX refers to hardware-grade isolation, which is supported on certain Intel CPUs
Client applications can be designed to take advantage of SGX enclaves by delegating security-sensitive tasks to take place inside those enclaves. Such applications can then make use of Azure Attestation to routinely establish trust in the enclave and its ability to access sensitive data.
+Intel® Xeon® Scalable processors only support [ECDSA-based attestation solutions](https://software.intel.com/content/www/us/en/develop/topics/software-guard-extensions/attestation-services.html#Elliptic%20Curve%20Digital%20Signature%20Algorithm%20(ECDSA)%20Attestation) for remotely attesting SGX enclaves. Using the ECDSA-based attestation model, Azure Attestation supports validation of Intel® Xeon® E3 processors and Intel® Xeon® Scalable processor-based server platforms.
+ ### Open Enclave [Open Enclave](https://openenclave.io/sdk/) (OE) is a collection of libraries targeted at creating a single unified enclaving abstraction for developers to build TEE-based applications. It offers a universal secure app model that minimizes platform specificities. Microsoft views it as an essential stepping-stone toward democratizing hardware-based enclave technologies such as SGX and increasing their uptake on Azure.
Azure Attestation customers have expressed a requirement for Microsoft to be ope
Azure Attestation is the preferred choice for attesting TEEs as it offers the following benefits: - Unified framework for attesting multiple environments such as TPMs, SGX enclaves and VBS enclaves -- Multi-tenant service which allows configuration of custom attestation providers and policies to restrict token generation
+- Allows creation of custom attestation providers and configuration of policies to restrict token generation
- Offers regional shared providers which can attest with no configuration from users - Protects its data while-in use with implementation in an SGX enclave - Highly available service
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/extensions.md
Title: "Azure Arc enabled Kubernetes cluster extensions" Previously updated : 05/25/2021 Last updated : 06/18/2021
A conceptual overview of this feature is available in [Cluster extensions - Azur
| [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) | Allows you to provision an App Service Kubernetes environment on top of Azure Arc enabled Kubernetes clusters. | | [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) | Create and manage event grid resources such as topics and event subscriptions on top of Azure Arc enabled Kubernetes clusters. | | [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md) | Deploy and manage API Management gateway on Azure Arc enabled Kubernetes clusters. |
+| [Azure Arc enabled Machine Learning](../../machine-learning/how-to-attach-arc-kubernetes.md) | Deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters. |
## Usage of cluster extensions
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Add namespaces to the mesh by running the following command:
osm namespace add <namespace_name> ```
-More information about onboarding services can be found [here](https://docs.openservicemesh.io/docs/tasks_usage/onboard_services/).
+More information about onboarding services can be found [here](https://docs.openservicemesh.io/docs/tasks/onboard_services/).
### Configure OSM with Service Mesh Interface (SMI) policies
You can start with a [demo application](https://release-v0-8.docs.openservicemes
The OSM extension has [Jaeger](https://www.jaegertracing.io/docs/getting-started/), [Prometheus](https://prometheus.io/docs/prometheus/latest/installation/) and [Grafana](https://grafana.com/docs/grafana/latest/installation/) installation disabled by default so that users can integrate OSM with their own running instances of those tools instead. To integrate with your own instances, check the following documentation: -- [BYO-Jaeger instance](https://github.com/openservicemesh/osm-docs/blob/main/content/docs/tasks_usage/observability/tracing.md#byo-bring-your-own)
+- [BYO-Jaeger instance](https://github.com/openservicemesh/osm-docs/blob/main/content/docs/tasks/observability/tracing.md#byo-bring-your-own)
- To set the values described in this documentation, you will need to update the `osm-config` ConfigMap with the following settings: ```json {
azure-cache-for-redis Cache Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-network-isolation.md
Azure Private Link provides private connectivity from a virtual network to Azure
### Limitations * Network security groups (NSG) are disabled for private endpoints. However, if there are other resources on the subnet, NSG enforcement will apply to those resources.
-* Not supported yet: Geo-replication, firewall rules, portal console support, multiple endpoints per clustered cache, persistence to firewall, and VNet injected caches.
+* Currently, portal console support and persistence to firewall storage accounts are not supported.
* To connect to a clustered cache, `publicNetworkAccess` needs to be set to `Disabled` and there can only be one private endpoint connection. > [!NOTE]
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-private-link.md
Azure Private Endpoint is a network interface that connects you privately and se
* Azure subscription - [create one for free](https://azure.microsoft.com/free/) > [!IMPORTANT]
-> Currently, zone redundancy, portal console support, and persistence to firewall storage accounts are not supported.
+> Currently, portal console support and persistence to firewall storage accounts are not supported.
> >
If your cache is already a VNet injected cache, private endpoints cannot be used
### What features aren't supported with private endpoints?
-Currently, zone redundancy, portal console support, and persistence to firewall storage accounts are not supported.
+Currently, portal console support and persistence to firewall storage accounts are not supported.
### How can I change my private endpoint to be disabled or enabled from public network access?
azure-cache-for-redis Cache Web App Arm With Redis Cache Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-web-app-arm-with-redis-cache-provision.md
You can use this template for your own deployments, or customize it to meet your
For more information about creating templates, see [Authoring Azure Resource Manager Templates](../azure-resource-manager/templates/syntax.md). To learn about the JSON syntax and properties for cache resource types, see [Microsoft.Cache resource types](/azure/templates/microsoft.cache/allversions).
-For the complete template, see [Web App with Azure Cache for Redis template](https://github.com/Azure/azure-quickstart-templates/blob/master/201-web-app-with-redis-cache/azuredeploy.json).
+For the complete template, see [Web App with Azure Cache for Redis template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/web-app-with-redis-cache/azuredeploy.json).
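As a hedged sketch, the template can also be deployed straight from GitHub with the Azure CLI; the raw URL below is inferred from the repository path above, and any required template parameters are prompted for if omitted.
```azurecli-interactive
# Sketch: deploy the quickstart template from its raw GitHub URL.
az deployment group create \
  --resource-group myResourceGroup \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/web-app-with-redis-cache/azuredeploy.json
```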
## What you will deploy In this template, you deploy:
azure-functions Functions How To Use Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-nat-gateway.md
Next, you create a function app in the [Premium plan](functions-premium-plan.md)
## Create a function app in a Premium plan
+This tutorial shows you how to create your function app in a [Premium plan](functions-premium-plan.md). The same functionality is also available when using a [Dedicated (App Service) plan](dedicated-plan.md).
> [!NOTE] > For the best experience in this tutorial, choose .NET for the runtime stack and choose Windows for the operating system. Also, create your function app in the same region as your virtual network.
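A minimal Azure CLI sketch of the same setup follows; the plan, app, and storage account names are placeholders, and the runtime flags assume the .NET stack suggested in the note above.
```azurecli-interactive
# Sketch: create an Elastic Premium (EP1) plan and a .NET function app in it.
az functionapp plan create --resource-group myResourceGroup --name myPremiumPlan \
  --location eastus --sku EP1
az functionapp create --resource-group myResourceGroup --name <APP_NAME> \
  --storage-account <STORAGE_NAME> --plan myPremiumPlan \
  --runtime dotnet --functions-version 3
```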
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
Learn more about how to develop and configure Azure Functions.
<!-- LINKS --> [Function app on Consumption plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json
-[Function app on Azure App Service plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/101-vm-simple-linux/azuredeploy.json
+[Function app on Azure App Service plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-simple-linux/azuredeploy.json
azure-functions Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/ip-addresses.md
IP addresses are associated with function apps, not with individual functions. I
Each function app has a single inbound IP address. To find that IP address:
+# [Azure portal](#tab/portal)
+ 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Navigate to the function app. 3. Under **Settings**, select **Properties**. The inbound IP address appears under **Virtual IP address**.
+# [Azure CLI](#tab/azurecli)
+
+Use the `nslookup` utility from your local client computer:
+
+```command
+nslookup <APP_NAME>.azurewebsites.net
+```
+++ ## <a name="find-outbound-ip-addresses"></a>Function app outbound IP addresses Each function app has a set of available outbound IP addresses. Any outbound connection from a function, such as to a back-end database, uses one of the available outbound IP addresses as the origin IP address. You can't know beforehand which IP address a given connection will use. For this reason, your back-end service must open its firewall to all of the function app's outbound IP addresses. To find the outbound IP addresses available to a function app:
+# [Azure portal](#tab/portal)
+ 1. Sign in to the [Azure Resource Explorer](https://resources.azure.com). 2. Select **subscriptions > {your subscription} > providers > Microsoft.Web > sites**. 3. In the JSON panel, find the site with an `id` property that ends in the name of your function app. 4. See `outboundIpAddresses` and `possibleOutboundIpAddresses`.
-The set of `outboundIpAddresses` is currently available to the function app. The set of `possibleOutboundIpAddresses` includes IP addresses that will be available only if the function app [scales to other pricing tiers](#outbound-ip-address-changes).
-
-An alternative way to find the available outbound IP addresses is by using the [Cloud Shell](../cloud-shell/quickstart.md):
+# [Azure CLI](#tab/azurecli)
```azurecli-interactive
-az webapp show --resource-group <group_name> --name <app_name> --query outboundIpAddresses --output tsv
-az webapp show --resource-group <group_name> --name <app_name> --query possibleOutboundIpAddresses --output tsv
+az functionapp show --resource-group <GROUP_NAME> --name <APP_NAME> --query outboundIpAddresses --output tsv
+az functionapp show --resource-group <GROUP_NAME> --name <APP_NAME> --query possibleOutboundIpAddresses --output tsv
```++
+The set of `outboundIpAddresses` is currently available to the function app. The set of `possibleOutboundIpAddresses` includes IP addresses that will be available only if the function app [scales to other pricing tiers](#outbound-ip-address-changes).
> [!NOTE]
-> When a function app that runs on the [Consumption plan](consumption-plan.md) or the [Premium plan](functions-premium-plan.md) is scaled, a new range of outbound IP addresses may be assigned. When running on either of these plans, you may need to add the entire data center to an allowlist.
+> When a function app that runs on the [Consumption plan](consumption-plan.md) or the [Premium plan](functions-premium-plan.md) is scaled, a new range of outbound IP addresses may be assigned. When running on either of these plans, you can't rely on the reported outbound IP addresses to create a definitive allowlist. To be able to include all potential outbound addresses used during dynamic scaling, you'll need to add the entire data center to your allowlist.
## Data center outbound IP addresses
The relative stability of the outbound IP address depends on the hosting plan.
Because of autoscaling behaviors, the outbound IP can change at any time when running on a [Consumption plan](consumption-plan.md) or in a [Premium plan](functions-premium-plan.md).
-If you need to control the outbound IP address of your function app, such as when you need to add it to an allow list, consider implementing a [virtual network NAT gateway](#virtual-network-nat-gateway-for-outbound-static-ip) in your premium plan.
+If you need to control the outbound IP address of your function app, such as when you need to add it to an allow list, consider implementing a [virtual network NAT gateway](#virtual-network-nat-gateway-for-outbound-static-ip) while running in a Premium hosting plan. You can also do this by running in a Dedicated (App Service) plan.
### Dedicated plans
There are several strategies to explore when your function app requires static,
### Virtual network NAT gateway for outbound static IP
-You can control the IP address of outbound traffic from your functions by using a virtual network NAT gateway to direct traffic through a static public IP address. You can use this topology when running in a [Premium plan](functions-premium-plan.md). To learn more, see [Tutorial: Control Azure Functions outbound IP with an Azure virtual network NAT gateway](functions-how-to-use-nat-gateway.md).
+You can control the IP address of outbound traffic from your functions by using a virtual network NAT gateway to direct traffic through a static public IP address. You can use this topology when running in a [Premium plan](functions-premium-plan.md) or in a [Dedicated (App Service) plan](dedicated-plan.md). To learn more, see [Tutorial: Control Azure Functions outbound IP with an Azure virtual network NAT gateway](functions-how-to-use-nat-gateway.md).
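A rough sketch of that topology with the Azure CLI follows; all resource names are placeholders, and the subnet is assumed to be the one used for the app's virtual network integration.
```azurecli-interactive
# Sketch: create a static public IP and a NAT gateway, then attach the gateway
# to the integration subnet so outbound traffic uses the static IP.
az network public-ip create --resource-group myResourceGroup --name myPublicIP --sku Standard
az network nat gateway create --resource-group myResourceGroup --name myNatGateway \
  --public-ip-addresses myPublicIP
az network vnet subnet update --resource-group myResourceGroup --vnet-name myVnet \
  --name myIntegrationSubnet --nat-gateway myNatGateway
```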
### App Service Environments
For full control over the IP addresses, both inbound and outbound, we recommend
To find out if your function app runs in an App Service Environment:
+# [Azure portal](#tab/portal)
+ 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Navigate to the function app. 3. Select the **Overview** tab. 4. The App Service plan tier appears under **App Service plan/pricing tier**. The App Service Environment pricing tier is **Isolated**.
-
-As an alternative, you can use the [Cloud Shell](../cloud-shell/quickstart.md):
+
+# [Azure CLI](#tab/azurecli)
```azurecli-interactive az webapp show --resource-group <group_name> --name <app_name> --query sku --output tsv ``` ++ The App Service Environment `sku` is `Isolated`. ## Next steps
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-creator-indoor-maps.md
To update the `occupied` state of the unit with feature `id` "UNIT26":
2. In the **Create New** window, select **Request**.
-3. Enter a **Request name** for the request, such as *POST Set Stateset*.
+3. Enter a **Request name** for the request, such as *PUT Set Stateset*.
4. Select the collection you previously created, and then select **Save**.
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Any alert instance describes the resource that was affected and the cause of the
### Log alerts > [!NOTE]
-> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook cannot use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. You can determine this by checking the flag `IncludeSearchResults`. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
+> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook cannot use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. You can determine this by checking the flag `IncludedSearchResults`. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
#### `monitoringService` = `Log Analytics`
Any alert instance describes the resource that was affected and the cause of the
] } ],
- "IncludeSearchResults": "True",
+ "IncludedSearchResults": "True",
"AlertType": "Metric measurement" } }
Any alert instance describes the resource that was affected and the cause of the
} ] },
- "IncludeSearchResults": "True",
+ "IncludedSearchResults": "True",
"AlertType": "Metric measurement" } }
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/sdk-connection-string.md
NetCore config.json:
# [Java](#tab/java)
+You can set the connection string in the `applicationinsights.json` configuration file:
-Java (v2.5.x) Explicitly Set:
-```java
-TelemetryConfiguration.getActive().setConnectionString("InstrumentationKey=00000000-0000-0000-0000-000000000000");
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000"
+}
```
-ApplicationInsights.xml
+See [connection string configuration](/azure/azure-monitor/app/java-standalone-config#connection-string) for more details.
+
+For Application Insights Java 2.x, you can set the connection string in the `ApplicationInsights.xml` configuration file:
+ ```xml <?xml version="1.0" encoding="utf-8"?> <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
- <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000;</ConnectionString>
+ <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000</ConnectionString>
</ApplicationInsights> ```
azure-monitor Deploy Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/deploy-scale.md
To view the built-in policy definitions related to monitoring, perform the follo
![Screenshot of the Azure Policy Definitions page in Azure portal showing a list of policy definitions for the Monitoring category and Built-in Type.](media/deploy-scale/builtin-policies.png)
-## Azure Monitor Agent (preview)
+## Azure Monitor Agent
The [Azure Monitor agent](agents/azure-monitor-agent-overview.md) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. It uses [Data Collection Rules](agents/data-collection-rule-overview.md) to configure data to collect from each agent, that enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. Use the policies and policy initiatives below to automatically install the agent and associate it to a data collection rule, every time you create a virtual machine.
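For a single machine, the association that these policies automate can also be sketched with the Azure CLI; this assumes the `monitor-control-service` extension is installed and uses placeholder resource IDs.
```azurecli-interactive
# Sketch: associate an existing data collection rule with a virtual machine.
az monitor data-collection rule association create \
  --name myVMAssociation \
  --rule-id /subscriptions/<SUB_ID>/resourceGroups/<RG>/providers/Microsoft.Insights/dataCollectionRules/myDCR \
  --resource /subscriptions/<SUB_ID>/resourceGroups/<RG>/providers/Microsoft.Compute/virtualMachines/myVM
```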
You may have scenarios where you want to install the Log Analytics agent but not
## Next steps - Read more about [Azure Policy](../governance/policy/overview.md).-- Read more about [diagnostic settings](essentials/diagnostic-settings.md).
+- Read more about [diagnostic settings](essentials/diagnostic-settings.md).
azure-monitor Log Analytics Workspace Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/log-analytics-workspace-insights-overview.md
In our demo workspace, you can clearly see that 3 Kubernetes clusters send far m
### Health tab
-This tab shows the workspace health state and when it was last reported, as well as operational errors and warnings (retrieved from the _LogOperation table).
+This tab shows the workspace health state and when it was last reported, as well as operational [errors and warnings](./monitor-workspace.md) (retrieved from the _LogOperation table).
+ :::image type="content" source="media/log-analytics-workspace-insights-overview/workspace-health.png" alt-text="Screenshot of the workspace health tab" lightbox="media/log-analytics-workspace-insights-overview/workspace-health.png":::
azure-monitor Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/monitor-workspace.md
Ingestion operations are issues that occurred during data ingestion including no
#### Operation: Data collection stopped
-Data collection stopped due to reaching the daily limit.
+"Data collection stopped due to daily limit of free data reached. Ingestion status = OverQuota"
In the past 7 days, log collection reached the daily limit. The limit is set either because the workspace is on the free tier or because a daily collection limit was configured for this workspace. Note that after reaching the set limit, data collection automatically stops for the day and resumes only on the next collection day.
Or, you can decide to ([Manage your maximum daily data volume](./manage-cost-sto
* Data collection rate is calculated per day, and will reset at the start of the next day, you can also monitor collection resume event by [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-reached) on "Data collection resumed" Operation event. #### Operation: Ingestion rate
-Ingestion rate limit approaching\passed the limit.
-
- Your ingestion rate has passed the 80%; at this point there is not issue. Note, data collected exceeding the threshold will be dropped. </br>
+"The data ingestion volume rate crossed the threshold in your workspace: {0:0.00} MB per one minute and data has been dropped."
Recommended Actions: * Check _LogOperation table for ingestion rate event
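A hedged sketch of checking the _LogOperation table from the Azure CLI is shown below; the workspace GUID is a placeholder.
```azurecli-interactive
# Sketch: summarize recent operational events recorded in the _LogOperation table.
az monitor log-analytics query \
  --workspace <WORKSPACE_GUID> \
  --analytics-query "_LogOperation | where TimeGenerated > ago(7d) | summarize count() by Operation, Level" \
  --output table
```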
For further information: </br>
#### Operation: Maximum table column count
-Custom fields count have reached the limit.
+"Data of type \<**table name**\> was dropped because number of fields \<**new fields count**\> is above the limit of \<**current field count limit**\> custom fields per data type."
Recommended Actions: For custom tables, you can move to [Parsing the data](./parse-text.md) in queries. #### Operation: Field content validation
-One of the fields of the data being ingested had more than 32 Kb in size, so it got truncated.
+"The following fields' values \<**field name**\> of type \<**table name**\> have been trimmed to the max allowed size, \<**field size limit**\> bytes. Please adjust your input accordingly."
-Log Analytics limits ingested fields size to 32 Kb, larger size fields will be trimmed to 32 Kb. We don't recommend sending fields larger than 32 Kb as the trim process might remove important information.
+A field larger than the size limit was processed by Azure logs; the field was trimmed to the allowed field limit. We don't recommend sending fields larger than the allowed limit as this will result in data loss.
Recommended Actions: Check the source of the affected data type:
Check the source of the affected data type:
### Data collection #### Operation: Azure Activity Log collection
+"Access to the subscription was lost. Ensure that the \<**subscription id**\> subscription is in the \<**tenant id**\> Azure Active Directory tenant. If the subscription is transferred to another tenant, there is no impact to the services, but information for the tenant could take up to an hour to propagate. '"
+ Description: In some situations, like moving a subscription to a different tenant, the Azure Activity logs might stop flowing in into the workspace. In those situations, we need to reconnect the subscription following the process described in this article. Recommended Actions:
Recommended Actions:
### Agent #### Operation: Linux Agent
+"Two successive configuration applications from OMS Settings failed"
+ Config settings on the portal have changed. Recommended Action
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
See [Register for Azure NetApp Files](azure-netapp-files-register.md) for more i
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-The following code snippet shows how to create a NetApp account in an Azure Resource Manager template (ARM template), using the [Microsoft.NetApp/netAppAccounts](/azure/templates/microsoft.netapp/netappaccounts) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/101-anf-nfs-volume/azuredeploy.json) from our GitHub repo.
+The following code snippet shows how to create a NetApp account in an Azure Resource Manager template (ARM template), using the [Microsoft.NetApp/netAppAccounts](/azure/templates/microsoft.netapp/netappaccounts) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json) from our GitHub repo.
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json" range="177-183":::
The following code snippet shows how to create a NetApp account in an Azure Reso
<!-- [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -->
-The following code snippet shows how to create a capacity pool in an Azure Resource Manager template (ARM template), using the [Microsoft.NetApp/netAppAccounts/capacityPools](/azure/templates/microsoft.netapp/netappaccounts/capacitypools) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/101-anf-nfs-volume/azuredeploy.json) from our GitHub repo.
+The following code snippet shows how to create a capacity pool in an Azure Resource Manager template (ARM template), using the [Microsoft.NetApp/netAppAccounts/capacityPools](/azure/templates/microsoft.netapp/netappaccounts/capacitypools) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json) from our GitHub repo.
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json" range="184-196":::
The following code snippet shows how to create a capacity pool in an Azure Resou
<!-- [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -->
-The following code snippets show how to set up a VNet and create an Azure NetApp Files volume in an Azure Resource Manager template (ARM template). VNet setup uses the [Microsoft.Network/virtualNetworks](/azure/templates/Microsoft.Network/virtualNetworks) resource. Volume creation uses the [Microsoft.NetApp/netAppAccounts/capacityPools/volumes](/azure/templates/microsoft.netapp/netappaccounts/capacitypools/volumes) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/101-anf-nfs-volume/azuredeploy.json) from our GitHub repo.
+The following code snippets show how to set up a VNet and create an Azure NetApp Files volume in an Azure Resource Manager template (ARM template). VNet setup uses the [Microsoft.Network/virtualNetworks](/azure/templates/Microsoft.Network/virtualNetworks) resource. Volume creation uses the [Microsoft.NetApp/netAppAccounts/capacityPools/volumes](/azure/templates/microsoft.netapp/netappaccounts/capacitypools/volumes) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json) from our GitHub repo.
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json" range="148-176":::
azure-relay Relay Create Namespace Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-create-namespace-portal.md
A namespace is a scoping container for all your Azure Relay components. Multiple
## Create a namespace in the Azure portal Congratulations! You have now created a relay namespace.
azure-relay Relay Hybrid Connections Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-hybrid-connections-dotnet-get-started.md
Last updated 06/23/2020
# Get started with Relay Hybrid Connections WebSockets in .NET In this quickstart, you create .NET sender and receiver applications that send and receive messages by using Hybrid Connections WebSockets in Azure Relay. To learn about Azure Relay in general, see [Azure Relay](relay-what-is-it.md).
To complete this tutorial, you need the following prerequisites:
* An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Create a namespace ## Create a hybrid connection ## Create a server application (listener) In Visual Studio, write a C# console application to listen for and receive messages from the relay. ## Create a client application (sender) In Visual Studio, write a C# console application to send messages to the relay. ## Run the applications 1. Run the server application.
azure-relay Relay Hybrid Connections Http Requests Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-hybrid-connections-http-requests-dotnet-get-started.md
Last updated 06/23/2020
# Get started with Relay Hybrid Connections HTTP requests in .NET In this quickstart, you create .NET sender and receiver applications that send and receive messages by using the HTTP protocol. The applications use Hybrid Connections feature of Azure Relay. To learn about Azure Relay in general, see [Azure Relay](relay-what-is-it.md).
To complete this tutorial, you need the following prerequisites:
* An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Create a namespace ## Create a hybrid connection ## Create a server application (listener) In Visual Studio, write a C# console application to listen for and receive messages from the relay. ## Create a client application (sender) In Visual Studio, write a C# console application to send messages to the relay. ## Run the applications 1. Run the server application. You see the following text in the console window:
azure-relay Relay Hybrid Connections Http Requests Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-hybrid-connections-http-requests-node-get-started.md
# Get started with Relay Hybrid Connections HTTP requests in Node In this quickstart, you create Node.js sender and receiver applications that send and receive messages by using the HTTP protocol. The applications use Hybrid Connections feature of Azure Relay. To learn about Azure Relay in general, see [Azure Relay](relay-what-is-it.md).
In this quickstart, you take the following steps:
- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Create a namespace using the Azure portal ## Create a hybrid connection using the Azure portal ## Create a server application (listener) To listen and receive messages from the Relay, write a Node.js console application. ## Create a client application (sender) To send messages to the Relay, you can use any HTTP client, or write a Node.js console application. ## Run the applications
azure-relay Relay Hybrid Connections Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-hybrid-connections-node-get-started.md
Title: Azure Relay Hybrid Connections - WebSockets in Node
-description: Write a Node.js console application for Azure Relay Hybrid Connections Websockets
+description: Write a Node.js console application for Azure Relay Hybrid Connections WebSockets
Last updated 06/23/2020
# Get started with Relay Hybrid Connections WebSockets in Node.js In this quickstart, you create Node.js sender and receiver applications that send and receive messages by using Hybrid Connections WebSockets in Azure Relay. To learn about Azure Relay in general, see [Azure Relay](relay-what-is-it.md).
In this quickstart, you take the following steps:
- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Create a namespace ## Create a hybrid connection ## Create a server application (listener) To listen and receive messages from the Relay, write a Node.js console application. ## Create a client application (sender) To send messages to the Relay, write a Node.js console application. ## Run the applications
azure-relay Service Bus Dotnet Hybrid App Using Service Bus Relay https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/service-bus-dotnet-hybrid-app-using-service-bus-relay.md
Once the installation is finished, you have everything necessary to start to dev
The first step is to create a namespace, and to obtain a [Shared Access Signature (SAS)](../service-bus-messaging/service-bus-sas.md) key. A namespace provides an application boundary for each application exposed through the relay service. An SAS key is automatically generated by the system when a service namespace is created. The combination of service namespace and SAS key provides the credentials for Azure to authenticate access to an application. ## Create an on-premises server
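As an aside on how the service namespace and SAS key combine into a credential, here is a minimal Python sketch (illustrative only, not part of the original walkthrough; the namespace URI, hybrid connection name, and key are placeholders, and `RootManageSharedAccessKey` is just the default key name) that derives a shared access signature token following the documented Service Bus SAS format, by HMAC-SHA256 signing the URL-encoded resource URI plus an expiry:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def create_sas_token(resource_uri: str, key_name: str, key: str, ttl_seconds: int = 3600) -> str:
    """Build a SharedAccessSignature token for a Relay/Service Bus resource."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # Sign "<encoded URI>\n<expiry>" with the SAS key using HMAC-SHA256.
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"),
                 f"{encoded_uri}\n{expiry}".encode("utf-8"),
                 hashlib.sha256).digest()).decode("utf-8")
    return (f"SharedAccessSignature sr={encoded_uri}"
            f"&sig={urllib.parse.quote_plus(signature)}&se={expiry}&skn={key_name}")

# Placeholder values; substitute your namespace, hybrid connection, key name, and key.
token = create_sas_token("https://<namespace>.servicebus.windows.net/<hybrid-connection>",
                         "RootManageSharedAccessKey", "<sas-key>")
```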
azure-relay Service Bus Relay Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/service-bus-relay-tutorial.md
To complete this tutorial, you need the following prerequisites:
The first step is to create a namespace, and to obtain a [Shared Access Signature (SAS)](../service-bus-messaging/service-bus-sas.md) key. A namespace provides an application boundary for each application exposed through the relay service. An SAS key is automatically generated by the system when a service namespace is created. The combination of service namespace and SAS key provides the credentials for Azure to authenticate access to an application. ## Define a WCF service contract
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/key-vault-parameter.md
description: Shows how to pass a secret from a key vault as a parameter during B
Previously updated : 06/16/2021 Last updated : 06/18/2021 # Use Azure Key Vault to pass secure parameter value during Bicep deployment
-Instead of putting a secure value (like a password) directly in your Bicep file or parameter file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. When a [module](./modules.md) expects a `string` parameter with `secure:true` modifier, you can use the [getSecret function](bicep-functions-resource.md#getsecret) to obtain a key vault secret. The value is never exposed because you only reference its key vault ID. The key vault can exist in a different subscription than the resource group you're deploying to.
+Instead of putting a secure value (like a password) directly in your Bicep file or parameter file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. When a [module](./modules.md) expects a `string` parameter with the `secure:true` modifier, you can use the [getSecret function](bicep-functions-resource.md#getsecret) to obtain a key vault secret. The value is never exposed because you only reference its key vault ID.
-This article's focus is how to pass a sensitive value as a Bicep parameter. The article doesn't cover how to set a virtual machine property to a certificate's URL in a key vault.
-For a quickstart template of that scenario, see [Install a certificate from Azure Key Vault on a Virtual Machine](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/vm-winrm-keyvault-windows).
+> [!IMPORTANT]
+> This article focuses on how to pass a sensitive value as a template parameter. When the secret is passed as a parameter, the key vault can exist in a different subscription than the resource group you're deploying to.
+>
+> This article doesn't cover how to set a virtual machine property to a certificate's URL in a key vault. For a quickstart template of that scenario, see [Install a certificate from Azure Key Vault on a Virtual Machine](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/vm-winrm-keyvault-windows).
## Deploy key vaults and secrets
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-support.md
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | actionGroups | Yes | Yes |
+> | actionGroups | Yes | No |
> | activityLogAlerts | Yes | Yes | > | alertrules | Yes | Yes | > | autoscalesettings | Yes | Yes |
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/key-vault-parameter.md
Title: Key Vault secret with template description: Shows how to pass a secret from a key vault as a parameter during deployment. Previously updated : 05/17/2021 Last updated : 06/18/2021 # Use Azure Key Vault to pass secure parameter value during deployment
-Instead of putting a secure value (like a password) directly in your template or parameter file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. You retrieve the value by referencing the key vault and secret in your parameter file. The value is never exposed because you only reference its key vault ID. The key vault can exist in a different subscription than the resource group you're deploying to.
+Instead of putting a secure value (like a password) directly in your template or parameter file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. You retrieve the value by referencing the key vault and secret in your parameter file. The value is never exposed because you only reference its key vault ID.
-This article's focus is how to pass a sensitive value as a template parameter. The article doesn't cover how to set a virtual machine property to a certificate's URL in a key vault.
-For a quickstart template of that scenario, see [Install a certificate from Azure Key Vault on a Virtual Machine](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/vm-winrm-keyvault-windows).
+> [!IMPORTANT]
+> This article focuses on how to pass a sensitive value as a template parameter. When the secret is passed as a parameter, the key vault can exist in a different subscription than the resource group you're deploying to.
+>
+> This article doesn't cover how to set a virtual machine property to a certificate's URL in a key vault. For a quickstart template of that scenario, see [Install a certificate from Azure Key Vault on a Virtual Machine](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/vm-winrm-keyvault-windows).
## Deploy key vaults and secrets
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of list* are shown in the following table.
| Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) | | Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/listkeys) | | Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) |
-| Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/getbuildsourceuploadurl) |
+| Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) |
| Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) | | Microsoft.ContainerRegistry/registries | [listUsages](/rest/api/containerregistry/registries/listusages) | | Microsoft.ContainerRegistry/registries/agentpools | listQueueStatus |
The possible uses of list* are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/workspacesandcomputes/machinelearningcompute/listkeys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/workspacesandcomputes/machinelearningcompute/listnodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspacesandcomputes/workspaces/listkeys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
Built-in policy definitions are tenant level resources. To deploy a policy assig
* For a description of the sections in an ARM template, see [Understand the structure and syntax of ARM templates](./syntax.md). * To merge multiple templates, see [Using linked and nested templates when deploying Azure resources](linked-templates.md). * To iterate a specified number of times when creating a type of resource, see [Resource iteration in ARM templates](copy-resources.md).
-* To see how to deploy the template you've created, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
+* To see how to deploy the template you've created, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
azure-resource-manager Template Spec Convert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-spec-convert.md
To see if you have any templates to convert, view the [template gallery in the p
To simplify converting templates in the template gallery, use a PowerShell script from the Azure Quickstart Templates repo. When you run the script, you can either create a new template spec for each template or download a template that creates the template spec. The script doesn't delete the template from the template gallery.
-1. Copy the [migration script](https://github.com/Azure/azure-quickstart-templates/blob/master/201-templatespec-migrate-create/Migrate-GalleryItems.ps1). Save a local copy with the name *Migrate-GalleryItems.ps1*.
+1. Copy the [migration script](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.resources/templatespec-migrate-create/Migrate-GalleryItems.ps1). Save a local copy with the name *Migrate-GalleryItems.ps1*.
1. To create new template specs, provide values for the `-ResourceGroupName` and `-Location` parameters. Set `ItemsToExport` to `MyGalleryItems` to export your templates. Set it to `AllGalleryItems` to export all templates you have access to.
azure-sql Azure Sql Iaas Vs Paas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/azure-sql-iaas-vs-paas-what-is-overview.md
For many businesses, the decision to transition to a cloud service is as much ab
For many IT departments, meeting up-time obligations of a service-level agreement (SLA) is a top priority. In this section, we look at what SLA applies to each database hosting option.
-For both **Azure SQL Database** and **Azure SQL Managed Instance**, Microsoft provides an availability SLA of 99.99%. For the latest information, see [Service-level agreement](https://azure.microsoft.com/support/legal/sla/sql-database/).
+For both **Azure SQL Database** and **Azure SQL Managed Instance**, Microsoft provides an availability SLA of 99.99%. For the latest information, see [Service-level agreement](https://azure.microsoft.com/support/legal/sla/azure-sql-database).
For **SQL on Azure VM**, Microsoft provides an availability SLA of 99.95% that covers just the virtual machine. This SLA does not cover the processes (such as SQL Server) running on the VM and requires that you host at least two VM instances in an availability set. For the latest information, see the [VM SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/). For database high availability (HA) within VMs, you should configure one of the supported high availability options in SQL Server, such as [Always On availability groups](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server). Using a supported high availability option doesn't provide an additional SLA, but allows you to achieve >99.99% database availability.
For **SQL on Azure VM**, Microsoft provides an availability SLA of 99.95% that c
- See [Your first Azure SQL Managed Instance](managed-instance/instance-create-quickstart.md) to get started with SQL Managed Instance. - See [SQL Database pricing](https://azure.microsoft.com/pricing/details/sql-database/). - See [Provision a SQL Server virtual machine in Azure](virtual-machines/windows/create-sql-vm-portal.md) to get started with SQL Server on Azure VMs.-- [Identify the right SQL Database or SQL Managed Instance SKU for your on-premises database](/sql/dma/dma-sku-recommend-sql-db/).
+- [Identify the right SQL Database or SQL Managed Instance SKU for your on-premises database](/sql/dma/dma-sku-recommend-sql-db/).
azure-video-analyzer Computer Vision For Spatial Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/computer-vision-for-spatial-analysis.md
The `CognitiveServicesVisionProcessor` node plays the role of a proxy. It conver
## Create the Computer Vision resource
-You need to create an Azure resource of type Computer Vision either on [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or via Azure CLI. You will be able to create the resource once your request for access to the container has been approved and your Azure Subscription ID has been registered. Go to https://aka.ms/csgate to submit your use case and your Azure Subscription ID. You need to create the Azure resource using the same Azure subscription that has been provided on the Request for Access form.
+You need to create an Azure resource of type Computer Vision either on [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or via Azure CLI.
### Gathering required parameters
Sample output for personZoneEvent (from `SpatialAnalysisPersonZoneCrossingOperat
``` ### More operations:-
+The `spatialAnalysis` module offers the following operations:
+
+- **personCount**
+- **personDistance**
+- **personCrossingLine**
+- **personZoneCrossing**
+- **customOperation**
+<br></br>
<details>
- <summary>Click to expand</summary>
+ <summary>Click to expand and see the different configuration options for each of the operations.</summary>
### Person Line Crossing
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-pop-locations.md
This article lists current metros containing point-of-presence (POP) locations,
| Africa | Johannesburg, South Africa <br/> Nairobi, Kenya | South Africa | | Middle East | Muscat, Oman<br />Fujirah, United Arab Emirates | Qatar<br />United Arab Emirates | | India | Bengaluru (Bangalore), India<br />Chennai, India<br />Mumbai, India<br />New Delhi, India<br /> | India |
-| Asia | Hong Kong<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Seoul, South Korea<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong<br />Indonesia<br />Israel<br />Japan<br />Macau<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Turkey<br />Vietnam |
+| Asia | Hong Kong<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong<br />Indonesia<br />Israel<br />Japan<br />Macau<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Turkey<br />Vietnam |
| Australia and New Zealand | Melbourne, Australia<br />Sydney, Australia<br />Auckland, New Zealand | Australia<br />New Zealand | ## Next steps
cognitive-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/data-feeds-from-different-sources.md
Title: How to add data feeds from different sources to Metrics Advisor
+ Title: Connect different data sources to Metrics Advisor
-description: add different data feeds to Metrics Advisor
+description: Add different data feeds to Metrics Advisor
Previously updated : 10/12/2020 Last updated : 05/26/2021
-# Add data feeds from different data sources to Metrics Advisor
-Use this article to find the settings and requirements for connecting different types of data sources to Metrics Advisor. Make sure to read how to [Onboard your data](how-tos/onboard-your-data.md) to learn about the key concepts for using your data with Metrics Advisor. \
+# How-to: Connect different data sources
+
+Use this article to find the settings and requirements for connecting different types of data sources to Metrics Advisor. Make sure to read how to [Onboard your data](how-tos/onboard-your-data.md) to learn about the key concepts for using your data with Metrics Advisor.
## Supported authentication types | Authentication types | Description | | |-|
-|**Basic** | You will need to be able to provide basic parameters for accessing data sources. For example, a connection string or key. Data feed admins are able to view these credentials. |
-| **AzureManagedIdentity** | [Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for Azure resources is a feature of Azure Active Directory. It provides Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication.|
-| **AzureSQLConnectionString**| Store your AzureSQL connection string as a **credential entity** in Metrics Advisor, and use it directly each time when onboarding metrics data. Only admins of the Credential entity are able to view these credentials, but enables authorized viewers to create data feeds without needing to know details for the credentials. |
-| **DataLakeGen2SharedKey**| Store your data lake account key as a **credential entity** in Metrics Advisor and use it directly each time when onboarding metrics data. Only admins of the Credential entity are able to view these credentials, but enables authorized viewers to create data feed without needing to know the credential details.|
-| **Service principal**| Store your service principal as a **credential entity** in Metrics Advisor and use it directly each time when onboarding metrics data. Only admins of Credential entity are able to view the credentials, but enables authorized viewers to create data feed without needing to know the credential details.|
-| **Service principal from key vault**|Store your service principal in a key vault as a **credential entity** in Metrics Advisor and use it directly each time when onboarding metrics data. Only admins of a **credential entity** are able to view the credentials, but also leave viewers able to create data feed without needing to know detailed credentials. |
+|**Basic** | You need to provide basic parameters for accessing data sources. For example, a connection string or a password. Data feed admins can view these credentials. |
+| **Azure Managed Identity** | [Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for Azure resources is a feature of Azure Active Directory. It provides Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication.|
+| **Azure SQL Connection String**| Store your Azure SQL connection string as a **credential entity** in Metrics Advisor, and use it directly each time you onboard metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know the credential details. |
+| **Data Lake Gen2 Shared Key**| Store your data lake account key as a **credential entity** in Metrics Advisor, and use it directly each time you onboard metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know the credential details.|
+| **Service principal**| Store your [Service Principal](../../active-directory/develop/app-objects-and-service-principals.md) as a **credential entity** in Metrics Advisor, and use it directly each time you onboard metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know the credential details.|
+| **Service principal from key vault**|Store your [Service Principal in a Key Vault](/azure/azure-stack/user/azure-stack-key-vault-store-credentials) as a **credential entity** in Metrics Advisor, and use it directly each time you onboard metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know the credential details. |
-## Data sources supported and corresponding authentication types
+## <span id ='jump1'>Create a credential entity to manage your credentials securely</span>
+
+You can create a **credential entity** to store credential-related information and use it to authenticate to your data sources. You can share the credential entity with others and enable them to connect to your data sources without sharing the actual credentials. A credential entity can be created on the 'Adding data feed' tab or the 'Credential entity' tab. After you create a credential entity for a specific authentication type, you can simply choose that credential entity when adding a new data feed, which is helpful when you create multiple data feeds. The procedure for creating and using a credential entity is shown below:
+
+1. Select '+' to create a new credential entity on the 'Adding data feed' tab (you can also create one on the 'Credential entity' tab).
+ ![create credential entity](media/create-credential-entity.png)
+
+2. Set the credential entity name, description (if needed), and credential type (the same as the *authentication type*).
+
+ ![set credential entity](media/set-credential-entity.png)
+
+3. After creating a credential entity, you can choose it when specifying the authentication type.
+
+ ![choose credential entity](media/choose-credential-entity.png)
+
+## Data sources supported and corresponding authentication types
| Data sources | Authentication Types | |-| |
-|[**Azure Application Insights**](#appinsights)| Basic |
-|[**Azure Blob Storage (JSON)**](#blob) | Basic<br>ManagedIdentity|
+|[**Azure Application Insights**](#appinsights) | Basic |
+|[**Azure Blob Storage (JSON)**](#blob) | Basic<br>ManagedIdentity |
|[**Azure Cosmos DB (SQL)**](#cosmosdb) | Basic |
-|[**Azure Data Explorer (Kusto)**](#kusto) | Basic<br>ManagedIdentity|
-|[**Azure Data Lake Storage Gen2**](#adl) | Basic<br>DataLakeGen2SharedKey<br>Service principal<br>Service principal from key vault<br> |
-|[**Azure SQL Database / SQL Server**](#sql) | Basic<br>ManagedIdentity<br>Service principal<br>Service principal from key vault<br>AzureSQLConnectionString
+|[**Azure Data Explorer (Kusto)**](#kusto) | Basic<br>Managed Identity<br>Service principal<br>Service principal from key vault |
+|[**Azure Data Lake Storage Gen2**](#adl) | Basic<br>Data Lake Gen2 Shared Key<br>Service principal<br>Service principal from key vault |
+|[**Azure Log Analytics**](#log) | Basic<br>Service principal<br>Service principal from key vault |
+|[**Azure SQL Database / SQL Server**](#sql) | Basic<br>Managed Identity<br>Service principal<br>Service principal from key vault<br>Azure SQL Connection String |
|[**Azure Table Storage**](#table) | Basic |
-|[**ElasticSearch**](#es) | Basic |
-|[**Http request**](#http) | Basic |
|[**InfluxDB (InfluxQL)**](#influxdb) | Basic | |[**MongoDB**](#mongodb) | Basic | |[**MySQL**](#mysql) | Basic |
-|[**PostgreSQL**](#pgsql)| Basic|
+|[**PostgreSQL**](#pgsql) | Basic|
+|[**Local files(CSV)**](#csv) | Basic|
-Create a Credential entity** and use it for authenticating to your data sources. The following sections specify the parameters required by for *Basic* authentication.
+The following sections specify the parameters required for all authentication types within different data source scenarios.
## <span id="appinsights">Azure Application Insights</span>
-* **Application ID**: This is used to identify this application when using the Application Insights API. To get the Application ID, do the following:
+* **Application ID**: This is used to identify this application when using the Application Insights API. To get the Application ID, take the following steps:
- 1. From your Application Insights resource, click API Access.
+ 1. From your Application Insights resource, click **API Access**.
+
+ ![Get application ID from your Application Insights resource](media/portal-app-insights-app-id.png)
- 2. Copy the Application ID generated into **Application ID** field in Metrics Advisor.
-
- See the [Azure Bot Service documentation](/azure/bot-service/bot-service-resources-app-insights-keys#application-id) for more information.
+ 2. Copy the generated Application ID into the **Application ID** field in Metrics Advisor.
-* **API Key**: API keys are used by applications outside the browser to access this resource. To get the API key, do the following:
+* **API Key**: API keys are used by applications outside the browser to access this resource. To get the API key, take the following steps:
- 1. From the Application Insights resource, click API Access.
+ 1. From the Application Insights resource, click **API Access**.
- 2. Click Create API Key.
+ 2. Click **Create API Key**.
- 3. Enter a short description, check the Read telemetry option, and click the Generate key button.
+ 3. Enter a short description, check the **Read telemetry** option, and click the **Generate key** button.
- 4. Copy the API key to the **API key** field in Metrics Advisor.
+ ![Get API key in Azure portal](media/portal-app-insights-app-id-api-key.png)
-* **Query**: Azure Application Insights logs are built on Azure Data Explorer, and Azure Monitor log queries use a version of the same Kusto query language. The [Kusto query language documentation](/azure/data-explorer/kusto/query/) has all of the details for the language and should be your primary resource for writing a query against Application Insights.
+ > [!WARNING]
+ > Copy this **API key** and save it because this key will never be shown to you again. If you lose this key, you have to create a new one.
+ 4. Copy the API key to the **API key** field in Metrics Advisor.
+* **Query**: Azure Application Insights logs are built on Azure Data Explorer, and Azure Monitor log queries use a version of the same Kusto query language. The [Kusto query language documentation](/azure/data-explorer/kusto/query) has all of the details for the language and should be your primary resource for writing a query against Application Insights.
+
+ Sample query:
+
+ ``` Kusto
+ [TableName] | where [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd;
+ ```
+ You can also refer to the [Tutorial: Write a valid query](tutorials/write-a-valid-query.md) for more specific examples.
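
  As an illustration of what the **Application ID** and **API key** are used for, here is a minimal Python sketch (illustrative only, not part of the original article; the app ID, key, and query are placeholders) that runs a query against the Application Insights REST API:

  ```python
  import requests

  app_id = "<application-id>"   # from API Access in your Application Insights resource
  api_key = "<api-key>"         # generated under API Access > Create API Key
  query = "requests | summarize count() by bin(timestamp, 1h)"

  resp = requests.get(
      f"https://api.applicationinsights.io/v1/apps/{app_id}/query",
      headers={"x-api-key": api_key},
      params={"query": query},
  )
  resp.raise_for_status()
  for table in resp.json()["tables"]:
      print(table["name"], len(table["rows"]), "rows")
  ```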
+
## <span id="blob">Azure Blob Storage (JSON)</span>
-* **Connection String**: See the Azure Blob Storage [connection string](../../storage/common/storage-configure-connection-string.md#configure-a-connection-string-for-an-azure-storage-account) article for information on retrieving this string.
+* **Connection String**: There are two authentication types for Azure Blob Storage (JSON): **Basic** and **Managed Identity**.
+
+ * **Basic**: See [Configure Azure Storage connection strings](../../storage/common/storage-configure-connection-string.md#configure-a-connection-string-for-an-azure-storage-account) for information on retrieving this string. You can also visit the Azure portal for your Azure Blob Storage resource and find the connection string directly in the **Settings > Access keys** section.
+
+ * **Managed Identity**: Managed identities for Azure resources can authorize access to blob and queue data using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services.
+
+ You can create a managed identity in the Azure portal for your Azure Blob Storage resource. In the **Access Control (IAM)** section, choose **Role assignments**, then click **Add**. A suggested role type is **Storage Blob Data Reader**. For more details, refer to [Use managed identity to access Azure Storage](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access-1).
+
+ ![MI blob](media/managed-identity-blob.png)
+
* **Container**: Metrics Advisor expects time series data stored as Blob files (one Blob per timestamp) under a single container. This is the container name field.
-* **Blob Template**: This is the template of the Blob file names. For example: `/%Y/%m/X_%Y-%m-%d-%h-%M.json`. The following parameters are supported:
- * `%Y` is the year formatted as `yyyy`
- * `%m` is the month formatted as `MM`
- * `%d` is the day formatted as `dd`
- * `%h` is the hour formatted as `HH`
- * `%M` is the minute formatted as `mm`
+* **Blob Template**: Metrics Advisor uses a path template to find the JSON files in your Blob storage. For example: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`. Here `%Y/%m` is the path; if you also partition by day, add `%d` after `%m`. If your JSON files are named by date, you could also use `%Y-%m-%d-%h-%M.json`.
+
+ The following parameters are supported:
+ * `%Y` is the year formatted as `yyyy`
+ * `%m` is the month formatted as `MM`
+ * `%d` is the day formatted as `dd`
+ * `%h` is the hour formatted as `HH`
+ * `%M` is the minute formatted as `mm`
+
+ For example, in the following dataset, the blob template should be "%Y/%m/%d/00/JsonFormatV2.json".
+
+ ![blob template](media/blob-template.png)
+
-* **JSON format version**: Defines the data schema in the JSON files. Currently Metrics Advisor supports two versions:
+* **JSON format version**: Defines the data schema in the JSON files. Currently, Metrics Advisor supports two versions; choose one to fill in the field:
- * v1 (Default value)
+ * **v1** (Default value)
Only the metrics *Name* and *Value* are accepted. For example:
Create a Credential entity** and use it for authenticating to your data sources.
{"count":11, "revenue":1.23} ```
- * v2
+ * **v2**
The metrics *Dimensions* and *timestamp* are also accepted. For example:
Create a Credential entity** and use it for authenticating to your data sources.
] ```
-Only one timestamp is allowed per JSON file.
+ Only one timestamp is allowed per JSON file.
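
For reference, here is a minimal Python sketch (assuming the `azure-storage-blob` package; the connection string and container name are placeholders) that writes a v2-format JSON blob to a path matching the template `%Y/%m/FileName_%Y-%m-%d-%h-%M.json` described above:

```python
import json
from datetime import datetime, timezone
from azure.storage.blob import BlobServiceClient

conn_str = "<storage-connection-string>"   # from Settings > Access keys
container = "<container-name>"

ts = datetime(2021, 6, 1, tzinfo=timezone.utc)
# Metrics Advisor's %h/%M placeholders map to strftime's %H/%M (hour/minute).
blob_name = ts.strftime("%Y/%m/FileName_%Y-%m-%d-%H-%M.json")

payload = [  # v2 schema: dimensions and a timestamp per record
    {"date": ts.strftime("%Y-%m-%dT%H:%M:%SZ"), "market": "en-us", "count": 11, "revenue": 1.23},
    {"date": ts.strftime("%Y-%m-%dT%H:%M:%SZ"), "market": "zh-cn", "count": 22, "revenue": 4.56},
]

client = BlobServiceClient.from_connection_string(conn_str)
client.get_blob_client(container, blob_name).upload_blob(json.dumps(payload), overwrite=True)
```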
## <span id="cosmosdb">Azure Cosmos DB (SQL)</span>
-* **Connection String**: The connection string to access your Azure Cosmos DB. This can be found in the Cosmos DB resource, in **Keys**.
-* **Database**: The database to query against. This can be found in the **Browse** page under **Containers** section.
-* **Collection ID**: The collection ID to query against. This can be found in the **Browse** page under **Containers** section.
-* **SQL Query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use the `@StartTime` and `@EndTime` variables in your query. They should be formatted: `yyyy-MM-dd HH:mm:ss`.
+* **Connection String**: The connection string to access your Azure Cosmos DB. This can be found in the Cosmos DB resource in Azure portal, in **Keys**. Also, you can find more information in [Secure access to data in Azure Cosmos DB](../../cosmos-db/secure-access-to-data.md).
+* **Database**: The database to query against. This can be found in the **Browse** page under **Containers** section in the Azure portal.
+* **Collection ID**: The collection ID to query against. This can be found in the **Browse** page under **Containers** section in the Azure portal.
+* **SQL Query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted: `yyyy-MM-ddTHH:mm:ssZ`.
Sample query:
- ``` mssql
- select StartDate, JobStatusId, COUNT(*) AS JobNumber from IngestionJobs WHERE and StartDate = @StartTime
+ ```SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
```+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
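Because several of the data sources below share the `@IntervalStart` and `@IntervalEnd` placeholders, here is a minimal Python sketch (illustrative only; the five-minute granularity and the assumption that the interval end equals the start plus the granularity are examples, not documented behavior) of the `yyyy-MM-ddTHH:mm:ssZ` values they resolve to:

```python
from datetime import datetime, timedelta, timezone

def format_interval_bound(moment: datetime) -> str:
    # @IntervalStart / @IntervalEnd are UTC timestamps formatted as yyyy-MM-ddTHH:mm:ssZ.
    return moment.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

interval_start = datetime(2021, 6, 1, 0, 0, tzinfo=timezone.utc)
granularity = timedelta(minutes=5)            # assumed example granularity
interval_end = interval_start + granularity   # assumed to cover one granularity interval

print(format_interval_bound(interval_start))  # 2021-06-01T00:00:00Z
print(format_interval_bound(interval_end))    # 2021-06-01T00:05:00Z
```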
+## <span id="kusto">Azure Data Explorer (Kusto)</span>
+
+* **Connection String**: There are four authentication types for Azure Data Explorer (Kusto): **Basic**, **Service Principal**, **Service Principal From Key Vault**, and **Managed Identity**. The data source in the connection string should be in URI format (it starts with 'https'); you can find the URI in the Azure portal.
- Sample query for a data slice from 2019/12/12:
+ * **Basic**: Metrics Advisor supports accessing Azure Data Explorer (Kusto) by using Azure AD application authentication. You need to create and register an Azure AD application and then authorize it to access an Azure Data Explorer database. For details, see [Create an AAD app registration in Azure Data Explorer](/azure/data-explorer/provision-azure-ad-app).
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>;AAD Federated Security=True;Application Client ID=<Application Client ID>;Application Key=<Application Key>;Authority ID=<Tenant ID>
+ ```
+
+ * **Service Principal**: A service principal is a concrete instance created from the application object and inherits certain properties from that application object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access. There are three steps to use a service principal in Metrics Advisor.
- ``` mssql
- select StartDate, JobStatusId, COUNT(*) AS JobNumber from IngestionJobs WHERE and StartDate = '2019-12-12 00:00:00'
- ```
+ **1. Create an Azure AD application registration.** See the first part of [Create an AAD app registration in Azure Data Explorer](/azure/data-explorer/provision-azure-ad-app).
-## <span id="kusto">Azure Data Explorer (Kusto)</span>
+ **2. Manage Azure Data Explorer database permissions.** See [Manage Azure Data Explorer database permissions](/azure/data-explorer/manage-database-permissions) to learn how to manage permissions for a service principal.
+
+ **3. Create a credential entity in Metrics Advisor.** See how to [create a credential entity](#jump1) in Metrics Advisor, so that you can choose that entity when adding a data feed with the Service Principal authentication type.
+
+ Here's an example of connection string:
+
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>
+ ```
-* **Connection String**: Metrics Advisor supports accessing Azure Data Explorer(Kusto) by using Azure AD application authentication. You will need to create and register an Azure AD application and then authorize it to access an Azure Data Explorer database. To get your connection string, see the [Azure Data Explorer](/azure/data-explorer/provision-azure-ad-app) documentation.
+ * **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. See [Store service principal credentials in Azure Stack Hub Key Vault](/azure-stack/user/azure-stack-key-vault-store-credentials) for the detailed procedure.
+ Here's an example of a connection string:
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>
+ ```
-* **Query**: See [Kusto Query Language](/azure/data-explorer/kusto/query) to get and formulate data into multi-dimensional time series data. You can use the `@StartTime` and `@EndTime` variables in your query. They should be formatted: `yyyy-MM-dd HH:mm:ss`.
+ * **Managed Identity**: Managed identity for Azure resources can authorize access to blob and queue data using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services. By using managed identity for Azure resources together with Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. Learn how to [authorize with a managed identity](../../storage/common/storage-auth-aad-msi.md#enable-managed-identities-on-a-vm).
+
+ You can create a managed identity in the Azure portal for your Azure Data Explorer (Kusto) resource: go to the **Permissions** section and click **Add**. The suggested role type is **admin** or **viewer**.
+
+ ![MI kusto](media/managed-identity-kusto.png)
+
+ Here's an example of a connection string:
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>
+ ```
+
+ <!-- For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples. -->
+
+* **Query**: See [Kusto Query Language](/azure/data-explorer/kusto/query) to get and formulate data into multi-dimensional time series data. You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted: `yyyy-MM-ddTHH:mm:ssZ`.
+
+ Sample query:
+
+ ``` Kusto
+ [TableName] | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd);
+ ```
+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
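
If you want to sanity-check the AAD application credentials and your query outside Metrics Advisor, here is a minimal Python sketch (assuming the `azure-kusto-data` package; the cluster URI, database, credentials, and table/column names are placeholders):

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster_uri = "https://<cluster>.<region>.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_aad_application_key_authentication(
    cluster_uri, "<application-client-id>", "<application-key>", "<tenant-id>")

client = KustoClient(kcsb)
# Same shape as the sample query above, with the interval bounds filled in.
query = ("MyTable | where Timestamp >= datetime(2021-06-01T00:00:00Z) "
         "and Timestamp < datetime(2021-06-01T00:05:00Z)")
response = client.execute("<database>", query)
for row in response.primary_results[0]:
    print(row)
```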
## <span id="adl">Azure Data Lake Storage Gen2</span>
-* **Account Name**: The account name of your Azure Data Lake Storage Gen2. This can be found in your Azure Storage Account (Azure Data Lake Storage Gen2) resource in **Access keys**.
+* **Account Name**: There are four authentication types for Azure Data Lake Storage Gen2: **Basic**, **Azure Data Lake Storage Gen2 Shared Key**, **Service Principal**, and **Service Principal From Key Vault**.
+
+ * **Basic**: The **Account Name** of your Azure Data Lake Storage Gen2. This can be found in your Azure Storage Account (Azure Data Lake Storage Gen2) resource in **Access keys**.
+
+ * **Azure Data Lake Storage Gen2 Shared Key**: First, specify the account key to access your Azure Data Lake Storage Gen2 (the same as the Account Key in the *Basic* authentication type). This can be found in your Azure Storage Account (Azure Data Lake Storage Gen2) resource, in the **Access keys** setting. Then [create a credential entity](#jump1) of the *Azure Data Lake Storage Gen2 Shared Key* type and fill in the account key.
+
+ The account name is the same as in the *Basic* authentication type.
+
+ * **Service Principal**: A service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+
+ The account name is the same as in the **Basic** authentication type.
+
+ **Step 1:** Create and register an Azure AD application and then authorize it to access a database. For details, see [Create an AAD app registration](/azure/data-explorer/provision-azure-ad-app).
+
+ **Step 2:** Assign roles.
+ 1. In the Azure portal, go to the **Storage accounts** service.
+
+ 2. Select the ADLS Gen2 account to use with this application registration.
+
+ 3. Click **Access Control (IAM)**.
-* **Account Key**: Please specify the account name to access your Azure Data Lake Storage Gen2. This could be found in Azure Storage Account (Azure Data Lake Storage Gen2) resource in **Access keys** setting.
+ 4. Click **+ Add** and select **Add role assignment** from the dropdown menu.
-* **File System Name (Container)**: Metrics Advisor will expect your time series data stored as Blob files (one Blob per timestamp) under a single container. This is the container name field. This can be found in your Azure storage account (Azure Data Lake Storage Gen2) instance, and click 'Containers' in 'Blob Service' section.
+ 5. Set the **Select** field to the Azure AD application name and set role to **Storage Blob Data Contributor**. Click **Save**.
+ ![lake-service-principals](media/datafeeds/adls-gen2-app-reg-assign-roles.png)
+
+ **Step 3:** [Create a credential entity](#jump1) in Metrics Advisor, so that you can choose that entity when adding a data feed with the Service Principal authentication type.
+
+
+ * **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. See [Store service principal credentials in Azure Stack Hub Key Vault](/azure-stack/user/azure-stack-key-vault-store-credentials) for the detailed procedure.
+ The account name is the same as in the *Basic* authentication type.
+
+
+* **Account Key** (only required for *Basic*): Specify the account key to access your Azure Data Lake Storage Gen2. This can be found in your Azure Storage Account (Azure Data Lake Storage Gen2) resource, in the **Access keys** setting.
+
+* **File System Name (Container)**: Metrics Advisor expects your time series data to be stored as Blob files (one Blob per timestamp) under a single container. This is the container name field. To find it, open your Azure storage account (Azure Data Lake Storage Gen2) instance and click **Containers** in the **Data Lake Storage** section; you'll see the container names there.
* **Directory Template**:
-This is the directory template of the Blob file. For example: */%Y/%m/%d*. The following parameters are supported:
- * `%Y` is the year formatted as `yyyy`
- * `%m` is the month formatted as `MM`
- * `%d` is the day formatted as `dd`
- * `%h` is the hour formatted as `HH`
- * `%M` is the minute formatted as `mm`
+ This is the directory template of the Blob file.
+ The following parameters are supported:
+ * `%Y` is the year formatted as `yyyy`
+ * `%m` is the month formatted as `MM`
+ * `%d` is the day formatted as `dd`
+ * `%h` is the hour formatted as `HH`
+ * `%M` is the minute formatted as `mm`
+
+ Directory template sample for a daily metric: `%Y/%m/%d`.
+
+ Directory template sample for an hourly metric: `%Y/%m/%d/%h`.
+
+ * **File Template**:
-This is the file template of the Blob file. For example: *X_%Y-%m-%d-%h-%M.json*. The following parameters are supported:
- * `%Y` is the year formatted as `yyyy`
- * `%m` is the month formatted as `MM`
- * `%d` is the day formatted as `dd`
- * `%h` is the hour formatted as `HH`
- * `%M` is the minute formatted as `mm`
-
-Currently Metrics Advisor supports the data schema in the JSON files as follows. For example:
-
-``` JSON
-[
- {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23},
- {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
-]
-```
+ Metrics Advisor uses this template to find the JSON files in your Blob storage. For example: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`. Here `%Y/%m` is the path; if you have `%d` in your path, add it after `%m`.
+ The following parameters are supported:
+ * `%Y` is the year formatted as `yyyy`
+ * `%m` is the month formatted as `MM`
+ * `%d` is the day formatted as `dd`
+ * `%h` is the hour formatted as `HH`
+ * `%M` is the minute formatted as `mm`
+
+ Currently Metrics Advisor supports the data schema in the JSON files as follows. For example:
+
+ ``` JSON
+ [
+ {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23},
+ {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
+ ]
+ ```
<!-- ## <span id="eventhubs">Azure Event Hubs</span>- * **Connection String**: This can be found in 'Shared access policies' in your Event Hubs instance. Also for the 'EntityPath', it could be found by clicking into your Event Hubs instance and clicking at 'Event Hubs' in 'Entities' blade. Items that listed can be input as EntityPath. -
-* **Consumer Group**: A [consumer group](../../event-hubs/event-hubs-features.md#consumer-groups) is a view (state, position, or offset) of an entire event hub.
+* **Consumer Group**: A [consumer group](https://docs.microsoft.com/azure/event-hubs/event-hubs-features#consumer-groups) is a view (state, position, or offset) of an entire event hub.
Event Hubs use the latest offset of a consumer group to consume (subscribe from) the data from data source. Therefore a dedicated consumer group should be created for one data feed in your Metrics Advisor instance.- * **Timestamp**: Metrics Advisor uses the Event Hubs timestamp as the event timestamp if the user data source does not contain a timestamp field. The timestamp field must match one of these two formats:
+* "YYYY-MM-DDTHH:MM:SSZ" format;
+* * Number of seconds or milliseconds from the epoch of 1970-01-01T00:00:00Z.
+ No matter which timestamp field it left aligns to granularity.For example, if timestamp is "2019-01-01T00:03:00Z", granularity is 5 minutes, then Metrics Advisor aligns the timestamp to "2019-01-01T00:00:00Z". If the event timestamp is "2019-01-01T00:10:00Z", Metrics Advisor uses the timestamp directly without any alignment.
+-->
- * "YYYY-MM-DDTHH:MM:SSZ" format;
+## <span id="log">Azure Log Analytics</span>
- * Number of seconds or milliseconds from the epoch of 1970-01-01T00:00:00Z.
+There are three authentication types for Azure Log Analytics: **Basic**, **Service Principal**, and **Service Principal From Key Vault**.
+* **Basic**: You need to fill in the **Tenant ID**, **Client ID**, **Client Secret**, and **Workspace ID**.
+ To get the **Tenant ID**, **Client ID**, and **Client Secret**, see [Register app or web API](../../active-directory/develop/quickstart-register-app.md).
+ * **Tenant ID**: Specify the tenant ID to access your Log Analytics.
+ * **Client ID**: Specify the client ID to access your Log Analytics.
+ * **Client Secret**: Specify the client secret to access your Log Analytics.
+ * **Workspace ID**: Specify the workspace ID of your Log Analytics workspace. You can find it in the Azure portal.
- No matter which timestamp field it left aligns to granularity.For example, if timestamp is "2019-01-01T00:03:00Z", granularity is 5 minutes, then Metrics Advisor aligns the timestamp to "2019-01-01T00:00:00Z". If the event timestamp is "2019-01-01T00:10:00Z", Metrics Advisor uses the timestamp directly without any alignment.
>
-## <span id="sql">Azure SQL Database | SQL Server</span>
+ ![workspace id](media/workspace-id.png)
+
+* **Service Principal**: A service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+
+ **Step 1:** Create and register an Azure AD application and then authorize it to access a database. For details, see the first part of [Create an AAD app registration](/azure/data-explorer/provision-azure-ad-app).
+
+ **Step 2:** Assign roles.
+ 1. In the Azure portal, go to the **Storage accounts** service.
+ 2. Click **Access Control (IAM)**.
+ 3. Click **+ Add** and select **Add role assignment** from the dropdown menu.
+ 4. Set the **Select** field to the Azure AD application name and set role to **Storage Blob Data Contributor**. Click **Save**.
+
+ ![lake-service-principals](media/datafeeds/adls-gen2-app-reg-assign-roles.png)
+
+
+ **Step 3:** [Create a credential entity](#jump1) in Metrics Advisor, so that you can choose that entity when adding a data feed with the Service Principal authentication type.
+
+* **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. See [Store service principal credentials in Azure Stack Hub Key Vault](/azure-stack/user/azure-stack-key-vault-store-credentials) for the detailed procedure.
-* **Connection String**: Metrics Advisor accepts an [ADO.NET Style Connection String](/dotnet/framework/data/adonet/connection-string-syntax) for sql server data source.
+* **Query**: Specify the Log Analytics query. For more information, see [Log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
- Sample connection string:
+ Sample query:
```
- Data Source=db-server.database.windows.net:[port];initial catalog=[database];User ID=[username];Password=[password];Connection Timeout=10ms;
+ [TableName]
+ | where [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ | summarize [count_per_dimension]=count() by [Dimension]
```
-* **Query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use a `@StartTime` variable in your query to help with getting expected metrics value.
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
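
  To verify the tenant ID, client ID, client secret, and workspace ID outside Metrics Advisor, here is a minimal Python sketch (assuming recent `azure-identity` and `azure-monitor-query` packages; all IDs, the secret, and the query are placeholders):

  ```python
  from datetime import datetime, timezone
  from azure.identity import ClientSecretCredential
  from azure.monitor.query import LogsQueryClient

  credential = ClientSecretCredential(
      tenant_id="<tenant-id>", client_id="<client-id>", client_secret="<client-secret>")
  client = LogsQueryClient(credential)

  response = client.query_workspace(
      workspace_id="<workspace-id>",
      query="Heartbeat | summarize count() by bin(TimeGenerated, 1h)",
      timespan=(datetime(2021, 6, 1, tzinfo=timezone.utc),
                datetime(2021, 6, 2, tzinfo=timezone.utc)),
  )
  for table in response.tables:
      print(len(table.rows), "rows")
  ```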
- * `@StartTime`: a datetime in the format of `yyyy-MM-dd HH:mm:ss`
+## <span id="sql">Azure SQL Database | SQL Server</span>
- Sample query:
+* **Connection String**: There are five authentication types for Azure SQL Database and SQL Server: **Basic**, **Managed Identity**, **Azure SQL Connection String**, **Service Principal**, and **Service Principal From Key Vault**.
- ``` mssql
- select StartDate, JobStatusId, COUNT(*) AS JobNumber from IngestionJobs WHERE and StartDate = @StartTime
- ```
+ * **Basic**: Metrics Advisor accepts an [ADO.NET style connection string](/dotnet/framework/data/adonet/connection-string-syntax) for a SQL Server data source.
+ Here's an example of a connection string:
- Actual query executed for data slice of 2019/12/12:
+ ```
+ Data Source=<Server>;Initial Catalog=<db-name>;User ID=<user-name>;Password=<password>
+ ```
- ``` mssql
- select StartDate, JobStatusId, COUNT(*) AS JobNumber from IngestionJobs WHERE and StartDate = '2019-12-12 00:00:00'
- ```
+ * <span id='jump'>**Managed Identity**</span>: Managed identity for Azure resources can authorize access to blob and queue data using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services. By using managed identity for Azure resources together with Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. To [enable your managed identity](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md), follow these steps:
+ 1. Enabling a system-assigned managed identity is a one-click experience. In the Azure portal, for your Metrics Advisor workspace, set the status to **On** in **Settings > Identity > System assigned**.
+
+ ![set status as on](media/datafeeds/set-identity-status.png)
+
+ 1. Enable Azure AD authentication. In the Azure portal for your data source, click **Set admin** in **Settings > Active Directory admin**, select an **Azure AD user account** to be made an administrator of the server, and click **Select**.
+
+ ![set admin](media/datafeeds/set-admin.png)
+
+ 1. In your database management tool, select **Active Directory - Universal with MFA support** in the authentication field. In the **User name** field, enter the name of the Azure AD account that you set as the server administrator in step 2, for example, test@contoso.com.
+
+ ![set connection detail](media/datafeeds/connection-details.png)
++
+ 1. The last step is to enable the managed identity (MI) in Metrics Advisor. In the **Object Explorer**, expand the **Databases** folder. Right-click a user database and click **New query**. In the query window, enter the following lines and click **Execute** in the toolbar:
+
+ ```
+ CREATE USER [MI Name] FROM EXTERNAL PROVIDER
+ ALTER ROLE db_datareader ADD MEMBER [MI Name]
+ ```
+
+ > [!NOTE]
+ > The `MI Name` is the **Managed Identity Name** in Metrics Advisor (for a service principal, it should be replaced with the **Service Principal name**). Also, you can learn more in [Authorize with a managed identity](../../storage/common/storage-auth-aad-msi.md#enable-managed-identities-on-a-vm).
+
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>
+ ```
+
+ * **Azure SQL Connection String**:
+
+
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>;User ID=<user-name>;Password=<password>
+ ```
+
+
+ * **Service Principal**: A service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+
+ **Step 1:** Create and register an Azure AD application and then authorize it to access a database. For details, see [Create an AAD app registration](/azure/data-explorer/provision-azure-ad-app).
+
+ **Step 2:** Follow the same steps as for [managed identity in SQL Server](#jump), described above.
+
+ **Step 3:** [Create a credential entity](#jump1) in Metrics Advisor, so that you can choose that entity when adding a data feed with the Service Principal authentication type.
+
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>
+ ```
+
+ * **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. See [Store service principal credentials in Azure Stack Hub Key Vault](/azure-stack/user/azure-stack-key-vault-store-credentials) for the detailed procedure. Also, your connection string can be found in your Azure SQL Server resource, in the **Settings > Connection strings** section.
+
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>
+ ```
+
+* **Query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use `@IntervalStart` and `@IntervalEnd` in your query to get the expected metric values in an interval. They should be formatted: `yyyy-MM-ddTHH:mm:ssZ`.
+
+ Sample query:
+
+ ```SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+
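To sanity-check a query locally before onboarding (Metrics Advisor substitutes the placeholders itself), here is a minimal Python sketch (assuming the `pyodbc` package and a local ODBC driver; the server, database, credentials, and table/column names are placeholders):

```python
from datetime import datetime
import pyodbc

# ODBC-style equivalent of the ADO.NET connection string shown above (Basic authentication).
conn_str = ("Driver={ODBC Driver 17 for SQL Server};"
            "Server=<server>.database.windows.net;Database=<db-name>;"
            "UID=<user-name>;PWD=<password>")

query = ("SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] "
         "FROM [TableName] WHERE [TimestampColumn] >= ? AND [TimestampColumn] < ?")

# Metrics Advisor fills @IntervalStart/@IntervalEnd itself; here we pass the bounds as parameters.
interval_start = datetime(2021, 6, 1, 0, 0)   # UTC, naive
interval_end = datetime(2021, 6, 1, 0, 5)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute(query, interval_start, interval_end)
    for row in cursor.fetchmany(5):
        print(row)
```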
## <span id="table">Azure Table Storage</span>
-* **Connection String**: Please create an SAS (shared access signature) URL and fill in here. The most straightforward way to generate a SAS URL is using the Azure portal. By using the Azure portal, you can navigate graphically. To create an SAS URL via the Azure portal, first, navigate to the storage account youΓÇÖd like to access under the Settings section then click Shared access signature. Check at least "Table" and "Object" checkboxes, then click the Generate SAS and connection string button. Table service SAS URL is what you need to copy and fill in the text box in the Metrics Advisor workspace.
+* **Connection String**: Create a shared access signature (SAS) URL and fill it in here. The most straightforward way to generate a SAS URL is by using the Azure portal. First, navigate to the storage account you'd like to access, then under **Settings**, select **Shared access signature**. Check at least the "Table" and "Object" checkboxes, then select the **Generate SAS and connection string** button. The Table service SAS URL is what you need to copy and fill in the text box in the Metrics Advisor workspace.
+
+ ![azure table generate sas](media/azure-table-generate-sas.png)
* **Table Name**: Specify a table to query against. This can be found in your Azure Storage Account instance. Click **Tables** in the **Table Service** section.
-* **Query**
-You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy-MM-ddTHH:mm:ss format string in script. Tip: Use Azure Storage Explorer to create a query with specific time range and make sure it runs okay, then do the replacement.
+* **Query**: You can use `@IntervalStart` and `@IntervalEnd` in your query to help with getting the expected metric values in an interval. They should be formatted as: `yyyy-MM-ddTHH:mm:ssZ`.
+ Sample query:
+
``` mssql
- date ge datetime'@StartTime' and date lt datetime'@EndTime'
+ PartitionKey ge '@IntervalStart' and PartitionKey lt '@IntervalEnd'
```
+ Refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+ ## <span id="es">Elasticsearch</span> * **Host**: Specify the master host of Elasticsearch Cluster.
You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy
* **Authorization Header**: Specify the authorization header value of Elasticsearch Cluster. * **Query**: Specify the query to get data. Placeholder `@StartTime` is supported. For example, when data of `2020-06-21T00:00:00Z` is ingested, `@StartTime = 2020-06-21T00:00:00`.
-## <span id="http">HTTP request</span>
* **Request URL**: An HTTP url that can return a JSON. The placeholders %Y,%m,%d,%h,%M are supported: %Y=year in format yyyy, %m=month in format MM, %d=day in format dd, %h=hour in format HH, %M=minute in format mm. For example: `http://microsoft.com/ProjectA/%Y/%m/X_%Y-%m-%d-%h-%M`. * **Request HTTP method**: Use GET or POST.
You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy
* **Connection String**: The connection string to access your InfluxDB. * **Database**: The database to query against. * **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.+
+ Sample query:
+
+ ``` SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+
+Refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+ * **User name**: This is optional for authentication. * **Password**: This is optional for authentication.
You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy
* **Connection String**: The connection string to access your MongoDB. * **Database**: The database to query against.
-* **Command**: A command to get and formulate data into multi-dimensional time series data for ingestion.
+* **Query**: A command to get and formulate data into multi-dimensional time series data for ingestion. We recommend verifying the command with [db.runCommand()](https://docs.mongodb.com/manual/reference/method/db.runCommand/).
+
+ Sample query:
+
+ ``` MongoDB
+ {"find": "[TableName]","filter": { [Timestamp]: { $gte: ISODate(@IntervalStart) , $lt: ISODate(@IntervalEnd) }},"singleBatch": true}
+ ```
+
## <span id="mysql">MySQL</span> * **Connection String**: The connection string to access your MySQL DB. * **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
+ Sample query:
+
+ ``` SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn]< @IntervalEnd
+ ```
+
+ Refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+ ## <span id="pgsql">PostgreSQL</span> * **Connection String**: The connection string to access your PostgreSQL DB. * **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
+ Sample query:
+
+ ``` SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+ Refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
+## <span id="csv">Local files(CSV)</span>
+
+> [!NOTE]
+> This feature is only used for quick system evaluation focusing on anomaly detection. It only accepts static data from a local CSV file and performs anomaly detection on single time series data. For the full experience of analyzing multi-dimensional metrics, including real-time data ingestion, anomaly notification, root cause analysis, and cross-metric incident analysis, use the other supported data sources.
+
+**Requirements on data in CSV:**
+- Have at least one column, which represents the measurements to be analyzed. For a better and quicker user experience, we recommend trying a CSV file containing two columns: a timestamp column and a metric column (see the sample after this list). The timestamp format should be like `2021-03-30T00:00:00Z` (the 'seconds' part is best set to `:00Z`), and the time granularity between records should be the same.
+- The timestamp column is optional. If there's no timestamp, Metrics Advisor will use a timestamp starting from today at 00:00:00 (UTC) and map each measure in a row at a one-hour interval. If there is a timestamp column in the CSV and you want to keep it, make sure the data time period follows the [historical data processing window] rule.
+- There is no re-ordering or gap-filling during data ingestion, so make sure the data in your CSV is ordered by timestamp in **ascending (ASC)** order.
+
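+A minimal sketch of a CSV that satisfies these requirements (hypothetical values, hourly granularity):
+
+```
+Timestamp,Metric
+2021-03-29T00:00:00Z,855
+2021-03-29T01:00:00Z,860
+2021-03-29T02:00:00Z,912
+```
+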
## Next steps * While waiting for your metric data to be ingested into the system, read about [how to manage data feed configurations](how-tos/manage-data-feeds.md).
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/glossary.md
A data feed is what Metrics Advisor ingests from your data source, such as Cosmo
* zero or more dimensions * one or more measures.
+## Interval
+Metrics need to be monitored at a certain granularity according to business requirements. For example, business Key Performance Indicators (KPIs) are monitored at daily granularity, while service performance metrics are often monitored at minute or hourly granularity. So the frequency of collecting metric data from sources differs.
+
+Metrics Advisor continuously grabs metrics data at each time interval; **the interval is equal to the granularity of the metrics.** Each time, Metrics Advisor runs the query you have written and ingests data at this specific interval. Based on this data ingestion mechanism, the query script **should not return all metric data that exists in the database, but needs to limit the result to a single interval.**
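+
+For example, a minimal sketch of such a query against a SQL-like source, limited to a single interval (the table and column names here are hypothetical):
+
+```SQL
+-- Return only the rows that belong to the interval currently being ingested.
+SELECT TimestampColumn, DimensionColumn, MetricColumn
+FROM SampleMetricsTable
+WHERE TimestampColumn >= @IntervalStart AND TimestampColumn < @IntervalEnd
+```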
+
+<!-- ![What is interval](media/tutorial/what-is-interval.png) -->
+ ## Metric A metric is a quantifiable measure that is used to monitor and assess the status of a specific business process. It can be a combination of multiple time series values divided into dimensions. For example a *web health* metric might contain dimensions for *user count* and the *en-us market*.
A measure is a fundamental or unit-specific term and a quantifiable value of the
A time series is a series of data points indexed (or listed or graphed) in chronological order. Most commonly, a time series is a sequence taken at successive, equally spaced points in time. It is a sequence of discrete-time data.
-In Metrics Advisor, values of one metric on a specific dimension combination is called one series.
+In Metrics Advisor, values of one metric on a specific dimension combination are called one series.
## Granularity Granularity indicates how frequently data points will be generated at the data source. For example, daily, hourly.
-## Start time
+## Ingest data since (UTC)
-Start time is the time that you want Metrics Advisor to begin ingesting data from your data source. Your data source must have data at the specified start time.
+Ingest data since (UTC) is the time that you want Metrics Advisor to begin ingesting data from your data source. Your data source must have data at the specified ingestion start time.
## Confidence boundaries
There are two roles to manage data feed permissions: *Administrator*, and *Viewe
## Next steps - [Metrics Advisor overview](overview.md)-- [Use the web portal](quickstarts/web-portal.md)
+- [Use the web portal](quickstarts/web-portal.md)
cognitive-services Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/configure-metrics.md
Use this article to start configuring your Metrics Advisor instance using the we
:::image type="content" source="../media/metrics/select-metric.png" alt-text="Select a metric" lightbox="../media/metrics/select-metric.png":::
-Click on one of the metric names to see its details. In this detailed view, you can switch to another metric in the same data feed using the drop down list in the top right corner of the screen.
+Select one of the metric names to see its details. In this detailed view, you can switch to another metric in the same data feed using the drop down list in the top right corner of the screen.
-When you first view a metrics' details, you can load a time series by letting Metrics Advisor choose one for you, or by specifying values to be included for each dimension.
+When you first view a metric's details, you can load a time series by letting Metrics Advisor choose one for you, or by specifying values to be included for each dimension.
You can also select time ranges, and change the layout of the page.
This configuration will be applied to the group of series or specific series ins
### Anomaly detection methods
-Metrics Advisor offers multiple anomaly detection methods. You can use one or combine them using logical operators by clicking the **+** button.
+Metrics Advisor offers multiple anomaly detection methods: **Hard threshold, Smart detection, Change threshold**. You can use one or combine them using logical operators by clicking the **'+'** button.
+
+**Hard threshold**
+
+ Hard threshold is a basic method for anomaly detection. You can set an upper and/or lower bound to determine the expected value range. Any points that fall outside the boundary will be identified as anomalies.
**Smart detection** Smart detection is powered by machine learning that learns patterns from historical data, and uses them for future detection. When using this method, the **Sensitivity** is the most important parameter for tuning the detection results. You can drag it to a smaller or larger value to affect the visualization on the right side of the page. Choose one that fits your data and save it.
-In smart detection mode, the sensitivity and boundary version parameters are used to fine tune the anomaly detection result.
+In smart detection mode, the sensitivity and boundary version parameters are used to fine-tune the anomaly detection result.
Sensitivity can affect the width of the expected value range of each point. When increased, the expected value range will be tighter, and more anomalies will be reported:
Use the following steps to use this mode:
* **Up** configures detection to only detect anomalies when (current data point) - (comparison data point) > **+** threshold percentage. * **Down** configures detection to only detect anomalies when (current data point) - (comparison data point) < **-** threshold percentage.
-
-**Hard threshold**
- Hard threshold is a basic method for anomaly detection. You can set an upper and/or lower bound to determine the expected value range. Any points fall out of the boundary will be identified as an anomaly.
## Preset events
Sometimes, expected events and occurrences (such as holidays) can generate anoma
> [!Note] > Preset event configuration will take holidays into consideration during anomaly detection, and may change your results. It will be applied to the data points ingested after you save the configuration.
-Click the **Configure Preset Event** button next to the metrics drop down list on each metric details page.
+Click the **Configure Preset Event** button next to the metrics drop-down list on each metric details page.
:::image type="content" source="../media/metrics/preset-event-button.png" alt-text="preset event button":::
The **Cycle event** section can be used in some scenarios to help reduce unneces
- Metrics that have multiple patterns or cycles, such as both a weekly and monthly pattern. - Metrics that do not have a clear pattern, but the data is comparable Year over Year (YoY), Month over Month (MoM), Week Over Week (WoW), or Day Over Day (DoD).
-Not all options are selectable for every granularity. The available options per granularity are below:
+Not all options are selectable for every granularity. The available options per granularity are below (✔ for available, X for unavailable):
| Granularity | YoY | MoM | WoW | DoD | |:-|:-|:-|:-|:-|
Not all options are selectable for every granularity. The available options per
| Secondly | X | X | X | X | | Custom* | ✔ | ✔ | ✔ | ✔ |
-X - Unavailable.
-Γ£ö - Available.
-\* When using a custom granularity in seconds, only available if the metric is longer than one hour and less than one day.
+When using a custom granularity in seconds, these options are only available if the granularity is longer than one hour and less than one day.
Cycle event is used to reduce anomalies if they follow a cyclic pattern, but it will report an anomaly if multiple data points don't follow the pattern. **Strict mode** is used to enable anomaly reporting if even one data point doesn't follow the pattern.
cognitive-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/manage-data-feeds.md
Previously updated : 08/28/2020 Last updated : 04/20/2021
Learn how to manage your onboarded data feeds in Metrics Advisor. This article g
Only the administrator of a data feed is allowed to make changes to it.
-To pause or reactivate a data feed:
+On the data feed list page, you can **pause, reactivate, or delete** a data feed:
-1. On the data feed list page, click the operation you want to perform on the data feed.
+* **Pause/Reactivate**: Select the **Pause/Play** button to pause/reactivate a data feed.
-2. On the data feed details page, click the **Status** switch button.
+* **Delete**: Select the **Delete** button to delete a data feed.
-To delete a data feed:
-
-1. On the data feed list page, click **Delete** on the data feed.
-
-2. In the data feed details page, click **Delete**.
-
-When changing the start time, you need to verify the schema again. You can change it by using **Edit parameters**.
+If you change the ingestion start time, you need to verify the schema again. You can change it by clicking **Edit** in the data feed detail page.
## Backfill your data feed
Workspace access is controlled by the Metrics Advisor resource, which uses Azure
Metrics Advisor lets you grant permissions to different groups of people on different data feeds. There are two types of roles: -- Administrator: Has full permissions to manage a data feed, including modify and delete.-- Viewer: Has access to a read-only view of the data feed.
+- **Administrator**: Has full permissions to manage a data feed, including modify and delete.
+- **Viewer**: Has access to a read-only view of the data feed.
## Advanced settings
There are several optional advanced settings when creating a new data feed, they
> [!NOTE] > This setting won't affect your data source and will not affect the data charts displayed on the portal. The auto-filling only occurs during anomaly detection.
-Some time series are not continuous. When there are missing data points, Metrics Advisor will use the specified value to fill them before anomaly detection for better accuracy.
+Sometimes, series are not continuous. When there are missing data points, Metrics Advisor will use the specified value to fill them before anomaly detection to improve accuracy.
The options are: * Using the value from the previous actual data point. This is used by default.
Once you've filled in the action link, click **Go to action link** on the incide
| `%timestamp` | - | Timestamp of an anomaly or end time of a persistent incident | | `%tagset` | `%tagset`, <br> `[%tagset.get("Dim1")]`, <br> `[ %tagset.get("Dim1", "filterVal")]` | Dimension values of an anomaly or top anomaly of an incident. <br> The `filterVal` is used to filter out matching values within the square brackets. |
-Examples :
+Examples:
-* If the action link template is `https://action-link/metric/%metric?detectConfigId=%detect_config`,
+* If the action link template is `https://action-link/metric/%metric?detectConfigId=%detect_config`:
* The action link `https://action-link/metric/1234?detectConfigId=2345` would go to anomalies or incidents under metric `1234` and detect config `2345`.
-* If the action link template is `https://action-link?[Dim1=%tagset.get('Dim1','')&][Dim2=%tagset.get('Dim2','')]`,
+* If the action link template is `https://action-link?[Dim1=%tagset.get('Dim1','')&][Dim2=%tagset.get('Dim2','')]`:
* The action link would be `https://action-link?Dim1=Val1&Dim2=Val2` when the anomaly is `{ "Dim1": "Val1", "Dim2": "Val2" }`.
- * The action link would be `https://action-link?Dim2=Val2` when the anomaly is `{ "Dim1": "", "Dim2": "Val2" } `, since `[Dim1=***&]` is skipped for the dimension value empty string.
+ * The action link would be `https://action-link?Dim2=Val2` when the anomaly is `{ "Dim1": "", "Dim2": "Val2" }`, since `[Dim1=***&]` is skipped for the dimension value empty string.
-* If the action link template is `https://action-link?filter=[Name/Dim1 eq '%tagset.get('Dim1','')' and ][Name/Dim2 eq '%tagset.get('Dim2','')']`,
+* If the action link template is `https://action-link?filter=[Name/Dim1 eq '%tagset.get('Dim1','')' and ][Name/Dim2 eq '%tagset.get('Dim2','')']`:
* The action link would be `https://action-link?filter=Name/Dim1 eq 'Val1' and Name/Dim2 eq 'Val2'` when the anomaly is `{ "Dim1": "Val1", "Dim2": "Val2" }`, * The action link would be `https://action-link?filter=Name/Dim2 eq 'Val2'` when anomaly is `{ "Dim1": "", "Dim2": "Val2" }` since `[Name/Dim1 eq '***' and ]` is skipped for the dimension value empty string.
cognitive-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/onboard-your-data.md
Previously updated : 09/14/2020 Last updated : 04/20/2021
Use this article to learn about onboarding your data to Metrics Advisor.
## Data schema requirements and configuration [!INCLUDE [data schema requirements](../includes/data-schema-requirements.md)]
+If you are not sure about some of the terms, refer to the [Glossary](../glossary.md).
## Avoid loading partial data
To avoid loading partial data, we recommend two approaches:
Set the **Ingestion time offset** parameter for your data feed to delay the ingestion until the data is fully prepared. This can be useful for some data sources which don't support transactions such as Azure Table Storage. See [advanced settings](manage-data-feeds.md#advanced-settings) for details.
-## Add a data feed using the web-based workspace
+## Start by adding a data feed
After signing into your Metrics Advisor portal and choosing your workspace, click **Get started**. Then, on the main page of the workspace, click **Add data feed** from the left menu. ### Add connection settings
+#### 1. Basic settings
Next you'll input a set of parameters to connect your time-series data source. * **Source Type**: The type of data source where your time series data is stored.
-* **Granularity**: The interval between consecutive data points in your time series data. Currently Metrics Advisor supports: Yearly, Monthly, Weekly, Daily, Hourly, and Custom. The lowest interval The customization option supports is 60 seconds.
+* **Granularity**: The interval between consecutive data points in your time series data. Currently Metrics Advisor supports: Yearly, Monthly, Weekly, Daily, Hourly, and Custom. The lowest interval the customization option supports is 300 seconds.
* **Seconds**: The number of seconds when *granularityName* is set to *Customize*. * **Ingest data since (UTC)**: The baseline start time for data ingestion. `startOffsetInSeconds` is often used to add an offset to help with data consistency.
-Next, you'll need to specify the connection information for the data source, and the custom queries used to convert the data into the required schema. For details on the other fields and connecting different types of data sources, see [Add data feeds from different data sources](../data-feeds-from-different-sources.md).
+#### 2. Specify connection string
+Next, you'll need to specify the connection information for the data source. For details on the other fields and connecting different types of data sources, see [How-to: Connect different data sources](../data-feeds-from-different-sources.md).
+#### 3. Specify query for a single timestamp
+<!-- Next, you'll need to specify a query to convert the data into the required schema, see [how to write a valid query](../tutorials/write-a-valid-query.md) for more information. -->
-### Verify and get schema
+For details of different types of data sources, see [How-to: Connect different data sources](../data-feeds-from-different-sources.md).
-After the connection string and query string are set, select **Verify and get schema** to verify the connection and run the query to get your data schema from the data source. Normally it takes a few seconds depending on your data source connection. If there's an error at this step, confirm that:
+### Load data
+
+After the connection string and query string are entered, select **Load data**. During this operation, Metrics Advisor will check the connection and permission to load data, check for the necessary parameters (`@IntervalStart` and `@IntervalEnd`) used in the query, and check the column names from the data source.
+
+If there's an error at this step:
+1. First, check whether the connection string is valid.
+2. Then, check whether there are sufficient permissions and that the ingestion worker IP address is granted access.
+3. Then, check whether the required parameters (`@IntervalStart` and `@IntervalEnd`) are used in your query (see the sketch after this list).
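+
+As a sketch, a query that satisfies the parameter check might look like the following (the table and column names are hypothetical). For an hourly data feed, the placeholders would be resolved to one interval at a time, for example `@IntervalStart = 2021-04-01T00:00:00Z` and `@IntervalEnd = 2021-04-01T01:00:00Z`:
+
+```SQL
+-- Both placeholders must appear so that each run returns exactly one interval of data.
+SELECT [TimestampColumn], [DimensionColumn], [MetricColumn]
+FROM [TableName]
+WHERE [TimestampColumn] >= @IntervalStart AND [TimestampColumn] < @IntervalEnd
+```
+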
-* Your connection string and query are correct.
-* Your Metrics Advisor instance is able to connect to the data source if there are firewall settings.
### Schema configuration
If the timestamp of a data point is omitted, Metrics Advisor will use the timest
|Selection |Description |Notes | ||||
-| **Display Name** | Name to be displayed in your workspace instead of the original column name. | |
+| **Display Name** | Name to be displayed in your workspace instead of the original column name. | Optional.|
|**Timestamp** | The timestamp of a data point. If omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as timestamp. | Optional. Should be specified with at most one column. If you get a **column cannot be specified as Timestamp** error, check your query or data source for duplicate timestamps. | |**Measure** | The numeric values in the data feed. For each data feed, you can specify multiple measures but at least one column should be selected as measure. | Should be specified with at least one column. | |**Dimension** | Categorical values. A combination of different values identifies a particular single-dimension time series, for example: country, language, tenant. You can select zero or more columns as dimensions. Note: be cautious when selecting a non-string column as a dimension. | Optional. |
-|**Ignore** | Ignore the selected column. | Optional. See the below text. |
+|**Ignore** | Ignore the selected column. | Optional. For data sources that support using a query to get data, there is no 'Ignore' option. |
If you want to ignore columns, we recommend updating your query or data source to exclude those columns. You can also ignore columns using **Ignore columns** and then **Ignore** on the specific columns. If a column should be a dimension and is mistakenly set as *Ignored*, Metrics Advisor may end up ingesting partial data. For example, assume the data from your query is as below:
If you want to ignore columns, we recommend updating your query or data source t
| 4 | 2019/11/11 | US | EN-US | 23000 | | ... | ...| ... | ... | ... |
-If *Country* is a dimension and *Language* is set as *Ignored*, then the first and second rows will have the same dimensions. Metrics Advisor will arbitrarily use one value from the two rows. Metrics Advisor will not aggregate the rows in this case.
+If *Country* is a dimension and *Language* is set as *Ignored*, then the first and second rows will have the same dimensions for a timestamp. Metrics Advisor will arbitrarily use one value from the two rows. Metrics Advisor will not aggregate the rows in this case.
+
+After configuring the schema, select **Verify schema**. During this operation, Metrics Advisor will perform the following checks (a query sketch for spotting duplicates follows this list):
+- Whether the timestamps of the queried data fall into one single interval.
+- Whether there are duplicate values returned for the same dimension combination within one metric interval.
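+
+As a sketch, a pre-check you could adapt and run against a SQL source to spot duplicate dimension combinations before verifying the schema (hypothetical table and column names; substitute concrete interval boundaries for the placeholders):
+
+```SQL
+-- Any row returned here indicates duplicate values for a dimension combination
+-- within the interval, which would fail the schema verification.
+SELECT [DimensionColumn], COUNT(*) AS RowsInInterval
+FROM [TableName]
+WHERE [TimestampColumn] >= @IntervalStart AND [TimestampColumn] < @IntervalEnd
+GROUP BY [DimensionColumn]
+HAVING COUNT(*) > 1
+```
+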
### Automatic roll up settings
Metrics Advisor can automatically perform aggregation(for example SUM, MAX, MIN)
Consider the following scenarios:
-* *I do not need to include the roll-up analysis for my data.*
+* *"I do not need to include the roll-up analysis for my data."*
You do not need to use the Metrics Advisor roll-up.
-* *My data has already rolled up and the dimension value is represented by: NULL or Empty (Default), NULL only, Others.*
+* *"My data has already rolled up and the dimension value is represented by: NULL or Empty (Default), NULL only, Others."*
This option means Metrics Advisor doesn't need to roll up the data because the rows are already summed. For example, if you select *NULL only*, then the second data row in the below example will be seen as an aggregation of all countries and language *EN-US*; the fourth data row which has an empty value for *Country* however will be seen as an ordinary row which might indicate incomplete data.
Consider the following scenarios:
| US | EN-US | 12000 | | | EN-US | 5000 |
-* *I need Metrics Advisor to roll up my data by calculating Sum/Max/Min/Avg/Count and represent it by <some string>*
+* *"I need Metrics Advisor to roll up my data by calculating Sum/Max/Min/Avg/Count and represent it by {some string}."*
Some data sources such as Cosmos DB or Azure Blob Storage do not support certain calculations like *group by* or *cube*. Metrics Advisor provides the roll up option to automatically generate a data cube during ingestion. This option means you need Metrics Advisor to calculate the roll-up using the algorithm you've selected and use the specified string to represent the roll-up in Metrics Advisor. This won't change any data in your data source.
Consider the following scenarios:
Consider the following before using the Auto roll up feature: * If you want to use *SUM* to aggregate your data, make sure your metrics are additive in each dimension. Here are some examples of *non-additive* metrics:
- * Fraction-based metrics. This includes ratio, percentage, etc. For example, you should not add the unemployment rate of each state to calculate the unemployment rate of the entire country.
- * Overlap in dimension. For example, you should not add the number of people in to each sport to calculate the number of people who like sports, because there is an overlap between them, one person can like multiple sports.
+ - Fraction-based metrics. This includes ratio, percentage, etc. For example, you should not add the unemployment rate of each state to calculate the unemployment rate of the entire country.
+ - Overlap in dimension. For example, you should not add the number of people in each sport to calculate the number of people who like sports, because there is an overlap between them; one person can like multiple sports.
* To ensure the health of the whole system, the size of cube is limited. Currently, the limit is 1,000,000. If your data exceeds that limit, ingestion will fail for that timestamp. ## Advanced settings
To check ingestion failure details:
:::image type="content" source="../media/datafeeds/check-failed-ingestion.png" alt-text="Check failed ingestion"::: A *failed* status indicates the ingestion for this data source will be retried later.
-An *Error* status indicates Metrics Advisor won't retry for the data source. To reload data, you need trigger a backfill/reload manually.
+An *Error* status indicates Metrics Advisor won't retry for the data source. To reload data, you need to trigger a backfill/reload manually.
-You can also reload the progress of an ingestion by clicking **Refresh Progress**. After data ingestion complete, you're free to click into metrics and check anomaly detection results.
+You can also reload the progress of an ingestion by clicking **Refresh Progress**. After data ingestion completes, you're free to click into metrics and check anomaly detection results.
## Next steps - [Manage your data feeds](manage-data-feeds.md)
cognitive-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/quickstarts/web-portal.md
When you provision a Metrics Advisor instance, you can use the APIs and web-base
> [!TIP]
-> * It may 10 to 30 minutes for your Metrics Advisor resource to deploy. Click **Go to resource** once it successfully deploys.
+> * It may take 10 to 30 minutes for your Metrics Advisor resource to deploy. Select **Go to resource** once it successfully deploys.
> * If you'd like to use the REST API to interact with the service, you will need the key and endpoint from the resource you create. You can find them in the **Keys and endpoints** tab in the created resource. + This document uses a SQL Database as an example for creating your first monitor. ## Sign in to your workspace
-After your resource is created, sign in to [Metrics Advisor portal](https://go.microsoft.com/fwlink/?linkid=2143774). Select your workspace to start monitoring your metrics.
+After your resource is created, sign in to the [Metrics Advisor portal](https://go.microsoft.com/fwlink/?linkid=2143774) with your Active Directory account. From the landing page, select the **Directory**, **Subscription**, and **Workspace** that you just created, then select **Get started**. To onboard time series data, select **Add data feed** from the left menu.
+ Currently you can create one Metrics Advisor resource at each available region. You can switch workspaces in Metrics Advisor portal at any time. ## Onboard time series data
-Metrics Advisor provides connectors for different data sources, such as SQL Database, Azure Data Explorer, and Azure Table Storage. The steps for connecting data are similar for different connectors, although some configuration parameters may vary. See [connect your data from different sources](../data-feeds-from-different-sources.md) for the required parameters for specific data sources.
+Metrics Advisor provides connectors for different data sources, such as SQL Database, Azure Data Explorer, and Azure Table Storage. The steps for connecting data are similar for different connectors, although some configuration parameters may vary. See [Connect different data feed sources](../data-feeds-from-different-sources.md) for the required connection settings.
This quickstart uses a SQL Database as an example. You can also ingest your own data by following the same steps.
-To get started, sign into your Metrics Advisor workspace, with your Active Directory account. From the landing page, select your **Directory**, **Subscription** and **Workspace** that just created, then click **Get started**. After the main page of the workload loads, select **Add data feed** from the left menu.
### Data schema requirements and configuration [!INCLUDE [data schema requirements](../includes/data-schema-requirements.md)]
-### Configure connection settings
+### Configure connection settings and query
-> [!TIP]
-> See [how to add data feeds](../how-tos/onboard-your-data.md) for details on the available parameters.
-
-Add the data feed by connecting to your time-series data source. Start by selecting the following parameters:
+[Add the data feeds](../how-tos/onboard-your-data.md) by connecting to your time series data source. Start by selecting the following parameters:
* **Source Type**: The type of data source where your time series data is stored. * **Granularity**: The interval between consecutive data points in your time series data, for example Yearly, Monthly, Daily. The lowest interval customization supports is 60 seconds. * **Ingest data since (UTC)**: The start time for the first timestamp to be ingested.
-Next, specify the **Connection string** with the credentials for your data source, and a custom **Query**. The query is used to specify the data to be ingested, and converted into the required schema.
-
+<!-- Next, specify the **Connection string** with the credentials for your data source, and a custom **Query**, see [how to write a valid query](../tutorials/write-a-valid-query.md) for more information. -->
:::image type="content" source="../media/connection-settings.png" alt-text="Connection settings" lightbox="../media/connection-settings.png":::
-### Verify the connection and load the data schema
+### Load data
-After the connection string and query string are created, select **Verify and get schema** to verify the connection and run the query to get your data schema from the data source. Normally it takes a few seconds depending on your data source connection. If there's an error at this step, confirm that:
+After the connection string and query string are entered, select **Load data**. During this operation, Metrics Advisor will check the connection and permission to load data, check for the necessary parameters (`@IntervalStart` and `@IntervalEnd`) used in the query, and check the column names from the data source.
-1. Your connection string and query are correct.
-2. Your Metrics Advisor instance is able to connect to the data source if there are firewall settings.
+If there's an error at this step:
+1. First check if the connection string is valid.
+2. Then confirm that there are sufficient permissions and that the ingestion worker IP address is granted access.
+3. Next check if the required parameters (@IntervalStart and @IntervalEnd) are used in your query.
### Schema configuration
-Once the data schema is loaded and shown like below, select the appropriate fields.
+Once the data is loaded by running the query and displayed as shown below, select the appropriate fields.
|Selection |Description |Notes |
Once the data schema is loaded and shown like below, select the appropriate fiel
|**Timestamp** | The timestamp of a data point. If omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you could specify at most one column as timestamp. | Optional. Should be specified with at most one column. | |**Measure** | The numeric values in the data feed. For each data feed, you could specify multiple measures but at least one column should be selected as measure. | Should be specified with at least one column. | |**Dimension** | Categorical values. A combination of different values identifies a particular single-dimension time series, for example: country, language, tenant. You could select none or arbitrary number of columns as dimensions. Note: if you're selecting a non-string column as dimension, be cautious with dimension explosion. | Optional. |
-|**Ignore** | Ignore the selected column. | |
+|**Ignore** | Ignore the selected column. | Optional. For data sources that support using a query to get data, there is no 'Ignore' option. |
:::image type="content" source="../media/schema-configuration.png" alt-text="Schema configuration" lightbox="../media/schema-configuration.png":::
+After configuring the schema, select **Verify schema**. During this operation, Metrics Advisor will perform the following checks:
+- Whether the timestamps of the queried data fall into one single interval.
+- Whether there are duplicate values returned for the same dimension combination within one metric interval.
+ ### Automatic roll up settings > [!IMPORTANT] > If you'd like to enable **root cause analysis** and other diagnostic capabilities, 'automatic roll up setting' needs to be configured. > Once enabled, the automatic roll up settings cannot be changed.
-Metrics Advisor can automatically perform aggregation(SUM/MAX/MIN...) on each dimension during ingestion, then builds a hierarchy which will be used in root case analysis and other diagnostic features. See [Automatic roll up settings](../how-tos/onboard-your-data.md#automatic-roll-up-settings) for more details.
+Metrics Advisor can automatically perform aggregation (SUM/MAX/MIN...) on each dimension during ingestion, then builds a hierarchy, which will be used in root cause analysis and other diagnostic features. See [Automatic roll up settings](../how-tos/onboard-your-data.md#automatic-roll-up-settings) for more details.
-Give a custom name for the data feed, which will be displayed in your workspace. Click on **Submit**.
+Give a custom name for the data feed, which will be displayed in your workspace. Select **Submit**.
## Tune detection configuration
-After the data feed is added, Metrics Advisor will attempt to ingest metric data from the specified start date. It will take some time for data to be fully ingested, and you can view the ingestion status by clicking **Ingestion progress** at the top of the data feed page. If data is ingested, Metrics Advisor will apply detection, and continue to monitor the source for new data.
+After the data feed is added, Metrics Advisor will attempt to ingest metric data from the specified start date. It will take some time for data to be fully ingested, and you can view the ingestion status by selecting **Ingestion progress** at the top of the data feed page. If data is ingested, Metrics Advisor will apply detection, and continue to monitor the source for new data.
-When detection is applied, click one of the metrics listed in data feed to find the **Metric detail page** to:
-- View visualizations of all time series slices under this metric
+When detection is applied, select one of the metrics listed in data feed to find the **Metric detail page** to:
+- View visualizations of all time series' slices under this metric
- Update detection configuration to meet expected results - Set up notification for detected anomalies
When detection is applied, click one of the metrics listed in data feed to find
## View the diagnostic insights
-After tuning the detection configuration, anomalies that are found should reflect actual anomalies in your data. Metrics Advisor performs analysis on multi-dimensional metrics, like anomaly clustering, incident correlation and root cause analysis. Use these features to analyze and diagnose incidents in your data.
+After tuning the detection configuration, anomalies that are found should reflect actual anomalies in your data. Metrics Advisor performs analysis on multi-dimensional metrics to locate the root cause within a specific dimension, and also performs cross-metric analysis by using the "Metrics graph".
-To view the diagnostic insights, click on the red dots on time series visualizations, which represent detected anomalies. A window will appear with a link to incident analysis page.
+To view the diagnostic insights, select the red dots on time series visualizations, which represent detected anomalies. A window will appear with a link to incident analysis page.
:::image type="content" source="../media/incident-link.png" alt-text="Incident link" lightbox="../media/incident-link.png":::
-After clicking the link, you will be pivoted to the incident analysis page which analyzes on corresponding anomaly, with a bunch of diagnostics insights. There are three sections in the incident detail page which correspond to three major steps to diagnosing an incident.
+After selecting the link, you will be pivoted to the incident analysis page, which analyzes a group of related anomalies and provides diagnostic insights. There are three major steps to diagnose an incident:
+
+### Check summary of current incident
+
+At the top, there will be a summary including basic information, actions & tracings and an analyzed root cause. Basic information includes the "top impacted series" with a diagram, "impact start & end time", "incident severity" and "total anomalies included".
-- The first section lists a summary of the current incident, including basic information, actions & tracings, and an analyzed root cause.
- :::image type="content" source="../media/diagnostics/incident-summary.png" alt-text="Incident summary":::
-- After getting basic info and automatic analysis insights, you can get more detailed info on abnormal status on other dimensions within the same metric in a holistic way using the **"Diagnostic tree"**.
- :::image type="content" source="../media/diagnostics/cross-dimension-diagnostic.png" alt-text="Cross dimension diagnostic using diagnostic tree":::
-- And last to view cross-metrics diagnostic insights using "Metrics graph".
- :::image type="content" source="../media/diagnostics/cross-metrics-analysis.png" alt-text="Cross metrics analysis":::
+The analyzed root cause is an automatically analyzed result. Metrics Advisor analyzes all anomalies that are captured on time series within one metric, with different dimension values, at the same timestamp. Then it performs correlation and clustering to group related anomalies together, and generates root cause advice.
++
+Based on these, you can already get a straightforward view of the current abnormal status, the impact of the incident, and the most likely root cause, so that immediate action can be taken to resolve the incident as soon as possible.
+
+### View cross-dimension diagnostic insights
+
+After getting basic information and automatic analysis insights, you can get more detailed information on the abnormal status of other dimensions within the same metric in a holistic way by using the **"Diagnostic tree"**.
+
+For metrics with multiple dimensions, Metrics Advisor categorizes the time series into a hierarchy, which is named the "Diagnostic tree". For example, a "revenue" metric is monitored by two dimensions: "region" and "category". Apart from the concrete dimension values, there also needs to be an **aggregated** dimension value, like **"SUM"**. The time series with "region" = **"SUM"** and "category" = **"SUM"** will be categorized as the root node of the tree. Whenever an anomaly is captured at the **"SUM"** dimension, it can be drilled down and analyzed to locate which specific dimension value has contributed the most to the parent node anomaly. Select each node to expand detailed information.
++
+### View cross-metrics diagnostic insights using "Metrics graph"
+
+Sometimes, it's hard to analyze an issue by checking the abnormal status of a single metric, and you need to correlate multiple metrics together. Customers are able to configure a "Metrics graph", which indicates the relations between metrics.
+By leveraging the cross-dimension diagnostic result above, the root cause is narrowed down to a specific dimension value. Then you can use the "Metrics graph", filtered by the analyzed root cause dimension, to check the anomaly status of other metrics.
-Based on these, you can already get a straightforward view of what is happening and the impact of the incident as well as the most potential root cause. So that immediate action could be taken to resolve incident as soon as possible.
But you can also pivot across more diagnostic insights by leveraging additional features to drill down into anomalies by dimension, view similar anomalies, and make comparisons across metrics. For more information, see [How to: diagnose an incident](../how-tos/diagnose-an-incident.md).
If you'd like to get alerted when an anomaly is detected in your data, you can c
### Create a web hook
-A web hook is the entry point to get anomaly noticed by a programmatic way from the Metrics Advisor service, which calls a user-provided API when an alert is triggered.For details on how to create a hook, please refer to the **Create a hook** section in [How-to: Configure alerts and get notifications using a hook](../how-tos/alerts.md#create-a-hook).
+A web hook is the entry point for getting anomaly notifications programmatically from the Metrics Advisor service; it calls a user-provided API when an alert is triggered. For details on how to create a hook, refer to the **Create a hook** section in [How-to: Configure alerts and get notifications using a hook](../how-tos/alerts.md#create-a-hook).
### Configure alert settings
-After creating a hook, an alert setting determines how and which alert notifications should be sent. You can set multiple alert settings for each metric. two important settings are **Alert for** which specifies the anomalies to be included, and **Filter anomaly options** which defines which anomalies to include in the alert. See the **Add or Edit alert settings** section in [How-to: Configure alerts and get notifications using a hook](../how-tos/alerts.md#add-or-edit-alert-settings) for more details.
+After creating a hook, an alert setting determines how and which alert notifications should be sent. You can set multiple alert settings for each metric. Two important settings are **Alert for**, which specifies the anomalies to be included, and **Filter anomaly options**, which defines which anomalies to include in the alert. See the **Add or Edit alert settings** section in [How-to: Configure alerts and get notifications using a hook](../how-tos/alerts.md#add-or-edit-alert-settings) for more details.
## Next steps
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/whats-new.md
If you want to learn about the latest updates to Metrics Advisor client SDKs see
### Updated articles
-* [Updated metrics onboarding flow](how-tos/onboard-your-data.md#add-a-data-feed-using-the-web-based-workspace)
+* [Updated metrics onboarding flow](how-tos/onboard-your-data.md)
* [Enriched guidance when adding data feeds from different sources](data-feeds-from-different-sources.md) * [Updated new notification channel using Microsoft Teams](how-tos/alerts.md#teams-hook) * [Updated incident diagnostic experience](how-tos/diagnose-an-incident.md)
communication-services Ui Library Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-library/ui-library-overview.md
Developers can easily instantiate the Composite using an Azure Communication Ser
| Composite | Use Cases | | | - |
-| [CallComposite](https://azure.github.io/communication-ui-library/?path=/docs/composites-callcomposite--basic-example) | Calling experience that allows users to start or join a call. Inside the experience users can configure their devices, participate in the call with video, and see other participants, including those participants with video turn on. For Teams Interop is includes lobby functionality for user to wait to be admitted. |
-| [ChatComposite](https://azure.github.io/communication-ui-library/?path=/docs/composites-chatcomposite--basic-example) | Chat experience where user can send and receive messages. Thread events like typing, reads, participants entering and leaving are displayed to the user as part of the chat thread. |
+| [CallComposite](https://azure.github.io/communication-ui-library/?path=/story/composites-call--basic-example) | Calling experience that allows users to start or join a call. Inside the experience, users can configure their devices, participate in the call with video, and see other participants, including those with video turned on. For Teams Interop, it includes lobby functionality so users can wait to be admitted. |
+| [ChatComposite](https://azure.github.io/communication-ui-library/?path=/story/composites-chat--basic-example) | Chat experience where users can send and receive messages. Thread events like typing, reads, and participants entering and leaving are displayed to the user as part of the chat thread. |
## UI Component overview
UI Components support customization to give the components the right feel and lo
| Area | Component | Description | | - | | -- |
-| Calling | [Grid Layout](https://azure.github.io/communication-ui-library/?path=/story/ui-components-gridlayout--grid-layout-component) | Grid component to organize Video Tiles into an NxN grid |
-| | [Video Tile](https://azure.github.io/communication-ui-library/?path=/story/ui-components-videotile--video-tile-component) | Component that displays video stream when available and a default static component when not |
-| | [Control Bar](https://azure.github.io/communication-ui-library/?path=/story/ui-components-controlbar--control-bar-component) | Container to organize DefaultButtons to hook up to specific call actions like mute or share screen |
+| Calling | [Grid Layout](https://azure.github.io/communication-ui-library/?path=/story/ui-components-gridlayout--grid-layout) | Grid component to organize Video Tiles into an NxN grid |
+| | [Video Tile](https://azure.github.io/communication-ui-library/?path=/story/ui-components-videotile--video-tile) | Component that displays video stream when available and a default static component when not |
+| | [Control Bar](https://azure.github.io/communication-ui-library/?path=/story/ui-components-controlbar--control-bar) | Container to organize DefaultButtons to hook up to specific call actions like mute or share screen |
| | [VideoGallery](https://azure.github.io/communication-ui-library/?path=/story/ui-components-video-gallery--video-gallery) | Turn-key video gallery component which dynamically changes as participants are added |
-| Chat | [Message Thread](https://azure.github.io/communication-ui-library/?path=/story/ui-components-messagethread--message-thread-component) | Container that renders chat messages, system messages, and custom messages |
-| | [Send Box](https://azure.github.io/communication-ui-library/?path=/story/ui-components-sendbox--send-box-component) | Text input component with a discrete send button |
-| | [Message Status Indicator](https://azure.github.io/communication-ui-library/?path=/story/ui-components-message-status-indicator--message-status-indicator) | Multi-state read receipt component to show state of sent message |
-| | [Typing indicator](https://azure.github.io/communication-ui-library/?path=/story/ui-components-typingindicator--typing-indicator-component) | Text component to render the participants who are actively typing on a thread |
-| Common | [Participant Item](https://azure.github.io/communication-ui-library/?path=/story/ui-components-participantitem--participant-item-component) | Common component to render a call or chat participant including avatar and display name |
-| | [Participant List](https://azure.github.io/communication-ui-library/?path=/story/ui-components-participant-list--participant-list) | Common component to render a call or chat participant list including avatar and display name |
-
+| Chat | [Message Thread](https://azure.github.io/communication-ui-library/?path=/story/ui-components-messagethread--message-thread) | Container that renders chat messages, system messages, and custom messages |
+| | [Send Box](https://azure.github.io/communication-ui-library/?path=/story/ui-components-sendbox--send-box) | Text input component with a discrete send button |
+| | [Message Status Indicator](https://azure.github.io/communication-ui-library/?path=/story/ui-components-messagestatusindicator--message-status-indicator) | Multi-state read receipt component to show state of sent message |
+| | [Typing indicator](https://azure.github.io/communication-ui-library/?path=/story/ui-components-typingindicator--typing-indicator) | Text component to render the participants who are actively typing on a thread |
+| Common | [Participant Item](https://azure.github.io/communication-ui-library/?path=/story/ui-components-participantitem--participant-item) | Common component to render a call or chat participant including avatar and display name |
+| | [Participant List](https://azure.github.io/communication-ui-library/?path=/story/ui-components-participantlist--participant-list) | Common component to render a call or chat participant list including avatar and display name |
## What UI artifact is best for my project?
cosmos-db Cosmos Db Advanced Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmos-db-advanced-threat-protection.md
An email notification is also sent with the alert details and recommended action
## Next steps * Learn more about [Diagnostic logging in Azure Cosmos DB](cosmosdb-monitor-resource-logs.md)
-* Learn more about [Azure Security Center](../security-center/security-center-introduction.md)
+* Learn more about [Azure Security Center](../security-center/security-center-introduction.md)
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-cmk.md
This feature is currently available only for new accounts.
### Is it possible to use customer-managed keys in conjunction with the Azure Cosmos DB [analytical store](analytical-store-introduction.md)?
-Yes, Azure Synapse Link only supports configuring customr-managed keys usings your Azure Cosmos DB account's managed identity. You must [use your Azure Cosmos DB account's managed identity](#using-managed-identity) in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account.
+Yes, Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must [use your Azure Cosmos DB account's managed identity](#using-managed-identity) in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account.
### Is there a plan to support finer granularity than account-level keys?
cosmos-db Sql Api Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet.md
ms.devlang: dotnet Previously updated : 06/15/2021 Last updated : 06/18/2021
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link.md
Synapse Link enables you to run near real-time analytics over your mission-criti
* **Network isolation using private endpoints** - You can control network access to the data in the transactional and analytical stores independently. Network isolation is done using separate managed private endpoints for each store, within managed virtual networks in Azure Synapse workspaces. To learn more, see how to [Configure private endpoints for analytical store](analytical-store-private-endpoints.md) article.
-* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. Azure Synapse Link only supports configuring customr-managed keys usings your Azure Cosmos DB account's managed identity. You must configure your account's managed identity in your Azure Key Vault access policy before enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. To learn more, see how to [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article.
+* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must configure your account's managed identity in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. To learn more, see the [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article.
* **Secure key management** - Accessing the data in analytical store from Synapse Spark and Synapse serverless SQL pools requires managing Azure Cosmos DB keys within Synapse Analytics workspaces. Instead of using the Azure Cosmos DB account keys inline in Spark jobs or SQL scripts, Azure Synapse Link provides more secure capabilities.
data-factory Compute Optimized Retire https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-optimized-retire.md
+
+ Title: Compute optimized retirement
+description: Data flow compute optimized option is being retired
++++ Last updated : 06/09/2021++
+# Retirement of data flow compute optimized option
++
+Azure Data Factory and Azure Synapse Analytics data flows provide a low-code mechanism to transform data in ETL jobs at scale using a graphical design paradigm. Data flows execute on the Azure Data Factory and Azure Synapse Analytics serverless Integration Runtime facility. The scalable nature of these Integration Runtimes enables three compute options for the Azure Databricks Spark environment used to execute data flows at scale: Memory Optimized, General Purpose, and Compute Optimized. Memory Optimized and General Purpose are the recommended classes of data flow compute to use with your Integration Runtime. Because Compute Optimized often does not suffice for common data flow use cases, we recommend using General Purpose or Memory Optimized data flows in production workloads.
+
+## Comparison between different compute options
+
+| Compute Option | Performance |
+| :-- | :-- |
+| General Purpose Data Flows | Good for general use cases in production workloads |
+| Memory Optimized Data Flows | Best performing runtime for data flows when working with large datasets and many calculations |
+| Compute Optimized Data Flows | Not recommended for production workloads |
+
+## Migration steps
+
+Your Compute Optimized data flows will continue to work in pipelines as-is. However, new Azure Integration Runtimes and data flow activities will not be able to use Compute Optimized. When creating a new data flow activity:
+
+1. Create a new Azure Integration Runtime with "General Purpose" or "Memory Optimized" as the compute type.
+2. Set your data flow activity using either of those compute types.
+
+ ![Compute types](media/data-flow/compute-types.png)
+
+[Find more detailed information at the data flows FAQ here](https://aka.ms/dataflowsqa)
+[Post questions and find answers on data flows on Microsoft Q&A](https://aka.ms/datafactoryqa)
data-factory Concepts Pipelines Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipelines-activities.md
Previously updated : 11/19/2019 Last updated : 06/19/2021 # Pipelines and activities in Azure Data Factory
See the following tutorials for step-by-step instructions for creating pipelines
- [Build a pipeline with a copy activity](quickstart-create-data-factory-powershell.md) - [Build a pipeline with a data transformation activity](tutorial-transform-data-spark-powershell.md)+
+How to achieve CI/CD (continuous integration and delivery) using Azure Data Factory
+- [Continuous integration and delivery in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/continuous-integration-deployment)
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-troubleshoot-guide.md
description: Learn how to troubleshoot external control activities in Azure Data
Previously updated : 12/30/2020 Last updated : 04/30/2020
data-factory Data Factory Ux Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-ux-troubleshoot-guide.md
description: Learn how to troubleshoot Azure Data Factory UX issues.
Previously updated : 06/01/2020 Last updated : 06/01/2021
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
- Previously updated : 04/01/2021 Last updated : 05/10/2021 # Data transformation expressions in mapping data flow
data-factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/introduction.md
Previously updated : 09/30/2019 Last updated : 06/08/2021 # What is Azure Data Factory?
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-rest-api.md
ms.devlang: rest-api Previously updated : 01/18/2021 Last updated : 05/31/2021
event-grid Custom Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-disaster-recovery.md
The following sample code is a simple .NET publisher that will always attempt to
using System; using System.Net.Http; using System.Collections.Generic;
-using Microsoft.Azure.EventGrid;
-using Microsoft.Azure.EventGrid.Models;
-using Newtonsoft.Json;
+using System.Threading.Tasks;
+using Azure;
+using Azure.Messaging.EventGrid;
namespace EventGridFailoverPublisher { // This captures the "Data" portion of an EventGridEvent on a custom topic class FailoverEventData {
- [JsonProperty(PropertyName = "teststatus")]
public string TestStatus { get; set; } } class Program {
- static void Main(string[] args)
+ static async Task Main(string[] args)
{ // TODO: Enter the endpoint for each topic. You can find this topic endpoint value // in the "Overview" section in the "Event Grid Topics" blade in the Azure portal.
namespace EventGridFailoverPublisher
string primaryTopicKey = "<your-primary-topic-key>"; string secondaryTopicKey = "<your-secondary-topic-key>";
- string primaryTopicHostname = new Uri( primaryTopic).Host;
- string secondaryTopicHostname = new Uri(secondaryTopic).Host;
+ Uri primaryTopicUri = new Uri(primaryTopic);
+ Uri secondaryTopicUri = new Uri(secondaryTopic);
- Uri primaryTopicHealthProbe = new Uri("https://" + primaryTopicHostname + "/api/health");
- Uri secondaryTopicHealthProbe = new Uri("https://" + secondaryTopicHostname + "/api/health");
+ Uri primaryTopicHealthProbe = new Uri($"https://{primaryTopicUri.Host}/api/health");
+ Uri secondaryTopicHealthProbe = new Uri($"https://{secondaryTopicUri.Host}/api/health");
var httpClient = new HttpClient(); try {
- TopicCredentials topicCredentials = new TopicCredentials(primaryTopicKey);
- EventGridClient client = new EventGridClient(topicCredentials);
+ var client = new EventGridPublisherClient(primaryTopicUri, new AzureKeyCredential(primaryTopicKey));
- client.PublishEventsAsync(primaryTopicHostname, GetEventsList()).GetAwaiter().GetResult();
+ await client.SendEventsAsync(GetEventsList());
Console.Write("Published events to primary Event Grid topic."); HttpResponseMessage health = httpClient.GetAsync(secondaryTopicHealthProbe).Result; Console.Write("\n\nSecondary Topic health " + health); }
- catch (Microsoft.Rest.Azure.CloudException e)
+ catch (RequestFailedException ex)
{
- TopicCredentials topicCredentials = new TopicCredentials(secondaryTopicKey);
- EventGridClient client = new EventGridClient(topicCredentials);
+ var client = new EventGridPublisherClient(secondaryTopicUri, new AzureKeyCredential(secondaryTopicKey));
- client.PublishEventsAsync(secondaryTopicHostname, GetEventsList()).GetAwaiter().GetResult();
- Console.Write("Published events to secondary Event Grid topic. Reason for primary topic failure:\n\n" + e);
+ await client.SendEventsAsync(GetEventsList());
+ Console.Write("Published events to secondary Event Grid topic. Reason for primary topic failure:\n\n" + ex);
- HttpResponseMessage health = httpClient.GetAsync(primaryTopicHealthProbe).Result;
- Console.Write("\n\nPrimary Topic health " + health);
+ HttpResponseMessage health = await httpClient.GetAsync(primaryTopicHealthProbe);
+ Console.WriteLine($"Primary Topic health {health}");
} Console.ReadLine();
namespace EventGridFailoverPublisher
for (int i = 0; i < 5; i++) {
- eventsList.Add(new EventGridEvent()
- {
- Id = Guid.NewGuid().ToString(),
- EventType = "Contoso.Failover.Test",
- Data = new FailoverEventData()
+ eventsList.Add(new EventGridEvent(
+ subject: "test" + i,
+ eventType: "Contoso.Failover.Test",
+ dataVersion: "2.0",
+ data: new FailoverEventData
{ TestStatus = "success"
- },
- EventTime = DateTime.Now,
- Subject = "test" + i,
- DataVersion = "2.0"
- });
+ }));
} return eventsList;
event-grid Forward Events Event Grid Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/edge/forward-events-event-grid-cloud.md
In order to complete this tutorial, you will need:
* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one. * **Azure IoT Hub and IoT Edge device** - Follow the steps in the quick start for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one. ## Create event grid topic and subscription in cloud Create an event grid topic and subscription in the cloud by following [this tutorial](../custom-event-quickstart-portal.md). Note down `topicURL`, `sasKey`, and `topicName` of the newly created topic that you'll use later in the tutorial.
For example, if you created a topic named `testegcloudtopic` in West US, the val
## Create Event Grid subscription at the edge 1. Create subscription3.json with the following content. See our [API documentation](api.md) for details about the payload.
event-grid Forward Events Iothub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/edge/forward-events-iothub.md
In order to complete this tutorial, you will need:
* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one. * **Azure IoT Hub and IoT Edge device** - Follow the steps in the quick start for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one. ## Create topic
As a publisher of an event, you need to create an event grid topic. The topic re
Subscribers can register for events published to a topic. To receive any event, they'll need to create an Event grid subscription on a topic of interest. 1. Create subscription4.json with the below content. Refer to our [API documentation](api.md) for details about the payload.
event-grid Pub Sub Events Webhook Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/edge/pub-sub-events-webhook-cloud.md
In order to complete this tutorial, you will need:
* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one. * **Azure IoT Hub and IoT Edge device** - Follow the steps in the quick start for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one. ## Create an Azure function in the Azure portal
As a publisher of an event, you need to create an event grid topic. Topic refers
Subscribers can register for events published to a topic. To receive any event, the subscribers will need to create an Event grid subscription on a topic of interest. 1. Create subscription2.json with the following content. Refer to our [API documentation](api.md) for details about the payload.
event-grid Pub Sub Events Webhook Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/edge/pub-sub-events-webhook-local.md
A deployment manifest is a JSON document that describes which modules to deploy,
* **Image URI**: `mcr.microsoft.com/azure-event-grid/iotedge:latest` * **Container Create Options**:
- [!INCLUDE [event-grid-edge-module-version-update](../../../includes/event-grid-edge-module-version-update.md)]
+ [!INCLUDE [event-grid-edge-module-version-update](../includes/event-grid-edge-module-version-update.md)]
```json {
As a publisher of an event, you need to create an event grid topic. In Azure Eve
Subscribers can register for events published to a topic. To receive any event, you'll need to create an Event Grid subscription for a topic of interest. 1. Create subscription.json with the following content. For details about the payload, see our [API documentation](api.md)
event-grid Event Domains Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-domains-use-cases.md
Last updated 03/04/2021
This article describes a few use cases for using event domains in Azure Event Grid. ## Use case 1 ## Use case 2 There is a limit of 500 event subscriptions when using system topics. If you need more than 500 event subscriptions for a system topic, you could use domains.
event-grid Event Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-domains.md
An event domain is a management tool for large numbers of Event Grid topics rela
Event domains provide you the same architecture used by Azure services like Storage and IoT Hub to publish their events. They allow you to publish events to thousands of topics. Domains also give you authorization and authentication control over each topic so you can partition your tenants. ## Example use case ## Access management
event-grid Event Grid Event Hubs Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-grid-event-hubs-integration.md
# Tutorial: Stream big data into a data warehouse Azure [Event Grid](overview.md) is an intelligent event routing service that enables you to react to notifications or events from apps and services. For example, it can trigger an Azure Function to process Event Hubs data that's captured to a Blob storage or Data Lake Storage. This [sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) shows you how to use Event Grid and Azure Functions to migrate captured Event Hubs data from blob storage to Azure Synapse Analytics, specifically a dedicated SQL pool. ## Next steps
event-grid Event Schema Communication Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-communication-services.md
Title: Azure Communication Services as an Event Grid source description: This article describes how to use Azure Communication Services as an Event Grid event source. Previously updated : 02/11/2021 Last updated : 06/11/2021
Azure Communication Services integrates with [Azure Event Grid](https://azure.mi
Azure Event Grid is a fully managed event routing service, which uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions](../azure-functions/functions-overview.md) and [Azure Logic Apps](../azure-functions/functions-overview.md). It can deliver event alerts to non-Azure services using webhooks. For a complete list of the event handlers that Event Grid supports, see [An introduction to Azure Event Grid](overview.md). > [!NOTE] > To learn more about how data residency relates to event handling, visit the [Data Residency conceptual documentation](../communication-services/concepts/privacy.md)
event-grid Event Schema Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-service-bus.md
Last updated 02/12/2021
This article provides the properties and schema for Service Bus events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md). ## Tutorials and how-tos |Title |Description |
event-grid Handler Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/handler-event-hubs.md
See the following examples:
| [Quickstart: Route custom events to Azure Event Hubs with Azure CLI](custom-event-to-eventhub.md) | Sends a custom event to an event hub for processing by an application. | | [Resource Manager template: Create an Event Grid custom topic and send events to an event hub](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventgrid/event-grid-event-hubs-handler)| A Resource Manager template that creates a subscription for a custom topic. It sends events to an Azure Event Hubs. | ## REST examples (for PUT)
event-grid Handler Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/handler-service-bus.md
az eventgrid event-subscription create \
--endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/topics/topic1 ``` When sending an event to a Service Bus queue or topic as a brokered message, the `messageid` of the brokered message is an internal system ID.
event-grid Batch Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/batch-event-delivery.md
# Event Grid on Kubernetes - Batch event delivery Event Grid on Kubernetes with Azure Arc supports delivering more than one event in a single delivery request. This feature makes it possible to increase overall delivery throughput without incurring per-request HTTP overhead. Batch event delivery is turned off by default and can be turned on using the event subscription configuration. > [!WARNING] > The maximum allowed duration to process each delivery request does not change, even though the event handler code potentially has to do more work per batched request. Delivery timeout defaults to 60 seconds.
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/concepts.md
# Event Grid on Kubernetes - Concepts This article describes the main concepts in Event Grid on Kubernetes with Azure Arc (Preview). ## Events An event is a data record that announces a fact about the operation of a software system. Typically, an event announces a state change because of a signal raised by the system or a signal observed by the system. Events contain two types of information:
event-grid Create Topic Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/create-topic-subscription.md
# Route cloud events to Webhooks with Azure Event Grid on Kubernetes In this quickstart, you'll create a topic in Event Grid on Kubernetes, create a subscription for the topic, and then send a sample event to the topic to test the scenario. ## Prerequisites
event-grid Delivery Retry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/delivery-retry.md
Event Grid on Kubernetes with Azure Arc tries to deliver each message at least o
By default, Event Grid on Kubernetes delivers one event at a time to the subscriber. However, the payload of the delivery request is an array with a single event. It can deliver more than one event at a time if you enable the output batching feature. For details about this feature, see [Batch event delivery](batch-event-delivery.md). > [!NOTE] > During the preview, Event Grid on Kubernetes features are supported through API version [2020-10-15-Preview](/rest/api/eventgrid/version2020-10-15-preview/eventsubscriptions/createorupdate).
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/event-handlers.md
The way to configure Event Grid to send events to a destination is through the c
In general, Event Grid on Kubernetes can send events to any destination via **Webhooks**. Webhooks are HTTP(s) endpoints exposed by a service or workload to which Event Grid has access. The webhook can be a workload hosted in the same cluster, in the same network space, on the cloud, on-prem or anywhere that Event Grid can reach. Through Webhooks, Event Grid supports the following destinations **hosted on a Kubernetes cluster**:
event-grid Event Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/event-schemas.md
# Event schemas in Event Grid on Kubernetes Event Grid on Kubernetes accepts and delivers events in JSON format. It supports the [Cloud Events 1.0 schema specification](https://github.com/cloudevents/spec/blob/v1.0/spec.md) and that's the schema that should be used when publishing events to Event Grid.
event-grid Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/features.md
Event Grid on Kubernetes offers a rich set of features that help you integrate y
Although Event Grid on Kubernetes and Azure Event Grid share many features and the goal is to provide the same user experience, there are some differences given the unique requirements they seek to meet and the stage each is at in its software lifecycle. For example, the only type of topic available in Event Grid on Kubernetes is the Event Grid topic, sometimes also referred to as a custom topic. Other types of topics (see below) are either not applicable or not yet supported. The main differences between the two editions of Event Grid are presented in the table below. ## Event Grid on Kubernetes vs. Event Grid on Azure
event-grid Filter Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/filter-events.md
Event Grid on Kubernetes allows specifying filters on any property in the json p
- Key - The json path to the property on which to apply the filter. - Value - The reference value against which the filter is run (or) Values - The set of reference values against which the filter is run.
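As a hedged sketch of the Key/Values filter shape described above, the equivalent advanced filter in the Azure Event Grid CLI looks like the following; the subscription, topic, and endpoint values are placeholders, and Event Grid on Kubernetes accepts the same filter properties in the event subscription payload described in its API documentation:

```azurecli
# Hedged sketch: filter events whose data.color property is "blue" or "red" (all names are placeholders).
az eventgrid event-subscription create \
  --name color-filtered-subscription \
  --source-resource-id /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventGrid/topics/<topic-name> \
  --endpoint https://contoso.example.com/api/events \
  --advanced-filter data.color StringIn blue red
```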
event-grid Install K8s Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/install-k8s-extension.md
This article guides you through the steps to install Event Grid on an [Azure Arc
For brevity, this article refers to "Event Grid on Kubernetes extension" as "Event Grid on Kubernetes" or just "Event Grid". ## Supported Kubernetes distributions
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/overview.md
# Event Grid on Kubernetes with Azure Arc (Preview) - overview This article provides an overview of Event Grid on Kubernetes, its use cases, the features it offers, and how it differs from Azure Event Grid. ## What is Event Grid? Event Grid is an event broker used to integrate workloads that use event-driven architectures. An event-driven architecture uses events to communicate changes in system state and is a common integration approach in decoupled architectures such as those that use microservices. Event Grid offers a pub-sub (also described as push-push) communication model in which subscribers are sent (pushed) events and are not necessarily aware of the publisher sending them. This model contrasts with classic push-pull models, such as the ones used by Azure Service Bus or Azure Event Hubs, where clients pull messages from message brokers and, as a consequence, there is a stronger coupling between message brokers and consuming clients.
event-grid Receive Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/receive-events.md
Finally, test that your function can now handle your custom event type:
You can also test this functionality live by [sending a custom event with CURL from the Portal](./custom-event-quickstart-portal.md) or by [posting to a custom topic](./post-to-custom-topic.md) using any service or application that can POST to an endpoint such as [Postman](https://www.getpostman.com/). Create a custom topic and an event subscription with the endpoint set as the Function URL. ## Next steps
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dedicated-overview.md
The self-serve experience to [create an Event Hubs cluster](event-hubs-dedicated
## FAQs ## Next steps
event-hubs Event Hubs Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-diagnostic-logs.md
Diagnostic logs are disabled by default. To enable diagnostic logs, follow these
For more information about configuring diagnostics, see the [overview of Azure diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md). ## Diagnostic logs categories
event-hubs Event Hubs Dotnet Framework Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dotnet-framework-getstarted-send.md
In Visual Studio, create a new Visual C# Desktop App project using the **Console
## Receive events In this section, you write a .NET Framework console application that receives messages from an event hub using the [Event Processor Host](event-hubs-event-processor-host.md). The [Event Processor Host](event-hubs-event-processor-host.md) is a .NET class that simplifies receiving events from event hubs by managing persistent checkpoints and parallel receives from those event hubs. Using the Event Processor Host, you can split events across multiple receivers, even when hosted in different nodes. ### Create a console application
event-hubs Event Hubs Dotnet Standard Get Started Send Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dotnet-standard-get-started-send-legacy.md
This section shows how to write a .NET Core console application that receives me
> [!NOTE] > You can download this quickstart as a sample from the [GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/SampleEphReceiver), replace `EventHubConnectionString` and `EventHubName`, `StorageAccountName`, `StorageAccountKey`, and `StorageContainerName` strings with your event hub values, and run it. Alternatively, you can follow the steps in this tutorial to create your own. ### Create a console application
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
You don't have to create publisher names ahead of time, but they must match the
[Event Hubs Capture](event-hubs-capture-overview.md) enables you to automatically capture the streaming data in Event Hubs and save it to your choice of either a Blob storage account, or an Azure Data Lake Service account. You can enable Capture from the Azure portal, and specify a minimum size and time window to perform the capture. Using Event Hubs Capture, you specify your own Azure Blob Storage account and container, or Azure Data Lake Service account, one of which is used to store the captured data. Captured data is written in the Apache Avro format. ## Partitions ## SAS tokens
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-ip-filtering.md
This section shows you how to use the Azure portal to create IP firewall rules f
> [!NOTE] > To restrict access to specific virtual networks, see [Allow access from specific networks](event-hubs-service-endpoints.md). ## Use Resource Manager template
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-premium-overview.md
For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas
## FAQs ## Next steps
event-hubs Event Hubs Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-quickstart-cli.md
Title: Create an event hub using Azure CLI - Azure Event Hubs | Microsoft Docs description: This quickstart describes how to create an event hub using Azure CLI and then send and receive events using Java. Previously updated : 06/23/2020 Last updated : 06/18/2021
event-hubs Event Hubs Quotas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-quotas.md
Last updated 05/11/2021
The following tables provide quotas and limits specific to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/). For information about Event Hubs pricing, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/). ## Common limits for all tiers ## Basic vs. standard vs. premium vs. dedicated tiers ## Next steps
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-scalability.md
To learn about configuring PUs for a premium tier namespace, see [Configure proc
> To learn more about quotas and limits, see [Azure Event Hubs - quotas and limits](event-hubs-quotas.md). ## Partitions
event-hubs Event Hubs Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-service-endpoints.md
This section shows you how to use Azure portal to add a virtual network service
> [!NOTE] > To restrict access to specific IP addresses or ranges, see [Allow access from specific IP addresses or ranges](event-hubs-ip-filtering.md). ## Use Resource Manager template The following sample Resource Manager template adds a virtual network rule to an existing Event Hubs namespace. For the network rule, it specifies the ID of a subnet in a virtual network.
event-hubs Monitor Event Hubs Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/monitor-event-hubs-reference.md
Azure Event Hubs supports the following dimensions for metrics in Azure Monitor.
|Entity Name| Name of the event hub.| ## Resource logs
event-hubs Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/private-link-service.md
If you already have an Event Hubs namespace, you can create a private link conne
![Private endpoint created](./media/private-link-service/private-endpoint-created.png) To allow trusted services to access your namespace, switch to the **Firewalls and Virtual networks** tab on the **Networking** page, and select **Yes** for **Allow trusted Microsoft services to bypass this firewall?**.
event-hubs Store Captured Data Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/store-captured-data-data-warehouse.md
# Tutorial: Migrate captured Event Hubs data to Azure Synapse Analytics using Event Grid and Azure Functions Azure Event Hubs [Capture](./event-hubs-capture-overview.md) enables you to automatically capture the streaming data in Event Hubs in an Azure Blob storage or Azure Data Lake Storage. This tutorial shows you how to migrate captured Event Hubs data from Storage to Azure Synapse Analytics by using an Azure function that's triggered by [Event Grid](../event-grid/overview.md). ## Next steps You can use powerful data visualization tools with your data warehouse to achieve actionable insights.
event-hubs Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/troubleshooting-guide.md
Verify that the connection string you are using is correct. See [Get connection
For Kafka clients, verify that producer.config or consumer.config files are configured properly. For more information, see [Send and receive messages with Kafka in Event Hubs](event-hubs-quickstart-kafka-enabled-event-hubs.md#send-and-receive-messages-with-kafka-in-event-hubs). ### Verify that AzureEventGrid service tag is allowed in your network security groups If your application is running inside a subnet and there is an associated network security group, confirm whether the internet outbound is allowed or AzureEventGrid service tag is allowed. See [Virtual network service tags](../virtual-network/service-tags-overview.md) and search for `EventHub`.
iot-hub Iot Hub Raspberry Pi Kit C Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-raspberry-pi-kit-c-get-started.md
One way to monitor messages received by your IoT hub from your device is to use
For more ways to process data sent by your device, continue on to the next section.
+## Clean up resources
+
+You can use the resources created in this topic with other tutorials and quickstarts in this document set. If you plan to continue on to work with other quickstarts or with the tutorials, do not clean up the resources created in this topic. If you do not plan to continue, use the following steps to delete all resources created by this topic in the Azure portal.
+
+1. From the left-hand menu in the Azure portal, select **All resources** and then select the IoT Hub you created.
+1. At the top of the IoT Hub overview pane, click **Delete**.
+1. Enter your hub name and click **Delete** again to confirm permanently deleting the IoT Hub.
++ ## Next steps You've run a sample application to collect sensor data and send it to your IoT hub.
iot-hub Iot Hub Raspberry Pi Kit Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md
ms.devlang: nodejs Previously updated : 03/13/2020 Last updated : 06/18/2021
[!INCLUDE [iot-hub-get-started-device-selector](../../includes/iot-hub-get-started-device-selector.md)]
-In this tutorial, you begin by learning the basics of working with Raspberry Pi that's running Raspbian. You then learn how to seamlessly connect your devices to the cloud by using [Azure IoT Hub](about-iot-hub.md). For Windows 10 IoT Core samples, go to the [Windows Dev Center](https://www.windowsondevices.com/).
+In this tutorial, you begin by learning the basics of working with Raspberry Pi that's running Raspberry Pi OS. You then learn how to seamlessly connect your devices to the cloud by using [Azure IoT Hub](about-iot-hub.md). For Windows 10 IoT Core samples, go to the [Windows Dev Center](https://www.windowsondevices.com/).
Don't have a kit yet? Try [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md). Or buy a new kit [here](https://azure.microsoft.com/develop/iot/starter-kits).
The following items are optional:
## Set up Raspberry Pi
-### Install the Raspbian operating system for Pi
+### Install the Raspberry Pi OS
-Prepare the microSD card for installation of the Raspbian image.
+Prepare the microSD card for installation of the Raspberry Pi OS image.
-1. Download Raspbian.
+1. Download Raspberry Pi OS with desktop.
- a. [Raspbian Buster with desktop](https://www.raspberrypi.org/software/) (the .zip file).
+ a. [Raspberry Pi OS with desktop](https://www.raspberrypi.org/software/) (the .zip file).
- b. Extract the Raspbian image to a folder on your computer.
+ b. Extract the Raspberry Pi OS with desktop image to a folder on your computer.
-2. Install Raspbian to the microSD card.
+2. Install Raspberry Pi OS with desktop to the microSD card.
a. [Download and install the Etcher SD card burner utility](https://etcher.io/).
- b. Run Etcher and select the Raspbian image that you extracted in step 1.
+ b. Run Etcher and select the Raspberry Pi OS with desktop image that you extracted in step 1.
c. Select the microSD card drive. Etcher may have already selected the correct drive.
- d. Click Flash to install Raspbian to the microSD card.
+ d. Click Flash to install Raspberry Pi OS with desktop to the microSD card.
e. Remove the microSD card from your computer when installation is complete. It's safe to remove the microSD card directly because Etcher automatically ejects or unmounts the microSD card upon completion.
Prepare the microSD card for installation of the Raspbian image.
1. Connect Pi to the monitor, keyboard, and mouse.
-2. Start Pi and then sign into Raspbian by using `pi` as the user name and `raspberry` as the password.
+2. Start Pi and then sign into Raspberry Pi OS by using `pi` as the user name and `raspberry` as the password.
3. Click the Raspberry icon > **Preferences** > **Raspberry Pi Configuration**.
- ![The Raspbian Preferences menu](./media/iot-hub-raspberry-pi-kit-node-get-started/1-raspbian-preferences-menu.png)
+ ![The Raspberry Pi OS Preferences menu](./media/iot-hub-raspberry-pi-kit-node-get-started/1-raspbian-preferences-menu.png)
-4. On the **Interfaces** tab, set **I2C** and **SSH** to **Enable**, and then click **OK**. If you don't have physical sensors and want to use simulated sensor data, this step is optional.
+4. On the **Interfaces** tab, set **SSH** and **I2C** to **Enable**, and then click **OK**.
+
+ | Interface | Description |
+ | | -- |
+ | *SSH* | Secure Shell (SSH) is used to connect to the Raspberry Pi remotely from a command line. This is the preferred method for issuing commands to your Raspberry Pi in this document. |
+ | *I2C* | Inter-Integrated Circuit (I2C) is a communications protocol used to communicate with hardware such as sensors. This interface is required for working with physical sensors in this topic.|
+
+ If you don't have physical sensors and want to use simulated sensor data from your Raspberry Pi device, you can leave **I2C** disabled.
![Enable I2C and SSH on Raspberry Pi](./media/iot-hub-raspberry-pi-kit-node-get-started/2-enable-i2c-ssh-on-raspberry-pi.png)
Turn on Pi by using the micro USB cable and the power supply. Use the Ethernet c
If the version is lower than 10.x, or if there is no Node.js on your Pi, install the latest version. ```bash
- curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash
+ curl -sSL https://deb.nodesource.com/setup_16.x | sudo -E bash
sudo apt-get -y install nodejs ```
One way to monitor messages received by your IoT hub from your device is to use
For more ways to process data sent by your device, continue on to the next section.
+## Clean up resources
+
+You can use the resources created in this topic with other tutorials and quickstarts in this document set. If you plan to continue on to work with other quickstarts or with the tutorials, do not clean up the resources created in this topic. If you do not plan to continue, use the following steps to delete all resources created by this topic in the Azure portal.
+
+1. From the left-hand menu in the Azure portal, select **All resources** and then select the IoT Hub you created.
+1. At the top of the IoT Hub overview pane, click **Delete**.
+1. Enter your hub name and click **Delete** again to confirm permanently deleting the IoT Hub.
+ ## Next steps You've run a sample application to collect sensor data and send it to your IoT hub.
iot-hub Iot Hub Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-troubleshoot-connectivity.md
#Customer intent: As an operator for Azure IoT Hub, I need to know how to find out when devices are disconnecting unexpectedly and how to troubleshoot and resolve those issues right away.
-# Monitor, diagnose, and troubleshoot disconnects with Azure IoT Hub
+# Monitor, diagnose, and troubleshoot Azure IoT Hub disconnects
Connectivity issues for IoT devices can be difficult to troubleshoot because there are many possible points of failure. Application logic, physical networks, protocols, hardware, IoT Hub, and other cloud services can all cause problems. The ability to detect and pinpoint the source of an issue is critical. However, an IoT solution at scale could have thousands of devices, so it's not practical to check individual devices manually. IoT Hub integrates with two Azure services to help you:
-* **Azure Monitor** To help you detect, diagnose, and troubleshoot these issues at scale, use the monitoring capabilities IoT Hub provides through Azure Monitor. This includes setting up alerts to trigger notifications and actions when disconnects occur and configuring logs that you can use to discover the conditions that caused disconnects.
+* **Azure Monitor** Azure Monitor enables you to collect, analyze, and act on telemetry from IoT Hub. To help you detect, diagnose, and troubleshoot these issues at scale, use the monitoring capabilities IoT Hub provides through Azure Monitor. This includes setting up alerts to trigger notifications and actions when disconnects occur and configuring logs that you can use to discover the conditions that caused disconnects.
-* **Azure Event Grid** For critical infrastructure and per-device disconnects, use Azure Event Grid to subscribe to device connect and disconnect events emitted by IoT Hub.
+* **Azure Event Grid** For critical infrastructure and per-device disconnects, use Azure Event Grid to subscribe to device connect and disconnect events emitted by IoT Hub. Azure Event Grid enables you to use any of the following event handlers:
-In both cases, these capabilities are limited to what IoT Hub can observe, so we also recommend that you follow monitoring best practices for your devices and other Azure services.
+ - Azure Functions
+ - Logic Apps
+ - Azure Automation
+ - WebHooks
+ - Queue Storage
+ - Hybrid Connections
+ - Event Hubs
## Event Grid vs. Azure Monitor
Consider the following when deciding whether to use Event Grid or Azure Monitor
* Lightweight setup: Azure Monitor metric alerts provide a lightweight setup experience that doesn't require integrating with other services to deliver notifications through Email, SMS, Voice, and other notifications. With Event Grid, you need to integrate with other Azure services to deliver notifications. Both services can integrate with other services to trigger more complex actions.
-Due to its low-latency, per-device capabilities, for production environments, we highly recommend using Event Grid to monitor connections. Of course, the choice is not exclusive, you can use both Azure Monitor metric alerts and Event Grid. Regardless of your choice for tracking disconnects, you will likely use Azure Monitor resource logs to help troubleshoot the reasons for unexpected device disconnects. The following sections discuss each of these options in more detail.
+## Event Grid: Monitor connect and disconnect events
-## Event Grid: Monitor device connect and disconnect events
-
-To monitor device connect and disconnect events in production, we recommend subscribing to the [**DeviceConnected** and **DeviceDisconnected** events](iot-hub-event-grid.md#event-types) in Event Grid to trigger alerts and monitor device connection state. Event Grid provides much lower event latency than Azure Monitor, and you can monitor on a per-device basis, rather than for the total number of connected devices. These factors make Event Grid the preferred method for monitoring critical devices and infrastructure.
+To monitor device connect and disconnect events in production, we recommend subscribing to the [**DeviceConnected** and **DeviceDisconnected** events](iot-hub-event-grid.md#event-types) in Event Grid to trigger alerts and monitor device connection state. Event Grid provides much lower event latency than Azure Monitor, and you can monitor on a per-device basis. These factors make Event Grid the preferred method for monitoring critical devices and infrastructure.
When you use Event Grid to monitor or trigger alerts on device disconnects, make sure you build in a way of filtering out the periodic disconnects due to SAS token renewal on devices that use the Azure IoT SDKs. To learn more, see [MQTT device disconnect behavior with Azure IoT SDKs](#mqtt-device-disconnect-behavior-with-azure-iot-sdks).
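As a hedged sketch (the hub name, resource IDs, and webhook endpoint are placeholders), subscribing a handler to only the connect and disconnect events might look like:

```azurecli
# Hedged sketch: route per-device connect/disconnect events from IoT Hub to a webhook handler (placeholder values).
az eventgrid event-subscription create \
  --name device-connection-events \
  --source-resource-id /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Devices/IotHubs/<iot-hub-name> \
  --endpoint https://contoso.example.com/api/device-connection-handler \
  --included-event-types Microsoft.Devices.DeviceConnected Microsoft.Devices.DeviceDisconnected
```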
We recommend creating a diagnostic setting as early as possible after you create
To learn more about routing logs to a destination, see [Collection and routing](monitor-iot-hub.md#collection-and-routing). For detailed instructions to create a diagnostic setting, see the [Use metrics and logs tutorial](tutorial-use-metrics-and-diags.md).
-## Azure Monitor: Set up metric alerts for device disconnect at scale
+## Azure Monitor: Set up metric alerts for device disconnects
You can set up alerts based on the platform metrics emitted by IoT Hub. With metric alerts, you can notify individuals that a condition of interest has occurred and also trigger actions that can respond to that condition automatically.
The [*Connected devices (preview)*](monitor-iot-hub-reference.md#device-metrics)
:::image type="content" source="media/iot-hub-troubleshoot-connectivity/configure-alert-logic.png" alt-text="Alert logic settings for connected devices metric.":::
-You can use metric alert rules to monitor for device disconnect anomalies at-scale. That is, when a significant number of devices unexpectedly disconnect. When such an occurrence is detected, you can look at logs to help troubleshoot the issue. To monitor per-device disconnects and disconnects for critical devices; however, you must use Event Grid. Event Grid also provides a more real-time experience than Azure metrics.
+You can use metric alert rules to monitor for device disconnect anomalies at-scale. That is, use alerts to determine when a significant number of devices unexpectedly disconnect. When this is detected, you can look at logs to help troubleshoot the issue. To monitor per-device disconnects and disconnects for critical devices in near real time, however, you must use Event Grid.
To learn more about alerts with IoT Hub, see [Alerts in Monitor IoT Hub](monitor-iot-hub.md#alerts). For a walk-through of creating alerts in IoT Hub, see the [Use metrics and logs tutorial](tutorial-use-metrics-and-diags.md). For a more detailed overview of alerts, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md) in the Azure Monitor documentation. ## Azure Monitor: Use logs to resolve connectivity errors
-When you detect device disconnects, whether it's with Azure Monitor metric alerts or with Event Grid, you can use logs to help troubleshoot the reason. This section describes how to look for common issues in Azure Monitor Logs. The steps below assume you've already created a [diagnostic setting](#azure-monitor-route-connection-events-to-logs) to send IoT Hub Connections logs to a Log Analytics workspace.
+When you detect device disconnects by using Azure Monitor metric alerts or Event Grid, you can use logs to help troubleshoot the reason. This section describes how to look for common issues in Azure Monitor Logs. The steps below assume you've already created a [diagnostic setting](#azure-monitor-route-connection-events-to-logs) to send IoT Hub Connections logs to a Log Analytics workspace.
After you've created a diagnostic setting to route IoT Hub resource logs to Azure Monitor Logs, follow these steps to view the logs in Azure portal.
After you've created a diagnostic setting to route IoT Hub resource logs to Azur
| where ( ResourceType == "IOTHUBS" and Category == "Connections" and Level == "Error") ```
-1. If there are results, look for `OperationName`, `ResultType` (error code), and `ResultDescription` (error message) to get more detail on the error.
+1. If there are results, look for `OperationName`, `ResultType` (error code), and `ResultDescription` (error message) to get more detail.
![Example of error log](./media/iot-hub-troubleshoot-connectivity/diag-logs.png)
-Once you've identified the error, follow the problem resolution guides for help with the most common errors:
+Use the following problem resolution guides for help with the most common errors:
* [400027 ConnectionForcefullyClosedOnNewConnection](iot-hub-troubleshoot-error-400027-connectionforcefullyclosedonnewconnection.md)
By default, the token lifespan is 60 minutes for all SDKs; however, it can be ch
| SDK | Token lifespan | Token renewal | Renewal behavior | |--|-|||
-| .NET | 60 minutes, configurable | 85% of lifespan, configurable | SDK connects and disconnects at token lifespan plus a 10-minute grace period. Informational events and errors generated in logs. |
-| Java | 60 minutes, configurable | 85% of lifespan, not configurable | SDK connects and disconnects at token lifespan plus a 10-minute grace period. Informational events and errors generated in logs. |
-| Node.js | 60 minutes, configurable | configurable | SDK connects and disconnects at token renewal. Only informational events are generated in logs. |
-| Python | 60 minutes, not configurable | -- | SDK connects and disconnects at token lifespan. |
+| .NET | 60 minutes, configurable | 85% of lifespan, configurable | SDK disconnects and reconnects at token lifespan plus a 10-minute grace period. Informational events and errors generated in logs. |
+| Java | 60 minutes, configurable | 85% of lifespan, not configurable | SDK disconnects and reconnects at token lifespan plus a 10-minute grace period. Informational events and errors generated in logs. |
+| Node.js | 60 minutes, configurable | configurable | SDK disconnects and reconnects at token renewal. Only informational events are generated in logs. |
+| Python | 60 minutes, configurable | 120 seconds prior to expiration | SDK disconnects and reconnects at token lifespan. |
The following screenshots show the token renewal behavior in Azure Monitor Logs for different SDKs. The token lifespan and renewal threshold have been changed from their defaults as noted.
The following screenshots show the token renewal behavior in Azure Monitor Logs
:::image type="content" source="media/iot-hub-troubleshoot-connectivity/node-mqtt.png" alt-text="Error behavior for token renewal over MQTT in Azure Monitor Logs with Node SDK.":::
-The following query was used to collect the results. The query extracts the SDK name and version from the property bag; to learn more, see [SDK version in IoT Hub logs](monitor-iot-hub.md#sdk-version-in-iot-hub-logs).
+The following query was used to collect the results. The query extracts the SDK name and version from the property bag. To learn more, see [SDK version in IoT Hub logs](monitor-iot-hub.md#sdk-version-in-iot-hub-logs).
```kusto AzureDiagnostics
AzureDiagnostics
As an IoT solutions developer or operator, you need to be aware of this behavior in order to interpret connect/disconnect events and related errors in logs. If you want to change the token lifespan or renewal behavior for devices, check to see whether the device implements a device twin setting or a device method that makes this possible.
-If you're monitoring device connections with Event Hub, make sure you build in a way of filtering out the periodic disconnects due to SAS token renewal; for example, by not triggering actions based on disconnects as long as the disconnect event is followed by a connect event within a certain time span.
+If you're monitoring device connections with Event Hub, make sure you build in a way of filtering out the periodic disconnects due to SAS token renewal. For example, do not trigger actions based on disconnects as long as the disconnect event is followed by a connect event within a certain time span.
> [!NOTE] > IoT Hub only supports one active MQTT connection per device. Any new MQTT connection on behalf of the same device ID causes IoT Hub to drop the existing connection.
To help improve the documentation for everyone, leave a comment in the feedback
* To learn more about resolving transient issues, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
-* To learn more about Azure IoT SDK and managing retries, see [How to manage connectivity and reliable messaging using Azure IoT Hub device SDKs](iot-hub-reliability-features-in-sdks.md#connection-and-retry).
+* To learn more about Azure IoT SDK and managing retries, see [How to manage connectivity and reliable messaging using Azure IoT Hub device SDKs](iot-hub-reliability-features-in-sdks.md#connection-and-retry).
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/best-practices.md
tags: azure-key-vault
Previously updated : 06/01/2021 Last updated : 06/21/2021 # Customer intent: As a developer using Managed HSM I want to know the best practices so I can implement them.
Managed HSM is a cloud service that safeguards encryption keys. As these keys ar
- [Soft Delete](soft-delete-overview.md) is on by default. You can choose a retention period between 7 and 90 days. - Turn on purge protection to prevent immediate permanent deletion of HSM or keys. When purge protection is on HSM or keys will remain in deleted state until the retention days have passed.
+## Generate and import keys from an on-premises HSM
+
+> [!NOTE]
+> Keys created or imported into Managed HSM are not exportable.
+
+- To ensure long-term portability and key durability, generate keys in your on-premises HSM and [import them to Managed HSM](hsm-protected-keys-byok.md). You will have a copy of your key securely stored in your on-premises HSM for future use.
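As a hedged sketch of the final import step only (the HSM name, key name, and file path are placeholders; the full bring-your-own-key flow is covered in the linked article), importing a key transferred from an on-premises HSM can look like:

```azurecli
# Hedged sketch: import a key blob produced by the BYOK transfer process into the managed HSM (placeholder values).
az keyvault key import --hsm-name ContosoMHSM --name mykey-from-onprem --byok-file mykey.byok
```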
+ ## Next steps - See [Full backup/restore](backup-restore.md) for information on full HSM backup/restore.
key-vault Key Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/key-management.md
If you don't have an Azure subscription, create a [free account](https://azure.m
To complete the steps in this article, you must have the following items: * A subscription to Microsoft Azure. If you don't have one, you can sign up for a [free trial](https://azure.microsoft.com/pricing/free-trial).
-* The Azure CLI version 2.12.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
* A managed HSM in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
For more information on login options via the CLI, see [sign in with Azure CLI](
## Create an HSM key
+> [!NOTE]
+> Keys generated or imported into Managed HSM cannot be exported. Refer to the recommended best practices for key portability and durability.
+ Use `az keyvault key create` command to create a key. ### Create an RSA key
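As a hedged sketch (the HSM and key names are placeholders), creating a 3072-bit RSA key in a managed HSM can look like:

```azurecli
# Hedged sketch: create an RSA-HSM key with sign/verify operations (placeholder names).
az keyvault key create --hsm-name ContosoMHSM --name myrsakey --kty RSA-HSM --size 3072 --ops sign verify
```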
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/logging.md
Use this tutorial to help you get started with Managed HSM logging. You'll creat
To complete the steps in this article, you must have the following items: * A subscription to Microsoft Azure. If you don't have one, you can sign up for a [free trial](https://azure.microsoft.com/pricing/free-trial).
-* The Azure CLI version 2.12.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
* A managed HSM in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
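As a hedged sketch of the setup this tutorial builds toward (the resource names are placeholders, and the log category name is an assumption based on Managed HSM audit logging), enabling logging to a storage account can look like:

```azurecli
# Hedged sketch: send Managed HSM audit logs to a storage account (placeholder names; category name assumed).
hsmId=$(az keyvault show --hsm-name ContosoMHSM --query id -o tsv)
storageId=$(az storage account show --name contosologs --resource-group MyResourceGroup --query id -o tsv)

az monitor diagnostic-settings create --name "mhsm-audit-logs" --resource $hsmId \
  --storage-account $storageId --logs '[{"category":"AuditEvent","enabled":true}]'
```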
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/overview.md
Previously updated : 04/01/2021 Last updated : 06/21/2021 #Customer intent: As an IT Pro, Decision maker or developer I am trying to learn what Managed HSM is and if it offers anything that could be used in my organization.
Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant,
- **Centralized key management**: Manage critical, high-value keys across your organization in one place. With granular per key permissions, control access to each key on the 'least privileged access' principle. - **Isolated access control**: Managed HSM "local RBAC" access control model allows designated HSM cluster administrators to have complete control over the HSMs that even management group, subscription, or resource group administrators cannot override.
+- **Private endpoints**: Use private endpoints to securely and privately connect to Managed HSM from your application running in a virtual network.
- **FIPS 140-2 Level 3 validated HSMs**: Protect your data and meet compliance requirements with FIPS (Federal Information Processing Standard) 140-2 Level 3 validated HSMs. Managed HSMs use Marvell LiquidSecurity HSM adapters. - **Monitor and audit**: Fully integrated with Azure Monitor. Get complete logs of all activity via Azure Monitor. Use Azure Log Analytics for analytics and alerts. - **Data residency**: Managed HSM doesn't store or process customer data outside the region in which the customer deploys the HSM instance.
Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant,
## Next steps - See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to create and activate a managed HSM - See [Best Practices using Azure Key Vault Managed HSM](best-practices.md)
+- [Managed HSM Status](https://status.azure.com)
+- [Managed HSM Service Level Agreement](https://azure.microsoft.com/support/legal/sla/key-vault-managed-hsm/v1_0/)
+- [Managed HSM region availability](https://azure.microsoft.com/global-infrastructure/services/?products=key-vault)
key-vault Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/private-link.md
+
+ Title: Configure Azure Key Vault Managed HSM with private endpoints
+description: Learn how to integrate Azure Key Vault Managed HSM with Azure Private Link Service
++ Last updated : 06/21/2021+++++++
+# Integrate Managed HSM with Azure Private Link (preview)
+
+>[!NOTE]
+> Azure private endpoints feature for Managed HSM is currently available as **a preview** in the following regions: **UK South, West Europe, Canada Central, Australia Central**, and **East Asia**. It will be available in all the [other regions](https://azure.microsoft.com/global-infrastructure/services/?products=key-vault&regions=all) in the next few days.
+
+Azure Private Link Service enables you to access Azure services (for example, Managed HSM, Azure Storage, and Azure Cosmos DB) and Azure-hosted customer/partner services over a private endpoint in your virtual network.
+
+An Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
+
+For more information, see [What is Azure Private Link?](../../private-link/private-link-overview.md)
+
+## Prerequisites
+
+To integrate a managed HSM with Azure Private Link, you will need the following:
+
+- A Managed HSM. See [Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) for more details.
+- An Azure virtual network.
+- A subnet in the virtual network.
+- Owner or contributor permissions for both the managed HSM and the virtual network.
+- The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+Your private endpoint and virtual network must be in the same region. When you select a region for the private endpoint using the portal, it will automatically filter only virtual networks that are in that region. Your HSM can be in a different region.
+
+Your private endpoint uses a private IP address in your virtual network.
++
+## Establish a private link connection to Managed HSM using CLI (Initial Setup)
+
+```azurecli
+az login # Login to Azure CLI
+az account set --subscription {SUBSCRIPTION ID} # Select your Azure Subscription
+az group create -n {RESOURCE GROUP} -l {REGION} # Create a new Resource Group
+az provider register -n Microsoft.KeyVault # Register KeyVault as a provider
+az keyvault update-hsm --hsm-name {HSM NAME} -g {RG} --default-action deny # Turn on firewall
+
+az network vnet create -g {RG} -n {vNet NAME} --location {REGION} # Create a Virtual Network
+
+ # Create a Subnet
+az network vnet subnet create -g {RG} --vnet-name {vNet NAME} --name {subnet NAME} --address-prefixes {addressPrefix}
+
+ # Disable Virtual Network Policies
+az network vnet subnet update --name {subnet NAME} --resource-group {RG} --vnet-name {vNet NAME} --disable-private-endpoint-network-policies true
+
+ # Create a Private DNS Zone
+az network private-dns zone create --resource-group {RG} --name privatelink.managedhsm.azure.net
+
+ # Link the Private DNS Zone to the Virtual Network
+az network private-dns link vnet create --resource-group {RG} --virtual-network {vNet NAME} --zone-name privatelink.managedhsm.azure.net --name {dnsZoneLinkName} --registration-enabled true
+
+```
+
+### Allow trusted services to access Managed HSM
+
+When the firewall is turned on, all access to the HSM from any location that is not using a private endpoint connection is denied, including access from the public internet and from Azure services. Use the `--bypass AzureServices` option if you want to allow Microsoft services to access the keys in your Managed HSM. Individual entities (such as an Azure Storage account or an Azure SQL server) still need specific role assignments in place to be able to access a key.
+
+> [!NOTE]
+> Only specific trusted services usage scenarios are supported. Refer to the [list of trusted services usage scenarios](../general/overview-vnet-service-endpoints.md#trusted-services) for more details.
+
+```azurecli
+az keyvault update-hsm --hsm-name {HSM NAME} -g {RG} --default-action deny --bypass AzureServices
+```
+
+### Create a Private Endpoint (Automatically Approve)
+```azurecli
+az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME} --subnet {subnet NAME} --name {Private Endpoint Name} --private-connection-resource-id "/subscriptions/{AZURE SUBSCRIPTION ID}/resourceGroups/{RG}/providers/Microsoft.KeyVault/managedHSMs/{HSM NAME}" --group-id managedhsm --connection-name {Private Link Connection Name} --location {AZURE REGION}
+```
+
+> [!NOTE]
+> If you delete this HSM, the private endpoint will stop working. If you recover (undelete) this HSM later, you must create a new private endpoint.
+
+### Create a Private Endpoint (Manually Request Approval)
+```azurecli
+az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME} --subnet {subnet NAME} --name {Private Endpoint Name} --private-connection-resource-id "/subscriptions/{AZURE SUBSCRIPTION ID}/resourceGroups/{RG}/providers/Microsoft.KeyVault/managedHSMs/{HSM NAME}" --group-id managedhsm --connection-name {Private Link Connection Name} --location {AZURE REGION} --manual-request
+```
+
+### Manage Private Link Connections
+
+```azurecli
+# Show Connection Status
+az network private-endpoint show --resource-group {RG} --name {Private Endpoint Name}
+
+# Approve a Private Link Connection Request
+az keyvault private-endpoint-connection approve --approval-description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --hsm-name {HSM NAME} --name {PRIVATE LINK CONNECTION NAME}
+
+# Deny a Private Link Connection Request
+az keyvault private-endpoint-connection reject --rejection-description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --hsm-name {HSM NAME} --name {PRIVATE LINK CONNECTION NAME}
+
+# Delete a Private Link Connection Request
+az keyvault private-endpoint-connection delete --resource-group {RG} --hsm-name {HSM NAME} --name {PRIVATE LINK CONNECTION NAME}
+```
+
+### Add Private DNS Records
+```azurecli
+# Determine the Private Endpoint IP address
+az network private-endpoint show -g {RG} -n {PE NAME} # look for the networkInterfaces property, then its id; use that value as {PE NIC} below.
+az network nic show --ids {PE NIC} # look for ipConfigurations, then privateIpAddress; use that value as {NIC IP} below.
+
+# https://docs.microsoft.com/en-us/azure/dns/private-dns-getstarted-cli#create-an-additional-dns-record
+az network private-dns zone list -g {RG}
+az network private-dns record-set a add-record -g {RG} -z "privatelink.managedhsm.azure.net" -n {HSM NAME} -a {NIC IP}
+az network private-dns record-set list -g {RG} -z "privatelink.managedhsm.azure.net"
+
+# From a home/public network, you will get a public IP. From inside a VNet linked to the private zone, nslookup resolves to the private IP.
+nslookup {HSM NAME}.managedhsm.azure.net
+nslookup {HSM NAME}.privatelink.managedhsm.azure.net
+```
+++
+## Validate that the private link connection works
+
+You should validate that the resources within the same subnet of the private endpoint resource are connecting to your HSM over a private IP address, and that they have the correct private DNS zone integration.
+
+First, create a virtual machine by following the steps in [Create a Windows virtual machine in the Azure portal](../../virtual-machines/windows/quick-create-portal.md)
+
+In the "Networking" tab:
+
+1. Specify Virtual network and Subnet. You can create a new virtual network or select an existing one. If selecting an existing one, make sure the region matches.
+1. Specify a Public IP resource.
+1. In the "NIC network security group", select "None".
+1. In the "Load balancing", select "No".
+
+Open the command line and run the following command:
+
+```console
+nslookup <your-HSM-name>.managedhsm.azure.net
+```
+
+If you run the nslookup command to resolve the IP address of a managed HSM over a public endpoint, you will see a result that looks like this:
+
+```console
+c:\ >nslookup <your-hsm-name>.managedhsm.azure.net
+
+Non-authoritative answer:
+Name:
+Address: (public IP address)
+Aliases: <your-hsm-name>.managedhsm.azure.net
+```
+
+If you run the nslookup command to resolve the IP address of a managed HSM over a private endpoint, you will see a result that looks like this:
+
+```console
+c:\ >nslookup <your-hsm-name>.managedhsm.azure.net
+
+Non-authoritative answer:
+Name:
+Address: 10.1.0.5 (private IP address)
+Aliases: <your-hsm-name>.managedhsm.azure.net
+ <your-hsm-name>.privatelink.managedhsm.azure.net
+```
+
+## Troubleshooting Guide
+
+* Check to make sure the private endpoint is in the approved state.
+ 1. Use the `az keyvault private-endpoint-connection show` subcommand to see the status of a private endpoint connection (see the example after this list).
+ 2. Make sure connection state is Approved and provisioning state is Succeeded.
+ 3. Make sure the virtual network matches the one you are using.
+
+* Check to make sure you have a Private DNS Zone resource.
+ 1. You must have a Private DNS Zone resource with the exact name: privatelink.managedhsm.azure.net.
+ 2. To learn how to set this up, see [Private DNS Zones](../../dns/private-dns-privatednszone.md).
+
+* Check to make sure the Private DNS Zone is linked to the Virtual Network. This may be the issue if you are still getting the public IP address returned.
+ 1. If the Private DNS Zone is not linked to the virtual network, the DNS query originating from the virtual network will return the public IP address of the HSM.
+ 2. Navigate to the Private DNS Zone resource in the Azure portal and select the virtual network links option.
+ 3. The virtual network that will perform calls to the HSM must be listed.
+ 4. If it's not there, add it.
+ 5. For detailed steps, see [Link Virtual Network to Private DNS Zone](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network).
+
+* Check to make sure the Private DNS Zone is not missing an A record for the HSM.
+ 1. Navigate to the Private DNS Zone page.
+ 2. Click Overview and check if there is an A record with the simple name of your HSM. Do not specify any suffix.
+ 3. Make sure you check the spelling, and either create or fix the A record. You can use a TTL of 3600 (1 hour).
+ 4. Make sure you specify the correct private IP address.
+
+* Check to make sure the A record has the correct IP Address.
+ 1. You can confirm the IP address by opening the Private Endpoint resource in Azure portal.
+ 2. Navigate to the Microsoft.Network/privateEndpoints resource in the Azure portal.
+ 3. On the Overview page, look for Network interface and select that link.
+ 4. The link shows the Overview of the NIC resource, which contains the property Private IP address.
+ 5. Verify that this is the correct IP address that is specified in the A record.
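+
+The exact parameters may vary by CLI version; as a sketch, assuming the `show` subcommand accepts the same `--hsm-name` and `--name` parameters as the approve and reject commands above:
+
+```azurecli
+# Show the state of a private endpoint connection; check the provisioning and connection states in the output
+az keyvault private-endpoint-connection show --resource-group {RG} --hsm-name {HSM NAME} --name {PRIVATE LINK CONNECTION NAME}
+```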
+
+## Limitations and Design Considerations
+
+> [!NOTE]
+> The number of managed HSMs with private endpoints enabled per subscription is an adjustable limit. The limit shown below is the default limit. If you would like to request a limit increase for your service, please send an email to akv-privatelink@microsoft.com. We will approve these requests on a case-by-case basis.
+
+**Pricing**: For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+
+**Maximum Number of Private Endpoints per Managed HSM**: 64.
+
+**Default Number of Managed HSM with Private Endpoints per Subscription**: 400.
+
+For more, see [Azure Private Link service: Limitations](../../private-link/private-link-service-overview.md#limitations)
+
+## Next Steps
+
+- Learn more about [Azure Private Link](../../private-link/private-link-service-overview.md)
+- Learn more about [Managed HSM](overview.md)
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/quick-create-cli.md
tags: azure-resource-manager
Previously updated : 06/01/2021 Last updated : 06/21/2021 #Customer intent:As a security admin who is new to Azure, I want to provision and activate a managed HSM
Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs. For more information on Managed HSM, see the [Overview](overview.md).
-In this quickstart, you create and activate a managed HSM with Azure CLI. Once that you have completed that, you will store a secret.
+In this quickstart, you create and activate a managed HSM with Azure CLI.
## Prerequisites To complete the steps in this article, you must have the following items: * A subscription to Microsoft Azure. If you don't have one, you can sign up for a [free trial](https://azure.microsoft.com/pricing/free-trial).
-* The Azure CLI version 2.12.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
* A managed HSM in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
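As a hedged sketch (resource names, region, and the administrator object ID are placeholders), provisioning and activating a managed HSM might look like the following:

```azurecli
# Provision the managed HSM; <admin-object-id> is the Azure AD object ID of the initial administrator
az keyvault create --hsm-name ContosoMHSM --resource-group ContosoResourceGroup --location eastus2 --administrators <admin-object-id> --retention-days 7

# Activate the HSM by downloading the security domain (requires at least three RSA public keys and a quorum)
az keyvault security-domain download --hsm-name ContosoMHSM --sd-wrapping-keys ./cert1.cer ./cert2.cer ./cert3.cer --sd-quorum 2 --security-domain-file ContosoMHSM-SD.json
```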
In this quickstart you created a Key Vault and stored a secret in it. To learn m
- Read an [Overview of Managed HSM](overview.md) - Learn about [Managing keys in a managed HSM](key-management.md)
+- Learn about [Role management for a managed HSM](role-management.md)
- Review [Managed HSM best practices](best-practices.md)
key-vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/recovery.md
For more information about Managed HSM, see [Managed HSM overview](overview.md)
* An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet) * [PowerShell module](/powershell/azure/install-az-ps).
-* [Azure CLI](/cli/azure/install-azure-cli)
+* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
* A Managed HSM - you can create one using [Azure CLI](./quick-create-cli.md), or [Azure PowerShell](./quick-create-powershell.md) * The user will need the following permissions to perform operations on soft-deleted HSMs or keys:
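As a hedged sketch (placeholder names, and assuming the required permissions listed above are in place), listing and recovering a soft-deleted key might look like the following:

```azurecli
# List keys that are currently soft-deleted in the managed HSM
az keyvault key list-deleted --hsm-name ContosoMHSM

# Recover a specific soft-deleted key
az keyvault key recover --hsm-name ContosoMHSM --name myrsakey
```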
key-vault Role Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/role-management.md
For a list of all Managed HSM built-in roles and the operations they permit, see
To use the Azure CLI commands in this article, you must have the following items: * A subscription to Microsoft Azure. If you don't have one, you can sign up for a [free trial](https://azure.microsoft.com/pricing/free-trial).
-* The Azure CLI version 2.21.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
* A managed HSM in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
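As a hedged sketch (the HSM name, assignee, and scope are placeholders), listing the built-in roles and creating a role assignment might look like the following:

```azurecli
# List the Managed HSM built-in (and any custom) role definitions
az keyvault role definition list --hsm-name ContosoMHSM

# Grant a user the Managed HSM Crypto User role for all keys in the HSM
az keyvault role assignment create --hsm-name ContosoMHSM --role "Managed HSM Crypto User" --assignee user@contoso.com --scope /keys
```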
key-vault Secure Your Managed Hsm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/secure-your-managed-hsm.md
This tutorial will walk you through a simple example that shows how to achieve s
To complete the steps in this article, you must have the following items: * A subscription to Microsoft Azure. If you don't have one, you can sign up for a [free trial](https://azure.microsoft.com/pricing/free-trial).
-* The Azure CLI version 2.12.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
* A managed HSM in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
key-vault Third Party Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/third-party-solutions.md
+
+ Title: Azure Key Vault Managed HSM - Third-party solutions | Microsoft Docs
+description: Learn about third-party solutions integrated with Managed HSM.
++
+editor: ''
++++ Last updated : 06/21/2021++++
+# Third-party solutions
+
+Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs. [Learn more](overview.md).
+
+Several vendors have worked closely with Microsoft to integrate their solutions with Managed HSM. The table below lists these solutions with a brief description (provided by the vendor). Links to their Azure Marketplace offerings and documentation are also provided.
++
+## Third-party solutions integrated with Managed HSM
+
+| Vendor name | Solution description |
+|-|-|
+|[Cloudflare](https://cloudflare.com)|Cloudflare's Keyless SSL enables your websites to use Cloudflare's SSL service while keeping custody of your private keys in Managed HSM. This service, coupled with Managed HSM, provides a high level of protection by safeguarding your private keys, performing signing and encryption operations internally, providing access controls, and storing keys in a tamper-resistant FIPS 140-2 Level 3 HSM. <br>[Documentation](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/azure-managed-hsm)
+|[NewNet Communication Technologies](https://newnet.com/)|NewNet's Secure Transaction Cloud (STC) is an industry-first cloud-based secure payment routing, switching, and transport solution augmented with a cloud-based virtualized HSM, handling mobile, web, and in-store payments. STC enables cloud transformation for payment entities and rapid deployment for greenfield payment providers.<br/>[Azure Marketplace offering](https://azuremarketplace.microsoft.com/marketplace/apps/newnetcommunicationtechnologies1589991852134.secure_transaction_cloud?tab=overview)<br/>[Documentation](https://newnet.com/business-units/secure-transactions/products/secure-transaction-cloud-stc/)|
+|[PrimeKey](https://www.primekey.com)|EJBCA Enterprise, the world's most widely used PKI (public key infrastructure), provides the basic security services for trusted identities and secure communication for any use case. A single instance of EJBCA Enterprise supports multiple CAs and levels to enable you to build complete infrastructures for multiple use cases.<br>[Azure Marketplace offering](https://azuremarketplace.microsoft.com/marketplace/apps/primekey.ejbca_enterprise_cloud_2)<br/>[Documentation]()|
+++
+## Next steps
+* [Managed HSM overview](overview.md)
+* [Managed HSM best practices](best-practices.md)
+
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-securing-a-logic-app.md
For more information about security in Azure, see these topics:
Inbound calls that a logic app receives through a request-based trigger, such as the [Request](../connectors/connectors-native-reqres.md) trigger or [HTTP Webhook](../connectors/connectors-native-webhook.md) trigger, support encryption and are secured with [Transport Layer Security (TLS) 1.2 at minimum](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL). Logic Apps enforces this version when receiving an inbound call to the Request trigger or a callback to the HTTP Webhook trigger or action. If you get TLS handshake errors, make sure that you use TLS 1.2. For more information, see [Solving the TLS 1.0 problem](/security/solving-tls1-problem).
-Inbound calls support these cipher suites:
+For inbound calls, use the following cipher suites:
* TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 * TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
Inbound calls support these cipher suites:
* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 * TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+> [!NOTE]
+> For backward compatibility, Azure Logic Apps currently supports some older cipher suites. However, *don't use* older cipher suites when you develop new apps because such suites *might not* be supported in the future.
+>
+> For example, you might find the following cipher suites if you inspect the TLS handshake messages while using the Azure Logic Apps service or by using a security tool on your logic app's URL. Again, *don't use* these older suites:
+>
+>
+> * TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
+> * TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
+> * TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
+> * TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
+> * TLS_RSA_WITH_AES_256_GCM_SHA384
+> * TLS_RSA_WITH_AES_128_GCM_SHA256
+> * TLS_RSA_WITH_AES_256_CBC_SHA256
+> * TLS_RSA_WITH_AES_128_CBC_SHA256
+> * TLS_RSA_WITH_AES_256_CBC_SHA
+> * TLS_RSA_WITH_AES_128_CBC_SHA
+> * TLS_RSA_WITH_3DES_EDE_CBC_SHA
+ The following list includes more ways that you can limit access to triggers that receive inbound calls to your logic app so that only authorized clients can call your logic app: * [Generate shared access signatures (SAS)](#sas)
For more information about isolation, review the following documentation:
* [Azure security baseline for Azure Logic Apps](../logic-apps/security-baseline.md) * [Automate deployment for Azure Logic Apps](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md)
-* [Monitor logic apps](../logic-apps/monitor-logic-apps-log-analytics.md)
+* [Monitor logic apps](../logic-apps/monitor-logic-apps-log-analytics.md)
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-target.md
Previously updated : 09/29/2020 Last updated : 06/18/2021 #Customer intent: As a data scientist, I want to understand what a compute target is and why I need it.
To learn more about isolation, see [Isolation in the Azure public cloud](../secu
## Unmanaged compute
-An unmanaged compute target is *not* managed by Azure Machine Learning. You create this type of compute target outside Azure Machine Learning and then attach it to your workspace. Unmanaged compute resources can require additional steps for you to maintain or to improve performance for machine learning workloads.
+An unmanaged compute target is *not* managed by Azure Machine Learning. You create this type of compute target outside Azure Machine Learning and then attach it to your workspace. Unmanaged compute resources can require additional steps for you to maintain or to improve performance for machine learning workloads.
+
+Azure Machine Learning supports the following unmanaged compute types:
+
+* Your local computer
+* Remote virtual machines
+* Azure HDInsight
+* Azure Batch
+* Azure Databricks
+* Azure Data Lake Analytics
+* Azure Container Instance
+* Azure Kubernetes Service & Azure Arc enabled Kubernetes (preview)
+
+For more information, see [Set up compute targets for model training and deployment](how-to-attach-compute-targets.md).
## Next steps
machine-learning Concept Prebuilt Docker Images Inference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-prebuilt-docker-images-inference.md
Prebuilt Docker container images for inference [(preview)](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) are used when deploying a model with Azure Machine Learning. The images are prebuilt with popular machine learning frameworks and Python packages. You can also extend the packages to add other packages by using one of the following methods: * [Add Python packages](how-to-prebuilt-docker-images-inference-python-extensibility.md).
-* [Use the prebuilt package as a base for a new Dockerfile](how-to-extend-prebuilt-docker-image-inference.md). Using this method, you can install both **Python packages and apt packages**.
+* [Use a prebuilt inference image as the base for a new Dockerfile](how-to-extend-prebuilt-docker-image-inference.md). With this method, you can install both **Python packages and apt packages**.
## Why should I use prebuilt images?
Prebuilt Docker container images for inference [(preview)](https://azure.microso
* Improves model deployment success rate. * Avoids unnecessary image builds during model deployment. * Contains only the required dependencies and access rights in the image/container.
-* The inference process in the deployment runs as non-root.
## List of prebuilt Docker images for inference
machine-learning How To Attach Arc Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-arc-kubernetes.md
+
+ Title: Azure Arc enabled machine learning (preview)
+description: Configure Azure Arc enabled Kubernetes cluster to train machine learning models in Azure Machine Learning
+++++ Last updated : 06/18/2021+++
+# Configure Azure Arc enabled machine learning (preview)
+
+Learn how to configure Azure Arc enabled machine learning for training.
+
+## What is Azure Arc enabled machine learning?
+
+Azure Arc enables you to run Azure services in any Kubernetes environment, whether it's on-premises, multicloud, or at the edge.
+
+Azure Arc enabled machine learning lets you configure and use Azure Arc enabled Kubernetes clusters to train and manage machine learning models in Azure Machine Learning.
+
+Azure Arc enabled machine learning supports the following training scenarios:
+
+* Train models with 2.0 CLI
+ * Distributed training
+ * Hyperparameter sweeping
+* Train models with Azure Machine Learning Python SDK
+ * Hyperparameter tuning
+* Build and use machine learning pipelines
+* Train models on-premises with an outbound proxy server
+* Train models on-premises with an NFS datastore
+
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription [create a free account](https://aka.ms/AMLFree) before you begin.
+* Azure Arc enabled Kubernetes cluster. For more information, see the [Connect an existing Kubernetes cluster to Azure Arc quickstart guide](/azure-arc/kubernetes/quickstart-connect-cluster.md).
+* Fulfill [Azure Arc enabled Kubernetes cluster extensions prerequisites](/azure-arc/kubernetes/extensions#prerequisites).
+ * Azure CLI version >= 2.24.0
+ * Azure CLI k8s-extension extension version >= 0.4.3
+* An Azure Machine Learning workspace. [Create a workspace](how-to-manage-workspace.md?tabs=python) before you begin if you don't have one already.
+ * Azure Machine Learning Python SDK version >= 1.30
+
+## Deploy Azure Machine Learning extension
+
+Azure Arc enabled Kubernetes has a cluster extension functionality that enables you to install various agents including Azure policy, monitoring, machine learning, and many others. Azure Machine Learning requires the use of the *Microsoft.AzureML.Kubernetes* cluster extension to deploy the Azure Machine Learning agent on the Kubernetes cluster. Once the Azure Machine Learning extension is installed, you can attach the cluster to an Azure Machine Learning workspace and use it for training.
+
+Use the `k8s-extension` Azure CLI extension to deploy the Azure Machine Learning extension to your Azure Arc-enabled Kubernetes cluster.
+
+1. Login to Azure
+
+ ```azurecli
+ az login
+ az account set --subscription <your-subscription-id>
+ ```
+
+1. Deploy Azure Machine Learning extension
+
+ ```azurecli
+ az k8s-extension create --name amlarc-compute --extension-type Microsoft.AzureML.Kubernetes --configuration-settings enableTraining=True --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group> --scope cluster
+ ```
+
+ >[!IMPORTANT]
 > To enable an Azure Arc-enabled cluster for training, `enableTraining` must be set to **True**. Running this command creates Azure Service Bus and Azure Relay resources under the same resource group as the Arc cluster. These resources are used to communicate with the cluster. Modifying them will break attached clusters used as training compute targets.
+
+ You can also configure the following settings when you deploy the Azure Machine Learning extension for model training:
+
+ |Configuration Setting Key Name |Description |
+ |--|--|
+ | ```enableTraining``` | Default `False`. Set to `True` to create an extension instance for training machine learning models. |
+ |```logAnalyticsWS``` | Default `False`. The Azure Machine Learning extension integrates with Azure LogAnalytics Workspace. Set to `True` to provide log viewing and analysis capability through LogAnalytics Workspace. LogAnalytics Workspace cost may apply. |
+ |```installNvidiaDevicePlugin``` | Default `True`. Nvidia Device Plugin is required for training on Nvidia GPU hardware. The Azure Machine Learning extension installs the Nvidia Device Plugin by default during the Azure Machine Learning instance creation regardless of whether the Kubernetes cluster has GPU hardware or not. Set to `False` if you don't plan on using a GPU for training or Nvidia Device Plugin is already installed. |
 |```installBlobfuseSysctl``` | Default `True` if "enableTraining=True". Blobfuse 1.3.7 is required for training. Azure Machine Learning installs Blobfuse by default when the extension instance is created. Set this configuration setting to `False` if Blobfuse 1.3.7 is already installed on your Kubernetes cluster. |
+ |```installBlobfuseFlexvol``` | Default `True` if "enableTraining=True". Blobfuse Flexvolume is required for training. Azure Machine Learning installs Blobfuse Flexvolume by default to your default path. Set this configuration setting to `False` if Blobfuse Flexvolume is already installed on your Kubernetes cluster. |
 |```volumePluginDir``` | Host path where Blobfuse Flexvolume is installed. Applicable only if "enableTraining=True". By default, Azure Machine Learning installs Blobfuse Flexvolume under the default path */etc/kubernetes/volumeplugins*. Specify a custom installation location by setting this configuration setting. |
+
+ > [!WARNING]
+ > If Nvidia Device Plugin, Blobfuse, and Blobfuse Flexvolume are already installed in your cluster, reinstalling them may result in an extension installation error. Set `installNvidiaDevicePlugin`, `installBlobfuseSysctl`, and `installBlobfuseFlexvol` to `False` to prevent installation errors.
+
+1. Verify your AzureML extension deployment
+
+ ```azurecli
+ az k8s-extension show --name amlarc-compute --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group>
+ ```
+
+ In the response, look for `"extensionType": "amlarc-compute"` and `"installState": "Installed"`. Note it might show `"installState": "Pending"` for the first few minutes.
+
+ When the `installState` shows **Installed**, run the following command on your machine with the kubeconfig file pointed to your cluster to check that all pods under *azureml* namespace are in *Running* state:
+
+ ```bash
+ kubectl get pods -n azureml
+ ```
+
+## Attach Arc cluster (studio)
+
+Attaching an Azure Arc enabled Kubernetes cluster makes it available to your workspace for training.
+
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
+1. Under **Manage**, select **Compute**.
+1. Select the **Attached computes** tab.
+1. Select **+New > Kubernetes (preview)**
+
+ ![Attach Kubernetes cluster](./media/how-to-attach-arc-kubernetes/attach-kubernetes-cluster.png)
+
+1. Enter a compute name and select your Azure Arc enabled Kubernetes cluster from the dropdown.
+
+ ![Configure Kubernetes cluster](./media/how-to-attach-arc-kubernetes/configure-kubernetes-cluster.png)
+
+1. (Optional) For advanced scenarios, browse and upload a configuration file.
+
+ ![Upload configuration file](./media/how-to-attach-arc-kubernetes/upload-configuration-file.png)
+
+1. Select **Attach**
+
+ In the Attached compute tab, the initial state of your cluster is *Creating*. When the cluster is successfully attached, the state changes to *Succeeded*. Otherwise, the state changes to *Failed*.
+
+ ![Provision resources](./media/how-to-attach-arc-kubernetes/provision-resources.png)
+
+### Advanced attach scenario
+
+Use a JSON configuration file to configure advanced compute target capabilities on Azure Arc enabled Kubernetes clusters.
+
+The following is an example configuration file:
+
+```json
+{
+ "namespace": "amlarc-testing",
+ "defaultInstanceType": "gpu_instance",
+ "instanceTypes": {
+ "gpu_instance": {
+ "nodeSelector": {
+ "accelerator": "nvidia-tesla-k80"
+ },
+ "resources": {
+ "requests": {
+ "cpu": "2",
+ "memory": "16Gi",
+ "nvidia.com/gpu": "1"
+ },
+ "limits": {
+ "cpu": "2",
+ "memory": "16Gi",
+ "nvidia.com/gpu": "1"
+ }
+ }
+ },
+ "big_cpu_sku": {
+ "nodeSelector": {
+ "VMSizes": "VM-64vCPU-256GB"
+ },
+ "resources": {
+ "requests": {
+ "cpu": "4",
+ "memory": "16Gi",
+ "nvidia.com/gpu": "0"
+ },
+ "limits": {
+ "cpu": "4",
+ "memory": "16Gi",
+ "nvidia.com/gpu": "0"
+ }
+ }
+ }
+ }
+}
+```
+
+The following custom compute target properties can be configured using a configuration file:
+
+* `namespace` - Defaults to the `default` namespace. This is the namespace that jobs and pods run under. When you set a namespace other than the default, the namespace must already exist. Creating namespaces requires cluster administrator privileges.
+
+* `defaultInstanceType` - The instance type that training jobs run on by default. `defaultInstanceType` is required if the `instanceTypes` property is specified. The value of `defaultInstanceType` must be one of the values defined in the `instanceTypes` property.
+
+ > [!IMPORTANT]
 > Currently, only job submissions using the compute target name are supported. Therefore, the configuration always defaults to `defaultInstanceType`.
+
+* `instanceTypes` - List of instance types used for training jobs. Each instance type is defined by `nodeSelector` and `resources requests/limits` properties:
+
+ * `nodeSelector` - One or more node labels used to identify nodes in a cluster. Cluster administrator privileges are needed to create labels for cluster nodes. If this property is specified, training jobs are scheduled to run on nodes with the specified node labels. You can use `nodeSelector` to target a subset of nodes for training workload placement. This can be useful in scenarios where a cluster has different SKUs, or different types of nodes such as CPU or GPU nodes. For example, you could create node labels for all GPU nodes and define an `instanceType` for the GPU node pool. Doing so targets the GPU node pool exclusively when scheduling training jobs.
+
 * `resources requests/limits` - Specifies the resource requests and limits that a training job pod runs with. Defaults to 1 CPU and 4 GB of memory.
+
+ >[!IMPORTANT]
 > By default, a cluster resource is deployed with 1 CPU and 4 GB of memory. If a cluster is configured with lower resources, the job run will fail. To ensure successful job completion, we recommend that you always specify resource requests and limits according to the training job's needs. The following is an example default configuration file:
+ >
+ > ```json
+ > {
+ > "namespace": "default",
+ > "defaultInstanceType": "defaultInstanceType",
+ > "instanceTypes": {
+ > "defaultInstanceType": {
+ > "nodeSelector": "null",
+ > "resources": {
+ > "requests": {
+ > "cpu": "1",
+ > "memory": "4Gi",
+ > "nvidia.com/gpu": "0"
+ > },
+ > "limits": {
+ > "cpu": "1",
+ > "memory": "4Gi",
+ > "nvidia.com/gpu": "0"
+ > }
+ > }
+ > }
+ > }
+ > }
+ > ```
+
+## Attach Arc cluster (Python SDK)
+
+The following Python code shows how to attach an Azure Arc enabled Kubernetes cluster and use it as a compute target for training:
+
+```python
+from azureml.core import Workspace
+from azureml.core.compute import KubernetesCompute
+from azureml.core.compute import ComputeTarget
+import os
+
+ws = Workspace.from_config()
+
+# choose a name for your Azure Arc-enabled Kubernetes compute
+amlarc_compute_name = os.environ.get("AML_COMPUTE_CLUSTER_NAME", "amlarc-compute")
+
+# resource ID for your Azure Arc-enabled Kubernetes cluster
+resource_id = "/subscriptions/123/resourceGroups/rg/providers/Microsoft.Kubernetes/connectedClusters/amlarc-cluster"
+
+if amlarc_compute_name in ws.compute_targets:
+ compute_target = ws.compute_targets[amlarc_compute_name]
+ if compute_target and type(compute_target) is KubernetesCompute:
+ print("found compute target: " + amlarc_compute_name)
+else:
+ print("creating new compute target...")
+
+ amlarc_attach_configuration = KubernetesCompute.attach_configuration(resource_id)
+ amlarc_compute = ComputeTarget.attach(ws, amlarc_compute_name, amlarc_attach_configuration)
+
+
+ amlarc_compute.wait_for_completion(show_output=True)
+
+ # For a more detailed view of current KubernetesCompute status, use get_status()
+ print(amlarc_compute.get_status().serialize())
+```
+
+### Advanced attach scenario
+
+The following code shows how to configure advanced compute target properties like namespace, nodeSelector, or resources requests/limits:
+
+```python
+from azureml.core import Workspace
+from azureml.core.compute import KubernetesCompute
+from azureml.core.compute import ComputeTarget
+import os
+
+ws = Workspace.from_config()
+
+# choose a name for your Azure Arc-enabled Kubernetes compute
+amlarc_compute_name = os.environ.get("AML_COMPUTE_CLUSTER_NAME", "amlarc-compute")
+
+# resource ID for your Azure Arc-enabled Kubernetes cluster
+resource_id = "/subscriptions/123/resourceGroups/rg/providers/Microsoft.Kubernetes/connectedClusters/amlarc-cluster"
+
+if amlarc_compute_name in ws.compute_targets:
+ compute_target = ws.compute_targets[amlarc_compute_name]
+ if compute_target and type(compute_target) is KubernetesCompute:
+ print("found compute target: " + amlarc_compute_name)
+else:
+ print("creating new compute target...")
+ ns = "amlarc-testing"
+
+ instance_types = {
+ "gpu_instance": {
+ "nodeSelector": {
+ "accelerator": "nvidia-tesla-k80"
+ },
+ "resources": {
+ "requests": {
+ "cpu": "2",
+ "memory": "16Gi",
+ "nvidia.com/gpu": "1"
+ },
+ "limits": {
+ "cpu": "2",
+ "memory": "16Gi",
+ "nvidia.com/gpu": "1"
+ }
+ }
+ },
+ "big_cpu_sku": {
+ "nodeSelector": {
+ "VMSizes": "VM-64vCPU-256GB"
+ }
+ }
+ }
+
+ amlarc_attach_configuration = KubernetesCompute.attach_configuration(resource_id = resource_id, namespace = ns, default_instance_type="gpu_instance", instance_types = instance_types)
+
+ amlarc_compute = ComputeTarget.attach(ws, amlarc_compute_name, amlarc_attach_configuration)
+
+
+ amlarc_compute.wait_for_completion(show_output=True)
+
+ # For a more detailed view of current KubernetesCompute status, use get_status()
+ print(amlarc_compute.get_status().serialize())
+```
+
+## Next steps
+
+- [Train models with 2.0 CLI](how-to-train-cli.md)
+- [Configure and submit training runs](how-to-set-up-training-targets.md)
+- [Tune hyperparameters](how-to-tune-hyperparameters.md)
+- [Train a model using Scikit-learn](how-to-train-scikit-learn.md)
+- [Train a TensorFlow model](how-to-train-tensorflow.md)
+- [Train a PyTorch model](how-to-train-pytorch.md)
+- [Train using Azure Machine Learning pipelines](how-to-create-machine-learning-pipelines.md)
+- [Train models on-premises with an outbound proxy server](/azure-arc/kubernetes/quickstart-connect-cluster.md#5-connect-using-an-outbound-proxy-server)
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-compute-targets.md
Previously updated : 10/02/2020 Last updated : 06/18/2021
In this article, learn how to set up your workspace to use these compute resourc
* Azure Databricks * Azure Data Lake Analytics * Azure Container Instance
+* Azure Kubernetes Service & Azure Arc enabled Kubernetes (preview)
To use compute targets managed by Azure Machine Learning, see: - * [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) * [Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md) * [Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md)
For a more detailed example, see an [example notebook](https://aka.ms/pl-adla) o
Azure Container Instances (ACI) are created dynamically when you deploy a model. You cannot create or attach ACI to your workspace in any other way. For more information, see [Deploy a model to Azure Container Instances](how-to-deploy-azure-container-instance.md).
-## Azure Kubernetes Service
+## <a id="kubernetes"></a>Kubernetes (preview)
+
+Azure Machine Learning provides you with the following options to attach your own Kubernetes clusters for training:
+
+* [Azure Kubernetes Service](/azure/aks/intro-kubernetes). Azure Kubernetes Service provides a managed cluster in Azure.
+* [Azure Arc Kubernetes](/azure/azure-arc/kubernetes/overview). Use Azure Arc enabled Kubernetes clusters if your cluster is hosted outside of Azure.
++
+To detach a Kubernetes cluster from your workspace, use the following method:
-Azure Kubernetes Service (AKS) allows for various configuration options when used with Azure Machine Learning. For more information, see [How to create and attach Azure Kubernetes Service](how-to-create-attach-kubernetes.md).
+```python
+compute_target.detach()
+```
+
+> [!WARNING]
+> Detaching a cluster **does not delete the cluster**. To delete an Azure Kubernetes Service cluster, see [Use the Azure CLI with AKS](/aks/kubernetes-walkthrough.md#delete-the-cluster) or to delete an Azure Arc enabled Kubernetes cluster, see [Azure Arc quickstart](/azure-arc/kubernetes/quickstart-connect-cluster#clean-up-resources).
## Notebook examples
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-compute-studio.md
Previously updated : 08/06/2020 Last updated : 06/18/2021
Follow the previous steps to view the list of compute targets. Then use these st
1. Fill out the form for your compute type:
- * [Compute instance](#compute-instance)
- * [Compute clusters](#amlcompute)
- * [Inference clusters](#inference-clusters)
- * [Attached compute](#attached-compute)
+ * [Compute instance](#compute-instance)
+ * [Compute clusters](#amlcompute)
+ * [Inference clusters](#inference-clusters)
+ * [Attached compute](#attached-compute)
1. Select __Create__.
Use the [steps above](#portal-create) to attach a compute. Then fill out the fo
* Azure Databricks (for use in machine learning pipelines) * Azure Data Lake Analytics (for use in machine learning pipelines) * Azure HDInsight
+ * Kubernetes (preview)
1. Fill out the form and provide values for the required properties.
Use the [steps above](#portal-create) to attach a compute. Then fill out the fo
> * [Create and use SSH keys on Linux or macOS](../virtual-machines/linux/mac-create-ssh-keys.md) > * [Create and use SSH keys on Windows](../virtual-machines/linux/ssh-from-windows.md)
-1. Select __Attach__.
+1. Select __Attach__.
+
+> [!IMPORTANT]
+> To attach an Azure Kubernetes Service (AKS) or Arc enabled Kubernetes cluster, you must be a subscription owner or have permission to access AKS cluster resources under the subscription. Otherwise, the cluster list on the "attach new compute" page will be blank.
+
+To detach your compute use the following steps:
+
+1. In Azure Machine Learning studio, select __Compute__, __Attached compute__, and the compute you wish to remove.
+1. Use the __Detach__ link to detach your compute.
## Next steps
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
With your Azure Machine Learning workspace linked with your Azure Synapse worksp
You can link your ML workspace and Synapse workspace via the [Python SDK](#link-sdk) or the [Azure Machine Learning studio](#link-studio).
-You can also link workspaces and attach a Synapse Spark pool with a single [Azure Resource Manager (ARM) template](https://github.com/Azure/azure-quickstart-templates/blob/master/101-machine-learning-linkedservice-create/azuredeploy.json).
+You can also link workspaces and attach a Synapse Spark pool with a single [Azure Resource Manager (ARM) template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-linkedservice-create/azuredeploy.json).
>[!IMPORTANT] > The Azure Machine Learning and Azure Synapse integration is in public preview. The functionalities presented from the `azureml-synapse` package are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features, and may change at any time.
Once you retrieve the linked service, attach a Synapse Apache Spark pool as a de
You can attach Apache Spark pools via, * Azure Machine Learning studio
-* [Azure Resource Manager (ARM) templates](https://github.com/Azure/azure-quickstart-templates/blob/master/101-machine-learning-linkedservice-create/azuredeploy.json)
+* [Azure Resource Manager (ARM) templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-linkedservice-create/azuredeploy.json)
* The Azure Machine Learning Python SDK ### Attach a pool via the studio
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-training-targets.md
Previously updated : 09/28/2020 Last updated : 06/18/2021
Select the compute target where your training script will run on. If no compute
The example code in this article assumes that you have already created a compute target `my_compute_target` from the "Prerequisites" section. >[!Note]
->Azure Databricks is not supported as a compute target for model training. You can use Azure Databricks for data preparation and deployment tasks.
+>Azure Databricks is not supported as a compute target for model training. You can use Azure Databricks for data preparation and deployment tasks.
+ ## Create an environment Azure Machine Learning [environments](concept-environments.md) are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, Docker image, environment variables, and software settings around your training and scoring scripts. They also specify runtimes (Python, Spark, or Docker).
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-cli.md
Previously updated : 06/08/2021 Last updated : 06/18/2021
Note that you are not charged for compute at this point as `cpu-cluster` and `gp
Use `az ml compute create -h` for more details on compute create options. + ## Basic Python training job With `cpu-cluster` created you can run the basic training job, which outputs a model and accompanying metadata. Let's review the job YAML file in detail:
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-synapsesparkstep.md
You create and administer your Apache Spark pools in an Azure Synapse Analytics
Once your Azure Machine Learning workspace and your Azure Synapse Analytics workspaces are linked, you can attach an Apache Spark pool via * [Azure Machine Learning studio](how-to-link-synapse-ml-workspaces.md#attach-a-pool-via-the-studio) * Python SDK ([as elaborated below](#attach-your-apache-spark-pool-as-a-compute-target-for-azure-machine-learning))
-* Azure Resource Manager (ARM) template (see this [Example ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/101-machine-learning-linkedservice-create/azuredeploy.json)).
+* Azure Resource Manager (ARM) template (see this [Example ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-linkedservice-create/azuredeploy.json)).
* You can use the command line to follow the ARM template, add the linked service, and attach the Apache Spark pool with the following code: ```azurecli az deployment group create --name <deployment-name> --resource-group <rg_name> --template-file "azuredeploy.json" --parameters @"azuredeploy.parameters.json"
machine-learning Monitor Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/monitor-azure-machine-learning.md
See [Azure Machine Learning monitoring data reference](monitor-resource-referenc
Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations. When you need to manage multiple Azure Machine Learning workspaces, you can route logs for all workspaces into the same logging destination and query all logs from a single place.
See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Machine Learning are listed in [Azure Machine Learning monitoring data reference](monitor-resource-reference.md#resource-logs).
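As a hedged sketch (the resource IDs are placeholders, and `AmlComputeClusterEvent` is used as an example category), creating such a diagnostic setting from the CLI might look like the following:

```azurecli
# Route AmlComputeClusterEvent logs and all metrics from the workspace to a Log Analytics workspace
az monitor diagnostic-settings create \
  --name aml-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<aml-workspace-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<log-analytics-name>" \
  --logs '[{"category":"AmlComputeClusterEvent","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```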
Following are queries that you can use to help you monitor your Azure Machine Le
| distinct NodeId ```
+When you connect multiple Azure Machine Learning workspaces to the same Log Analytics workspace, you can query across all resources.
+++ Get number of running nodes across workspaces and clusters in the last day:+
+ ```Kusto
+ AmlComputeClusterEvent
+ | where TimeGenerated > ago(1d)
+ | summarize avgRunningNodes=avg(TargetNodeCount), maxRunningNodes=max(TargetNodeCount)
+ by Workspace=tostring(split(_ResourceId, "/")[8]), ClusterName, ClusterType, VmSize, VmPriority
+ ```
+ ## Alerts You can access alerts for Azure Machine Learning by opening **Alerts** from the **Azure Monitor** menu. See [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md) for details on creating alerts.
machine-learning Deploy Models In Production https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/deploy-models-in-production.md
There are various approaches and platforms to put models into production. Here a
- [Where to deploy models with Azure Machine Learning](../how-to-deploy-and-where.md) - [Deployment of a model in SQL-server](/sql/advanced-analytics/tutorials/sqldev-py6-operationalize-the-model)-- [Microsoft Machine Learning Server](/sql/advanced-analytics/r/r-server-standalone)
+- [Azure Synapse Analytics](/azure/synapse-analytics/spark/apache-spark-machine-learning-mllib-notebook)
>[!NOTE] >Prior to deployment, one has to insure the latency of model scoring is low enough to use in production.
When multiple models are in production, [A/B testing](https://en.wikipedia.org/w
## Next steps
-Walkthroughs that demonstrate all the steps in the process for **specific scenarios** are also provided. They are listed and linked with thumbnail descriptions in the [Example walkthroughs](walkthroughs.md) article. They illustrate how to combine cloud, on-premises tools, and services into a workflow or pipeline to create an intelligent application.
+Walkthroughs that demonstrate all the steps in the process for **specific scenarios** are also provided. They are listed and linked with thumbnail descriptions in the [Example walkthroughs](walkthroughs.md) article. They illustrate how to combine cloud, on-premises tools, and services into a workflow or pipeline to create an intelligent application.
machine-learning Execute Data Science Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/execute-data-science-tasks.md
Typical data science tasks include data exploration, modeling, and deployment. T
- [Azure Machine Learning](../index.yml) - [SQL-Server with ML services](/sql/advanced-analytics/r/r-services)-- [Microsoft Machine Learning Server](/machine-learning-server/what-is-machine-learning-server)
+- [Azure Synapse Analytics](/azure/synapse-analytics/spark/apache-spark-machine-learning-mllib-notebook)
## 1. <a name='DataQualityReportUtility-1'></a> Exploration
machine-learning Platforms And Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/platforms-and-tools.md
editor: marktab
Previously updated : 01/10/2020 Last updated : 06/18/2021
Microsoft provides a full spectrum of analytics resources for both cloud or on-p
The analytics resources available to data science teams using the TDSP include: -- Data Science Virtual Machines (both Windows and Linux CentOS)
+- Azure Machine Learning
+- Data Science Virtual Machines (either Windows or Linux Ubuntu)
- HDInsight Spark Clusters - Azure Synapse Analytics - Azure Data Lake
The analytics resources available to data science teams using the TDSP include:
In this document, we briefly describe the resources and provide links to the tutorials and walkthroughs the TDSP teams have published. They can help you learn how to use them step by step and start using them to build your intelligent applications. More information on these resources is available on their product pages. +
+## Azure Machine Learning
+
+Azure Machine Learning is our primary recommended platform for data science development. This PaaS (platform as a service) provides either standalone operation or integration with any of the other platforms and tools mentioned on this page. [Azure Machine Learning](../overview-what-is-azure-ml.md) (AzureML) is an end-to-end platform that encompasses:
+++ Fully Managed Compute
+ + Compute Instances
+ + Compute Clusters for distributed ML tasks
+ + Inference Clusters for real-time scoring
++ Datastores (for example Blob, ADLS Gen2, SQL DB)++ Experiment tracking++ Model management++ Notebooks++ Environments (manage conda and R dependencies)++ Labeling++ Pipelines (automate End-to-End Data science workflows)++ ## Data Science Virtual Machine (DSVM)
-The data science virtual machine offered on both Windows and Linux by Microsoft, contains popular tools for data science modeling and development activities. It includes tools such as:
+The Data Science Virtual Machine, offered on both Windows and Linux by Microsoft, contains popular tools for data science modeling and development activities.
-- Microsoft R Server Developer Edition -- Anaconda Python distribution-- Jupyter notebooks for Python and R -- Visual Studio Community Edition with Python and R Tools on Windows / Eclipse on Linux-- Power BI desktop for Windows-- SQL Server 2016 Developer Edition on Windows / Postgres on Linux
+The Data Science Virtual Machine is an easy way to explore data and do machine learning in the cloud. The Data Science Virtual Machines are pre-configured with the complete operating system, security patches, drivers, and popular data science and development software. You can choose the hardware environment, ranging from lower-cost CPU-centric machines to very powerful machines with multiple GPUs, NVMe storage, and large amounts of memory. For machines with GPUs, all drivers are installed, all machine learning frameworks are version-matched for GPU compatibility, and acceleration is enabled in all application software that supports GPUs.
-It also includes **ML and AI tools** like xgboost, mxnet, and Vowpal Wabbit.
+The Data Science Virtual Machine comes with the most useful data-science tools pre-installed. See [Tools included on the Data Science Virtual Machine](/azure/machine-learning/data-science-virtual-machine/tools-included) for the most recent list of tools and versions.
-Currently DSVM is available in **Windows** and **Linux CentOS** operating systems. Choose the size of your DSVM (number of CPU cores and the amount of memory) based on the needs of the data science projects that you are planning to execute on it.
+Currently, the DSVM is available with **Windows** and **Linux Ubuntu** operating systems. Choose the size of your DSVM (the number of CPU cores and the amount of memory) based on the needs of the data science projects that you plan to run on it.
For more information on Windows edition of DSVM, see [Microsoft Data Science Virtual Machine](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019) on the Azure Marketplace. For the Linux edition of the DSVM, see [Linux Data Science Virtual Machine](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804).
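If you prefer to provision a DSVM from the command line instead of the Azure Marketplace, the following Azure CLI sketch shows one possible approach. The resource group, VM name, and size are placeholder assumptions, and the image URN should be taken from the `microsoft-dsvm` listing rather than hard-coded.

```azurecli
# List the currently published Data Science Virtual Machine images (publisher: microsoft-dsvm).
az vm image list --publisher microsoft-dsvm --all --output table

# Create a DSVM from one of the listed URNs (replace the placeholder values).
az vm create \
  --resource-group my-tdsp-rg \
  --name my-dsvm \
  --image <urn-from-the-list-above> \
  --size Standard_DS3_v2 \
  --admin-username azureuser \
  --generate-ssh-keys
```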
The TDSP team from Microsoft has published two end-to-end walkthroughs that show
### Install Git Credential Manager on Windows
-If you are following the TDSP on **Windows**, you need to install the **Git Credential Manager (GCM)** to communicate with the Git repositories. To install GCM, you first need to install **Chocolaty**. To install Chocolaty and the GCM, run the following commands in Windows PowerShell as an **Administrator**:
+If you are following the TDSP on **Windows**, you need to install the **Git Credential Manager (GCM)** to communicate with the Git repositories. To install GCM, you first need to install **Chocolatey**. To install Chocolatey and the GCM, run the following commands in Windows PowerShell as an **Administrator**:
```powershell iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex
If you are using Linux (CentOS) machines to run the git commands, you need to ad
Full end-to-end walkthroughs that demonstrate all the steps in the process for **specific scenarios** are also provided. They are listed and linked with thumbnail descriptions in the [Example walkthroughs](walkthroughs.md) topic. They illustrate how to combine cloud, on-premises tools, and services into a workflow or pipeline to create an intelligent application.
-For examples that show how to execute steps in the Team Data Science Process by using Azure Machine Learning Studio (classic), see the [With Azure ML](./index.yml) learning path.
+For examples that show how to execute steps in the Team Data Science Process by using Azure Machine Learning Studio (classic), see the [With Azure ML](./index.yml) learning path.
marketplace Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/anomaly-detection.md
To help ensure that your customers are billed correctly, use the **Anomaly detec
## View and manage metered usage anomalies 1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. In the left-navigation menu, select **Commercial Marketplace** > **Analyze**.
+1. In the left-navigation menu, select **Commercial Marketplace** > **Analyze** > **Usage**.
1. Select the **Metered usage anomalies** tab. [![Illustrates the Metered usage anomalies tab on the Usage page.](./media/anomaly-detection/metered-usage-anomalies.png)](./media/anomaly-detection/metered-usage-anomalies.png#lightbox)<br>
marketplace Customer Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/customer-dashboard.md
To access the Customers dashboard in Partner Center, under **Commercial Marketpl
## Customers dashboard
-The Customers dashboard displays data for customers who have acquired your offers. You can view graphical representations of the following items:
+The [Customers dashboard](https://go.microsoft.com/fwlink/?linkid=2166011) displays data for customers who have acquired your offers. You can view graphical representations of the following items:
- Active and churned customers’ trend - Customer growth trend including existing, new, and churned customers - Customers by orders and usage-- Customers percentile
+- Customers percentile
- Customer type by orders and usage - Customers by geography - Customers details table
marketplace Downloads Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/downloads-dashboard.md
To access the Downloads dashboard, open the **[Analyze](https://partner.microsof
## Downloads dashboard
-The **Downloads** dashboard of the **Analyze** menu displays requests for any downloads that contain over 1000 rows of customer or order data.
+The [Downloads dashboard](https://go.microsoft.com/fwlink/?linkid=2165766) displays requests for any downloads that contain over 1000 rows of customer or order data.
You will receive a pop-up notification containing a link to the **Downloads** dashboard whenever you request a download with over 1000 rows of data. These data downloads will be available for a 30-day period and then removed.
marketplace Insights Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/insights-dashboard.md
For detailed definitions of analytics terminology, see [Commercial marketplace a
## Marketplace Insights dashboard
-The Marketplace Insights dashboard presents an overview of the Azure Marketplace and AppSource offers’ business performance. This dashboard provides a broad overview of the following:
+The [Marketplace Insights dashboard](https://go.microsoft.com/fwlink/?linkid=2165936) presents an overview of the Azure Marketplace and AppSource offers’ business performance. This dashboard provides a broad overview of the following:
- Page visits trend - Call to actions trend
marketplace Manage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/manage-account.md
Last updated 04/07/2021
- Owner - Manager
-Once you've [created a Partner Center account](./create-account.md), you can use the [commercial marketplace dashboard](https://go.microsoft.com/fwlink/?linkid=2165290) to manage your account and offers.
+Once you've [created a Partner Center account](./create-account.md), you can use the [commercial marketplace dashboard](https://go.microsoft.com/fwlink/?linkid=2166002) to manage your account and offers.
## Access your account settings
A payout profile is the bank account to which proceeds are sent from your sales.
To set up your payout profile:
-1. Go to the [commercial marketplace overview page](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) in Partner Center.
+1. Go to the [commercial marketplace overview](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) page in Partner Center.
2. In the **Profile** section, next to **Payout Profile**, select **Update**. 3. **Choose a payment method**: Bank account or PayPal. 4. **Add payment information**: This may include choosing an account type (checking or savings), entering the account holder name, account number, and routing number, billing address, phone number, or PayPal email address. For more information about using PayPal as your account payment method and to find out whether it is supported in your market or region, see [PayPal info](/windows/uwp/publish/setting-up-your-payout-account-and-tax-forms#paypal-info).
marketplace Marketplace Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-iot-edge.md
To help create your offer more easily, prepare these items ahead of time. All ar
## Next steps -- Sign in to [Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) to create or complete your offer. - [Create an IoT Edge module offer](./iot-edge-offer-setup.md) in Partner Center.
marketplace Orders Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/orders-dashboard.md
To access the Orders dashboard in the Partner Center, under **Commercial Marketp
## Orders dashboard
-The Orders dashboard displays the current orders for all your software as a service (SaaS) offers. You can view graphical representations of the following items:
+The [Orders dashboard](https://go.microsoft.com/fwlink/?linkid=2165914) displays the current orders for all your software as a service (SaaS) offers. You can view graphical representations of the following items:
- Orders trend - Orders per seat and site trend
marketplace Summary Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/summary-dashboard.md
To access the Summary dashboard in Partner Center, under **Commercial Marketplac
## Summary dashboard
-The Summary dashboard presents an overview of Azure Marketplace and Microsoft AppSource offers’ business performance. The dashboard provides a broad overview of the following:
+The [Summary dashboard](https://go.microsoft.com/fwlink/?linkid=2165765) presents an overview of Azure Marketplace and Microsoft AppSource offers’ business performance. The dashboard provides a broad overview of the following:
- Customers' orders - Customers
marketplace Usage Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/usage-dashboard.md
To access the Usage dashboard in Partner Center, under **Commercial Marketplace*
## Usage dashboard
-The **Usage** dashboard in the **Analyze** menu displays the current orders for all your software as a service (SaaS) offers. You can view graphical representations of the following items:
+The [Usage dashboard](https://go.microsoft.com/fwlink/?linkid=2166106) displays the current orders for all your software as a service (SaaS) offers. You can view graphical representations of the following items:
- Usage trend - Normalized usage by offers
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/overview.md
Previously updated : 8/21/2020 Last updated : 6/19/2021 # Azure Database for MySQL - Flexible Server (Preview) Azure Database for MySQL powered by the MySQL community edition is available in two deployment modes:-- Single Server +
+- Single Server
- Flexible Server (Preview)
-In this article, we will provide an overview and introduction to core concepts of flexible server deployment model. For information on how to decide what deployment option is appropriate for your workload, see [choosing the right MySQL server option in Azure](./../select-right-deployment-type.md).
+In this article, we'll provide an overview and introduction to core concepts of the flexible server deployment model. For information on how to decide which deployment option is appropriate for your workload, see [choosing the right MySQL server option in Azure](./../select-right-deployment-type.md).
## Overview
-Azure Database for MySQL Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and server configuration customizations based on the user requirements. The flexible server architecture allows users to opt for high availability within single availability zone and across multiple availability zones. Flexible servers also provide better cost optimization controls with ability to stop/start your server and burstable skus, ideal for workloads that do not need full compute capacity continuously. The service currently supports community version of MySQL 5.7 and 8.0. The service is currently in preview, available today in wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+Azure Database for MySQL Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and server configuration customizations based on user requirements. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible servers also provide better cost optimization controls with the ability to stop/start your server and burstable SKUs, ideal for workloads that don't need full compute capacity continuously. The service currently supports the community versions of MySQL 5.7 and 8.0. The service is currently in preview, available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+
+Flexible servers are best suited for:
-Flexible servers are best suited for
- Application developments requiring better control and customizations. - Zone redundant high availability - Managed maintenance windows
Flexible servers are best suited for
## High availability within and across availability zones
-The flexible server deployment model is designed to support high availability within single availability zone and across multiple availability zones. The architecture separates compute and storage. The database engine runs on a Linux virtual machine, while data files reside on remote Azure premium storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability at all times.
+The flexible server deployment model is designed to support high availability within single availability zone and across multiple availability zones. The architecture separates compute and storage. The database engine runs on a Linux virtual machine, while data files reside on remote Azure premium storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability at all times.
Within a single availability zone, if the server goes down due to planned or unplanned events, the service maintains high availability of the servers using the following automated procedure:
Within a single availability zone, if the server goes down due to planned or unp
2. The storage with data files is mapped to the new Virtual Machine 3. MySQL database engine is brought online on the new Virtual Machine. 4. Client applications can reconnect once the server is ready to accept connections.
-
-If zone redundant high availability is configured, the service provisions and maintains a hot standby server across availability zone within the same Azure region. The data changes on the source server is synchronously replicated to the standby server to ensure zero data loss. With zone redundant high availability, once the planned or unplanned failover event is triggered, the standby server comes online immediately and is available to process incoming transactions. The typical failover time ranges from 60-120 seconds. This allows the service to support high availability and provide improved resiliency with tolerance for single availability zone failures in a given Azure region.
+
+If zone redundant high availability is configured, the service provisions and maintains a hot standby server in a different availability zone within the same Azure region. The data changes on the source server are synchronously replicated to the standby server to ensure zero data loss. With zone redundant high availability, once the planned or unplanned failover event is triggered, the standby server comes online immediately and is available to process incoming transactions. The typical failover time ranges from 60-120 seconds. This allows the service to support high availability and provide improved resiliency with tolerance for single availability zone failures in a given Azure region.
-See [high availability concepts](concepts-high-availability.md) for more details.
+For more information, see [high availability concepts](concepts-high-availability.md).
## Automated patching with managed maintenance window
See [Scheduled Maintenance](concepts-maintenance.md) for more details.
## Automatic backups
-The flexible server service automatically creates server backups and stores them in user configured locally redundant or geo-redundant storage. Backups can be used to restore your server to any point-in-time within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured up to 35 days. All backups are encrypted using AES 256-bit encryption.
+The flexible server service automatically creates server backups and stores them in user configured locally redundant or geo-redundant storage. Backups can be used to restore your server to any point-in-time within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured up to 35 days. All backups are encrypted using AES 256-bit encryption.
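As a hedged illustration of adjusting the retention period described above, the following Azure CLI sketch assumes an existing flexible server and that `az mysql flexible-server update` exposes a `--backup-retention` parameter; the resource group and server names are placeholders.

```azurecli
# Increase the backup retention period from the default 7 days to 14 days (sketch; verify parameter names in your CLI version).
az mysql flexible-server update \
  --resource-group my-resource-group \
  --name my-flexible-server \
  --backup-retention 14
```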
See [Backup concepts](concepts-backup-restore.md) to learn more.
See [Backup concepts](concepts-backup-restore.md) to learn more.
You have two networking options to connect to your Azure Database for MySQL Flexible Server. The options are **private access (VNet integration)** and **public access (allowed IP addresses)**.
-* **Private access (VNet Integration)** – You can deploy your flexible server into your [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses.
+- **Private access (VNet Integration)** – You can deploy your flexible server into your [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses.
- Choose the VNet Integration option if you want the following capabilities:
- * Connect from Azure resources in the same virtual network to your flexible server using private IP addresses
- * Use VPN or ExpressRoute to connect from non-Azure resources to your flexible server
- * No public endpoint
+ Choose the VNet Integration option if you want the following capabilities:
-* **Public access (allowed IP addresses)** – You can deploy your flexible server with a public endpoint. The public endpoint is a publicly resolvable DNS address. The phrase "allowed IP addresses" refers to a range of IPs you choose to give permission to access your server. These permissions are called **firewall rules**.
+ - Connect from Azure resources in the same virtual network to your flexible server using private IP addresses
+ - Use VPN or ExpressRoute to connect from non-Azure resources to your flexible server
+ - No public endpoint
+
+- **Public access (allowed IP addresses)** – You can deploy your flexible server with a public endpoint. The public endpoint is a publicly resolvable DNS address. The phrase "allowed IP addresses" refers to a range of IPs you choose to give permission to access your server. These permissions are called **firewall rules**.
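For the public access option, a firewall rule is what grants a client IP range access to the server. The following Azure CLI sketch assumes the `az mysql flexible-server firewall-rule create` command and uses placeholder names and a documentation IP address.

```azurecli
# Allow a single client IP address to reach the server's public endpoint (sketch; placeholder values).
az mysql flexible-server firewall-rule create \
  --resource-group my-resource-group \
  --name my-flexible-server \
  --rule-name AllowClientIP \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5
```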
See [Networking concepts](concepts-networking.md) to learn more. ## Adjust performance and scale within seconds
-The flexible server service is available in three SKU tiers: Burstable, General Purpose, and Memory Optimized. The Burstable tier is best suited for low-cost development and low concurrency workloads that do not need full compute capacity continuously. The General Purpose and Memory Optimized are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then seamlessly adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume.
+The flexible server service is available in three SKU tiers: Burstable, General Purpose, and Memory Optimized. The Burstable tier is best suited for low-cost development and low concurrency workloads that don't need full compute capacity continuously. The General Purpose and Memory Optimized tiers are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then seamlessly adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume.
See [Compute and Storage concepts](concepts-compute-storage.md) to learn more.
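As an illustration of scaling compute between the tiers mentioned above, the following Azure CLI sketch moves a server to a General Purpose SKU; the SKU name, resource group, and server name are assumptions, and parameter names should be confirmed against the current `az mysql flexible-server update` reference.

```azurecli
# Scale an existing flexible server to a General Purpose SKU (sketch; placeholder names).
az mysql flexible-server update \
  --resource-group my-resource-group \
  --name my-flexible-server \
  --tier GeneralPurpose \
  --sku-name Standard_D2ds_v4
```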
-## Scale-out your read workload with up to 10 read replicas
+## Scale out your read workload with up to 10 read replicas
MySQL is one of the most popular database engines for running internet-scale web and mobile applications. Many of our customers use it for their online education services, video streaming services, digital payment solutions, e-commerce platforms, gaming services, news portals, government, and healthcare websites. These services are required to serve and scale as the traffic on the web or mobile application increases. On the applications side, the application is typically developed in Java or PHP and migrated to run on [Azure virtual machine scale sets](../../virtual-machine-scale-sets/overview.md) or [Azure App Services](../../app-service/overview.md), or is containerized to run on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). With a virtual machine scale set, App Service, or AKS as the underlying infrastructure, application scaling is simplified by instantaneously provisioning new VMs and replicating the stateless components of applications to handle the requests, but the database often ends up being a bottleneck as the centralized stateful component.
-The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server to a read-only server. You can replicate from the source server to **up to 10 replicas**. Replicas are updated asynchronously using the MySQL engine's native [binary log (binlog) file position-based replication technology](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html). You can use a load balancer proxy solution like [ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042) to seamlessly scale-out your application workload to read replicas without any application refactoring cost.
-
-See [Read Replica concepts](concepts-read-replicas.md) to learn more.
+The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server to a read-only server. You can replicate from the source server to **up to 10 replicas**. Replicas are updated asynchronously using the MySQL engine's native [binary log (binlog) file position-based replication technology](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html). You can use a load balancer proxy solution like [ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042) to seamlessly scale out your application workload to read replicas without any application refactoring cost.
+For more information, see [Read Replica concepts](concepts-read-replicas.md).
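As a minimal sketch of adding a replica, the following Azure CLI example assumes the `az mysql flexible-server replica create` command and uses placeholder names.

```azurecli
# Create a read replica of an existing flexible server (sketch; placeholder names).
az mysql flexible-server replica create \
  --resource-group my-resource-group \
  --source-server my-flexible-server \
  --replica-name my-flexible-server-replica-1
```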
## Stop/Start server to optimize cost
-The flexible server service allows you to stop and start server on-demand to optimize cost. The compute tier billing is stopped immediately when the server is stopped. This can allow you to have significant cost savings during development, testing and for time-bound predictable production workloads. The server remains in stopped state for seven days unless re-started sooner.
+The flexible server service allows you to stop and start server on-demand to optimize cost. The compute tier billing is stopped immediately when the server is stopped. This can allow you to have significant cost savings during development, testing and for time-bound predictable production workloads. The server remains in stopped state for seven days unless re-started sooner.
-See [Server concepts](concept-servers.md) to learn more.
+For more information, see [Server concepts](concept-servers.md).
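A minimal sketch of stopping and restarting a server with the Azure CLI follows; the resource group and server names are placeholders.

```azurecli
# Stop the server to pause compute billing (sketch; placeholder names).
az mysql flexible-server stop --resource-group my-resource-group --name my-flexible-server

# Start it again when you need it.
az mysql flexible-server start --resource-group my-resource-group --name my-flexible-server
```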
## Enterprise grade security and privacy
-The flexible server service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default).
+The flexible server service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups and temporary files created while running queries, is encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default).
-The service encrypts data in-motion with transport layer security enforced by default. Flexible Servers only supports encrypted connections using Transport Layer Security (TLS 1.2) and all incoming connections with TLS 1.0 and TLS 1.1 will be denied.
+The service encrypts data in-motion with transport layer security enforced by default. Flexible Server only supports encrypted connections using Transport Layer Security (TLS 1.2), and all incoming connections that use TLS 1.0 or TLS 1.1 are denied.
-See [how to use encrypted connections to flexible servers](https://docs.mongodb.com/manual/tutorial/configure-ssl) to learn more.
+For more information, see [how to use encrypted connections to flexible servers](https://docs.mongodb.com/manual/tutorial/configure-ssl).
-Flexible servers allows full private access to the servers using [Azure virtual network](../../virtual-network/virtual-networks-overview.md) (VNet) integration. Servers in Azure virtual network can only be reached and connected through private IP addresses. With VNet integration, public access is denied and servers cannot be reached using public endpoints.
-
-See [Networking concepts](concepts-networking.md) to learn more.
+Flexible Server allows full private access to the servers using [Azure virtual network](../../virtual-network/virtual-networks-overview.md) (VNet) integration. Servers in Azure virtual network can only be reached and connected through private IP addresses. With VNet integration, public access is denied and servers cannot be reached using public endpoints.
+For more information, see [Networking concepts](concepts-networking.md).
## Monitoring and alerting
-The flexible server service is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service exposes host server metrics to monitor resources utilization and allows configuring slow query logs. Using these tools, you can quickly optimize your workloads, and configure your server for best performance.
+The flexible server service is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service exposes host server metrics to monitor resources utilization and allows configuring slow query logs. Using these tools, you can quickly optimize your workloads, and configure your server for best performance.
-See [Monitoring concepts](concepts-monitoring.md) to learn more.
+For more information, see [Monitoring concepts](concepts-monitoring.md).
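As one hedged example of alerting on a server metric, the following Azure CLI sketch creates a metric alert on CPU utilization; the metric name `cpu_percent`, the threshold, and the resource names are assumptions to adapt to your environment.

```azurecli
# Create an alert when average CPU on the flexible server exceeds 80% (sketch; placeholder names and assumed metric name).
serverId=$(az mysql flexible-server show \
  --resource-group my-resource-group \
  --name my-flexible-server \
  --query id --output tsv)

az monitor metrics alert create \
  --name mysql-high-cpu \
  --resource-group my-resource-group \
  --scopes "$serverId" \
  --condition "avg cpu_percent > 80" \
  --description "CPU is above 80 percent"
```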
## Migration
-The service runs the community version of MySQL. This allows full application compatibility and requires minimal refactoring cost to migrate existing application developed on MySQL engine to single server service. The migration to the single server can be performed using one of the following options:
+The service runs the community version of MySQL. This allows full application compatibility and requires minimal refactoring cost to migrate existing applications developed on MySQL engine to Flexible Server. Migration to Flexible Server can be performed using the following option:
-- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like mysqldump/mydumper can provide fastest way to migrate. See Migrate using dump and restore for details. -- **Azure Database Migration Service** – For seamless and simplified migrations to single server with minimal downtime, [Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-online.md) can be leveraged.
+- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like mysqldump/mydumper can provide the fastest way to migrate. See [Migrate using dump and restore](../concepts-migrate-dump-restore.md) for details.
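A minimal dump-and-restore sketch with the community tools looks like the following; the database, user, and server names are placeholders, and you should check the dump options recommended in the migration article for your workload.

```bash
# Dump the source database (sketch; placeholder names).
mysqldump --host=onprem-mysql.example.com --user=sourceuser -p \
  --single-transaction --databases mydb > mydb.sql

# Restore the dump into the Azure Database for MySQL flexible server.
mysql --host=my-flexible-server.mysql.database.azure.com --user=myadmin -p < mydb.sql
```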
## Azure regions
-One of the advantage of running your workload in Azure is it's global reach. The flexible server for Azure Database for MySQL is available today in following Azure regions:
+One advantage of running your workload in Azure is its global reach. The flexible server for Azure Database for MySQL is available today in the following Azure regions:
-| Region | Availability | Zone redundant HA |
+| Region | Availability | Zone redundant HA |
| | | | | West Europe | :heavy_check_mark: | :heavy_check_mark: | | North Europe | :heavy_check_mark: | :heavy_check_mark: |
-| UK South | :heavy_check_mark: | :heavy_check_mark: |
+| UK South | :heavy_check_mark: | :heavy_check_mark: |
| East US 2 | :heavy_check_mark: | :heavy_check_mark: | | West US 2 | :heavy_check_mark: | :heavy_check_mark: |
-| Central US | :heavy_check_mark: | :x: |
+| Central US | :heavy_check_mark: | :x: |
| East US | :heavy_check_mark: | :heavy_check_mark: |
-| Canada Central | :heavy_check_mark: | :x: |
+| Canada Central | :heavy_check_mark: | :x: |
| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: |
-| Korea Central | :heavy_check_mark: | :x: |
-| Japan East | :heavy_check_mark: | :heavy_check_mark: |
+| Korea Central | :heavy_check_mark: | :x: |
+| Japan East | :heavy_check_mark: | :heavy_check_mark: |
| Australia East | :heavy_check_mark: | :heavy_check_mark: | | France Central | :heavy_check_mark: | :x:| | Brazil South | :heavy_check_mark: | :x: | | Switzerland North | :heavy_check_mark: | :x: | - ## Contacts
-For any questions or suggestions you might have on Azure Database for MySQL flexible server, send an email to the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address is not a technical support alias.
+
+For any questions or suggestions you might have on Azure Database for MySQL flexible server, send an email to the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address isn't a technical support alias.
In addition, consider the following points of contact as appropriate:
In addition, consider the following points of contact as appropriate:
- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/forums/597982-azure-database-for-mysql). ## Next steps
-Now that you've read an introduction to Azure Database for MySQL single server deployment mode, you're ready to:
-- Create your first server.
+Now that you've read an introduction to Azure Database for MySQL - Single Server deployment mode, you're ready to:
+
+- Create your first server.
- [Create an Azure Database for MySQL flexible server using Azure portal](quickstart-create-server-portal.md) - [Create an Azure Database for MySQL flexible server using Azure CLI](quickstart-create-server-cli.md) - [Manage an Azure Database for MySQL Flexible Server using the Azure CLI](how-to-manage-server-portal.md)
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/whats-new.md
Title: What's new in Azure Database for MySQL Flexible Server
-description: Learn about recent updates to Azure Database for MySQL - Flexible server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
+ Title: What's new in Azure Database for MySQL - Flexible Server
+description: Learn about recent updates to Azure Database for MySQL - Flexible Server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
Last updated 06/18/2021
-# What's new in Azure Database for MySQL - Flexible Server?
+
+# What's new in Azure Database for MySQL - Flexible Server (Preview)?
[Azure Database for MySQL - Flexible Server](./overview.md#azure-database-for-mysqlflexible-server-preview) is a deployment mode that's designed to provide more granular control and flexibility over database management functions and configuration settings than does the Single Server deployment mode. The service currently supports community version of MySQL 5.7 and 8.0.
This release of Azure Database for MySQL - Flexible Server includes the followin
- **Free 12-month offer**
- Beginning June 15, 2021, new Azure users can take advantage of our 12-month [Azure free account](https://azure.microsoft.com/free/), which provides up to 750 hours of Azure Database for MySQL – Flexible Server and 32 GB of storage per month. Customers can take advantage of this offer to develop and deploy applications that use Azure Database for MySQL – Flexible Server (Preview).
+ As of June 15, 2021, the [Azure free account](https://azure.microsoft.com/free/) provides customers with up to 12 months of free access to Azure Database for MySQL – Flexible Server with 750 hours of usage and 32 GB of storage per month. Customers can take advantage of this offer to develop and deploy applications that use Azure Database for MySQL – Flexible Server. [Learn more](https://go.microsoft.com/fwlink/?linkid=2165892).
- **Storage auto-grow**
mysql Howto Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-migrate-online.md
Previously updated : 10/30/2020 Last updated : 6/19/2021 # Minimal-downtime migration to Azure Database for MySQL+ [!INCLUDE[applies-to-single-flexible-server](includes/applies-to-single-flexible-server.md)]
-You can perform MySQL migrations to Azure Database for MySQL with minimal downtime by using the newly introduced **continuous sync capability** for the [Azure Database Migration Service](https://aka.ms/get-dms) (DMS). This functionality limits the amount of downtime that is incurred by the application.
+You can perform MySQL migrations to Azure Database for MySQL with minimal downtime by using Data-in replication, which limits the amount of downtime that is incurred by the application.
You can also refer to the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. This guide provides guidance that will lead to the successful planning and execution of a MySQL migration to Azure. ## Overview
-Azure DMS performs an initial load of your on-premises to Azure Database for MySQL, and then continuously syncs any new transactions to Azure while the application remains running. After the data catches up on the target Azure side, you stop the application for a brief moment (minimum downtime), wait for the last batch of data (from the time you stop the application until the application is effectively unavailable to take any new traffic) to catch up in the target, and then update your connection string to point to Azure. When you are finished, your application will be live on Azure!
+Using Data-in replication, you can configure the source as your primary and the target as your replica, so that there's continuous syncing of any new transactions to Azure while the application remains running. After the data catches up on the target Azure side, you stop the application for a brief moment (minimum downtime), wait for the last batch of data (from the time you stop the application until the application is effectively unavailable to take any new traffic) to catch up in the target, and then update your connection string to point to Azure. When you're finished, your application will be live on Azure!
## Next steps+ - For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).-- View the video [Easily migrate MySQL/PostgreSQL apps to Azure managed service](https://medius.studios.ms/Embed/Video/THR2201?sid=THR2201), which contains a demo showing how to migrate MySQL apps to Azure Database for MySQL.-- See the tutorial [Migrate MySQL to Azure Database for MySQL online using DMS](../dms/tutorial-mysql-azure-mysql-online.md).
mysql Single Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/single-server-overview.md
Previously updated : 8/20/2020 Last updated : 6/19/2021 # Azure Database for MySQL Single Server [Azure Database for MySQL](overview.md) powered by the MySQL community edition is available in two deployment modes:-- Single Server +
+- Single Server
- Flexible Server (Preview)
-In this article, we will provide an overview and introduction to core concepts of single server deployment model. To learn about flexible server deployment mode, refer [flexible server overview](flexible-server/index.yml). For information on how to decide what deployment option is appropriate for your workload, see [choosing the right MySQL server option in Azure](select-right-deployment-type.md).
+In this article, we'll provide an overview and introduction to core concepts of the Single Server deployment model. To learn about the flexible server deployment mode, refer to the [flexible server overview](flexible-server/index.yml). For information on how to decide which deployment option is appropriate for your workload, see [choosing the right MySQL server option in Azure](select-right-deployment-type.md).
## Overview
-Single Server is a fully managed database service with minimal requirements for customizations of the database. The single server platform is designed to handle most of the database management functions such as patching, backups, high availability, security with minimal user configuration and control. The architecture is optimized to provide 99.99% availability on single availability zone. It supports community version of MySQL 5.6, 5.7 and 8.0. The service is generally available today in wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+Single Server is a fully managed database service with minimal requirements for customizations of the database. The Single Server platform is designed to handle most of the database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized to provide 99.99% availability on a single availability zone. It supports the community versions of MySQL 5.6, 5.7, and 8.0. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
Single servers are best suited for cloud native applications designed to handle automated patching without the need for granular control on the patching schedule and custom MySQL configuration settings. ## High availability
-The single server deployment model is optimized for built-in high availability, and elasticity at reduced cost. The architecture separates compute and storage. The database engine runs on a proprietary compute container, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability.
+The Single Server deployment model is optimized for built-in high availability, and elasticity at reduced cost. The architecture separates compute and storage. The database engine runs on a proprietary compute container, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability.
During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers using following automated procedure:
During planned or unplanned failover events, if the server goes down, the servic
3. MySQL database engine is brought online on the new compute container 4. Gateway service ensures transparent failover, so no application-side changes are required.
-The typical failover time ranges from 60-120 seconds. The cloud native design of the single server service allows it to support 99.99% of availability eliminating the cost of passive hot standby.
+The typical failover time ranges from 60-120 seconds. The cloud native design of Single Server allows it to support 99.99% availability, eliminating the cost of a passive hot standby.
Azure's industry leading 99.99% availability service level agreement (SLA), powered by a global network of Microsoft-managed datacenters, helps keep your applications running 24/7.
-## Automated Patching
+## Automated Patching
-The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. There is no user action or configuration settings required for patching. The patching frequency is service managed based on the criticality of the payload. In general, the service follows monthly release schedule as part of the continuous integration and release. Users can subscribe to the [planned maintenance notification](concepts-monitoring.md) to receive notification of the upcoming maintenance 72 hours before the event.
+The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For the MySQL engine, minor version upgrades are automatic and included as part of the patching cycle. There's no user action or configuration setting required for patching. The patching frequency is service managed based on the criticality of the payload. In general, the service follows a monthly release schedule as part of the continuous integration and release. Users can subscribe to the [planned maintenance notification](concepts-monitoring.md) to receive notification of the upcoming maintenance 72 hours before the event.
## Automatic Backups
-The single server service automatically creates server backups and stores them in user configured locally redundant or geo-redundant storage. Backups can be used to restore your server to any point-in-time within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured up to 35 days. All backups are encrypted using AES 256-bit encryption. Refer to [Backups](concepts-backup.md) for details.
+Single Server automatically creates server backups and stores them in user configured locally redundant or geo-redundant storage. Backups can be used to restore your server to any point-in-time within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured up to 35 days. All backups are encrypted using AES 256-bit encryption. Refer to [Backups](concepts-backup.md) for details.
## Adjust performance and scale within seconds
-The single server service is available in three SKU tiers: Basic, General Purpose, and Memory Optimized. The Basic tier is best suited for low-cost development and low concurrency workloads. The General Purpose and Memory Optimized are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume. See [Pricing tiers](./concepts-pricing-tiers.md) for details.
+Single Server is available in three SKU tiers: Basic, General Purpose, and Memory Optimized. The Basic tier is best suited for low-cost development and low concurrency workloads. The General Purpose and Memory Optimized are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume. See [Pricing tiers](./concepts-pricing-tiers.md) for details.
## Enterprise grade Security, Compliance, and Governance
-The single server service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default) or [customer managed](concepts-data-encryption-mysql.md). The service encrypts data in-motion with transport layer security (SSL/TLS) enforced by default. The service supports TLS versions 1.2, 1.1 and 1.0 with an ability to enforce [minimum TLS version](concepts-ssl-connection-security.md).
+Single Server uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups and temporary files created while running queries, is encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default) or [customer managed](concepts-data-encryption-mysql.md). The service encrypts data in-motion with transport layer security (SSL/TLS) enforced by default. The service supports TLS versions 1.2, 1.1, and 1.0 with the ability to enforce a [minimum TLS version](concepts-ssl-connection-security.md).
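As a hedged example of enforcing a minimum TLS version on a single server, the following Azure CLI sketch assumes the `--minimal-tls-version` parameter on `az mysql server update` and uses placeholder names.

```azurecli
# Require TLS 1.2 for all connections to the single server (sketch; placeholder names).
az mysql server update \
  --resource-group my-resource-group \
  --name my-single-server \
  --minimal-tls-version TLS1_2
```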
The service allows private access to the servers using [private link](concepts-data-access-security-private-link.md) and offers threat protection through the optional [Azure Defender for open-source relational databases](../security-center/defender-for-databases-introduction.md) plan. Azure Defender detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
-In addition to native authentication, the single server service supports [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) authentication. Azure AD authentication is a mechanism of connecting to the MySQL servers using identities defined and managed in Azure AD. With Azure AD authentication, you can manage database user identities and other Azure services in a central location, which simplifies and centralizes access control.
+In addition to native authentication, Single Server supports [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) authentication. Azure AD authentication is a mechanism of connecting to the MySQL servers using identities defined and managed in Azure AD. With Azure AD authentication, you can manage database user identities and other Azure services in a central location, which simplifies and centralizes access control.
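To illustrate enabling Azure AD authentication, the following Azure CLI sketch sets an Azure AD administrator on the server; it assumes the `az mysql server ad-admin create` command, and the display name and object ID are placeholders for a user or group in your tenant.

```azurecli
# Set an Azure AD administrator for the single server (sketch; placeholder identity values).
az mysql server ad-admin create \
  --resource-group my-resource-group \
  --server-name my-single-server \
  --display-name "DBA Group" \
  --object-id 00000000-0000-0000-0000-000000000000
```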
[Audit logging](concepts-audit-logs.md) is available to track all database level activity.
-The single server service is complaint with all the industry-leading certifications like FedRAMP, HIPAA, PCI DSS. Visit the [Azure Trust Center](https://www.microsoft.com/trustcenter/security) for information about Azure's platform security.
+Single Server is compliant with all the industry-leading certifications such as FedRAMP, HIPAA, and PCI DSS. Visit the [Azure Trust Center](https://www.microsoft.com/trustcenter/security) for information about Azure's platform security.
For more information about Azure Database for MySQL security features, see the [security overview](concepts-security.md). ## Monitoring and alerting
-The single server service is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service allows configuring slow query logs and comes with a differentiated [Query store](concepts-query-store.md) feature. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Using these tools, you can quickly optimize your workloads, and configure your server for best performance. See [Monitoring](concepts-monitoring.md) for details.
+Single Server is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service allows configuring slow query logs and comes with a differentiated [Query store](concepts-query-store.md) feature. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Using these tools, you can quickly optimize your workloads, and configure your server for best performance. See [Monitoring](concepts-monitoring.md) for details.
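As a hedged sketch of turning on the slow query log through server parameters, the following Azure CLI example assumes the `az mysql server configuration set` command; parameter names and values should be verified against the monitoring article.

```azurecli
# Enable the slow query log and log queries that run longer than 10 seconds (sketch; placeholder names).
az mysql server configuration set \
  --resource-group my-resource-group \
  --server-name my-single-server \
  --name slow_query_log --value ON

az mysql server configuration set \
  --resource-group my-resource-group \
  --server-name my-single-server \
  --name long_query_time --value 10
```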
## Migration
-The service runs community version of MySQL. This allows full application compatibility and requires minimal refactoring cost to migrate existing application developed on MySQL engine to single server service. The migration to the single server can be performed using one of the following options:
+The service runs the community version of MySQL. This allows full application compatibility and requires minimal refactoring cost to migrate existing applications developed on the MySQL engine to Single Server. The migration to Single Server can be performed using one of the following options:
-- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like mysqldump/mydumper can provide fastest way to migrate. See [Migrate using dump and restore](concepts-migrate-dump-restore.md) for details. -- **Azure Database Migration Service** – For seamless and simplified migrations to single server with minimal downtime, [Azure Database Migration Service](../dms/tutorial-mysql-azure-mysql-online.md) can be leveraged.
+- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like mysqldump/mydumper can provide the fastest way to migrate. See [Migrate using dump and restore](concepts-migrate-dump-restore.md) for details.
- **Data-in replication** – For minimal downtime migrations, data-in replication, which relies on binlog-based replication, can also be leveraged. Data-in replication is preferred for minimal downtime migrations by hands-on experts looking for more control over migration. See [data-in replication](concepts-data-in-replication.md) for details. ## Contacts
-For any questions or suggestions you might have about working with Azure Database for MySQL, send an email to the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address is not a technical support alias.
+
+For any questions or suggestions you might have about working with Azure Database for MySQL, send an email to the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address isn't a technical support alias.
In addition, consider the following points of contact as appropriate:
In addition, consider the following points of contact as appropriate:
- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/forums/597982-azure-database-for-mysql). ## Next steps
-Now that you've read an introduction to Azure Database for MySQL single server deployment mode, you're ready to:
-- Create your first server.
+Now that you've read an introduction to Azure Database for MySQL - Single Server deployment mode, you're ready to:
+
+- Create your first server.
- [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) - [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md) - [Azure CLI samples for Azure Database for MySQL](sample-scripts-azure-cli.md)
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-solutions/apache-kafka-confluent-cloud/overview.md
To reduce the burden of cross-platform management, Microsoft partnered with Conf
Previously, you had to purchase the Confluent Cloud offering in the Marketplace and separately set up the account in Confluent Cloud. To manage configurations and resources, you had to navigate between the portals for Azure and Confluent Cloud.
-Now, you provision the Confluent Cloud resources through a resource provider named **Microsoft.Confluent**. You create and manage Confluent Cloud organization resources through the [Azure portal](https://portal.azure.com/), [Azure CLI](/cli/azure/), or [Azure SDKs](/azure/index.yml#languages-and-tools). Confluent Cloud owns and runs the software as a service (SaaS) application, including the environments, clusters, topics, API keys, and managed connectors.
+Now, you provision the Confluent Cloud resources through a resource provider named **Microsoft.Confluent**. You create and manage Confluent Cloud organization resources through the [Azure portal](https://portal.azure.com/), [Azure CLI](/cli/azure/), or [Azure SDKs](/azure#languages-and-tools). Confluent Cloud owns and runs the software as a service (SaaS) application, including the environments, clusters, topics, API keys, and managed connectors.
## Capabilities
For support and terms, see:
## Next steps
-To create an instance of Apache Kafka for Confluent Cloud, see [QuickStart: Get started with Confluent Cloud on Azure](create.md).
+To create an instance of Apache Kafka for Confluent Cloud, see [QuickStart: Get started with Confluent Cloud on Azure](create.md).
search Search Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-capacity-planning.md
Previously updated : 04/06/2021 Last updated : 06/18/2021 # Estimate and manage capacity of an Azure Cognitive Search service Before [provisioning a search service](search-create-service-portal.md) and locking in a specific pricing tier, take a few minutes to understand how capacity works and how you might adjust replicas and partitions to accommodate workload fluctuation.
-Capacity is a function of the [service tier](search-sku-tier.md), establishing maximum storage per service, per partition, and the maximum limits on the number of objects you can create. The Basic tier is designed for apps having modest storage requirements (one partition only) but with the ability to run in a high availability configuration (3 replicas). Other tiers are designed for specific workloads or patterns, such as multitenancy. Internally, services created on those tiers benefit from hardware that helps those scenarios.
+In Azure Cognitive Search, capacity is based on *replicas* and *partitions*. Replicas are copies of the search engine.
+Partitions are units of storage. Each new search service starts with one each, but you can scale up each resource independently to accommodate fluctuations in indexing and query workloads. Adding either resource is an [added cost](search-sku-manage-costs.md).
-The scalability architecture in Azure Cognitive Search is based on flexible combinations of replicas and partitions so that you can vary capacity depending on whether you need more query or indexing power. Once a service is created, you can increase or decrease the number of replicas or partitions independently. Costs will go up with each additional physical resource, but once large workloads are finished, you can reduce scale to lower your bill. Depending on the tier and the size of the adjustment, adding or reducing capacity can take anywhere from 15 minutes to several hours.
+The internal characteristics of replicas and partitions, meaning the underlying physical hardware, vary by [service tier](search-sku-tier.md). If you provisioned on Standard, replicas and partitions will be faster and larger than those of Basic.
-When modifying the allocation of replicas and partitions, we recommend using the Azure portal. The portal enforces limits on allowable combinations that stay below maximum limits of a tier. However, if you require a script-based or code-based provisioning approach, the [Azure PowerShell](search-manage-powershell.md) or the [Management REST API](/rest/api/searchmanagement/services) are alternative solutions.
+Changing capacity is not instantaneous. It can take up to an hour to commission or decommission partitions, especially on services with large amounts of data.
+
+When scaling your search service, you can choose from the following tools and approaches:
+++ [Azure portal](#adjust-capacity)++ [Azure PowerShell](search-manage-powershell.md)++ [Azure CLI](/cli/azure/search)++ [Management REST API](/rest/api/searchmanagement/services) ## Concepts: search units, replicas, partitions, shards
The Free tier and preview features are not covered by [service-level agreements
## When to add capacity
-Initially, a service is allocated a minimal level of resources consisting of one partition and one replica. The [tier you choose](search-sku-tier.md) determines partition size and speed, and each tier is optimized around a set of characteristics that fit various scenarios. If you choose a higher-end tier, you might need fewer partitions than if you go with S1. One of the questions you'll need to answer through self-directed testing is whether a larger and more expensive partition yields better performance than two cheaper partitions on a service provisioned at a lower tier.
+Initially, a service is allocated a minimal level of resources consisting of one partition and one replica. The [tier you choose](search-sku-tier.md) determines partition size and speed, and each tier is optimized around a set of characteristics that fit various scenarios. If you choose a higher-end tier, you might [need fewer partitions](search-performance-tips.md#service-capacity) than if you go with S1. One of the questions you'll need to answer through self-directed testing is whether a larger and more expensive partition yields better performance than two cheaper partitions on a service provisioned at a lower tier.
A single service must have sufficient resources to handle all workloads (indexing and queries). Neither workload runs in the background. You can schedule indexing for times when query requests are naturally less frequent, but the service will not otherwise prioritize one task over another. Additionally, a certain amount of redundancy smooths out query performance when services or nodes are updated internally.
Finally, larger indexes take longer to query. As such, you might find that every
:::image type="content" source="media/search-capacity-planning/1-initial-values.png" alt-text="Scale page showing current values" border="true":::
-1. Use the slider to increase or decrease the number of partitions. The formula at the bottom indicates how many search units are being used. Select **Save**.
+1. Use the slider to increase or decrease the number of partitions. Select **Save**.
This example adds a second replica and partition. Notice the search unit count; it is now four because the billing formula is replicas multiplied by partitions (2 x 2). Doubling capacity more than doubles the cost of running the service. If the search unit cost was $100, the new monthly bill would now be $400.
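To make that billing arithmetic concrete, here is a minimal Python sketch; the $100 figure is the hypothetical unit price from the example above, not a published rate.

```python
def monthly_bill(replicas: int, partitions: int, price_per_su: float) -> float:
    """Search units (SU) = replicas x partitions; the bill scales linearly with SU."""
    return replicas * partitions * price_per_su

# The example above: scaling from 1 x 1 to 2 x 2 at a hypothetical $100 per SU.
print(monthly_bill(1, 1, 100.0))  # 100.0
print(monthly_bill(2, 2, 100.0))  # 400.0
```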
Finally, larger indexes take longer to query. As such, you might find that every
> [!NOTE] > After a service is provisioned, it cannot be upgraded to a higher tier. You must create a search service at the new tier and reload your indexes. See [Create an Azure Cognitive Search service in the portal](search-create-service-portal.md) for help with service provisioning.
->
-> Additionally, partitions and replicas are managed exclusively and internally by the service. There is no concept of processor affinity, or assigning a workload to a specific node.
->
+
+## How scale requests are handled
+
+Upon receipt of a scale request, the search service:
+
+1. Checks whether the request is valid.
+1. Starts backing up data and system information.
+1. Checks whether the service is already in a provisioning state (currently adding or eliminating either replicas or partitions).
+1. Starts provisioning.
+
+Scaling a service can take as little as 15 minutes or well over an hour, depending on the size of the service and the scope of the request. Backup can take several minutes, depending on the amount of data and number of partitions and replicas.
+
+The above steps are not entirely consecutive. For example, the system starts provisioning when it can safely do so, which could be while backup is winding down.
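If you submit the scale request programmatically rather than through the portal slider, the call that kicks off this sequence could look like the minimal sketch below. It assumes the Management REST API with api-version 2020-08-01, a bearer token for `https://management.azure.com/`, and placeholder subscription, resource group, and service names.

```python
import requests

# Placeholder values -- substitute your own.
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
SERVICE_NAME = "my-search-service"
TOKEN = "<bearer token for https://management.azure.com/>"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Search"
    f"/searchServices/{SERVICE_NAME}?api-version=2020-08-01"
)

# Request 2 replicas and 2 partitions (4 search units).
body = {"properties": {"replicaCount": 2, "partitionCount": 2}}

response = requests.patch(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()
print(response.json()["properties"]["provisioningState"])
```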
+
+## Errors during scaling
+
+The error message "Service update operations are not allowed at this time because we are processing a previous request" is caused by repeating a request to scale down or up when the service is already processing a previous request.
+
+Resolve this error by checking the service's provisioning status:
+
+1. Use the [Management REST API](/rest/api/searchmanagement/services), [Azure PowerShell](search-manage-powershell.md), or [Azure CLI](/cli/azure/search) to get service status.
+1. Call [Get Service](/rest/api/searchmanagement/services/get).
+1. Check the response for ["provisioningState": "provisioning"](/rest/api/searchmanagement/services/get#provisioningstate).
+
+If status is "Provisioning", then wait for the request to complete. Status should be either "Succeeded" or "Failed" before another request is attempted. There is no status for backup. Backup is an internal operation and it's unlikely to be a factor in any disruption of a scale exercise.
<a id="chart"></a>
All Standard and Storage Optimized search services can assume the following comb
SUs, pricing, and capacity are explained in detail on the Azure website. For more information, see [Pricing Details](https://azure.microsoft.com/pricing/details/search/). > [!NOTE]
-> The number of replicas and partitions divides evenly into 12 (specifically, 1, 2, 3, 4, 6, 12). This is because Azure Cognitive Search pre-divides each index into 12 shards so that it can be spread in equal portions across all partitions. For example, if your service has three partitions and you create an index, each partition will contain four shards of the index. How Azure Cognitive Search shards an index is an implementation detail, subject to change in future releases. Although the number is 12 today, you shouldn't expect that number to always be 12 in the future.
+> The number of replicas and partitions divides evenly into 12 (specifically, 1, 2, 3, 4, 6, 12). Azure Cognitive Search pre-divides each index into 12 shards so that it can be spread in equal portions across all partitions. For example, if your service has three partitions and you create an index, each partition will contain four shards of the index. How Azure Cognitive Search shards an index is an implementation detail, subject to change in future releases. Although the number is 12 today, you shouldn't expect that number to always be 12 in the future.
> ## Next steps
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-blob-storage.md
This connection string does not require an account key, but you must follow the
**Full access storage account connection string**: `{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }`
-You can get the connection string from the Azure portal by navigating to the storage account blade > Settings > Keys (for Classic storage accounts) or Settings > Access keys (for Azure Resource Manager storage accounts).
+You can get the connection string from the Azure portal by navigating to the storage account blade > Settings > Keys (for Classic storage accounts) or Security + networking > Access keys (for Azure Resource Manager storage accounts).
**Storage account shared access signature** (SAS) connection string: `{ "connectionString" : "BlobEndpoint=https://<your account>.blob.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=b&sp=rl;" }`
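As a rough illustration, either connection string format can be passed to the Create Data Source REST API. The sketch below uses the full-access format with placeholder service, key, container, and data source names; it is not specific to any one sample.

```python
import requests

SEARCH_SERVICE = "my-search-service"   # placeholder
ADMIN_API_KEY = "<admin api-key>"      # placeholder
CONNECTION_STRING = (
    "DefaultEndpointsProtocol=https;AccountName=<your storage account>;"
    "AccountKey=<your account key>;"
)

data_source = {
    "name": "blob-datasource",
    "type": "azureblob",
    "credentials": {"connectionString": CONNECTION_STRING},
    "container": {"name": "my-container"},
}

# Create the blob data source (fails if one with the same name already exists).
response = requests.post(
    f"https://{SEARCH_SERVICE}.search.windows.net/datasources?api-version=2020-06-30",
    json=data_source,
    headers={"api-key": ADMIN_API_KEY},
)
response.raise_for_status()
```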
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-howto-access-ip-restricted.md
Last updated 10/14/2020
-# Configure IP firewall rules to allow indexer connections (Azure Cognitive Search)
+# Configure IP firewall rules to allow indexer connections in Azure Cognitive Search
IP firewall rules on Azure resources such as storage accounts, Cosmos DB accounts, and Azure SQL Servers only permit traffic originating from specific IP ranges to access data.
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-securing-resources.md
Last updated 10/14/2020
-# Indexer access to content protected by Azure network security features (Azure Cognitive Search)
+# Indexer access to content protected by Azure network security features
-Azure Cognitive Search indexers can make outbound calls to various Azure resources during execution. This article explains the concepts behind indexer access to content that is protected by IP firewalls, private endpoints, or other Azure network-level security mechanisms. An indexer makes outbound calls in two situations: connecting to data sources during indexing, and connecting to encapsulated code through a skillset. A list of all possible resource types that an indexer might access in a typical run are listed in the table below.
+Azure Cognitive Search indexers can make outbound calls to various Azure resources during execution. This article explains the concepts behind indexer access to content that is protected by IP firewalls, private endpoints, or other Azure network-level security mechanisms.
+
+An indexer makes outbound calls in two situations:
+
+- Connecting to external data sources during indexing
+- Connecting to external, encapsulated code through a skillset
+
+The resource types that an indexer might access in a typical run are listed in the table below.
| Resource | Purpose within indexer run | | | |
Customers can secure these resources via several network isolation mechanisms of
> [!NOTE] > In addition to the options listed above, for network-secured Azure Storage accounts, customers can leverage the fact that Azure Cognitive Search is a [trusted Microsoft service](../storage/common/storage-network-security.md#trusted-microsoft-services). This means that a specific search service can bypass virtual network or IP restrictions on the storage account and can access data in the storage account, if the appropriate role-based access control is enabled on the storage account. For more information, see [Indexer connections using the trusted service exception](search-indexer-howto-access-trusted-service-exception.md). This option can be utilized instead of the IP restriction route, in case either the storage account or the search service cannot be moved to a different region.
-When choosing which secure access mechanism that an indexer should use, consider the following constraints:
+When choosing a secure access mechanism, consider the following constraints:
- An indexer cannot connect to a [virtual network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md). Public endpoints with credentials, private endpoints, trusted service, and IP addressing are the only supported methodologies for indexer connections.-- A search service cannot be provisioned into a specific virtual network, running natively on a virtual machine. This functionality will not be offered by Azure Cognitive Search.
+- A search service always runs in the cloud and cannot be provisioned into a specific virtual network or run natively on a virtual machine. This functionality will not be offered by Azure Cognitive Search.
- When indexers utilize (outbound) private endpoints to access resources, additional [private link charges](https://azure.microsoft.com/pricing/details/search/) may apply. ## Indexer execution environment
When choosing which secure access mechanism that an indexer should use, consider
Azure Cognitive Search indexers are capable of efficiently extracting content from data sources, adding enrichments to the extracted content, and optionally generating projections before writing the results to the search index. Depending on the number of responsibilities assigned to an indexer, it can run in one of two environments: - An environment private to a specific search service. Indexers running in such environments share resources with other workloads (such as other customer-initiated indexing or querying workloads). Typically, only indexers that perform text-based indexing (for example, ones that do not use a skillset) run in this environment.+ - A multi-tenant environment hosting indexers that are resource intensive, such as those with skillsets. This environment is used to offload computationally intensive processing, leaving service-specific resources available for routine operations. This multi-tenant environment is managed and secured by Microsoft, at no extra cost to the customer. For any given indexer run, Azure Cognitive Search determines the best environment in which to run the indexer. If you are using an IP firewall to control access to Azure resources, knowing about execution environments will help you set up an IP range that is inclusive of both.
For more information about this connectivity option, see [Indexer connections th
## Granting access via private endpoints Indexers can utilize [private endpoints](../private-link/private-endpoint-overview.md) to access resources that are locked down to select virtual networks or that have no public access enabled.+ This functionality is only available in billable search services, with limits on the number of private endpoints that can be created. For more information, see [Service limits](search-limits-quotas-capacity.md#shared-private-link-resource-limits). ### Step 1: Create a private endpoint to the secure resource
Azure Cognitive Search will validate that callers of this API have Azure RBAC pe
### Step 2: Approve the private endpoint connection When the (asynchronous) operation that creates a shared private link resource completes, a private endpoint connection will be created in a "Pending" state. No traffic flows over the connection yet.+ The customer is then expected to locate this request on their secure resource and "Approve" it. Typically, this can be done either via the Azure portal or via the [REST API](/rest/api/virtualnetwork/privatelinkservices/updateprivateendpointconnection). ### Step 3: Force indexers to run in the "private" environment An approved private endpoint allows outgoing calls from the search service to succeed against a resource that has network-level access restrictions (for example, a storage account data source that is configured to accept connections only from certain virtual networks).+ This means any indexer that is able to reach such a data source over the private endpoint will succeed. If the private endpoint is not approved, or if the indexer does not utilize the private endpoint connection, the indexer run will end up in `transientFailure`.
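A minimal sketch of step 3, assuming the indexer's `parameters.configuration.executionEnvironment` property is the mechanism for pinning execution to the private environment; the service, data source, and index names are placeholders.

```python
import requests

SEARCH_SERVICE = "my-search-service"   # placeholder
ADMIN_API_KEY = "<admin api-key>"      # placeholder

indexer = {
    "name": "blob-indexer",
    "dataSourceName": "blob-datasource",
    "targetIndexName": "my-index",
    # Pin execution to the search service's private environment so that
    # outbound calls go through the approved private endpoint (assumed setting).
    "parameters": {"configuration": {"executionEnvironment": "private"}},
}

response = requests.put(
    f"https://{SEARCH_SERVICE}.search.windows.net/indexers/blob-indexer?api-version=2020-06-30",
    json=indexer,
    headers={"api-key": ADMIN_API_KEY},
)
response.raise_for_status()
```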
search Search Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-monitor-logs.md
Previously updated : 06/30/2020 Last updated : 01/27/2021 # Collect and analyze log data for Azure Cognitive Search Diagnostic or operational logs provide insight into the detailed operations of Azure Cognitive Search and are useful for monitoring service health and processes. Internally, Microsoft preserves system information on the backend for a short period of time (about 30 days), sufficient for investigation and analysis if you file a support ticket. However, if you want ownership over operational data, you should configure a diagnostic setting to specify where logging information is collected.
-Diagnostic logging is enabled through integration with [Azure Monitor](../azure-monitor/index.yml).
-
-When you set up diagnostic logging, you will be asked to specify a storage mechanism. The following table enumerates options for collecting and persisting data.
+Diagnostic logging is enabled through back-end integration with [Azure Monitor](../azure-monitor/index.yml). When you set up diagnostic logging, you will be asked to specify a storage option for persisting the log. The following table enumerates your options.
| Resource | Used for | |-|-|
Diagnostic settings specify how logged events and metrics are collected.
1. Save the setting.
-1. After logging has been enabled, use your search service to start generating logs and metrics. It will take time before logged events and metrics become available.
+1. After logging is enabled, your search service will start generating logs and metrics. It can take some time before logged events and metrics become available.
-For Log Analytics, it will be several minutes before data is available, after which you can run Kusto queries to return data. For more information, see [Monitor query requests](search-monitor-logs.md).
+For Log Analytics, expect to wait several minutes before data is available, after which you can run Kusto queries to return data. For more information, see [Monitor query requests](search-monitor-logs.md).
For Blob storage, it takes one hour before the containers appear in Blob storage. There is one blob, per hour, per container. Containers are only created when there is activity to log or measure. When the data is copied to a storage account, the data is formatted as JSON and placed in two containers:
Two tables contain logs and metrics for Azure Cognitive Search: **AzureDiagnosti
1. Under **Monitoring**, select **Logs**.
-1. Enter **AzureMetrics** in the query window. Run this simple query to get acquainted with the data collected in this table. Scroll across the table to view metrics and values. Notice the record count at the top, and if your service has been collecting metrics for a while, you might want to adjust the time interval to get a manageable data set.
+1. In the query window, type **AzureMetrics**, check the scope (your search service) and time range, and then click **Run** to get acquainted with the data collected in this table.
+
+ Scroll across the table to view metrics and values. Notice the record count at the top. If your service has been collecting metrics for a while, you might want to adjust the time interval to get a manageable data set.
![AzureMetrics table](./media/search-monitor-usage/azuremetrics-table.png "AzureMetrics table")
Correlate query request with indexing operations, and render the data points acr
AzureDiagnostics | where OperationName in ('Query.Search', 'Indexing.Index')
-| summarize Count=count(), AvgLatency=avg(DurationMs) by bin(TimeGenerated, 1h), OperationName
+| summarize Count=count(), AvgLatency=avg(durationMs) by bin(TimeGenerated, 1h), OperationName
| render timechart ```
search Search Monitor Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-monitor-queries.md
Previously updated : 02/18/2020 Last updated : 01/26/2021 # Monitor query requests in Azure Cognitive Search This article explains how to measure query performance and volume using metrics and resource logging. It also explains how to collect the input terms used in queries - necessary information when you need to assess the utility and effectiveness of your search corpus.
-Historical data that feeds into metrics is preserved for 30 days. For longer retention, or to report on operational data and query strings, be sure to enable a [diagnostic setting](search-monitor-logs.md) that specifies a storage option for persisting logged events and metrics.
+The Azure portal shows basic metrics about query latency, query load (QPS), and throttling. Historical data that feeds into these metrics is preserved for 30 days. For longer retention, or to report on operational data and query strings, you must enable a [diagnostic setting](search-monitor-logs.md) that specifies a storage option for persisting logged events and metrics.
Conditions that maximize the integrity of data measurement include:
For deeper exploration, open metrics explorer from the **Monitoring** menu so th
1. Zoom into an area of interest on the line chart. Put the mouse pointer at the beginning of the area, click and hold the left mouse button, drag to the other side of area, and release the button. The chart will zoom in on that time range.
-## Identify strings used in queries
+## Return query strings entered by users
When you enable resource logging, the system captures query requests in the **AzureDiagnostics** table. As a prerequisite, you must have already enabled [resource logging](search-monitor-logs.md), specifying a log analytics workspace or another storage option.
When you enable resource logging, the system captures query requests in the **Az
1. Run the following expression to search Query.Search operations, returning a tabular result set consisting of the operation name, query string, the index queried, and the number of documents found. The last two statements exclude query strings consisting of an empty or unspecified search, over a sample index, which cuts down the noise in your results.
- ```
- AzureDiagnostics
+ ```kusto
+ AzureDiagnostics
| project OperationName, Query_s, IndexName_s, Documents_d | where OperationName == "Query.Search" | where Query_s != "?api-version=2020-06-30&search=*"
Add the duration column to get the numbers for all queries, not just those that
1. Under the Monitoring section, select **Logs** to query for log information.
-1. Run the following query to return queries, sorted by duration in milliseconds. The longest-running queries are at the top.
+1. Run the following basic query to return queries, sorted by duration in milliseconds. The longest-running queries are at the top.
- ```
+ ```kusto
AzureDiagnostics | project OperationName, resultSignature_d, DurationMs, Query_s, Documents_d, IndexName_s | where OperationName == "Query.Search"
When pushing the limits of a particular replica-partition configuration, setting
If you specified an email notification, you will receive an email from "Microsoft Azure" with a subject line of "Azure: Activated Severity: 3 `<your rule name>`".
-<!-- ## Report query data
-
-Power BI is an analytical reporting tool useful for visualizing data, including log information. If you are collecting data in Blob storage, a Power BI template makes it easy to spot anomalies or trends. Use this link to download the template. -->
- ## Next steps If you haven't done so already, review the fundamentals of search service monitoring to learn about the full range of oversight capabilities.
search Search Monitor Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-monitor-usage.md
Previously updated : 06/30/2020 Last updated : 01/27/2021 # Monitor operations and activity of Azure Cognitive Search
-This article is an overview of monitoring concepts and tools for Azure Cognitive Search. For holistic monitoring, you can use a combination of built-in functionality and add-on services like Azure Monitor.
+This article is an overview of monitoring concepts and tools for Azure Cognitive Search. For holistic monitoring, you should use a combination of built-in functionality and add-on services like Azure Monitor.
Altogether, you can track the following:
-* Service: health/availability and changes to service configuration.
-* Storage: both used and available, with counts for each content type relative to the quota allowed for the service tier.
+* Search service: health and changes to service configuration.
+* Storage consumption: both used and available.
+* Object limits on indexes, indexers, and other objects, with counts for each type, relative to the [maximum allowed](search-limits-quotas-capacity.md) for the service tier.
* Query activity: volume, latency, and throttled or dropped queries. Logged query requests require [Azure Monitor](#add-azure-monitor). * Indexing activity: requires [diagnostic logging](#add-azure-monitor) with Azure Monitor.
-A search service does not support per-user authentication, so no identity information will be found in the logs.
+A search service does not support per-user authentication, so no user identity information will be found in the logs.
## Built-in monitoring
The following screenshot helps you locate monitoring information in the portal.
<a name="monitoring-apis"> </a>
-### APIs useful for monitoring
+### REST APIs useful for monitoring
-You can use the following APIs to retrieve the same information found in the Monitoring and Usage tabs in the portal.
+You can use [Postman](search-get-started-rest.md) and the following APIs to retrieve the same information found in the Monitoring and Usage tabs in the portal. You will need to provide an [admin API key](search-security-api-keys.md) to get system information.
* [GET Service Statistics](/rest/api/searchservice/get-service-statistics) * [GET Index Statistics](/rest/api/searchservice/get-index-statistics)
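For example, Get Service Statistics is a simple GET that any REST client can call. A minimal sketch with a placeholder service name and admin key:

```python
import requests

SEARCH_SERVICE = "my-search-service"   # placeholder
ADMIN_API_KEY = "<admin api-key>"      # placeholder

stats = requests.get(
    f"https://{SEARCH_SERVICE}.search.windows.net/servicestats?api-version=2020-06-30",
    headers={"api-key": ADMIN_API_KEY},
).json()

# Each counter reports current usage against the tier's quota
# (documents, indexes, indexers, storage size, and so on).
for name, counter in stats["counters"].items():
    print(name, counter)
```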
The following illustration is for the free service, which is capped at 3 objects
Many services, including Azure Cognitive Search, integrate with [Azure Monitor](../azure-monitor/index.yml) for additional alerts, metrics, and logging diagnostic data.
-[Enable diagnostic logging](search-monitor-logs.md) for a search service if you want control over data collection and storage.
-Logged events captured by Azure Monitor are stored in the **AzureDiagnostics** table and consists of operational data related to queries and indexing.
+[Enable diagnostic logging](search-monitor-logs.md) for a search service if you want control over data collection and storage. Logged events captured by Azure Monitor are stored in the **AzureDiagnostics** table and consist of operational data related to queries and indexing.
Azure Monitor provides several storage options, and your choice determines how you can consume the data:
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-manage-encryption-keys.md
Previously updated : 05/28/2021 Last updated : 06/18/2021 # Configure customer-managed keys for data encryption in Azure Cognitive Search
-Azure Cognitive Search automatically encrypts indexed content at rest with [service-managed keys](../security/fundamentals/encryption-atrest.md#azure-encryption-at-rest-components). If more protection is needed, you can supplement default encryption with an additional encryption layer using keys that you create and manage in Azure Key Vault. This article walks you through the steps of setting up customer-managed key encryption.
+Azure Cognitive Search automatically encrypts indexed content at rest with [service-managed keys](../security/fundamentals/encryption-atrest.md#azure-encryption-at-rest-components). If more protection is needed, you can supplement default encryption with an additional encryption layer using keys that you create and manage in Azure Key Vault.
-Customer-managed key encryption is dependent on [Azure Key Vault](../key-vault/general/overview.md). You can create your own encryption keys and store them in a key vault, or you can use Azure Key Vault's APIs to generate encryption keys. With Azure Key Vault, you can also audit key usage if you [enable logging](../key-vault/general/logging.md).
+This article walks you through the steps of setting up customer-managed key encryption. Here are some points to keep in mind:
-Encryption with customer-managed keys is applied to individual indexes or synonym maps when those objects are created, and is not specified on the search service level itself. Only new objects can be encrypted. You cannot encrypt content that already exists.
++ Customer-managed key encryption depends on [Azure Key Vault](../key-vault/general/overview.md). You can create your own encryption keys and store them in a key vault, or you can use Azure Key Vault's APIs to generate encryption keys. With Azure Key Vault, you can also audit key usage if you [enable logging](../key-vault/general/logging.md).
-Keys don't all need to be in the same key vault. A single search service can host multiple encrypted indexes or synonym maps, each encrypted with their own customer-managed encryption keys, stored in different key vaults. You can also have indexes and synonym maps in the same service that are not encrypted using customer-managed keys.
++ Encryption with customer-managed keys is applied when objects are created, and is not specified on the search service level itself. Only new objects can be encrypted. You cannot encrypt content that already exists.+++ Keys can be in different key vaults. A single search service can host multiple encrypted indexes and other objects, each encrypted with their own customer-managed encryption keys, stored in different key vaults. >[!Important] > If you implement customer-managed keys, be sure to follow strict procedures during routine rotation of key vault keys and Active Directory application secrets and registration. Always update all encrypted content to use new secrets and keys before deleting the old ones. If you miss this step, your content cannot be decrypted. ## Double encryption
-Double encryption is an extension of customer-managed keys (CMK). It is understood to be two-fold encryption (once by CMK, and again by service-managed keys), and comprehensive in scope, encompassing long-term storage that is written to a data disk, as well as short-term storage written to temporary disks. There is no configuration required. When you apply CMK to objects, double encryption is invoked automatically.
+Double encryption is an extension of customer-managed key (CMK) encryption. CMK encryption applies to long-term storage that is written to a data disk. The term *double encryption* refers to the additional encryption of short-term storage (of content written to temporary disks). There is no configuration required. When you apply CMK to objects, double encryption is invoked automatically.
Although double encryption is available in all regions, support was rolled out in two phases. The first roll out was in August 2020 and included the five regions listed below. The second roll out in May 2021 extended double encryption to all remaining regions. If you are using CMK on an older service and want double encryption, you will need to create a new search service in your region of choice.
Skip this step if you already have a key in Azure Key Vault.
:::image type="content" source="media/search-manage-encryption-keys/cmk-key-identifier.png" alt-text="Create a new key vault key":::
-## 3 - Register an app in Active Directory
+## 3 - Register an app
1. In [Azure portal](https://portal.azure.com), find the Azure Active Directory resource for your subscription.
Skip this step if you already have a key in Azure Key Vault.
:::image type="content" source="media/search-manage-encryption-keys/cmk-application-secret.png" alt-text="Application secret":::
-## 4 - Grant key access permissions
+## 4 - Grant permissions
In this step, you will create an access policy in Key Vault. This policy gives the application you registered with Active Directory permission to use your customer-managed key.
This example uses the REST API, with values for Azure Key Vault and Azure Active
> [!Note] > None of these key vault details are considered secret and could be easily retrieved by browsing to the relevant Azure Key Vault key page in Azure portal.
-## Example: Index encryption
-
-Create an encrypted index using the [Create Index Azure Cognitive Search REST API](/rest/api/searchservice/create-index). Use the `encryptionKey` property to specify which encryption key to use.
-> [!Note]
-> None of these key vault details are considered secret and could be easily retrieved by browsing to the relevant Azure Key Vault key page in Azure portal.
- ## REST examples This section shows the full JSON for an encrypted index and synonym map
The details of creating a new index via the REST API could be found at [Create I
{"name": "ParkingIncluded", "type": "Edm.Boolean", "filterable": true, "sortable": true, "facetable": true}, {"name": "LastRenovationDate", "type": "Edm.DateTimeOffset", "filterable": true, "sortable": true, "facetable": true}, {"name": "Rating", "type": "Edm.Double", "filterable": true, "sortable": true, "facetable": true},
- {"name": "Location", "type": "Edm.GeographyPoint", "filterable": true, "sortable": true},
+ {"name": "Location", "type": "Edm.GeographyPoint", "filterable": true, "sortable": true}
], "encryptionKey": { "keyVaultUri": "https://demokeyvault.vault.azure.net",
Create an encrypted synonym map using the [Create Synonym Map Azure Cognitive Se
You can now send the synonym map creation request, and then start using it normally.
-## Example: Data source encryption
+### Data source encryption
Create an encrypted data source using the [Create Data Source (Azure Cognitive Search REST API)](/rest/api/searchservice/create-data-source). Use the `encryptionKey` property to specify which encryption key to use.
Create an encrypted data source using the [Create Data Source (Azure Cognitive S
You can now send the data source creation request, and then start using it normally.
-## Example: Skillset encryption
+### Skillset encryption
Create an encrypted skillset using the [Create Skillset Azure Cognitive Search REST API](/rest/api/searchservice/create-skillset). Use the `encryptionKey` property to specify which encryption key to use. ```json {
- "name" : "datasource1",
- "type" : "azureblob",
- "credentials" :
- { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=datasource;AccountKey=accountkey;EndpointSuffix=core.windows.net"
- },
- "container" : { "name" : "containername" },
- "encryptionKey": {
- "keyVaultUri": "https://demokeyvault.vault.azure.net",
- "keyVaultKeyName": "myEncryptionKey",
- "keyVaultKeyVersion": "eaab6a663d59439ebb95ce2fe7d5f660",
- "accessCredentials": {
- "applicationId": "00000000-0000-0000-0000-000000000000",
- "applicationSecret": "myApplicationSecret"
+ "name": "skillset1",
+ "skills": [ omitted for brevity ],
+ "cognitiveServices": { omitted for brevity },
+ "knowledgeStore": { omitted for brevity },
+ "encryptionKey": (optional) {
+ "keyVaultKeyName": "myEncryptionKey",
+ "keyVaultKeyVersion": "eaab6a663d59439ebb95ce2fe7d5f660",
+ "keyVaultUri": "https://demokeyvault.vault.azure.net",
+ "accessCredentials": {
+ "applicationId": "00000000-0000-0000-0000-000000000000",
+ "applicationSecret": "myApplicationSecret"}
}
- }
} ``` You can now send the skillset creation request, and then start using it normally.
-## Example: Indexer encryption
+### Indexer encryption
Create an encrypted indexer using the [Create Indexer Azure Cognitive Search REST API](/rest/api/searchservice/create-indexer). Use the `encryptionKey` property to specify which encryption key to use.
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-how-to-query-request.md
Previously updated : 05/27/2021 Last updated : 06/18/2021 # Create a query that invokes semantic ranking and returns semantic captions
Set any other parameters that you want in the request. Parameters such as [spell
Highlight styling is applied to captions in the response. You can use the default style, or optionally customize the highlight style applied to captions. Captions apply highlight formatting over key passages in the document that summarize the response. The default is `<em>`. If you want to specify the type of formatting (for example, yellow background), you can set the highlightPreTag and highlightPostTag.
+## Query using Azure SDKs
+
+Beta versions of the Azure SDKs include support for semantic search. Because the SDKs are beta versions, documentation and samples are not yet available, but you can refer to the REST API section above for insights on how the APIs should work.
+
+| Azure SDK | Package |
+|--||
+| .NET | [Azure.Search.Documents package 11.3.0-beta.2](https://www.nuget.org/packages/Azure.Search.Documents/11.3.0-beta.2) |
+| Java | [com.azure:azure-search-documents 11.4.0-beta.2](https://search.maven.org/artifact/com.azure/azure-search-documents/11.4.0-beta.2/jar) |
+| JavaScript | [azure/search-documents 11.2.0-beta.2](https://www.npmjs.com/package/@azure/search-documents/v/11.2.0-beta.2)|
+| Python | [azure-search-documents 11.2.0b3](https://pypi.org/project/azure-search-documents/11.2.0b3/) |
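While the SDK betas settle, a semantic query can also be issued directly against the REST API. The following is a minimal sketch only: it assumes the 2020-06-30-Preview api-version and the parameter names described in the REST section above, and uses placeholder service, index, key, and field names.

```python
import requests

SEARCH_SERVICE = "my-search-service"     # placeholder
INDEX_NAME = "hotels-sample-index"       # placeholder
QUERY_API_KEY = "<query api-key>"        # placeholder

body = {
    "search": "interesting hotel near the water",
    "queryType": "semantic",
    "queryLanguage": "en-us",
    "searchFields": "HotelName,Description,Category",
    # Optional: override the default <em> highlight applied to captions.
    "highlightPreTag": "<mark>",
    "highlightPostTag": "</mark>",
}

response = requests.post(
    f"https://{SEARCH_SERVICE}.search.windows.net/indexes/{INDEX_NAME}"
    "/docs/search?api-version=2020-06-30-Preview",
    json=body,
    headers={"api-key": QUERY_API_KEY},
)
response.raise_for_status()
for doc in response.json()["value"]:
    print(doc.get("@search.rerankerScore"), doc.get("HotelName"))
```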
+ ## Evaluate the response As with all queries, a response is composed of all fields marked as retrievable, or just those fields listed in the select parameter. It includes the original relevance score, and might also include a count, or batched results, depending on how you formulated the request.
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-search-overview.md
To use semantic capabilities in queries, you'll need to make small modifications
Semantic search is a newer technology so it's important to set expectations about what it can and cannot do. It improves the quality of search results in two ways:
-* First, it promotes any matches are semantically closer to the intent of original query is a significant benefit.
+* First, it promotes matches that are semantically closer to the intent of the original query.
* Second, it makes results more easily consumable when captions, and potentially answers, are present on the page.
-At all times, the engine is working with existing content, and the features work best on searchable content that is structured as prose. Language models used in semantic search are designed to extract an intact string from your content that looks like an answer, but won't try to compose a new string as an answer to a query, or as a caption for a matching document.
+At all times, the engine is working with existing content, and the language models work best on searchable content that is structured as prose. Language models used in semantic search are designed to extract an intact string from your content that looks like an answer, but won't try to compose a new string as an answer to a query, or as a caption for a matching document.
Semantic search cannot correlate or infer information from different pieces of content within the document or corpus of documents. For example, given a query for "resort hotels in a desert" absent any geographical input, the engine won't produce matches for hotels located in Arizona or Nevada, even though both states have deserts. Similarly, if the query includes the clause "in the last 5 years", the engine won't calculate a time interval based on the current date to return.
security Antimalware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/antimalware.md
When you deploy and enable Microsoft Antimalware for Azure for your applications
* **Active protection** - reports telemetry metadata about detected threats and suspicious resources to Microsoft Azure to ensure rapid response to the evolving threat landscape, as well as enabling real-time synchronous signature delivery through the Microsoft Active Protection System (MAPS). * **Samples reporting** - provides and reports samples to the Microsoft Antimalware service to help refine the service and enable troubleshooting. * **Exclusions** - allows application and service administrators to configure exclusions for files, processes, and drives.
-* **Antimalware event collection** - records the antimalware service health, suspicious activities, and remediation actions taken in the operating system event log and collects them into the customerΓÇÖs Azure Storage account.
+* **Antimalware event collection** - records the antimalware service health, suspicious activities, and remediation actions taken in the operating system event log and collects them into the customer's Azure Storage account.
> [!NOTE] > Microsoft Antimalware can also be deployed using Azure Security Center. Read [Install Endpoint Protection in Azure Security Center](../../security-center/security-center-services.md#supported-endpoint-protection-solutions-) for more information.
The Azure service administrator can enable Antimalware for Azure with a default
The Azure portal or PowerShell cmdlets push the Antimalware extension package file to the Azure system at a pre-determined fixed location. The Azure Guest Agent (or the Fabric Agent) launches the Antimalware Extension, applying the Antimalware configuration settings supplied as input. This step enables the Antimalware service with either default or custom configuration settings. If no custom configuration is provided, then the antimalware service is enabled with the default configuration settings. Refer to the *Antimalware configuration* section in the [Microsoft Antimalware for Azure - Code Samples](/samples/browse/?redirectedfrom=TechNet-Gallery "Microsoft Antimalware For Azure Cloud Services and VMs Code Samples") for more details.
-Once running, the Microsoft Antimalware client downloads the latest protection engine and signature definitions from the Internet and loads them on the Azure system. The Microsoft Antimalware service writes service-related events to the system OS events log under the ΓÇ£Microsoft AntimalwareΓÇ¥ event source. Events include the Antimalware client health state, protection and remediation status, new and old configuration settings, engine updates and signature definitions, and others.
+Once running, the Microsoft Antimalware client downloads the latest protection engine and signature definitions from the Internet and loads them on the Azure system. The Microsoft Antimalware service writes service-related events to the system OS events log under the "Microsoft Antimalware" event source. Events include the Antimalware client health state, protection and remediation status, new and old configuration settings, engine updates and signature definitions, and others.
-You can enable Antimalware monitoring for your Cloud Service or Virtual Machine to have the Antimalware event log events written as they are produced to your Azure storage account. The Antimalware Service uses the Azure Diagnostics extension to collect Antimalware events from the Azure system into tables in the customerΓÇÖs Azure Storage account.
+You can enable Antimalware monitoring for your Cloud Service or Virtual Machine to have the Antimalware event log events written as they are produced to your Azure storage account. The Antimalware Service uses the Azure Diagnostics extension to collect Antimalware events from the Azure system into tables in the customer's Azure Storage account.
The deployment workflow including configuration steps and options supported for the above scenarios are documented in [Antimalware deployment scenarios](#antimalware-deployment-scenarios) section of this document.
The deployment workflow including configuration steps and options supported for
The default configuration settings are applied to enable Antimalware for Azure Cloud Services or Virtual Machines when you do not provide custom configuration settings. The default configuration settings have been pre-optimized for running in the Azure environment. Optionally, you can customize these default configuration settings as required for your Azure application or service deployment and apply them for other deployment scenarios.
-The following table summarizes the configuration settings available for the Antimalware service. The default configuration settings are marked under the column labeled ΓÇ£DefaultΓÇ¥ below.
+The following table summarizes the configuration settings available for the Antimalware service. The default configuration settings are marked under the column labeled "Default" below.
![Table 1](./media/antimalware/sec-azantimal-tb18.png)
To enable and configure Microsoft Antimalware for Azure Virtual Machines using t
14. Back in the **Settings** section, choose **Ok**. 15. In the **Create** screen, choose **Ok**.
-See this [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/blob/master/anti-malware-extension-windows-vm/azuredeploy.json#L261) for deployment of Antimalware VM extension for Windows.
+See this [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/blob/master/demos/anti-malware-extension-windows-vm/azuredeploy.json#L259) for deployment of Antimalware VM extension for Windows.
#### Deployment using the Visual Studio virtual machine configuration
To **enable** antimalware event collection for a virtual machine using the Azure
2. Click the Diagnostics command on Metric blade 3. Select **Status** ON and check the option for Windows event system 4. You can choose to uncheck all other options in the list, or leave them enabled per your application service needs.
-5. The Antimalware event categories ΓÇ£ErrorΓÇ¥, ΓÇ£WarningΓÇ¥, ΓÇ£InformationalΓÇ¥, etc., are captured in your Azure Storage account.
+5. The Antimalware event categories "Error", "Warning", "Informational", etc., are captured in your Azure Storage account.
Antimalware events are collected from the Windows event system logs to your Azure Storage account. You can configure the Storage Account for your Virtual Machine to collect Antimalware events by selecting the appropriate storage account.
service-bus-messaging How To Use Java Message Service 20 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/how-to-use-java-message-service-20.md
To learn more about how to prepare your developer environment for Java on Azure,
## What JMS features are supported? ## Downloading the Java Message Service (JMS) client library
service-bus-messaging Migrate Jms Activemq To Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/migrate-jms-activemq-to-servicebus.md
Even so, there are some differences between the two, as the following table show
### Current supported and unsupported features ### Considerations
service-bus-messaging Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/private-link-service.md
If you already have an existing namespace, you can create a private endpoint by
![Private endpoint created](./media/private-link-service/private-endpoint-created.png) ## Add a private endpoint using PowerShell The following example shows you how to use Azure PowerShell to create a private endpoint connection to a Service Bus namespace.
service-bus-messaging Service Bus Amqp Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-amqp-overview.md
All supported Service Bus client libraries available via the Azure SDK use AMQP
- [Azure Service Bus Modules for JavaScript and TypeScript](/javascript/api/overview/azure/service-bus?preserve-view=true) - [Azure Service Bus libraries for Python](/python/api/overview/azure/servicebus?preserve-view=true) In addition, you can use Service Bus from any AMQP 1.0 compliant protocol stack:
service-bus-messaging Service Bus Create Namespace Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-create-namespace-portal.md
Last updated 06/23/2020
A namespace is a scoping container for all messaging components. Multiple queues and topics can reside within a single namespace, and namespaces often serve as application containers. This article provides instructions for creating a namespace in the Azure portal. Congratulations! You have now created a Service Bus Messaging namespace.
service-bus-messaging Service Bus Dotnet Multi Tier App Using Service Bus Queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-dotnet-multi-tier-app-using-service-bus-queues.md
In this tutorial, you'll use Azure Active Directory (Azure AD) authentication to
The first step is to create a *namespace*, and obtain a [Shared Access Signature (SAS)](service-bus-sas.md) key for that namespace. A namespace provides an application boundary for each application exposed through Service Bus. A SAS key is generated by the system when a namespace is created. The combination of namespace name and SAS key provides the credentials for Service Bus to authenticate access to an application. ## Create a web role
To learn more about Service Bus, see the following resources:
* [Get started using Service Bus queues][sbacomqhowto] * [Service Bus service page][sbacom]
-To learn more about multi-tier scenarios, see:
-
-* [.NET Multi-Tier Application Using Storage Tables, Queues, and Blobs][mutitierstorage]
-- [sbacom]: https://azure.microsoft.com/services/service-bus/ [sbacomqhowto]: service-bus-dotnet-get-started-with-queues.md
-[mutitierstorage]: https://code.msdn.microsoft.com/Windows-Azure-Multi-Tier-eadceb36
service-bus-messaging Service Bus Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-ip-filtering.md
This section shows you how to use the Azure portal to create IP firewall rules f
> [!NOTE] > To restrict access to specific virtual networks, see [Allow access from specific networks](service-bus-service-endpoints.md). ## Use Resource Manager template This section has a sample Azure Resource Manager template that adds a virtual network and a firewall rule to an existing Service Bus namespace.
service-bus-messaging Service Bus Java How To Use Queues Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-java-how-to-use-queues-legacy.md
# Quickstart: Use Azure Service Bus queues with Java to send and receive messages In this tutorial, you learn how to create Java applications to send messages to and receive messages from an Azure Service Bus queue. > [!WARNING]
service-bus-messaging Service Bus Manage With Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-manage-with-ps.md
There are some alternate ways to manage Service Bus entities, as described in th
* [How to create Service Bus queues, topics and subscriptions using a PowerShell script](/archive/blogs/paolos/how-to-create-service-bus-queues-topics-and-subscriptions-using-a-powershell-script) * [How to create a Service Bus Namespace and an Event Hub using a PowerShell script](/archive/blogs/paolos/how-to-create-a-service-bus-namespace-and-an-event-hub-using-a-powershell-script)
-* [Service Bus PowerShell Scripts](https://code.msdn.microsoft.com/Service-Bus-PowerShell-a46b7059)
<!--Anchors-->
service-bus-messaging Service Bus Php How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-php-how-to-use-queues.md
Last updated 06/23/2020
# Quickstart: How to use Service Bus queues with PHP In this tutorial, you learn how to create PHP applications to send messages to and receive messages from a Service Bus queue.
service-bus-messaging Service Bus Php How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-php-how-to-use-topics-subscriptions.md
Last updated 06/23/2020
# Quickstart: How to use Service Bus topics and subscriptions with PHP This article shows you how to use Service Bus topics and subscriptions. The samples are written in PHP and use the [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php). The scenarios covered include:
service-bus-messaging Service Bus Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-quickstart-portal.md
This quickstart shows you how to create a Service Bus namespace and a queue usin
To complete this quickstart, make sure you have an Azure subscription. If you don't have an Azure subscription, you can create a [free account][] before you begin. ## Next steps In this article, you created a Service Bus namespace and a queue in the namespace. To learn how to send/receive messages to/from the queue, see one of the following quickstarts in the **Send and receive messages** section.
service-bus-messaging Service Bus Quickstart Topics Subscriptions Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md
register multiple subscriptions to a topic. When a message is sent to a topic, i
Service Bus topics and subscriptions enable you to scale to process a large number of messages across a large number of users and applications. > [!NOTE] > You can manage Service Bus resources with [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/). The Service Bus Explorer allows users to connect to a Service Bus namespace and administer messaging entities in an easy manner. The tool provides advanced features like import/export functionality or the ability to test topics, queues, subscriptions, relay services, notification hubs, and event hubs.
service-bus-messaging Service Bus Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-samples.md
Title: Azure Service Bus messaging samples overview
description: The Service Bus messaging samples demonstrate key features in Azure Service Bus messaging. Provides links to samples on GitHub. Previously updated : 10/14/2020 Last updated : 06/18/2021
The Service Bus messaging samples demonstrate key features in [Service Bus messa
| Package | Samples location | | - | - |
-| Azure.Messaging.ServiceBus (latest) | https://docs.microsoft.com/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/ |
+| Azure.Messaging.ServiceBus (latest) | /samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/ |
| Microsoft.Azure.ServiceBus (legacy) | https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus | ## Java samples | Package | Samples location | | - | - |
-| azure-messaging-servicebus (latest) | https://docs.microsoft.com/samples/azure/azure-sdk-for-java/servicebus-samples/ |
+| azure-messaging-servicebus (latest) | /samples/azure/azure-sdk-for-java/servicebus-samples/ |
| azure-servicebus (legacy) | https://github.com/Azure/azure-service-bus/tree/master/samples/Java | ## Python samples | Package | Samples location | | -- | -- |
-| azure.servicebus | https://docs.microsoft.com/samples/azure/azure-sdk-for-python/servicebus-samples/ |
+| azure.servicebus | /samples/azure/azure-sdk-for-python/servicebus-samples/ |
## TypeScript samples | Package | Samples location | | - | - |
-| @azure/service-bus | https://docs.microsoft.com/samples/azure/azure-sdk-for-js/service-bus-typescript/ |
+| @azure/service-bus | /samples/azure/azure-sdk-for-js/service-bus-typescript/ |
## JavaScript samples | Package | Samples location | | - | - |
-| @azure/service-bus | https://docs.microsoft.com/samples/azure/azure-sdk-for-js/service-bus-javascript/ |
+| @azure/service-bus | /samples/azure/azure-sdk-for-js/service-bus-javascript/ |
## Go samples | Package | Samples location |
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-sas.md
The following recommendations for using shared access signatures can help mitiga
## Configuration for Shared Access Signature authentication
-You can configure the Shared Access Authorization Policy on Service Bus namespaces, queues, or topics. Configuring it on a Service Bus subscription is currently not supported, but you can use rules configured on a namespace or topic to secure access to subscriptions. For a working sample that illustrates this procedure, see the [Using Shared Access Signature (SAS) authentication with Service Bus Subscriptions](https://code.msdn.microsoft.com/Using-Shared-Access-e605b37c) sample.
+You can configure the Shared Access Authorization Policy on Service Bus namespaces, queues, or topics. Configuring it on a Service Bus subscription is currently not supported, but you can use rules configured on a namespace or topic to secure access to subscriptions.
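As an illustration of this configuration (not the removed sample), here is a minimal Azure CLI sketch that creates a shared access authorization rule on a topic. Because rules can't be set on subscriptions directly, a namespace- or topic-level rule like this one also governs access to the topic's subscriptions. All names are placeholders.

```azurecli
# Create a SAS authorization rule with Listen and Send rights on a topic.
az servicebus topic authorization-rule create --resource-group my-servicebus-rg --namespace-name my-sb-namespace --topic-name my-topic --name listen-send-rule --rights Listen Send

# List the rule's keys; your application uses a key to generate SAS tokens.
az servicebus topic authorization-rule keys list --resource-group my-servicebus-rg --namespace-name my-sb-namespace --topic-name my-topic --name listen-send-rule
```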
![SAS](./media/service-bus-sas/service-bus-namespace.png)
service-bus-messaging Service Bus Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-service-endpoints.md
This section shows you how to use Azure portal to add a virtual network service
> [!NOTE] > For instructions on allowing access from specific IP addresses or ranges, see [Allow access from specific IP addresses or ranges](service-bus-ip-filtering.md). ## Use Resource Manager template The following sample Resource Manager template adds a virtual network rule to an existing Service Bus namespace. For the network rule, it specifies the ID of a subnet in a virtual network.
service-bus-messaging Service Bus To Event Grid Integration Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-to-event-grid-integration-concept.md
To enable the feature, you need the following items:
![19][] ## Event Grid subscriptions for Service Bus namespaces You can create Event Grid subscriptions for Service Bus namespaces in three different ways:
service-bus-messaging Service Bus To Event Grid Integration Example https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-to-event-grid-integration-example.md
# Tutorial: Respond to Azure Service Bus events received via Azure Event Grid by using Azure Logic Apps In this tutorial, you learn how to respond to Azure Service Bus events that are received via Azure Event Grid by using Azure Logic Apps. ## Receive messages by using Logic Apps In this step, you create an Azure logic app that receives Service Bus events via Azure Event Grid.
service-bus-messaging Service Bus To Event Grid Integration Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-to-event-grid-integration-function.md
In this tutorial, you learn how to:
> * Connect the function and namespace via Event Grid > * Receive messages by using Azure Functions ## Additional prerequisites Install [Visual Studio 2019](https://www.visualstudio.com/vs) and include the **Azure development** workload. This workload includes **Azure Function Tools** that you need to create, build, and deploy Azure Functions projects in Visual Studio.
service-bus-messaging Service Bus Tutorial Topics Subscriptions Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-tutorial-topics-subscriptions-portal.md
To complete this tutorial, make sure you have installed:
Each [subscription to a topic](service-bus-messaging-overview.md#topics) can receive a copy of each message. Topics are fully protocol and semantically compatible with Service Bus queues. Service Bus topics support a wide array of selection rules with filter conditions, with optional actions that set or modify message properties. Each time a rule matches, it produces a message. To learn more about rules, filters, and actions, follow this [link](topic-filters.md).
site-recovery Vmware Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-troubleshoot-replication.md
Some of the most common issues are listed below
**How to fix**: Refer to the KB [article](https://support.microsoft.com/help/4493364/fix-error-occurs-when-you-back-up-a-virtual-machine-with-non-component) #### Cause 4: App-Consistency not enabled on Linux servers
-**How to fix** : Azure Site Recovery for Linux Operation System supports application custom scripts for app-consistency. The custom script with pre and post options will be used by the Azure Site Recovery Mobility Agent for app-consistency. [Here](/azure/site-recovery/site-recovery-faq.yml#replication) are the steps to enable it.
+**How to fix**: Azure Site Recovery for the Linux operating system supports custom scripts for app-consistency. The custom script with pre and post options is used by the Azure Site Recovery Mobility agent for app-consistency. [Here](/azure/site-recovery/site-recovery-faq#replication) are the steps to enable it.
### More causes due to VSS related issues:
This error occurs when trying to enable replication and the application folders
## Next steps
-If you need more help, post your question in the [Microsoft Q&A question page for Azure Site Recovery](/answers/topics/azure-site-recovery.html). We have an active community, and one of our engineers can assist you.
+If you need more help, post your question in the [Microsoft Q&A question page for Azure Site Recovery](/answers/topics/azure-site-recovery.html). We have an active community, and one of our engineers can assist you.
spatial-anchors Spatial Anchor Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/spatial-anchor-faq.md
- Title: Frequently asked questions
-description: FAQs about the Azure Spatial Anchors service.
---- Previously updated : 11/20/2020--
-#Customer intent: Address frequently asked questions regarding Azure Spatial Anchors.
--
-# Frequently asked questions about Azure Spatial Anchors
-
-Azure Spatial Anchors is a managed cloud service and developer platform that enables multi-user, spatially aware mixed reality experiences across HoloLens, iOS, and Android devices.
-
-For more information, see [Azure Spatial Anchors overview](overview.md).
-
-## Azure Spatial Anchors Product FAQs
-
-**Q: Which devices does Azure Spatial Anchors support?**
-
-**A:** Azure Spatial Anchors enables developers to build apps on HoloLens, on iOS devices with ARKit support, and on Android devices with ARCore support; for iOS and Android this includes both phones and tablets.
-
-**Q: Do I have to be connected to the cloud to use Azure Spatial Anchors?**
-
-**A:** Azure Spatial Anchors currently requires a network connection to the internet. We welcome your comments on our [feedback site](https://feedback.azure.com/forums/919252-azure-spatial-anchors).
-
-**Q: What are the connectivity requirements for Azure Spatial Anchors?**
-
-**A:** Azure Spatial Anchors works with Wi-Fi and mobile broadband connections.
-
-**Q: How accurately can Azure Spatial Anchors locate anchors?**
-
-**A:** Many factors affect the accuracy of locating anchors--lighting conditions, the objects in the environment, and even the surface on which the anchor is placed. To determine if the accuracy will meet your needs, try the anchors in environments representative of where you plan to use them. If you encounter environments where accuracy isn't meeting your needs, see [Logging and diagnostics in Azure Spatial Anchors](./concepts/logging-diagnostics.md).
-
-**Q: How long does it take to create and locate anchors?**
-
-**A:** The time required to create and locate anchors is dependent on many factors--network connection, the device's processing and load, and the specific environment. We have customers building applications in many industries including manufacturing, retail, and gaming indicating that the service enables a great user experience for their scenarios.
-
-## Privacy FAQ
-
-**Q: How does Azure Spatial Anchors store data?**
-
-**A:** All data is stored encrypted with a Microsoft managed data encryption key and all data is stored regionally for each of the resources.
-
-**Q: Where does Azure Spatial Anchors store data?**
-
-**A:** Azure Spatial Anchors accounts allow you to specify the region where your data will be stored. Microsoft may replicate data to other regions for resiliency, but Microsoft does not replicate or move data outside the geography. This data is stored in the region where the Azure Spatial Anchors account is configured. For example, if the account is registered in the East US region, this data is stored in the East US region but may be replicated to another region in the North America geography to ensure resiliency.
-
-**Q: What information about an environment is transmitted and stored on the service when using Azure Spatial Anchors? Are pictures of the environment transmitted and stored?**
-
-**A**: When creating or locating anchors, pictures of the environment are processed on the device into a derived format. This derived format is transmitted to and stored on the service.
-
-To provide transparency, below is an image of an environment and the derived sparse point cloud. The point cloud shows the geometric representation of the environment that's transmitted and stored on the service. For each point in the sparse point cloud, we transmit and store a hash of the visual characteristics of that point. The hash is derived from, but does not contain, any pixel data.
-
-Azure Spatial Anchors adheres to the [Azure Service Agreement Terms](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9), and the [Microsoft Privacy Statement](https://go.microsoft.com/fwlink/?LinkId=521839&clcid=0x409).
-
-![An environment and its derived sparse point cloud](./media/sparse-point-cloud.png)
-*Figure 1: An environment and its derived sparse point cloud*
-
-**Q: Is there a way I can send diagnostics information to Microsoft?**
-
-**A**: Yes. Azure Spatial Anchors has a diagnostics mode that developers can choose to opt into through the Azure Spatial Anchors API. This is useful, for example, if you encounter an environment where you are unable to create and locate anchors predictably. We may ask if you can submit a diagnostics report containing information that helps us debug. For more information see [Logging and diagnostics in Azure Spatial Anchors](./concepts/logging-diagnostics.md).
-
-## Privacy FAQ (for developers)
-
-**Q: When my application places a Spatial Anchor somewhere do all apps have access to it?**
-
-**A:** Anchors are isolated by Azure account. Only apps to which you grant access to your account will be able to access anchors within the account.
-
-**Q: What terms and conditions apply when using ASA in my app or SDK?**
-
-**A:** The following [terms](https://www.nuget.org/packages/Microsoft.Azure.SpatialAnchors.WinRT/2.9.0/License) apply as well as any terms embedded in that license.
-
-### I want to release an app or SDK that includes ASA
-**Q: Do I need to agree to any additional terms?**
-
-**A:** No. By using ASA you agree to the above linked ToU and the embedded terms. No additional terms are required.
-
-**Q: Does Microsoft require the disclosure of ASA in my application or SDK to my users?**
-
-**A:** Microsoft does not require disclosure unless it is required by your local law to meet privacy or other requirements. You should work with your own legal advisers to determine whether disclosing the use of ASA or Microsoft's privacy practices is required by your local law.
-
-**Q: Do my users need to agree to any specific terms?**
-
-**A:** No. The contractual relationship is between Microsoft and the developer of the app or the SDK. You should work with your own legal advisers to determine whether consent to the use of ASA or Microsoft's privacy practices is required by your local law.
-
-**Q: I want to notify my users that my app/sdk is using ASA, what is the recommended interface to notify my users?**
-
-**A:** "This app is using Microsoft's Azure Spatial Anchors. For more information on Azure Spatial Anchors go to [Azure Spatial Anchors | Microsoft Azure](https://azure.microsoft.com/en-us/services/spatial-anchors/)"
-
-## Availability and Pricing FAQs
-
-**Q: Do you provide an SLA?**
-
-**A:** As is standard for Azure services, we target an availability greater than 99.9%.
-
-**Q: Can I publish my apps using Azure Spatial Anchors to app stores? Can I use Azure Spatial Anchors for mission-critical production scenarios?**
-
-**A:** Yes, Azure Spatial Anchors is generally available and has a standard Azure services SLA. We invite you to develop apps for your production deployments, and [share your feedback](https://feedback.azure.com/forums/919252-azure-spatial-anchors) about the product with us.
-
-**Q: Do you have any throttling limits in place?**
-
-**A**: Yes, we have throttling limits. We don't expect you'll hit them for typical application development and testing. For production deployments, we are ready to support our customers' high-scale requirements. [Contact us](mailto:azuremrscontact@microsoft.com) to discuss.
-
-**Q: In what regions is Azure Spatial Anchors available?**
-
-**A:** Azure Spatial Anchors is currently available in West US 2, East US, East US 2, South Central US, West Europe, North Europe, UK South, Australia East, Southeast Asia, and Korea Central. Additional regions will be available in the future.
-
-What this means is that both compute and storage powering this service are in these regions. That said, there are no restrictions on where your clients are located.
-
-**Q: Do you charge for Azure Spatial Anchors?**
-
-**A:** You can find details about pricing on our [pricing page](https://azure.microsoft.com/pricing/details/spatial-anchors/).
-
-## Technical FAQs
-
-**Q: How does Azure Spatial Anchors work?**
-
-**A:** Azure Spatial Anchors depends on mixed reality / augmented reality trackers. These trackers perceive the environment with cameras and track the device in 6-degrees-of-freedom (6DoF) as it moves through the space.
-
-Given a 6DoF tracker as a building block, Azure Spatial Anchors allows you to designate certain points of interest in your real environment as "anchor" points. You might, for example, use an anchor to render content at a specific place in the real-world.
-
-When you create an anchor, the client SDK captures environment information around that point and transmits it to the service. If another device looks for the anchor in that same space, similar data transmits to the service. That data is matched against the environment data previously stored. The position of the anchor relative to the device is then sent back for use in the application.
-
-**Q: How does Azure Spatial Anchors integrate with ARKit and ARCore on iOS and Android?**
-
-**A:** Azure Spatial Anchors leverages the native tracking capabilities of ARKit and ARCore. In addition, our SDKs for iOS and Android offer capabilities such as persisting anchors in a managed cloud service, and allowing your apps to locate those anchors again by simply connecting to the service.
-
-**Q: How does Azure Spatial Anchors integrate with HoloLens?**
-
-**A:** Azure Spatial Anchors leverages the native tracking capabilities of HoloLens. We provide an Azure Spatial Anchors SDK for building apps on HoloLens. The SDK integrates with the native HoloLens capabilities and provides additional capabilities. These capabilities include allowing app developers to persist anchors in a managed cloud service and allowing your apps to locate those anchors again by connecting to the service.
-
-**Q: Which platforms and languages does Azure Spatial Anchors support?**
-
-**A:** Developers can build apps with Azure Spatial Anchors using familiar tools and frameworks for their device:
-- Unity across HoloLens, iOS, and Android
-- Xamarin on iOS and Android
-- Swift or Objective-C on iOS
-- Java or the Android NDK on Android
-- C++/WinRT on HoloLens
-
-Get started with [development here](index.yml).
-
-**Q: Does it work with Unreal?**
-
-**A:** Support for Unreal will be considered in the future.
-
-**Q: What ports and protocols does Azure Spatial Anchors use?**
-
-**A:** Azure Spatial Anchors communicates over TCP port 443 using an encrypted protocol. For authentication, it uses [Azure Active Directory](../active-directory/index.yml), which communicates using HTTPS over port 443.
spatial-anchors Spatial Anchor Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/spatial-anchor-support.md
## Open a tech support ticket
-To open a technical support ticket within the Azure Portal for Azure Spatial Anchors:
+To open a technical support ticket within the Azure portal for Azure Spatial Anchors:
1. With the [Azure portal](https://azure.microsoft.com/account/) open, select the help icon from the top menu bar, then select the **Help + support** button.
-![Azure portal help and support](./media/spatial-anchor-support.png)
+ ![Azure portal help and support](./media/spatial-anchor-support.png)
1. With the Help + support page open, select **+ New support request**.
-![Azure portal new support request](./media/spatial-anchor-support2.png)
+ ![Azure portal new support request](./media/spatial-anchor-support2.png)
1. When completing the support ticket fields: -- Issue type: Technical-- Service: Spatial Anchors
+ - Issue type: Technical
+ - Service: Spatial Anchors
-![Azure portal support ticket fields](./media/spatial-anchor-support3.png)
+ ![Azure portal support ticket fields](./media/spatial-anchor-support3.png)
## Community support
To provide feedback, share an idea or suggestion for the Azure Spatial Anchors s
## Next steps
-For frequently asked questions about Azure Spatial Anchors, see the [FAQ](spatial-anchor-faq.md) page.
+For frequently asked questions about Azure Spatial Anchors, see the [FAQ](spatial-anchor-faq.yml) page.
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/configuration.md
Configuration for Azure Static Web Apps is defined in the _staticwebapp.config.j
> [!NOTE] > [_routes.json_](https://github.com/Azure/static-web-apps/wiki/routes.json-reference-(deprecated)) that was previously used to configure routing is deprecated. Use _staticwebapp.config.json_ as described in this article to configure routing and other settings for your static web app. >
-> This document is regarding Azure Static Web Apps, which is a stand alone product and seperate from $web container, which is a feature of Azure Storage.
+> This document is regarding Azure Static Web Apps, which is a standalone product and separate from the [static website hosting](../storage/blobs/storage-blob-static-website.md) feature of Azure Storage.
## File location
Each property has a specific purpose in the request/response pipeline.
## Securing routes with roles
-Routes are secured by adding one or more role names into a rule's `allowedRoles` array, and users are associated to custom roles via [invitations](./authentication-authorization.md). See the [example configuration file](#example-configuration-file) for usage examples.
+Routes are secured by adding one or more role names into a rule's `allowedRoles` array. See the [example configuration file](#example-configuration-file) for usage examples.
-By default, every user belongs to the built-in `anonymous` role, and all logged-in users are members of the `authenticated` role.
+By default, every user belongs to the built-in `anonymous` role, and all logged-in users are members of the `authenticated` role. Optionally, users are associated to custom roles via [invitations](./authentication-authorization.md).
For instance, to restrict a route to only authenticated users, add the built-in `authenticated` role to the `allowedRoles` array.
static-web-apps Github Actions Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/github-actions-workflow.md
The `repo_token`, `action`, and `azure_static_web_apps_api_token` values are set
## Custom build commands
-You can have fine-grained control over what commands run during a deployment. The following commands can be defined under a job's `with` section.
+You can have fine-grained control over what commands run during the app or API build process. The following commands can be defined under a job's `with` section.
-The deployment always calls `npm install` before any custom command.
+> [!NOTE]
+> Currently, you can only define custom build commands for Node.js builds. The build process always calls `npm install` before any custom command.
| Command | Description | | - | |
-| `app_build_command` | Defines a custom command to run during deployment of the static content application.<br><br>For example, to configure a production build for an Angular application create an npm script named `build-prod` to run `ng build --prod` and enter `npm run build-prod` as the custom command. If left blank, the workflow tries to run the `npm run build` or `npm run build:azure` commands. |
-| `api_build_command` | Defines a custom command to run during deployment of the Azure Functions API application. |
+| `app_build_command` | Defines a custom command to build the static content application.<br><br>For example, to configure a production build for an Angular application create an npm script named `build-prod` to run `ng build --prod` and enter `npm run build-prod` as the custom command. If left blank, the workflow tries to run the `npm run build` or `npm run build:azure` commands. |
+| `api_build_command` | Defines a custom command to build the Azure Functions API application. |
## Skip app build
static-web-apps Review Publish Pull Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/review-publish-pull-requests.md
There are many benefits of using pre-production environments. For example, you c
- Perform sanity checks before deploying to production. > [!NOTE]
-> There is [maximum of three staging environments](quotas.md) allowed at a time.
+> Pull requests and pre-production environments are currently only supported in GitHub Actions deployments.
## Prerequisites
To verify the changes in production, open your production URL to load the live
## Limitations
-Staged versions of your application are currently accessible publicly by their URL, even if your GitHub repository is private.
+- Staged versions of your application are currently accessible publicly by their URL, even if your GitHub repository is private.
-> [!WARNING]
-> Be careful when publishing sensitive content to staged versions, as access to pre-production environments are not restricted.
+ > [!WARNING]
+ > Be careful when publishing sensitive content to staged versions, as access to pre-production environments are not restricted.
-The number of pre-production environments available for each app deployed with Static Web Apps depends of the SKU tier you are using. For example, with the Free tier you can have 3 pre-production environments in addition to the production environment.
+- The number of pre-production environments available for each app deployed with Static Web Apps depends on the [hosting plan](plans.md) you are using. For example, with the Free tier you can have three pre-production environments in addition to the production environment.
+
+- Pre-production environments are not geo-distributed.
+
+- Currently, only GitHub Actions deployments support pre-production environments.
## Next steps
storage Blob Inventory How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-inventory-how-to.md
+
+ Title: Enable Azure Storage blob inventory reports (preview)
+description: Obtain an overview of your containers, blobs, snapshots, and blob versions within a storage account.
++++ Last updated : 06/18/2021++++++
+# Enable Azure Storage blob inventory reports (preview)
+
+The Azure Storage blob inventory feature provides an overview of your containers, blobs, snapshots, and blob versions within a storage account. Use the inventory report to understand various attributes of blobs and containers such as your total data size, age, encryption status, immutability policy, and legal hold and so on. The report provides an overview of your data for business and compliance requirements.
+
+To learn more about blob inventory reports, see [Azure Storage blob inventory (preview)](blob-inventory.md).
+
+Enable blob inventory reports by adding a policy with one or more rules to your storage account. Add, edit, or remove a policy by using the [Azure portal](https://portal.azure.com/).
+
+## Enable inventory reports
+
+### [Portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) to get started.
+
+2. Locate your storage account and display the account overview.
+
+3. Under **Data management**, select **Blob inventory (preview)**.
+
+4. Select **Add your first inventory rule**.
+
+ The **Add a rule** page appears.
+
+5. In the **Add a rule** page, name your new rule.
+
+6. Choose a container.
+
+7. Under **Object type to inventory**, choose whether to create a report for blobs or containers.
+
+ If you select **Blob**, then under **Blob subtype**, choose the types of blobs that you want to include in your report, and whether to include blob versions and/or snapshots in your inventory report.
+
+ > [!NOTE]
+ > Versions and snapshots must be enabled on the account to save a new rule with the corresponding option enabled.
+
+8. Select the fields that you would like to include in your report, and the format of your reports.
+
+9. Choose how often you want to generate reports.
+
+10. Optionally, add a prefix match to filter blobs in your inventory report.
+
+11. Select **Save**.
+
+ :::image type="content" source="./media/blob-inventory-how-to/portal-blob-inventory.png" alt-text="Screenshot showing how to add a blob inventory rule by using the Azure portal":::
+
+### [PowerShell](#tab/azure-powershell)
+
+<a id="powershell"></a>
+
+You can enable blob inventory reports by using the Azure PowerShell module.
+
+1. Open a Windows PowerShell command window.
+
+2. Make sure that you have the latest Azure PowerShell module. See [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+
+3. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+4. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the storage account for which you want to enable inventory reports.
+
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+5. Get the storage account context that defines the storage account you want to use.
+
+ ```powershell
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
+ $ctx = $storageAccount.Context
+ ```
+
+ * Replace the `<resource-group-name>` placeholder value with the name of your resource group.
+
+ * Replace the `<storage-account-name>` placeholder value with the name of your storage account.
+
+6. Create inventory rules by using the [New-AzStorageBlobInventoryPolicyRule](/powershell/module/az.storage/new-azstorageblobinventorypolicyrule) command. Each rule lists report fields. For a complete list of report fields, see [Azure Storage blob inventory (preview)](blob-inventory.md).
+
+ ```Powershell
+ $containerName = "my-container"
+
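+    # Rule 1: created disabled; a daily CSV container inventory for containers matching the con1 and con2 prefixes.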
+ $rule1 = New-AzStorageBlobInventoryPolicyRule -Name Test1 -Destination $containerName -Disabled -Format Csv -Schedule Daily -PrefixMatch con1,con2 `
+ -ContainerSchemaField Name,Metadata,PublicAccess,Last-modified,LeaseStatus,LeaseState,LeaseDuration,HasImmutabilityPolicy,HasLegalHold
+
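+    # Rule 2: a weekly Parquet blob inventory of block and append blobs matching the aaa and bbb prefixes.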
+ $rule2 = New-AzStorageBlobInventoryPolicyRule -Name test2 -Destination $containerName -Format Parquet -Schedule Weekly -BlobType blockBlob,appendBlob -PrefixMatch aaa,bbb `
+ -BlobSchemaField name,Last-Modified,Metadata,LastAccessTime
+
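+    # Rule 3: like rule 2, but also includes blob versions and snapshots, with a larger set of schema fields.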
+ $rule3 = New-AzStorageBlobInventoryPolicyRule -Name Test3 -Destination $containerName -Format Parquet -Schedule Weekly -IncludeBlobVersion -IncludeSnapshot -BlobType blockBlob,appendBlob -PrefixMatch aaa,bbb `
+ -BlobSchemaField name,Creation-Time,Last-Modified,Content-Length,Content-MD5,BlobType,AccessTier,AccessTierChangeTime,Expiry-Time,hdi_isfolder,Owner,Group,Permissions,Acl,Metadata,LastAccessTime
+
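+    # Rule 4: a weekly CSV inventory of block blobs with a minimal set of schema fields.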
+ $rule4 = New-AzStorageBlobInventoryPolicyRule -Name test4 -Destination $containerName -Format Csv -Schedule Weekly -BlobType blockBlob -BlobSchemaField Name,BlobType,Content-Length,Creation-Time
+
+ ```
+
+7. Use the [Set-AzStorageBlobInventoryPolicy](/powershell/module/az.storage/set-azstorageblobinventorypolicy) to create a blob inventory policy. Pass rules into this command by using the `-Rule` parameter.
+
+ ```powershell
+ $policy = Set-AzStorageBlobInventoryPolicy -StorageAccount $storageAccount -Rule $rule1,$rule2,$rule3,$rule4
+ ```
+
+### [Azure CLI](#tab/azure-cli)
+
+<a id="cli"></a>
+
+You can enable blob inventory reports by using the [Azure Command-Line Interface (CLI)](/cli/azure/).
+
+1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
+
+2. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the storage account for which you want to enable inventory reports.
+
+ ```azurecli
+ az account set --subscription <subscription-id>
+ ```
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+3. Define the rules of your policy in a JSON document. The following shows the contents of an example JSON file named `policy.json`.
+
+ ```json
+ {
+ "enabled": true,
+ "type": "Inventory",
+ "rules": [
+ {
+ "enabled": true,
+ "name": "inventoryPolicyRule2",
+ "destination": "mycontainer",
+ "definition": {
+ "filters": {
+ "blobTypes": [
+ "blockBlob"
+ ],
+ "prefixMatch": [
+ "inventoryprefix1",
+ "inventoryprefix2"
+ ],
+ "includeSnapshots": true,
+ "includeBlobVersions": true
+ },
+ "format": "Csv",
+ "schedule": "Daily",
+ "objectType": "Blob",
+ "schemaFields": [
+ "Name",
+ "Creation-Time",
+ "Last-Modified",
+ "Content-Length",
+ "Content-MD5",
+ "BlobType",
+ "AccessTier",
+ "AccessTierChangeTime",
+ "Snapshot",
+ "VersionId",
+ "IsCurrentVersion",
+ "Metadata"
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+4. Create a blob inventory policy by using the [az storage account blob-inventory-policy create](/cli/azure/storage/account/blob-inventory-policy#az_storage_account_blob_inventory_policy_create) command. Provide the name of your JSON document by using the `--policy` parameter.
+
+ ```azurecli
+ az storage account blob-inventory-policy create -g myresourcegroup --account-name mystorageaccount --policy @policy.json
+ ```
+++
+## Next steps
+
+- [Calculate the count and total size of blobs per container](calculate-blob-count-size.md)
+- [Manage the Azure Blob Storage lifecycle](storage-lifecycle-management-concepts.md)
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-inventory.md
Title: Use Azure Storage inventory to manage blob data (preview)
+ Title: Azure Storage blob inventory (preview)
description: Azure Storage inventory is a tool to help get an overview of all your blob data within a storage account. Previously updated : 04/01/2021 Last updated : 06/18/2021
-# Use Azure Storage blob inventory to manage blob data (preview)
+# Azure Storage blob inventory (preview)
-The Azure Storage blob inventory feature provides an overview of your blob data within a storage account. Use the inventory report to understand your total data size, age, access tiers, and so on. The report provides an overview of your data for business and compliance requirements. Once enabled, an inventory report is automatically created daily.
+The Azure Storage blob inventory feature provides an overview of your containers, blobs, snapshots, and blob versions within a storage account. Use the inventory report to understand various attributes of blobs and containers such as your total data size, age, encryption status, immutability policy, and legal hold and so on. The report provides an overview of your data for business and compliance requirements.
## Availability
-Blob inventory is supported for both general purpose version 2 (GPv2) and premium block blob storage accounts. This feature is supported with or without the [hierarchical namespace](data-lake-storage-namespace.md) feature enabled.
+Blob inventory is supported for both general purpose version 2 (GPv2) and premium block blob storage accounts. This feature is supported with or without the [hierarchical namespace](data-lake-storage-namespace.md) feature enabled on the account.
> [!IMPORTANT]
-> Blob inventory is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Storage Blob inventory is currently in PREVIEW and is available on storage accounts in all public regions.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-### Preview regions
+## Pricing and billing
-The blob inventory preview is available on storage accounts in all public regions.
+The fee for inventory reports isn't charged during the preview period. Pricing will be determined when this feature is generally available.
-### Pricing and billing
+## Inventory features
-The fee for inventory reports isn't charged during the preview period. Pricing will be determined when this feature is generally available.
+The following list describes features and capabilities that are available in the current release of Azure Storage blob inventory.
-## Enable inventory reports
+- **Inventory reports for blobs and containers**
-Enable blob inventory reports by adding a policy to your storage account. Add, edit, or remove a policy by using the [Azure portal](https://portal.azure.com/).
+ You can generate inventory reports for blobs and containers. A report for blobs can contain base blobs, snapshots, blob versions and their associated properties such as creation time, last modified time. A report for containers describes containers and their associated properties such as immutability policy status, legal hold status.
-1. Navigate to the [Azure portal](https://portal.azure.com/)
-1. Select one of your storage accounts
-1. Under **Blob service**, select **Blob inventory**
-1. Make sure **Blob inventory enabled** is selected
-1. Select **Add a rule**
-1. Name your new rule
-1. Select the **Blob types** for your inventory report
-1. Add a prefix match to filter your inventory report
-1. Select whether to **Include blob versions** and **Include snapshots** in your inventory report. Versions and snapshots must be enabled on the account to save a new rule with the corresponding option enabled.
-1. Select **Save**
+- **Custom Schema**
+ You can choose which fields appear in reports. Choose from a list of supported fields. That list appears later in this article.
-Inventory policies are read or written in full. Partial updates aren't supported.
+- **CSV and Apache Parquet output format**
-> [!IMPORTANT]
-> If you enable firewall rules for your storage account, inventory requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the Exceptions section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).
+ You can generate an inventory report in either CSV or Apache Parquet output format.
-A blob inventory run is automatically scheduled every day. It can take up to 24 hours for an inventory run to complete. An inventory report is configured by adding an inventory policy with one or more rules.
+- **Manifest file and Azure Event Grid event per inventory report**
+
+ A manifest file and an Azure Event Grid event are generated per inventory report. These are described later in this article.
+
+## Enabling inventory reports
+
+Enable blob inventory reports by adding a policy with one or more rules to your storage account. For guidance, see [Enable Azure Storage blob inventory reports (preview)](blob-inventory-how-to.md).
+
+## Upgrading an inventory policy
+
+If you are an existing Azure Storage blob inventory user who has configured inventory prior to June 2021, you can start using the new features by loading the policy, and then saving the policy back after making changes. When you reload the policy, the new fields in the policy will be populated with default values. You can change these values if you want. Also, the following two features will be available.
+
+- A destination container is now supported for every rule instead of just being supported for the policy.
+
+- A manifest file and Azure Event Grid event are now generated per rule instead of per policy.
## Inventory policy
An inventory policy is a collection of rules in a JSON document.
```json {
- "destination": "destinationContainer",
"enabled": true, "rules": [ { "enabled": true, "name": "inventoryrule1",
+ "destination": "inventory-destination-container",
"definition": {. . .} }, { "enabled": true, "name": "inventoryrule2",
+ "destination": "inventory-destination-container",
"definition": {. . .} }] }
View the JSON for an inventory policy by selecting the **Code view** tab in the
| Parameter name | Parameter type | Notes | Required? | |-|--|-|--|
-| destination | String | The destination container where all inventory files will be generated. The destination container must already exist. | Yes |
-| enabled | Boolean | Used to disable the entire policy. When set to **true**, the rule level enabled field overrides this parameter. When disabled, inventory for all rules will be disabled. | Yes |
+| enabled | boolean | Used to disable the entire policy. When set to **true**, the rule level enabled field overrides this parameter. When disabled, inventory for all rules will be disabled. | Yes |
| rules | Array of rule objects | At least one rule is required in a policy. Up to 10 rules are supported. | Yes | ## Inventory rules
Each rule within the policy has several parameters:
| Parameter name | Parameter type | Notes | Required? | |-|--|-|--|
-| name | String | A rule name can include up to 256 case-sensitive alphanumeric characters. The name must be unique within a policy. | Yes |
-| enabled | Boolean | A flag allowing a rule to be enabled or disabled. The default value is **true**. | Yes |
+| name | string | A rule name can include up to 256 case-sensitive alphanumeric characters. The name must be unique within a policy. | Yes |
+| enabled | boolean | A flag allowing a rule to be enabled or disabled. The default value is **true**. | Yes |
| definition | JSON inventory rule definition | Each definition is made up of a rule filter set. | Yes |
+| destination | string | The destination container where all inventory files will be generated. The destination container must already exist. | Yes |
The global **Blob inventory enabled** flag takes precedence over the *enabled* parameter in a rule.
+### Rule definition
+
+| Parameter name | Parameter type | Notes | Required |
+|--|--|--|--|
+| filters | json | Filters decide whether a blob or container is part of inventory or not. | Yes |
+| format | string | Determines the output of the inventory file. Valid values are `csv` (For CSV format) and `parquet` (For Apache Parquet format).| Yes |
+| objectType | string | Denotes whether this is an inventory rule for blobs or containers. Valid values are `blob` and `container`. |Yes |
+| schemaFields | Json array | List of Schema fields to be part of inventory. | Yes |
+ ### Rule filters Several filters are available for customizing a blob inventory report: | Filter name | Filter type | Notes | Required? | |||-|--|
-| blobTypes | Array of predefined enum values | Valid values are `blockBlob` and `appendBlob` for hierarchical namespace enabled accounts, and `blockBlob`, `appendBlob`, and `pageBlob` for other accounts. | Yes |
-| prefixMatch | Array of up to 10 strings for prefixes to be matched. A prefix must start with a container name, for example, "container1/foo" | If you don't define *prefixMatch* or provide an empty prefix, the rule applies to all blobs within the storage account. | No |
-| includeSnapshots | Boolean | Specifies whether the inventory should include snapshots. Default is **false**. | No |
-| includeBlobVersions | Boolean | Specifies whether the inventory should include blob versions. Default is **false**. | No |
+| blobTypes | Array of predefined enum values | Valid values are `blockBlob` and `appendBlob` for hierarchical namespace enabled accounts, and `blockBlob`, `appendBlob`, and `pageBlob` for other accounts. This field is not applicable for inventory on a container, (objectType: `container`). | Yes |
+| prefixMatch | Array of up to 10 strings for prefixes to be matched. | If you don't define *prefixMatch* or provide an empty prefix, the rule applies to all blobs within the storage account. A prefix must be a container name prefix or a container name. For example, `container`, `container1/foo`.| No |
+| includeSnapshots | boolean | Specifies whether the inventory should include snapshots. Default is `false`. This field is not applicable for inventory on a container, (objectType: `container`).| No |
+| includeBlobVersions | boolean | Specifies whether the inventory should include blob versions. Default is `false`. This field is not applicable for inventory on a container, (objectType: `container`).| No |
View the JSON for inventory rules by selecting the **Code view** tab in the **Blob inventory** section of the Azure portal. Filters are specified within a rule definition. ```json {
- "destination": "destinationContainer",
- "enabled": true,
- "rules": [
- {
- "enabled": true,
- "name": "inventoryrule1",
- "definition":
- {
- "filters":
- {
- "blobTypes": ["blockBlob", "appendBlob", "pageBlob"],
- "prefixMatch": ["inventorycontainer1", "inventorycontainer2/abcd", "etc"]
- }
- }
- },
- {
- "enabled": true,
- "name": "inventoryrule2",
- "definition":
- {
- "filters":
- {
- "blobTypes": ["pageBlob"],
- "prefixMatch": ["inventorycontainer-disks-", "inventorycontainer4/"],
- "includeSnapshots": true,
- "includeBlobVersions": true
- }
- }
- }]
+ "destination": "inventory-destination-container",
+ "enabled": true,
+ "rules": [
+ {
+ "definition": {
+ "filters": {
+ "blobTypes": ["blockBlob", "appendBlob", "pageBlob"],
+ "prefixMatch": ["inventorytestcontainer1", "inventorytestcontainer2/abcd", "etc"],
+ "includeSnapshots": false,
+ "includeBlobVersions": true,
+ },
+ "format": "csv",
+ "objectType": "blob",
+ "schedule": "daily",
+ "schemaFields": ["Name", "Creation-Time"]
+ }
+ "enabled": true,
+ "name": "blobinventorytest",
+ "destination": "inventorydestinationContainer"
+ },
+ {
+ "definition": {
+ "filters": {
+ "prefixMatch": ["inventorytestcontainer1", "inventorytestcontainer2/abcd", "etc"]
+ },
+ "format": "csv",
+ "objectType": "container",
+ "schedule": "weekly",
+ "schemaFields": ["Name", "HasImmutabilityPolicy", "HasLegalHold"]
+ }
+ "enabled": true,
+ "name": "containerinventorytest",
+ "destination": "inventorydestinationContainer"
+ }
+ ]
}
```
-## Inventory output
+### Custom schema fields supported for blob inventory
+
+- Name (Required)
+- Creation-Time
+- Last-Modified
+- Content-Length
+- Content-MD5
+- BlobType
+- AccessTier
+- AccessTierChangeTime
+- AccessTierInferred
+- Expiry-Time
+- hdi_isfolder
+- Owner
+- Group
+- Permissions
+- Acl
+- Snapshot (Available and required when you choose to include snapshots in your report)
+- VersionId (Available and required when you choose to include blob versions in your report)
+- IsCurrentVersion (Available and required when you choose to include blob versions in your report)
+- Metadata
+- Tags
+- LastAccessTime
+++
+### Custom schema fields supported for container inventory
+
+- Name (Required)
+- Last-Modified
+- LeaseStatus
+- LeaseState
+- LeaseDuration
+- PublicAccess
+- HasImmutabilityPolicy
+- HasLegalHold
+- Metadata
+
+## Inventory run
+
+A blob inventory run is automatically scheduled every day. It can take up to 24 hours for an inventory run to complete. An inventory report is configured by adding an inventory policy with one or more rules.
-Each inventory run generates a set of CSV formatted files in the specified inventory destination container. The inventory output is generated under the following path:
-`https://<accountName>.blob.core.windows.net/<inventory-destination-container>/YYYY/MM/DD/HH-MM-SS/` where:
+Inventory policies are read or written in full. Partial updates aren't supported.
-- *accountName* is your Azure Blob Storage account name-- *inventory-destination-container* is the destination container you specified in the inventory policy-- *YYYY/MM/DD/HH-MM-SS* is the time when the inventory began to run
+> [!IMPORTANT]
+> If you enable firewall rules for your storage account, inventory requests might be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the Exceptions section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).
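Because inventory policies are read or written in full, updating a single rule means retrieving the whole policy, editing the JSON, and writing the complete document back. A minimal Azure CLI sketch, assuming the `az storage account blob-inventory-policy` command group shown earlier; note that the `show` output wraps the rules in a resource envelope, so you may need to extract the inner `policy` object before reusing it:

```azurecli
# Retrieve the full inventory policy currently applied to the account and save it locally.
az storage account blob-inventory-policy show --resource-group myresourcegroup --account-name mystorageaccount > current-policy.json

# After editing current-policy.json, write the complete policy back; this replaces the existing policy.
az storage account blob-inventory-policy create --resource-group myresourcegroup --account-name mystorageaccount --policy @current-policy.json
```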
-### Inventory files
+## Inventory completed event
-Each inventory run generates the following files:
+The `BlobInventoryPolicyCompleted` event is generated when the inventory run completes for a rule. The event is also generated if the inventory run fails with a user error before it starts to run, for example, because of an invalid policy or a missing destination container. The following JSON shows an example `BlobInventoryPolicyCompleted` event.
-- **Inventory CSV file**: A comma separated values (CSV) file for each inventory rule. Each file contains matched objects and their metadata. The first row in every CSV formatted file is always the schema row. The following image shows an inventory CSV file opened in Microsoft Excel.
+```json
+{
+ "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/BlobInventory/providers/Microsoft.EventGrid/topics/BlobInventoryTopic",
+ "subject": "BlobDataManagement/BlobInventory",
+ "eventType": "Microsoft.Storage.BlobInventoryPolicyCompleted",
+ "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "data": {
+ "scheduleDateTime": "2021-05-28T03:50:27Z",
+ "accountName": "testaccount",
+ "ruleName": "Rule_1",
+ "policyRunStatus": "Succeeded",
+ "policyRunStatusMessage": "Inventory run succeeded, refer manifest file for inventory details.",
+ "policyRunId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+"manifestBlobUrl": "https://testaccount.blob.core.windows.net/inventory-destination-container/2021/05/26/13-25-36/Rule_1/Rule_1.csv"
+ },
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-05-28T15:03:18Z"
+}
+```
- :::image type="content" source="./media/blob-inventory/csv-file-excel.png" alt-text="Screenshot of an inventory CSV file opened in Microsoft Excel":::
+The following table describes the schema of the `BlobInventoryPolicyCompleted` event.
-- **Manifest file**: A manifest.json file containing the details of the inventory files generated for every rule in that run. The manifest file also captures the rule definition provided by the user and the path to the inventory for that rule.
+|Field|Type|Description|
+|--|--|--|
+|scheduleDateTime|string|The time that the inventory policy was scheduled.|
+|accountName|string|The storage account name.|
+|ruleName|string|The rule name.|
+|policyRunStatus|string|The status of inventory run. Possible values are `Succeeded`, `PartiallySucceeded`, and `Failed`.|
+|policyRunStatusMessage|string|The status message for the inventory run.|
+|policyRunId|string|The policy run ID for the inventory run.|
+|manifestBlobUrl|string|The blob URL for manifest file for inventory run.|
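To act on this event, you can create an Event Grid subscription on the storage account scoped to this event type. A minimal Azure CLI sketch; the resource ID and the webhook endpoint are placeholders:

```azurecli
# Subscribe to BlobInventoryPolicyCompleted events on a storage account and deliver them to a webhook.
az eventgrid event-subscription create \
  --name inventory-completed-subscription \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" \
  --included-event-types Microsoft.Storage.BlobInventoryPolicyCompleted \
  --endpoint "https://contoso.example.com/api/inventory-events"
```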
-- **Checksum file**: A manifest.checksum file containing the MD5 checksum of the contents of manifest.json file. Generation of the manifest.checksum file marks the completion of an inventory run.
+## Inventory output
-## Inventory completed event
+Each inventory rule generates a set of files in the specified inventory destination container for that rule. The inventory output is generated under the following path:
+`https://<accountName>.blob.core.windows.net/<inventory-destination-container>/YYYY/MM/DD/HH-MM-SS/<ruleName>/` where:
-Subscribe to the inventory completed event to get notified when the inventory run completes. This event is generated when the manifest checksum file is created. The inventory completed event also occurs if the inventory run fails into user error before it starts to run. For example, an invalid policy, or destination container not present error will trigger the event. The event is published to Blob Inventory Topic.
+- *accountName* is your Azure Blob Storage account name.
+- *inventory-destination-container* is the destination container you specified in the inventory rule.
+- *YYYY/MM/DD/HH-MM-SS* is the time when the inventory began to run.
+- *ruleName* is the inventory rule name.
-Sample event:
+### Inventory files
-```json
-{
- "topic": "/subscriptions/3000151d-7a84-4120-b71c-336feab0b0f0/resourceGroups/BlobInventory/providers/Microsoft.EventGrid/topics/BlobInventoryTopic",
- "subject": "BlobDataManagement/BlobInventory",
- "eventType": "Microsoft.Storage.BlobInventoryPolicyCompleted",
- "id": "c99f7962-ef9d-403e-9522-dbe7443667fe",
- "data": {
- "scheduleDateTime": "2020-10-13T15:37:33Z",
- "accountName": "inventoryaccountname",
- "policyRunStatus": "Succeeded",
- "policyRunStatusMessage": "Inventory run succeeded, refer manifest file for inventory details.",
- "policyRunId": "b5e1d4cc-ee23-4ed5-b039-897376a84f79",
- "manifestBlobUrl": "https://inventoryaccountname.blob.core.windows.net/inventory-destination-container/2020/10/13/15-37-33/manifest.json"
- },
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-10-13T15:47:54Z"
-}
-```
+Each inventory run for a rule generates the following files:
+
+- **Inventory file**: An inventory run for a rule generates one or more CSV or Apache Parquet formatted files. If the matched object count is large, multiple files are generated instead of a single file. Each such file contains matched objects and their metadata. For a CSV formatted file, the first row is always the schema row. The following image shows an inventory CSV file opened in Microsoft Excel.
+
+ :::image type="content" source="./media/blob-inventory/csv-file-excel.png" alt-text="Screenshot of an inventory CSV file opened in Microsoft Excel":::
+
+ > [!NOTE]
+  > Reports in the Apache Parquet format present dates as `timestamp_millis` (the number of milliseconds since 1970-01-01 00:00:00 UTC).
++
+- **Checksum file**: A checksum file contains the MD5 checksum of the contents of manifest.json file. The name of the checksum file is `<ruleName>-manifest.checksum`. Generation of the checksum file marks the completion of an inventory rule run.
+
+- **Manifest file**: A manifest.json file contains the details of the inventory file(s) generated for that rule. The name of the file is `<ruleName>-manifest.json`. This file also captures the rule definition provided by the user and the path to the inventory for that rule. The following json shows the contents of a sample manifest.json file.
+
+ ```json
+ {
+ "destinationContainer" : "inventory-destination-container",
+ "endpoint" : "https://testaccount.blob.core.windows.net",
+ "files" : [
+ {
+ "blob" : "2021/05/26/13-25-36/Rule_1/Rule_1.csv",
+ "size" : 12710092
+ }
+ ],
+ "inventoryCompletionTime" : "2021-05-26T13:35:56Z",
+ "inventoryStartTime" : "2021-05-26T13:25:36Z",
+ "ruleDefinition" : {
+ "filters" : {
+ "blobTypes" : [ "blockBlob" ],
+ "includeBlobVersions" : false,
+ "includeSnapshots" : false,
+ "prefixMatch" : [ "penner-test-container-100003" ]
+ },
+ "format" : "csv",
+ "objectType" : "blob",
+ "schedule" : "daily",
+ "schemaFields" : [
+ "Name",
+ "Creation-Time",
+ "BlobType",
+ "Content-Length",
+ "LastAccessTime",
+ "Last-Modified",
+ "Metadata",
+ "AccessTier"
+ ]
+ },
+ "ruleName" : "Rule_1",
+ "status" : "Succeeded",
+ "summary" : {
+ "objectCount" : 110000,
+ "totalObjectSize" : 23789775
+ },
+ "version" : "1.0"
+ }
+ ```
## Known issues This section describes limitations and known issues of the Azure Storage blob inventory feature.
-### Inventory job fails to complete
+### Inventory job fails to complete for hierarchical namespace enabled accounts
+
+The inventory job may not complete within 24 hours for an account with hundreds of millions of blobs and hierarchical namespace enabled. If this happens, no inventory file is created.
+
+### Inventory job cannot write inventory reports
-The inventory job may not complete within 24 hours for an account with millions of blobs and hierarchical namespaces enabled. If this happens, no inventory file is created.
+An object replication policy can prevent an inventory job from writing inventory reports to the destination container. Other scenarios can archive the reports or make them immutable while they are partially complete, which can cause the inventory job to fail.
## Next steps
+- [Enable Azure Storage blob inventory reports (preview)](blob-inventory-how-to.md)
- [Calculate the count and total size of blobs per container](calculate-blob-count-size.md)-- [Manage the Azure Blob Storage lifecycle](storage-lifecycle-management-concepts.md)
+- [Manage the Azure Blob Storage lifecycle](storage-lifecycle-management-concepts.md)
storage Calculate Blob Count Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/calculate-blob-count-size.md
Previously updated : 06/16/2021 Last updated : 06/18/2021
Blob metadata is not included in this method. The Azure Blob Storage inventory f
## Enable inventory reports
-The first step in this method is to [enable inventory reports](blob-inventory.md#enable-inventory-reports) on your storage account. You may have to wait up to 24 hours after enabling inventory reports for your first report to be generated.
+The first step in this method is to [enable inventory reports](blob-inventory.md#enabling-inventory-reports) on your storage account. You may have to wait up to 24 hours after enabling inventory reports for your first report to be generated.
When you have an inventory report to analyze, grant yourself read access to the container where the report CSV file resides by assigning yourself the **Storage Blob Data Reader** role. Be sure to use the email address of the account you're using to run the report. To learn how to assign an Azure role to a user with Azure role-based access control (Azure RBAC), follow the instructions provided in [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
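If you prefer to assign the role from the command line instead of the portal, a minimal Azure CLI sketch; the email address, names, and scope are placeholders:

```azurecli
# Grant read access to the blobs in the container that holds the inventory report.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default/containers/inventory-destination-container"
```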
storage Storage Blob Immutable Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-immutable-storage.md
Previously updated : 05/17/2021 Last updated : 06/18/2021
In the case of non-payment, normal data retention policies will apply as stipula
Yes. When a time-based retention policy is first created, it is in an *unlocked* state. In this state, you can make any desired change to the retention interval, such as increase or decrease and even delete the policy. After the policy is locked, it stays locked until the retention interval expires. This locked policy prevents deletion and modification to the retention interval. We strongly recommend that you use the *unlocked* state only for trial purposes and lock the policy within a 24-hour period. These practices help you comply with SEC 17a-4(f) and other regulations.
-**Can I use soft delete alongside Immutable blob policies?**
+**Can I use soft delete alongside immutable blob policies?**
-Yes, if your compliance requirements allow for soft delete to be enabled. [Soft delete for Azure Blob storage](./soft-delete-blob-overview.md) applies for all containers within a storage account regardless of a legal hold or time-based retention policy. We recommend enabling soft delete for additional protection before any immutable WORM policies are applied and confirmed.
+Yes, if your compliance requirements allow for soft delete to be enabled. [Soft delete for Azure Blob storage](./soft-delete-blob-overview.md) applies to all containers within a storage account regardless of whether a legal hold or time-based retention policy is in effect. Microsoft recommends enabling soft delete for additional protection before any immutability policies are applied. If soft delete is enabled on a container and then an immutability policy is added to the container, any blobs that have already been soft deleted will be permanently deleted once the soft delete retention policy has expired. Soft-deleted blobs can be restored during the soft delete retention period. Any blobs that have not yet been soft deleted are protected by the immutability policy and cannot be soft deleted until after the immutability policy has expired (for time-based retention) or has been removed (for legal holds).
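To follow that recommendation programmatically, a minimal sketch with the Blob Storage client library for Python might look like the following; the connection string and the seven-day retention period are placeholders.

```python
# Sketch: enable blob soft delete on the account before applying immutability
# policies, as recommended above. Connection string and retention period are
# placeholders.
from azure.storage.blob import BlobServiceClient, RetentionPolicy

service = BlobServiceClient.from_connection_string("<connection-string>")

service.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=True, days=7)
)

props = service.get_service_properties()
print(props["delete_retention_policy"])
```

Enabling soft delete first keeps accidental deletions recoverable during the retention window before the immutability policy takes effect.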
**For an HNS-enabled account, can I rename or move a blob when the blob is in the immutable state?** No. Both the blob name and the directory structure are considered container-level data that cannot be modified once an immutability policy is in place. In general, rename and move operations are available only for HNS-enabled accounts.
storage Storage Lifecycle Management Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-lifecycle-management-concepts.md
The platform runs the lifecycle policy once a day. Once you configure a policy,
**If I update an existing policy, how long does it take for the actions to run?**
-The updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for the actions to run. Therefore, the policy actions may take up to 48 hours to complete.
+The updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for the actions to run, so the policy actions may take up to 48 hours to complete. If you disable or delete a rule that used enableAutoTierToHotFromCool, auto-tiering to the Hot tier still happens. For example, suppose a rule with enableAutoTierToHotFromCool is based on last access time. If the rule is disabled or deleted and a blob currently in the Cool tier is accessed, the blob moves back to Hot, because that tiering is applied on access, outside of lifecycle management. The blob then stays in the Hot tier, because the lifecycle management rule that would move it back to Cool is disabled or deleted. The only way to prevent auto-tiering from Cool to Hot is to turn off last access time tracking.
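For reference, the JSON shape of a rule that combines last access time with enableAutoTierToHotFromCool looks roughly like the sketch below, shown here as a Python dictionary; the rule name and the 30-day threshold are illustrative placeholders, not values from this article.

```python
# Sketch of the JSON shape of a lifecycle rule that moves block blobs to Cool
# 30 days after last access and relies on enableAutoTierToHotFromCool to move
# them back to Hot when they are read again. Names and thresholds are
# illustrative placeholders.
lifecycle_policy = {
    "rules": [
        {
            "enabled": True,
            "name": "cool-after-30-days-idle",
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterLastAccessTimeGreaterThan": 30},
                        "enableAutoTierToHotFromCool": True,
                    }
                },
            },
        }
    ]
}
```

Disabling or deleting this rule stops the Hot-to-Cool moves, but blobs already in the Cool tier still jump back to Hot when they are read, unless last access time tracking is turned off.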
**I manually rehydrated an archived blob. How do I temporarily prevent it from being moved back to the Archive tier?**
storage Use Azurite To Run Automated Tests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/use-azurite-to-run-automated-tests.md
Learn how to write automated tests against private endpoints for Azure Blob Stor
python -m venv .venv ```
-1. Create a container and initialize environment variables. Use a [PyTest](https://docs.pytest.org/) [conftest.py](https://docs.pytest.org/en/2.1.0/plugins.html) file to generate tests. Here is an example of a conftest.py file:
+1. Create a container and initialize environment variables. Use a [PyTest](https://docs.pytest.org/) [conftest.py](https://docs.pytest.org/en/latest/how-to/writing_plugins.html#conftest-py-plugins) file to generate tests. Here is an example of a conftest.py file:
```python from azure.storage.blob import BlobServiceClient
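A minimal conftest.py along these lines might look like the following sketch. It assumes Azurite is already listening on its default local blob port and uses the well-known development storage connection string; the container naming scheme and the environment variable names are hypothetical choices, not values from this article.

```python
# conftest.py sketch (assumes Azurite is running on the default local ports).
# The container naming scheme and environment variable names are hypothetical.
import os
import uuid

import pytest
from azure.storage.blob import BlobServiceClient

AZURITE_CONNECTION_STRING = (
    "DefaultEndpointsProtocol=http;"
    "AccountName=devstoreaccount1;"
    "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;"
    "BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;"
)


@pytest.fixture(scope="session", autouse=True)
def blob_container():
    """Create a throwaway container in Azurite and expose it to the tests."""
    service = BlobServiceClient.from_connection_string(AZURITE_CONNECTION_STRING)
    name = f"test-{uuid.uuid4().hex[:8]}"
    container = service.create_container(name)

    os.environ["AZURE_STORAGE_CONNECTION_STRING"] = AZURITE_CONNECTION_STRING
    os.environ["AZURE_STORAGE_CONTAINER"] = name

    yield container

    service.delete_container(name)
```

Tests can then read the connection string and container name from the environment and run entirely against the local emulator.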
synapse-analytics Develop Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-statistics.md
Another option you have is to specify the sample size as a percent:
```sql CREATE STATISTICS col1_stats ON dbo.table1 (col1)
- WITH SAMPLE = 50 PERCENT;
+ WITH SAMPLE 50 PERCENT;
``` #### Create single-column statistics on only some of the rows
You can also combine the options together. The following example creates a filte
CREATE STATISTICS stats_col1 ON table1 (col1) WHERE col1 > '2000101' AND col1 < '20001231'
- WITH SAMPLE = 50 PERCENT;
+ WITH SAMPLE 50 PERCENT;
``` For the full reference, see [CREATE STATISTICS](/sql/t-sql/statements/create-statistics-transact-sql?view=azure-sqldw-latest&preserve-view=true).
In this example, the histogram is on *product\_category*. Cross-column statistic
CREATE STATISTICS stats_2cols ON table1 (product_category, product_sub_category) WHERE product_category > '2000101' AND product_category < '20001231'
- WITH SAMPLE = 50 PERCENT;
+ WITH SAMPLE 50 PERCENT;
``` Because a correlation exists between *product\_category* and *product\_sub\_category*, a multi-column statistics object can be useful if these columns are accessed at the same time.
time-series-insights Time Series Insights Manage Resources Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-manage-resources-using-azure-resource-manager-template.md
The following procedure describes how to use PowerShell to deploy an Azure Resou
1. Install Azure PowerShell by following the instructions in [Getting started with Azure PowerShell](/powershell/azure/get-started-azureps).
-1. Clone or copy the [201-timeseriesinsights-environment-with-eventhub](https://github.com/Azure/azure-quickstart-templates/blob/master/201-timeseriesinsights-environment-with-eventhub/azuredeploy.json) template from GitHub.
+1. Clone or copy the [timeseriesinsights-environment-with-eventhub](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.timeseriesinsights/timeseriesinsights-environment-with-eventhub/azuredeploy.json) template from GitHub.
- Create a parameters file
- To create a parameters file, copy the [201-timeseriesinsights-environment-with-eventhub](https://github.com/Azure/azure-quickstart-templates/blob/master/201-timeseriesinsights-environment-with-eventhub/azuredeploy.parameters.json) file.
+ To create a parameters file, copy the [timeseriesinsights-environment-with-eventhub](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.timeseriesinsights/timeseriesinsights-environment-with-eventhub/azuredeploy.parameters.json) file.
- [!code-json[deployment-parameters](~/quickstart-templates/201-timeseriesinsights-environment-with-eventhub/azuredeploy.parameters.json)]
+ [!code-json[deployment-parameters](~/quickstart-templates/quickstarts/microsoft.timeseriesinsights/timeseriesinsights-environment-with-eventhub/azuredeploy.parameters.json)]
<div id="required-parameters"></div>
The following procedure describes how to use PowerShell to deploy an Azure Resou
- The quickstart template's home page on GitHub also includes a **Deploy to Azure** button. Clicking it opens a Custom Deployment page in the Azure portal. From this page, you can enter or select values for each of the parameters from the [required parameters](#required-parameters) or [optional parameters](#optional-parameters) tables. After filling out the settings, clicking the **Purchase** button will initiate the template deployment. </br> </br>
- <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F201-timeseriesinsights-environment-with-eventhub%2Fazuredeploy.json" target="_blank">
+ <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.timeseriesinsights%2Ftimeseriesinsights-environment-with-eventhub%2Fazuredeploy.json" target="_blank">
<img src="https://azuredeploy.net/deploybutton.png" alt="The Deploy to Azure button."/> </a>
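If you prefer to script the same deployment outside of PowerShell or the portal button, a hypothetical sketch with the Python management SDK (`azure-mgmt-resource`) follows. The subscription, resource group, deployment name, and local parameters-file path are placeholders; the template URI is the raw URL of the quickstart template referenced above.

```python
# Hypothetical alternative to the PowerShell flow: deploy the quickstart
# template with the Python management SDK. Resource names and file paths are
# placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import (
    Deployment,
    DeploymentProperties,
    TemplateLink,
)

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"

TEMPLATE_URI = (
    "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/"
    "quickstarts/microsoft.timeseriesinsights/"
    "timeseriesinsights-environment-with-eventhub/azuredeploy.json"
)

# Load the edited copy of azuredeploy.parameters.json and unwrap the
# "parameters" section for the deployment call.
with open("azuredeploy.parameters.json") as f:
    parameters = json.load(f)["parameters"]

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = client.deployments.begin_create_or_update(
    RESOURCE_GROUP,
    "tsi-environment-deployment",
    Deployment(
        properties=DeploymentProperties(
            mode="Incremental",
            template_link=TemplateLink(uri=TEMPLATE_URI),
            parameters=parameters,
        )
    ),
)
print(poller.result().properties.provisioning_state)
```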
virtual-desktop Diagnostics Role Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/diagnostics-role-service.md
Title: Azure Virtual Desktop diagnose issues - Azure
description: How to use the Azure Virtual Desktop diagnostics feature to diagnose issues. Previously updated : 09/21/2020 Last updated : 06/19/2021
Connections that don't reach Azure Virtual Desktop won't show up in diagnostics
The WVDErrors table tracks errors across all activity types. The column called "ServiceError" provides an additional flag marked either "True" or "False." This flag will tell you whether the error is related to the service. * If the value is "True," the service team may have already investigated this issue. If this impacts user experience and appears a high number of times, we recommend you submit a support ticket for Azure Virtual Desktop.
-* If the value is "False," this is may be a misconfiguration that you can fix yourself. The error message can give you a clue about where to start.
+* If the value is "False," this may be a misconfiguration that you can fix yourself. The error message can give you a clue about where to start.
The following table lists common errors your admins might run into. >[!NOTE] >This list includes the most common errors and is updated on a regular cadence. To ensure you have the most up-to-date information, be sure to check back on this article at least once a month.
-## Management errors
-
-|Error message|Suggested solution|
-|||
-|Failed to create registration key |Registration token couldn't be created. Try creating it again with a shorter expiry time (between 1 hour and 1 month). |
-|Failed to delete registration key|Registration token couldn't be deleted. Try deleting it again. If it still doesn't work, use PowerShell to check if the token is still there. If it's there, delete it with PowerShell.|
-|Failed to change session host drain mode |Couldn't change drain mode on the VM. Check the VM status. If the VM's unavailable, drain mode can't be changed.|
-|Failed to disconnect user sessions |Couldn't disconnect the user from the VM. Check the VM status. If the VM's unavailable, the user session can't be disconnected. If the VM is available, check the user session status to see if it's disconnected. |
-|Failed to log off all user(s) within the session host |Could not sign users out of the VM. Check the VM status. If unavailable, users can't be signed out. Check user session status to see if they're already signed out. You can force sign out with PowerShell. |
-|Failed to unassign user from application group|Could not unpublish an app group for a user. Check to see if user is available on Azure AD. Check to see if the user is part of a user group that the app group is published to. |
-|There was an error retrieving the available locations |Check location of VM used in the create host pool wizard. If image is not available in that location, add image in that location or choose a different VM location. |
- ### Connection error codes |Numeric code|Error code|Suggested solution|
The following table lists common errors your admins might run into.
|14|UnexpectedNetworkDisconnect|The connection to the network dropped. Ask the user to connect again.| |24|ReverseConnectFailed|The host virtual machine has no direct line of sight to RD Gateway. Ensure the Gateway IP address can be resolved.|
-## Error: Can't add user assignments to an app group
-
-After assigning a user to an app group, the Azure portal displays a warning that says "Session Ending" or "Experiencing Authentication Issues - Extension Microsoft_Azure_WVD." The assignment page then doesn't load, and after that, pages stop loading throughout the Azure portal (for example, Azure Monitor, Log Analytics, Service Health, and so on).
-
-**Cause:** There's a problem with the conditional access policy. The Azure portal is trying to obtain a token for Microsoft Graph, which is dependent on SharePoint Online. The customer has a conditional access policy called "Microsoft Office 365 Data Storage Terms of Use" that requires users to accept the terms of use to access data storage. However, they haven't signed in yet, so the Azure portal can't get the token.
-
-**Fix:** Before signing in to the Azure portal, the admin first needs to sign in to SharePoint and accept the Terms of Use. After that, they should be able to sign in to the Azure portal like normal.
- ## Next steps To learn more about roles within Azure Virtual Desktop, see [Azure Virtual Desktop environment](environment-setup.md).
virtual-desktop Troubleshoot Management Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-management-issues.md
+
+ Title: Azure Virtual Desktop management issues - Azure
+description: Common management issues in Azure Virtual Desktop and how to solve them.
++ Last updated : 06/19/2021+++
+# Management issues
+
+This article describes common management errors and gives suggestions for how to solve them.
+
+## Common management errors
+
+The following table lists error messages that appear due to management-related issues and suggestions for how to solve them.
+
+|Error message|Suggested solution|
+|||
+|Failed to create registration key |Registration token couldn't be created. Try creating it again with a shorter expiry time (between 1 hour and 1 month). |
+|Failed to delete registration key|Registration token couldn't be deleted. Try deleting it again. If it still doesn't work, use PowerShell to check if the token is still there. If it's there, delete it with PowerShell.|
+|Failed to change session host drain mode |Couldn't change drain mode on the VM. Check the VM status. If the VM isn't available, you can't change drain mode.|
+|Failed to disconnect user sessions |Couldn't disconnect the user from the VM. Check the VM status. If the VM isn't available, you can't disconnect the user session. If the VM is available, check the user session status to see if it's disconnected. |
+|Failed to log off all user(s) within the session host |Could not sign users out of the VM. Check the VM status. If unavailable, users can't be signed out. Check user session status to see if they're already signed out. You can force sign out with PowerShell. |
+|Failed to unassign user from application group|Could not unpublish an app group for a user. Check to see if the user is available in Azure AD. Check to see if the user is part of a user group that the app group is published to. |
+|There was an error retrieving the available locations |Check the location of the VM used in the create host pool wizard. If the image isn't available in that location, add the image in that location or choose a different VM location. |
+
+## Error: Can't add user assignments to an app group
+
+After assigning a user to an app group, the Azure portal displays a warning that says "Session Ending" or "Experiencing Authentication Issues - Extension Microsoft_Azure_WVD." The assignment page then doesn't load, and after that, pages stop loading throughout the Azure portal (for example, Azure Monitor, Log Analytics, Service Health, and so on).
+
+This issue usually appears because there's a problem with the conditional access policy. The Azure portal is trying to obtain a token for Microsoft Graph, which is dependent on SharePoint Online. The customer has a conditional access policy called "Microsoft Office 365 Data Storage Terms of Use" that requires users to accept the terms of use to access data storage. However, they haven't signed in yet, so the Azure portal can't get the token.
+
+To solve this issue, before signing in to the Azure portal, the admin first needs to sign in to SharePoint and accept the Terms of Use. After that, they should be able to sign in to the Azure portal like normal.
+
+## Next steps
+
+To review common error scenarios that the diagnostics feature can identify for you, see [Identify and diagnose issues](diagnostics-role-service.md#common-error-scenarios).
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/dsc-overview.md
Installing WMF requires a restart. After restarting, the extension downloads the
### Default configuration script
-The Azure DSC extension includes a default configuration script that's intended to be used when you onboard a VM to the Azure Automation DSC service. The script parameters are aligned with the configurable properties of [Local Configuration Manager](/powershell/scripting/dsc/managing-nodes/metaConfig). For script parameters, see [Default configuration script](dsc-template.md#default-configuration-script) in [Desired State Configuration extension with Azure Resource Manager templates](dsc-template.md). For the full script, see the [Azure quickstart template in GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/azmgmt-demo/nestedtemplates/scripts/UpdateLCMforAAPull.zip).
+The Azure DSC extension includes a default configuration script that's intended to be used when you onboard a VM to the Azure Automation DSC service. The script parameters are aligned with the configurable properties of [Local Configuration Manager](/powershell/scripting/dsc/managing-nodes/metaConfig). For script parameters, see [Default configuration script](dsc-template.md#default-configuration-script) in [Desired State Configuration extension with Azure Resource Manager templates](dsc-template.md). For the full script, see the [Azure quickstart template in GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/demos/azmgmt-demo/nestedtemplates/scripts/UpdateLCMforAAPull.zip).
## Information for registering with Azure Automation State Configuration (DSC) service
virtual-machines Disk Encryption Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disk-encryption-troubleshooting.md
Any network security group settings that are applied must still allow the endpoi
When encryption is being enabled with [Azure AD credentials](disk-encryption-windows-aad.md#), the target VM must allow connectivity to both Azure Active Directory endpoints and Key Vault endpoints. Current Azure Active Directory authentication endpoints are maintained in sections 56 and 59 of the [Microsoft 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges) documentation. Key Vault instructions are provided in the documentation on how to [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md). ### Azure Instance Metadata Service
-The VM must be able to access the [Azure Instance Metadata service](../windows/instance-metadata-service.md) endpoint which uses a well-known non-routable IP address (`169.254.169.254`) that can be accessed only from within the VM. Proxy configurations that alter local HTTP traffic to this address (for example, adding an X-Forwarded-For header) are not supported.
+The VM must be able to access the [Azure Instance Metadata service](../windows/instance-metadata-service.md) endpoint (`169.254.169.254`) and the [virtual public IP address](../../virtual-network/what-is-ip-address-168-63-129-16.md) (`168.63.129.16`) used for communication with Azure platform resources. Proxy configurations that alter local HTTP traffic to these addresses (for example, adding an X-Forwarded-For header) are not supported.
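As a quick sanity check from inside the VM, the hypothetical Python snippet below calls the Instance Metadata Service directly while ignoring any proxy settings from the environment; the API version shown is just one that is commonly available.

```python
# Sketch: confirm the Instance Metadata Service is reachable directly from the
# VM, bypassing any proxy configured in the environment.
import requests

session = requests.Session()
session.trust_env = False  # ignore HTTP(S)_PROXY environment variables

resp = session.get(
    "http://169.254.169.254/metadata/instance?api-version=2021-02-01",
    headers={"Metadata": "true"},
    timeout=5,
)
resp.raise_for_status()
print(resp.json()["compute"]["name"])  # prints the VM name if IMDS is reachable
```

A failure here, or a response that shows the request was routed through a proxy, points back to the network prerequisites described above.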
## Troubleshooting Windows Server 2016 Server Core
virtual-machines Hana Monitor Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-monitor-troubleshoot.md
Title: Monitoring and troubleshooting from HANA side on SAP HANA on Azure (Large Instances) | Microsoft Docs
-description: Monitoring and troubleshooting from HANA side on SAP HANA on an Azure (Large Instances).
+description: Learn how to monitor and troubleshoot from HANA side on SAP HANA on an Azure (Large Instances).
documentationcenter:
vm-linux Previously updated : 09/10/2018- Last updated : 6/18/2021+ # Monitoring and troubleshooting from HANA side
-In order to effectively analyze problems related to SAP HANA on Azure (Large Instances), it is useful to narrow down the root cause of a problem. SAP has published a large amount of documentation to help you.
+In this article, we'll look at monitoring and troubleshooting your SAP HANA on Azure (Large Instances) from the HANA side.
-Applicable FAQs related to SAP HANA performance can be found in the following SAP Notes:
+To analyze problems related to SAP HANA on Azure (Large Instances), you'll want to narrow down the root cause of a problem. SAP has published lots of documentation to help you. FAQs related to SAP HANA performance can be found in the following SAP Notes:
- [SAP Note #2222200 – FAQ: SAP HANA Network](https://launchpad.support.sap.com/#/notes/2222200) - [SAP Note #2100040 – FAQ: SAP HANA CPU](https://launchpad.support.sap.com/#/notes/0002100040)
Applicable FAQs related to SAP HANA performance can be found in the following SA
## SAP HANA Alerts
-As a first step, check the current SAP HANA alert logs. In SAP HANA Studio, go to **Administration Console: Alerts: Show: all alerts**. This tab will show all SAP HANA alerts for specific values (free physical memory, CPU utilization, etc.) that fall outside of the set minimum and maximum thresholds. By default, checks are auto-refreshed every 15 minutes.
+First, check the current SAP HANA alert logs. In SAP HANA Studio, go to **Administration Console: Alerts: Show: all alerts**. This tab will show all SAP HANA alerts for values (free physical memory, CPU use, and so on) that fall outside the set minimum and maximum thresholds. By default, checks are automatically refreshed every 15 minutes.
![In SAP HANA Studio, go to Administration Console: Alerts: Show: all alerts](./media/troubleshooting-monitoring/image1-show-alerts.png) ## CPU
-For an alert triggered due to improper threshold setting, a resolution is to reset to the default value or a more reasonable threshold value.
+For an alert triggered by an improper threshold setting, reset the threshold to the default value or to a more reasonable value.
![Reset to the default value or a more reasonable threshold value](./media/troubleshooting-monitoring/image2-cpu-utilization.png)
The following alerts may indicate CPU resource problems:
- Most recent savepoint operation (Alert 28) - Savepoint duration (Alert 54)
-You may notice high CPU consumption on your SAP HANA database from one of the following:
+You may notice high CPU consumption on your SAP HANA database from:
- Alert 5 (Host CPU usage) is raised for current or past CPU usage - The displayed CPU usage on the overview screen
The Load graph might show high CPU consumption, or high consumption in the past:
![The Load graph might show high CPU consumption, or high consumption in the past](./media/troubleshooting-monitoring/image4-load-graph.png)
-An alert triggered due to high CPU utilization could be caused by several reasons, including, but not limited to: execution of certain transactions, data loading, jobs that are not responding, long running SQL statements, and bad query performance (for example, with BW on HANA cubes).
+An alert triggered by high CPU use could be caused by several reasons:
+- Execution of certain transactions
+- Data loading
+- Jobs that aren't responding
+- Long-running SQL statements
+- Bad query performance (for example, with BW on HANA cubes)
-Refer to the [SAP HANA Troubleshooting: CPU Related Causes and Solutions](https://help.sap.com/saphelp_hanaplatform/helpdata/en/4f/bc915462db406aa2fe92b708b95189/content.htm?frameset=/en/db/6ca50424714af8b370960c04ce667b/frameset.htm&amp;current_toc=/en/85/d132c3f05e40a2b20c25aa5fd6331b/plain.htm&amp;node_id=46&amp;show_children=false) site for detailed troubleshooting steps.
+For detailed CPU usage troubleshooting steps, see [SAP HANA Troubleshooting: CPU Related Causes and Solutions](https://help.sap.com/viewer/bed8c14f9f024763b0777aa72b5436f6/2.0.05/en-US/4fbc915462db406aa2fe92b708b95189.html?q=%20SAP%20HANA%20Troubleshooting:%20CPU%20Related%20Causes%20and%20Solutions).
-## Operating System
+## Operating System (OS)
-One of the most important checks for SAP HANA on Linux is to make sure that Transparent Huge Pages are disabled, see [SAP Note #2131662 ΓÇô Transparent Huge Pages (THP) on SAP HANA Servers](https://launchpad.support.sap.com/#/notes/2131662).
+An important check for SAP HANA on Linux is to make sure Transparent Huge Pages are disabled. For more information, see [SAP Note #2131662 – Transparent Huge Pages (THP) on SAP HANA Servers](https://launchpad.support.sap.com/#/notes/2131662).
-- You can check if Transparent Huge Pages are enabled through the following Linux command:
-**cat /sys/kernel/mm/transparent\_hugepage/enabled**
-- If _always_ is enclosed in brackets as below, it means that the Transparent Huge Pages are enabled: [always] madvise never; if _never_ is enclosed in brackets as below, it means that the Transparent Huge Pages are disabled: always madvise [never]
+You can check whether Transparent Huge Pages are enabled through the following Linux command: **cat /sys/kernel/mm/transparent\_hugepage/enabled**
+- If _always_ is enclosed in brackets, it means that the Transparent Huge Pages are enabled: [always] madvise never
+- If _never_ is enclosed in brackets, it means that the Transparent Huge Pages are disabled: always madvise [never]
The following Linux command should return nothing: **rpm -qa | grep ulimit.** If it appears _ulimit_ is installed, uninstall it immediately. ## Memory
-You may observe that the amount of memory allocated by the SAP HANA database is higher than expected. The following alerts indicate issues with high memory usage:
+You may observe that the amount of memory allotted to the SAP HANA database is higher than expected. The following alerts indicate issues with high memory usage:
- Host physical memory usage (Alert 1) - Memory usage of name server (Alert 12)
You may observe that the amount of memory allocated by the SAP HANA database is
- Memory usage of main storage of Column Store tables (Alert 45) - Runtime dump files (Alert 46)
-Refer to the [SAP HANA Troubleshooting: Memory Problems](https://help.sap.com/saphelp_hanaplatform/helpdata/en/db/6ca50424714af8b370960c04ce667b/content.htm?frameset=/en/59/5eaa513dde43758b51378ab3315ebb/frameset.htm&amp;current_toc=/en/85/d132c3f05e40a2b20c25aa5fd6331b/plain.htm&amp;node_id=26&amp;show_children=false) site for detailed troubleshooting steps.
+For detailed memory troubleshooting steps, see [SAP HANA Troubleshooting: Root Causes of Memory Problems](https://help.sap.com/viewer/bed8c14f9f024763b0777aa72b5436f6/2.0.05/en-US/3a2ea5c4593b4b8d823b5b48152bd1d4.html).
## Network
-Refer to [SAP Note #2081065 ΓÇô Troubleshooting SAP HANA Network](https://launchpad.support.sap.com/#/notes/2081065) and perform the network troubleshooting steps in this SAP Note.
+Refer to [SAP Note #2081065 – Troubleshooting SAP HANA Network](https://launchpad.support.sap.com/#/notes/2081065) and follow the network troubleshooting steps in that SAP Note.
1. Analyze round-trip time between server and client.
- A. Run the SQL script [_HANA\_Network\_Clients_](https://launchpad.support.sap.com/#/notes/1969700)_._
+ - Run the SQL script [_HANA\_Network\_Clients_](https://launchpad.support.sap.com/#/notes/1969700)_._
2. Analyze internode communication.
- A. Run SQL script [_HANA\_Network\_Services_](https://launchpad.support.sap.com/#/notes/1969700)_._
+ - Run SQL script [_HANA\_Network\_Services_](https://launchpad.support.sap.com/#/notes/1969700)_._
-3. Run Linux command **ifconfig** (the output shows if any packet losses are occurring).
+3. Run Linux command **ifconfig** (the output shows whether any packet losses are occurring).
4. Run Linux command **tcpdump**.
-Also, use the open source [IPERF](https://iperf.fr/) tool (or similar) to measure real application network performance.
+Also, use the open-source [IPERF](https://iperf.fr/) tool (or similar) to measure real application network performance.
-Refer to the [SAP HANA Troubleshooting: Networking Performance and Connectivity Problems](https://help.sap.com/saphelp_hanaplatform/helpdata/en/a3/ccdff1aedc4720acb24ed8826938b6/content.htm?frameset=/en/dc/6ff98fa36541e997e4c719a632cbd8/frameset.htm&amp;current_toc=/en/85/d132c3f05e40a2b20c25aa5fd6331b/plain.htm&amp;node_id=142&amp;show_children=false) site for detailed troubleshooting steps.
+For detailed network troubleshooting steps, see [SAP HANA Troubleshooting: Network Performance and Connectivity Problems](https://help.sap.com/viewer/bed8c14f9f024763b0777aa72b5436f6/2.0.05/en-US/a3ccdff1aedc4720acb24ed8826938b6.html?q=Networking%20Performance%20and%20Connectivity%20Problems).
## Storage
-From an end-user perspective, an application (or the system as a whole) runs sluggishly, is unresponsive, or can even seem to stop responding if there are issues with I/O performance. In the **Volumes** tab in SAP HANA Studio, you can see the attached volumes, and what volumes are used by each service.
+Let's say there are issues with I/O performance. End users may then find that applications, or the system as a whole, run sluggishly, become unresponsive, or even stop responding. In the **Volumes** tab in SAP HANA Studio, you can see the attached volumes and what volumes are used by each service.
![In the Volumes tab in SAP HANA Studio, you can see the attached volumes, and what volumes are used by each service](./media/troubleshooting-monitoring/image5-volumes-tab-a.png)
-Attached volumes in the lower part of the screen you can see details of the volumes, such as files and I/O statistics.
+On the lower part of the screen (on the Volumes tab), you can see details of the volumes, such as files and I/O statistics.
-![Attached volumes in the lower part of the screen you can see details of the volumes, such as files and I/O statistics](./media/troubleshooting-monitoring/image6-volumes-tab-b.png)
+![On the lower part of the screen, you can see details of the volumes, such as files and I/O statistics](./media/troubleshooting-monitoring/image6-volumes-tab-b.png)
-Refer to the [SAP HANA Troubleshooting: I/O Related Root Causes and Solutions](https://help.sap.com/saphelp_hanaplatform/helpdata/en/dc/6ff98fa36541e997e4c719a632cbd8/content.htm?frameset=/en/47/4cb08a715c42fe9f7cc5efdc599959/frameset.htm&amp;current_toc=/en/85/d132c3f05e40a2b20c25aa5fd6331b/plain.htm&amp;node_id=55&amp;show_children=false) and [SAP HANA Troubleshooting: Disk Related Root Causes and Solutions](https://help.sap.com/saphelp_hanaplatform/helpdata/en/47/4cb08a715c42fe9f7cc5efdc599959/content.htm?frameset=/en/44/3e1db4f73d42da859008df4f69e37a/frameset.htm&amp;current_toc=/en/85/d132c3f05e40a2b20c25aa5fd6331b/plain.htm&amp;node_id=53&amp;show_children=false) site for detailed troubleshooting steps.
+For I/O troubleshooting steps, see [SAP HANA Troubleshooting: I/O Related Root Causes and Solutions](https://help.sap.com/viewer/4e9b18c116aa42fc84c7dbfd02111aba/2.0.05/en-US/dc6ff98fa36541e997e4c719a632cbd8.html?q=I%2FO%20Related%20Root%20Causes%20and%20Solutions). For disk-related troubleshooting steps, see [SAP HANA Troubleshooting: Disk Related Root Causes and Solutions](https://help.sap.com/viewer/bed8c14f9f024763b0777aa72b5436f6/2.0.05/en-US/474cb08a715c42fe9f7cc5efdc599959.html?q=Disk%20Related%20Root%20Causes%20and%20Solutions).
## Diagnostic Tools
-Perform an SAP HANA Health Check through HANA\_Configuration\_Minichecks. This tool returns potentially critical technical issues that should have already been raised as alerts in SAP HANA Studio.
+Do an SAP HANA Health Check through HANA\_Configuration\_Minichecks. This tool returns potentially critical technical issues that should have already been raised as alerts in SAP HANA Studio.
-Refer to [SAP Note #1969700 ΓÇô SQL statement collection for SAP HANA](https://launchpad.support.sap.com/#/notes/1969700) and download the SQL Statements.zip file attached to that note. Store this .zip file on the local hard drive.
+1. Refer to [SAP Note #1969700 – SQL statement collection for SAP HANA](https://launchpad.support.sap.com/#/notes/1969700) and download the SQL Statements.zip file attached to that note. Store this .zip file on the local hard drive.
-In SAP HANA Studio, on the **System Information** tab, right-click in the **Name** column and select **Import SQL Statements**.
+2. In SAP HANA Studio, on the **System Information** tab, right-click in the **Name** column and select **Import SQL Statements**.
-![In SAP HANA Studio, on the System Information tab, right-click in the Name column and select Import SQL Statements](./media/troubleshooting-monitoring/image7-import-statements-a.png)
+ ![In SAP HANA Studio, on the System Information tab, right-click in the Name column and select Import SQL Statements](./media/troubleshooting-monitoring/image7-import-statements-a.png)
-Select the SQL Statements.zip file stored locally, and a folder with the corresponding SQL statements will be imported. At this point, the many different diagnostic checks can be run with these SQL statements.
+3. Select the SQL Statements.zip file stored locally; a folder with the corresponding SQL statements will be imported. At this point, many different diagnostic checks can be run with these SQL statements.
-For example, to test SAP HANA System Replication bandwidth requirements, right-click the **Bandwidth** statement under **Replication: Bandwidth** and select **Open** in SQL Console.
+ For example, to test SAP HANA System Replication bandwidth requirements, right-click the **Bandwidth** statement under **Replication: Bandwidth** and select **Open** in SQL Console.
-The complete SQL statement opens allowing input parameters (modification section) to be changed and then executed.
+ The complete SQL statement opens allowing input parameters (modification section) to be changed and then executed.
-![The complete SQL statement opens allowing input parameters (modification section) to be changed and then executed](./media/troubleshooting-monitoring/image8-import-statements-b.png)
+ ![The complete SQL statement opens allowing input parameters (modification section) to be changed and then executed](./media/troubleshooting-monitoring/image8-import-statements-b.png)
-Another example is right-clicking on the statements under **Replication: Overview**. Select **Execute** from the context menu:
+4. Another example is to right-click on the statements under **Replication: Overview**. Select **Execute** from the context menu:
-![Another example is right-clicking on the statements under Replication: Overview. Select Execute from the context menu](./media/troubleshooting-monitoring/image9-import-statements-c.png)
+ ![Another example is to right-click on the statements under Replication: Overview. Select Execute from the context menu](./media/troubleshooting-monitoring/image9-import-statements-c.png)
-This results in information that helps with troubleshooting:
+ You'll see information that helps with troubleshooting:
-![This will result in information that will help with troubleshooting](./media/troubleshooting-monitoring/image10-import-statements-d.png)
+ ![You'll see information that helps with troubleshooting](./media/troubleshooting-monitoring/image10-import-statements-d.png)
-Do the same for HANA\_Configuration\_Minichecks and check for any _X_ marks in the _C_ (Critical) column.
+5. Do the same for HANA\_Configuration\_Minichecks and check for any _X_ marks in the _C_ (Critical) column.
-Sample outputs:
+ Sample outputs:
-**HANA\_Configuration\_MiniChecks\_Rev102.01+1** for general SAP HANA checks.
+ **HANA\_Configuration\_MiniChecks\_Rev102.01+1** for general SAP HANA checks.
-![HANA\_Configuration\_MiniChecks\_Rev102.01+1 for general SAP HANA checks](./media/troubleshooting-monitoring/image11-configuration-minichecks.png)
+ ![HANA\_Configuration\_MiniChecks\_Rev102.01+1 for general SAP HANA checks](./media/troubleshooting-monitoring/image11-configuration-minichecks.png)
-**HANA\_Services\_Overview** for an overview of what SAP HANA services are currently running.
+ **HANA\_Services\_Overview** for an overview of which SAP HANA services are currently running.
-![HANA\_Services\_Overview for an overview of what SAP HANA services are currently running](./media/troubleshooting-monitoring/image12-services-overview.png)
+ ![HANA\_Services\_Overview for an overview of which SAP HANA services are currently running](./media/troubleshooting-monitoring/image12-services-overview.png)
-**HANA\_Services\_Statistics** for SAP HANA service information (CPU, memory, etc.).
+ **HANA\_Services\_Statistics** for SAP HANA service information (CPU, memory, and so on).
-![HANA\_Services\_Statistics for SAP HANA service information](./media/troubleshooting-monitoring/image13-services-statistics.png)
+ ![HANA\_Services\_Statistics for SAP HANA service information](./media/troubleshooting-monitoring/image13-services-statistics.png)
-**HANA\_Configuration\_Overview\_Rev110+** for general information on the SAP HANA instance.
+ **HANA\_Configuration\_Overview\_Rev110+** for general information on the SAP HANA instance.
-![HANA\_Configuration\_Overview\_Rev110+ for general information on the SAP HANA instance](./media/troubleshooting-monitoring/image14-configuration-overview.png)
+ ![HANA\_Configuration\_Overview\_Rev110+ for general information on the SAP HANA instance](./media/troubleshooting-monitoring/image14-configuration-overview.png)
-**HANA\_Configuration\_Parameters\_Rev70+** to check SAP HANA parameters.
+ **HANA\_Configuration\_Parameters\_Rev70+** to check SAP HANA parameters.
-![HANA\_Configuration\_Parameters\_Rev70+ to check SAP HANA parameters](./media/troubleshooting-monitoring/image15-configuration-parameters.png)
+ ![HANA\_Configuration\_Parameters\_Rev70+ to check SAP HANA parameters](./media/troubleshooting-monitoring/image15-configuration-parameters.png)
-**Next steps**
+## Next steps
-- Refer [High availability set up in SUSE using the STONITH](ha-setup-with-stonith.md).
+Learn how to set up high availability on the SUSE operating system using the STONITH device.
+
+> [!div class="nextstepaction"]
+> [High availability set up in SUSE using the STONITH](ha-setup-with-stonith.md)
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureActiveDirectory** | Azure Active Directory. | Outbound | No | Yes | | **AzureActiveDirectoryDomainServices** | Management traffic for deployments dedicated to Azure Active Directory Domain Services. | Both | No | Yes | | **AzureAdvancedThreatProtection** | Azure Advanced Threat Protection. | Outbound | No | No |
-| **AzureAPIForFHIR** | Azure API for FHIR (Fast Healthcare Interoperability Resources).<br/><br/> *Note: This tag is not currently configurable via Azure Portal.*| Outbound | No | No |
| **AzureArcInfrastructure** | Azure Arc enabled servers, Azure Arc enabled Kubernetes, and Guest Configuration traffic.<br/><br/>*Note:* This tag has a dependency on the **AzureActiveDirectory**,**AzureTrafficManager**, and **AzureResourceManager** tags. *This tag is not currently configurable via Azure Portal*.| Outbound | No | Yes | | **AzureBackup** |Azure Backup.<br/><br/>*Note:* This tag has a dependency on the **Storage** and **AzureActiveDirectory** tags. | Outbound | No | Yes | | **AzureBotService** | Azure Bot Service. | Outbound | No | No |
The IP address ranges in these files are in CIDR notation.
- When new IP addresses are added to service tags, they will not be used in Azure for at least one week. This gives you time to update any systems that might need to track the IP addresses associated with service tags. ## Next steps-- Learn how to [create a network security group](tutorial-filter-network-traffic.md).
+- Learn how to [create a network security group](tutorial-filter-network-traffic.md).
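Because the downloadable files are plain JSON with CIDR prefixes, checking whether an address belongs to a tag can be scripted. The sketch below is a hypothetical example; the file name, tag name, and test address are placeholders, and it assumes the `values[*].properties.addressPrefixes` layout of the downloadable Service Tags file.

```python
# Sketch: check whether an IP address falls inside a service tag's ranges,
# using a downloaded Service Tags JSON file. File name, tag, and address are
# placeholders.
import ipaddress
import json

TAG = "AzureActiveDirectory"
ip = ipaddress.ip_address("20.190.128.1")  # example address to test

with open("ServiceTags_Public.json") as f:
    tags = json.load(f)["values"]

prefixes = next(t for t in tags if t["name"] == TAG)["properties"]["addressPrefixes"]

# Mismatched IPv4/IPv6 comparisons simply return False, so mixed prefixes are safe.
matches = [p for p in prefixes if ip in ipaddress.ip_network(p)]
print(matches or f"{ip} is not in {TAG}")
```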
virtual-wan Scenario Route Through Nvas Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/scenario-route-through-nvas-custom.md
To set up routing via NVA, consider the following steps:
> * Portal users must enable 'Propagate to default route' on connections (VPN/ER/P2S/VNet) for the 0.0.0.0/0 route to take effect. > * PS/CLI/REST users must set flag 'enableinternetsecurity' to true for the 0.0.0.0/0 route to take effect. > * Virtual Network Connection does not support 'multiple/unique' next hop IP to the 'same' network virtual appliance in a SPOKE VNet 'if' one of the routes with next hop IP is indicated to be public IP address or 0.0.0.0/0 (internet)
+ > * When 0.0.0.0/0 is configured as a static route on a Virtual Network Connection, that route is applied to all traffic, including traffic to resources within the spoke itself, so all traffic is forwarded to the next hop IP address of the static route (the NVA's private IP). In deployments that configure a 0.0.0.0/0 route with the NVA IP address as the next hop on a spoke Virtual Network Connection, if you need to reach workloads in the same Virtual Network as the NVA directly (that is, without the traffic passing through the NVA), specify a /32 route for those workloads on the spoke Virtual Network Connection. For instance, to access 10.1.3.1 directly, specify 10.1.3.1/32 with next hop 10.1.3.1 on the spoke Virtual Network Connection.
## Next steps