Updates from: 01/13/2021 04:05:46
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/add-ropc-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-ropc-policy.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 05/12/2020
+ms.date: 01/11/2021
ms.author: mimart ms.subservice: B2C zone_pivot_groups: b2c-policy-type
@@ -304,7 +304,7 @@ Use your favorite API development application to generate an API call, and revie
The actual POST request looks like the following example: ```https
-POST /<tenant-name>.onmicrosoft.com/oauth2/v2.0/token?p=B2C_1A_ROPC_Auth HTTP/1.1
+POST /<tenant-name>.onmicrosoft.com/B2C_1A_ROPC_Auth/oauth2/v2.0/token HTTP/1.1
Host: <tenant-name>.b2clogin.com Content-Type: application/x-www-form-urlencoded
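For context, a complete token request against the new path would look roughly like the following. This is a hedged sketch; the placeholder values and the exact body parameters should be confirmed against the linked article.

```bash
# Hypothetical placeholder values; B2C ROPC typically expects a form-encoded body like this.
curl -X POST "https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1A_ROPC_Auth/oauth2/v2.0/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=password" \
  --data-urlencode "username=<user@contoso.com>" \
  --data-urlencode "password=<password>" \
  --data-urlencode "client_id=<application-id>" \
  --data-urlencode "scope=openid <application-id> offline_access" \
  --data-urlencode "response_type=token id_token"
```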
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/plan-conditional-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/plan-conditional-access.md
@@ -221,14 +221,6 @@ If you misconfigure a policy, it can lock the organizations out of the Azure por
* Create a user account dedicated to policy administration and excluded from all your policies.
-* Break glass scenario for hybrid environments:
-
- * Create an on-premises security group and sync it to Azure AD. The security group should contain your dedicated policy administration account.
-
- * EXEMPT this security group form all Conditional Access policies.
-
- * When a service outage occurs, add your other administrators to the on-premises group as appropriate, and force a sync. This animates their exemption to Conditional Access policies.
- ### Set up report-only mode It can be difficult to predict the number and names of users affected by common deployment initiatives such as:
@@ -490,4 +482,4 @@ Once you have collected the information, See the following resources:
[Learn more about Identity Protection](../identity-protection/overview-identity-protection.md)
-[Manage Conditional Access policies with Microsoft Graph API](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-beta.md)
\ No newline at end of file
+[Manage Conditional Access policies with Microsoft Graph API](https://docs.microsoft.com/graph/api/resources/conditionalaccesspolicy)
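As a quick illustration of the linked Graph API, a hedged sketch of listing Conditional Access policies; it assumes a bearer token with the appropriate permission (for example, Policy.Read.All), and the `/beta` endpoint can be substituted if your tenant doesn't expose the resource on `v1.0`.

```bash
# <access-token> is a placeholder for a Microsoft Graph token.
curl -H "Authorization: Bearer <access-token>" \
  "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
```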
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/identity-videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/identity-videos.md
@@ -39,6 +39,18 @@ ___
<a href="https://www.youtube.com/watch?v=7_vxnHiUA1M" target="_blank"> <img src="./media/identity-videos/id-for-devs-08.jpg" alt="Video thumbnail for a video about modern authentication and the Microsoft identity platform." class="mx-imgBorder"> </a> :::column-end::: :::row-end:::
+:::row:::
+ :::column:::
+ <a href="https://www.youtube.com/watch?v=JpeMeTjQJ04" target="_blank">Overview: Implementing single sign-on in mobile applications - Microsoft Identity Platform <span class="docon docon-navigate-external x-hidden-focus"></span></a> (20:30)
+ :::column-end:::
+ :::column:::
+ <a href="https://www.youtube.com/watch?v=JpeMeTjQJ04" target="_blank"> <img src="./media/identity-videos/mobile-single-sign-on.jpg" alt="Video thumbnail for a video about implementing mobile single sign on using the Microsoft identity platform."></a> (20:30)
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+:::row-end:::
<!-- IMAGES -->
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-android-b2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-android-b2c.md
@@ -33,11 +33,14 @@ Given a B2C application that has two policies:
The configuration file for the app would declare two `authorities`. One for each policy. The `type` property of each authority is `B2C`.
+>Note: The `account_mode` must be set to **MULTIPLE** for B2C applications. Refer to the documentation for more information about [multiple account public client apps](https://docs.microsoft.com/azure/active-directory/develop/single-multi-account#multiple-account-public-client-application).
+ ### `app/src/main/res/raw/msal_config.json` ```json { "client_id": "<your_client_id_here>", "redirect_uri": "<your_redirect_uri_here>",
+ "account_mode" : "MULTIPLE",
"authorities": [{ "type": "B2C", "authority_url": "https://contoso.b2clogin.com/tfp/contoso.onmicrosoft.com/B2C_1_SISOPolicy/",
@@ -236,4 +239,4 @@ When you renew tokens for a policy with `acquireTokenSilent`, provide the same `
## Next steps
-Learn more about Azure Active Directory B2C (Azure AD B2C) at [What is Azure Active Directory B2C?](../../active-directory-b2c/overview.md)
\ No newline at end of file
+Learn more about Azure Active Directory B2C (Azure AD B2C) at [What is Azure Active Directory B2C?](../../active-directory-b2c/overview.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/users-search-enhanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-search-enhanced.md
@@ -11,7 +11,7 @@ ms.service: active-directory
ms.subservice: enterprise-users ms.workload: identity ms.topic: how-to
-ms.date: 12/03/2020
+ms.date: 01/11/2020
ms.author: curtand ms.reviewer: krbain ms.custom: it-pro
@@ -57,6 +57,9 @@ The following are the displayed user properties on the **All users** page:
- Name: The display name of the user. - User principal name: The user principal name (UPN) of the user. - User Type: Member, guest, none.
+- Creation time: The date and time the user was created.
+- Job title: The job title of the user.
+- Department: The department the user works in.
- Directory synced: Indicates whether the user is synced from an on-premises directory. - Identity issuer: The issuers of the identity used to sign into a user account. - Object ID: The object ID of the user.
@@ -73,7 +76,8 @@ The following are the displayed user properties on the **All users** page:
The **Deleted users** page includes all the columns that are available on the **All users** page, and a few additional columns, namely: - Deletion date: The date the user was first deleted from the organization (the user is restorable).-- Permanent deletion date: The date after which the process of permanently deleting the user from the organization automatically begins.
+- Permanent deletion date: The date after which the process of permanently deleting the user from the organization automatically begins.
+- Original user principal name: The original UPN of the user before their object ID was added as a prefix to their deleted UPN.
> [!NOTE] > Deletion dates are displayed in Coordinated Universal Time (UTC).
@@ -102,6 +106,10 @@ The following are the filterable properties on the **All users** page:
- User type: Member, guest, none - Directory synced status: Yes, no - Creation type: Invitation, Email verified, Local account
+- Creation time: Last 7, 14, 30, 90, 360 or >360 days ago
+- Job title: Enter a job title
+- Department: Enter a department name
+- Group: Search for a group
- Invitation state: Pending acceptance, Accepted - Domain name: Enter a domain name - Company name: Enter a company name
@@ -114,6 +122,9 @@ The **Deleted users** page has additional filters not in the **All users** page.
- User type: Member, guest, none - Directory synced status: Yes, no - Creation type: Invitation, Email verified, Local account
+- Creation time: Last 7, 14, 30, 90, 360 or > 360 days ago
+- Job title: Enter a job title
+- Department: Enter a department name
- Invitation state: Pending acceptance, Accepted - Deletion date: Last 7, 14, or 30 days - Domain name: Enter a domain name
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
@@ -103,8 +103,6 @@ You can now automate creating, updating, and deleting user accounts for these ne
For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md). ---
-
-[1233182](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1233182&triage=true&fullScreen=false&_a=edit)
### New Federated Apps available in Azure AD Application gallery - December 2020
@@ -120,6 +118,30 @@ You can also find the documentation of all the applications from here https://ak
For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest
+---
+
+### Navigate to Teams directly from My Access portal
+
+**Type:** Changed feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+You can now launch Teams directly from the My Access portal. To do so, sign in to [My Access](https://myaccess.microsoft.com/), navigate to **Access packages**, and then go to the **Active** tab to see all access packages you already have access to. When you expand an access package and hover over a Teams entry, you can launch it by selecting the **Open** button.
+
+To learn more about using the My Access portal, go to [Request access to an access package in Azure AD entitlement management](../governance/entitlement-management-request-access.md#sign-in-to-the-my-access-portal).
+
+---
+
+### Public preview - Second level manager can be set as alternate approver
+
+**Type:** Changed feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+An additional option is now available in the Entitlement Management approval process. If you select **Manager** as the first approver, a **Second level manager as alternate approver** option becomes available in the alternate approver field. If you select this option, you need to add a fallback approver to forward the request to in case the system can't find the second-level manager.
+
+For more information, go to [Change approval settings for an access package in Azure AD entitlement management](../governance/entitlement-management-access-package-approval-policy.md#alternate-approvers).
+ --- ## November 2020
@@ -185,7 +207,7 @@ Some common delegation scenarios:
---
-### Azure AD Application Proxy natively supports single sign-on access to applications that use headers for authentication
+### Public preview - Azure AD Application Proxy natively supports single sign-on access to applications that use headers for authentication
**Type:** New feature **Service category:** App Proxy
active-directory https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage.md
@@ -30,9 +30,6 @@ This tutorial shows you how to use a system-assigned managed identity for a Linu
> * Grant the Linux VM's Managed Identity access to an Azure Storage container > * Get an access token and use it to call Azure Storage
-> [!NOTE]
-> Azure Active Directory authentication for Azure Storage is in public preview.
- ## Prerequisites [!INCLUDE [msi-tut-prereqs](../../../includes/active-directory-msi-tut-prereqs.md)]
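The "get an access token" step listed above boils down to a single call to the instance metadata endpoint from inside the VM. A hedged sketch (the resource URI for Azure Storage is shown; response parsing is omitted):

```bash
# Run from the Linux VM; returns a JSON document containing access_token.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com%2F"
```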
@@ -120,4 +117,4 @@ To complete the following steps, you need to work from the VM created earlier an
In this tutorial, you learned how enable a Linux VM system-assigned managed identity to access Azure Storage. To learn more about Azure Storage see: > [!div class="nextstepaction"]
-> [Azure Storage](../../storage/common/storage-introduction.md)
\ No newline at end of file
+> [Azure Storage](../../storage/common/storage-introduction.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/amazon-web-service-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
@@ -383,7 +383,11 @@ You can also use Microsoft Access Panel to test the application in any mode. Whe
* Roles must meet the following requirements to be eligible to be imported from AWS into Azure AD: * Roles must have exactly one saml-provider defined in AWS
- * The combined length of the ARN(Amazon Resource Name) for the role and the ARN for the associated saml-provider must be less than 120 characters
+ * The combined length of the ARN (Amazon Resource Name) for the role and the ARN for the associated saml-provider must be less than 240 characters.
+
+## Change log
+
+* 01/12/2020 - Increased role length limit from 119 characters to 239 characters.
## Next steps
@@ -408,4 +412,4 @@ Once you configure Amazon Web Services (AWS) you can enforce Session Control, wh
[38]: ./media/amazon-web-service-tutorial/tutorial_amazonwebservices_createnewaccesskey.png [39]: ./media/amazon-web-service-tutorial/tutorial_amazonwebservices_provisioning_automatic.png [40]: ./media/amazon-web-service-tutorial/tutorial_amazonwebservices_provisioning_testconnection.png
-[41]: ./media/amazon-web-service-tutorial/tutorial_amazonwebservices_provisioning_on.png
\ No newline at end of file
+[41]: ./media/amazon-web-service-tutorial/tutorial_amazonwebservices_provisioning_on.png
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/kfadvance-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/kfadvance-tutorial.md
@@ -73,15 +73,15 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields: a. In the **Identifier** text box, type a URL using the following pattern:
- `https://api.kfadvance-<ENVIRONMENT>.com/<PARTNER_ID>`
+ `https://api.kfadvance.com/<PARTNER_ID>`
b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://api.kfadvance-<ENVIRONMENT>.com/vn/account/partnerssocallback?partnerKey=<PARTNER_ID>`
+ `https://api.kfadvance-<ENVIRONMENT>.com/v1/account/partnerssocallback?partnerKey=<PARTNER_ID>`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://api.kfadvance-<ENVIRONMENT>.com/vn/account/partnerssologin?partnerKey=<PARTNER_ID>`
+ `https://api.kfadvance.com/v1/account/partnerssologin?partnerKey=<PARTNER_ID>`
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [KFAdvance Client support team](mailto:support@kornferry.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
aks https://docs.microsoft.com/en-us/azure/aks/concepts-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-storage.md
@@ -123,6 +123,18 @@ spec:
claimName: azure-managed-disk ```
+For mounting a volume in a Windows container, specify the drive letter and path. For example:
+
+```yaml
+...
+ volumeMounts:
+ - mountPath: "d:"
+ name: volume
+ - mountPath: "c:\k"
+ name: k-dir
+...
+```
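For a fuller picture, a hedged sketch of a complete Windows pod that mounts the `azure-managed-disk` claim from the earlier example; the pod name and container image are placeholders.

```bash
# Assumes a Windows node pool exists and the azure-managed-disk PVC from the example above.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: win-volume-demo
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: web
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
    volumeMounts:
    - mountPath: "d:"
      name: volume
  volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: azure-managed-disk
EOF
```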
+ ## Next steps For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
aks https://docs.microsoft.com/en-us/azure/aks/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/faq.md
@@ -142,7 +142,7 @@ Moving your AKS cluster between tenants is currently unsupported.
Movement of clusters between subscriptions is currently unsupported.
-## Can I move my AKS clusters from the current Azure subscription to another?
+## Can I move my AKS clusters from the current Azure subscription to another?
Moving your AKS cluster and its associated resources between Azure subscriptions isn't supported.
@@ -150,7 +150,7 @@ Moving your AKS cluster and its associated resources between Azure subscriptions
Moving or renaming your AKS cluster and its associated resources isn't supported.
-## Why is my cluster delete taking so long?
+## Why is my cluster delete taking so long?
Most clusters are deleted upon user request; in some cases, especially where customers are bringing their own Resource Group or doing cross-RG tasks, deletion can take additional time or fail. If you have an issue with deletes, double-check that you do not have locks on the RG, that any resources outside of the RG are disassociated from the RG, and so on.
@@ -162,7 +162,7 @@ You can, but AKS doesn't recommend this. Upgrades should be performed when the s
No, delete/remove any nodes in a failed state or otherwise removed from the cluster prior to upgrading.
-## I ran a cluster delete, but see the error `[Errno 11001] getaddrinfo failed`
+## I ran a cluster delete, but see the error `[Errno 11001] getaddrinfo failed`
Most commonly, this is caused by users having one or more Network Security Groups (NSGs) still in use and associated with the cluster. Remove them and attempt the delete again.
@@ -170,7 +170,7 @@ Most commonly, this is caused by users having one or more Network Security Group
Confirm your service principal hasn't expired. See: [AKS service principal](./kubernetes-service-principal.md) and [AKS update credentials](./update-credentials.md).
-## My cluster was working, but suddenly can't provision LoadBalancers, mount PVCs, etc.?
+## My cluster was working, but suddenly can't provision LoadBalancers, mount PVCs, etc.?
Confirm your service principal hasn't expired. See: [AKS service principal](./kubernetes-service-principal.md) and [AKS update credentials](./update-credentials.md).
@@ -251,6 +251,25 @@ Below is an example ip route setup of transparent mode, each Pod's interface wil
- Provides better handling of UDP traffic and mitigation for UDP flood storms when ARP times out. In bridge mode, when the bridge doesn't know the MAC address of the destination pod in intra-VM pod-to-pod communication, by design, this results in a storm of the packet to all ports. Solved in Transparent mode as there are no L2 devices in the path. See more [here](https://github.com/Azure/azure-container-networking/issues/704). - Transparent mode performs better in intra-VM pod-to-pod communication in terms of throughput and latency when compared to bridge mode.
+## How to avoid permission ownership setting slow issues when the volume has a lot of files?
+
+Traditionally, if your pod runs as a non-root user (which it should), you must specify an `fsGroup` inside the pod's security context so that the volume is readable and writable by the pod. This requirement is covered in more detail [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
+
+But one side effect of setting `fsGroup` is that, each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume (with a few exceptions). This happens even if group ownership of the volume already matches the requested `fsGroup`, and it can be pretty expensive for larger volumes with lots of small files, which causes pod startup to take a long time. This scenario was a known problem before v1.20, and the workaround is to set the pod to run as root:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: security-context-demo
+spec:
+ securityContext:
+ runAsUser: 0
+ fsGroup: 0
+```
+
+The issue is resolved in Kubernetes v1.20; refer to [Kubernetes 1.20: Granular Control of Volume Permission Changes](https://kubernetes.io/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/) for more details.
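On v1.20 and later, the granular control mentioned above is exposed as `fsGroupChangePolicy`. A hedged sketch (assumes the volume driver supports the field; names, image, and claim are placeholders):

```bash
# OnRootMismatch skips the recursive chown/chmod when the volume's root ownership already matches fsGroup.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: demo
    image: nginx   # placeholder image
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: <existing-pvc-name>   # placeholder claim
EOF
```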
+ <!-- LINKS - internal -->
aks https://docs.microsoft.com/en-us/azure/aks/private-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
@@ -64,17 +64,21 @@ Where `--enable-private-cluster` is a mandatory flag for a private cluster.
### Configure Private DNS Zone
-The default value is "system", if the --private-dns-zone argument is omitted. AKS will create a Private DNS Zone in the Node Resource Group. Passing the "none" parameter means AKS will not create a Private DNS Zone. This relies on Bring Your Own DNS Server and configuration of the DNS resolution for the Private FQDN. If you don't configure DNS resolution, DNS is only resolvable within the agent nodes and will cause cluster issues after deployment.
+The following values can be used with the `--private-dns-zone` argument to configure the private DNS zone.
+
+1. "System" is the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group.
+2. "None" means AKS will not create a Private DNS Zone. This requires you to Bring Your Own DNS Server and configure the DNS resolution for the Private FQDN. If you don't configure DNS resolution, DNS is only resolvable within the agent nodes and will cause cluster issues after deployment.
+3. "Custom private dns zone name" should be in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. The user assigned identity or service principal must be granted at least `private dns zone contributor` role to the custom private dns zone.
## No Private DNS Zone Prerequisites
-No PrivateDNSZone
-* The Azure CLI version 0.4.67 or later
+
+* The Azure CLI version 0.4.71 or later
* The api version 2020-11-01 or later ## Create a private AKS cluster with Private DNS Zone ```azurecli-interactive
-az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --private-dns-zone [none|system]
+az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --private-dns-zone [none|system|custom private dns zone]
``` ## Options for connecting to the private cluster
@@ -136,4 +140,4 @@ As mentioned, virtual network peering is one way to access your private cluster.
[azure-bastion]: ../bastion/tutorial-create-host-portal.md [express-route-or-vpn]: ../expressroute/expressroute-about-virtual-network-gateways.md [devops-agents]: /azure/devops/pipelines/agents/agents?view=azure-devops
-[availability-zones]: availability-zones.md
\ No newline at end of file
+[availability-zones]: availability-zones.md
api-management https://docs.microsoft.com/en-us/azure/api-management/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-baseline.md
@@ -377,7 +377,7 @@ Follow recommendations from Azure Security Center for the management and mainten
* [How to get a directory role definition in Azure AD with PowerShell](/powershell/module/az.resources/get-azroledefinition)
-* [Understand identity and access recommendations from Azure Security Center](../security-center/recommendations-reference.md#recs-identity)
+* [Understand identity and access recommendations from Azure Security Center](../security-center/recommendations-reference.md#recs-identityandaccess)
**Azure Security Center monitoring**: Yes
api-management https://docs.microsoft.com/en-us/azure/api-management/upgrade-and-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/upgrade-and-scale.md
@@ -85,4 +85,5 @@ If your security requirements include [compute isolation](../azure-government/az
- [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md) - [How to automatically scale an Azure API Management service instance](api-management-howto-autoscale.md)-- [Optimize and save on your cloud spending](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)\ No newline at end of file
+- [Plan and manage costs for API Management](plan-manage-costs.md)
+- [API Management limits](../azure-resource-manager/management/azure-subscription-service-limits.md#api-management-limits)
\ No newline at end of file
app-service https://docs.microsoft.com/en-us/azure/app-service/quickstart-python-1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python-1.md
@@ -100,13 +100,13 @@ The sample contains framework-specific code that Azure App Service recognizes wh
Deploy the code in your local folder (*python-docs-hello-world*) using the `az webapp up` command: ```azurecli
-az webapp up --sku F1 --name <app-name>
+az webapp up --sku B1 --name <app-name>
``` - If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment). - If the `webapp` command isn't recognized, make sure that your Azure CLI version is 2.0.80 or higher. If not, [install the latest version](/cli/azure/install-azure-cli). - Replace `<app_name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A good pattern is to use a combination of your company name and an app identifier.-- The `--sku F1` argument creates the web app on the Free pricing tier. Omit this argument to use a faster premium tier, which incurs an hourly cost.
+- The `--sku B1` argument creates the web app on the Basic pricing tier, which incurs a small hourly cost. Omit this argument to use a faster premium tier.
- You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az-appservice-list-locations) command. - If you see the error, "Could not auto-detect the runtime stack of your app," make sure you're running the command in the *python-docs-hello-world* folder (Flask) or the *python-docs-hello-django* folder (Django) that contains the *requirements.txt* file. (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md) (GitHub).)
@@ -276,7 +276,7 @@ The App Service menu provides different pages for configuring your app.
## Clean up resources
-In the preceding steps, you created Azure resources in a resource group. The resource group has a name like "appsvc_rg_Linux_CentralUS" depending on your location. If you use an App Service SKU other than the free F1 tier, these resources incur ongoing costs (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/)).
+In the preceding steps, you created Azure resources in a resource group. The resource group has a name like "appsvc_rg_Linux_CentralUS" depending on your location. If you keep the web app running, you will incur some ongoing costs (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/)).
If you don't expect to need these resources in the future, delete the resource group by running the following command:
app-service https://docs.microsoft.com/en-us/azure/app-service/quickstart-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python.md
@@ -153,13 +153,13 @@ Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
Deploy the code in your local folder (*python-docs-hello-world*) using the `az webapp up` command: ```azurecli
-az webapp up --sku F1 --name <app-name>
+az webapp up --sku B1 --name <app-name>
``` - If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment). - If the `webapp` command isn't recognized, make sure that your Azure CLI version is 2.0.80 or higher. If not, [install the latest version](/cli/azure/install-azure-cli). - Replace `<app_name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A good pattern is to use a combination of your company name and an app identifier.-- The `--sku F1` argument creates the web app on the Free pricing tier. Omit this argument to use a faster premium tier, which incurs an hourly cost.
+- The `--sku B1` argument creates the web app on the Basic pricing tier, which incurs a small hourly cost. Omit this argument to use a faster premium tier.
- You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az-appservice-list-locations) command. - If you see the error, "Could not auto-detect the runtime stack of your app," make sure you're running the command in the *python-docs-hello-world* folder (Flask) or the *python-docs-hello-django* folder (Django) that contains the *requirements.txt* file. (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md) (GitHub).)
@@ -263,7 +263,7 @@ Having issues? Refer first to the [Troubleshooting guide](configure-language-pyt
## Clean up resources
-In the preceding steps, you created Azure resources in a resource group. The resource group has a name like "appsvc_rg_Linux_CentralUS" depending on your location. If you use an App Service SKU other than the free F1 tier, these resources incur ongoing costs (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/)).
+In the preceding steps, you created Azure resources in a resource group. The resource group has a name like "appsvc_rg_Linux_CentralUS" depending on your location. If you keep the web app running, you will incur some ongoing costs (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/)).
If you don't expect to need these resources in the future, delete the resource group by running the following command:
automation https://docs.microsoft.com/en-us/azure/automation/shared-resources/variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/variables.md
@@ -20,7 +20,7 @@ Automation variables are useful for the following scenarios:
Azure Automation persists variables and makes them available even if a runbook or DSC configuration fails. This behavior allows one runbook or DSC configuration to set a value that is then used by another runbook, or by the same runbook or DSC configuration the next time it runs.
-Azure Automation stores each encrypted variable securely. When you create a variable, you can specify its encryption and storage by Azure Automation as a secure asset. After you create the variable, you can't change its encryption status without re-creating the variable. If you have Automation account variables storing sensitive data that are not already encrypted, then you need to delete them and recreate them as encrypted variables. An Azure Security Center recommendation is to encrypt all Azure Automation variables as described in [Automation account variables should be encrypted](../../security-center/recommendations-reference.md#recs-computeapp). If you have unencrypted variables that you want excluded from this security recommendation, see [Exempt a resource from recommendations and secure score](../../security-center/exempt-resource.md) to create an exemption rule.
+Azure Automation stores each encrypted variable securely. When you create a variable, you can specify its encryption and storage by Azure Automation as a secure asset. After you create the variable, you can't change its encryption status without re-creating the variable. If you have Automation account variables storing sensitive data that are not already encrypted, then you need to delete them and recreate them as encrypted variables. An Azure Security Center recommendation is to encrypt all Azure Automation variables as described in [Automation account variables should be encrypted](../../security-center/recommendations-reference.md#recs-compute). If you have unencrypted variables that you want excluded from this security recommendation, see [Exempt a resource from recommendations and secure score](../../security-center/exempt-resource.md) to create an exemption rule.
>[!NOTE] >Secure assets in Azure Automation include credentials, certificates, connections, and encrypted variables. These assets are encrypted and stored in Azure Automation using a unique key that is generated for each Automation account. Azure Automation stores the key in the system-managed Key Vault. Before storing a secure asset, Automation loads the key from Key Vault and then uses it to encrypt the asset.
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-baseline.md
@@ -76,7 +76,7 @@ Azure App Configuration is not intended to run web applications, it provides the
- [Manage Azure DDoS Protection Standard using the Azure portal](../ddos-protection/manage-ddos-protection.md) -- [Azure Security Center recommendations](../security-center/recommendations-reference.md#recs-network)
+- [Azure Security Center recommendations](../security-center/recommendations-reference.md#recs-networking)
**Azure Security Center monitoring**: Not applicable
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/security-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-overview.md
@@ -11,7 +11,7 @@ This article describes the security configuration and considerations you should
## Identity and access control
-Each Azure Arc enabled server has a managed identity as part of a resource group inside an Azure subscription, this identity represents the server running on-premises or other cloud environment. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md#access-control-iam) page in the Azure portal, you can verify who has access to your Azure Arc enabled server.
+Each Azure Arc enabled server has a managed identity as part of a resource group inside an Azure subscription, this identity represents the server running on-premises or other cloud environment. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md) page in the Azure portal, you can verify who has access to your Azure Arc enabled server.
:::image type="content" source="./media/security-overview/access-control-page.png" alt-text="Azure Arc enabled server access control" border="false" lightbox="./media/security-overview/access-control-page.png":::
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-go-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-go-get-started.md new file mode 100644
@@ -0,0 +1,203 @@
+---
+title: Use Azure Cache for Redis with Go
+description: In this quickstart, you learn how to create a Go app that uses Azure Cache for Redis.
+author: abhirockzz
+ms.author: abhishgu
+ms.service: cache
+ms.devlang: go
+ms.topic: quickstart
+ms.date: 01/08/2021
+
+#Customer intent: As a Go developer new to Azure Cache for Redis, I want to create a new Go app that uses Azure Cache for Redis.
+---
+
+# Quickstart: Use Azure Cache for Redis with Go
+
+In this article, you'll learn how to build a REST API in Go that stores and retrieves user information backed by a [HASH](https://redis.io/topics/data-types-intro#redis-hashes) data structure in [Azure Cache for Redis](./cache-overview.md).
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- [Go](https://golang.org/doc/install) (preferably version 1.13 or above)
+- [Git](https://git-scm.com/downloads)
+- An HTTP client such as [curl](https://curl.se/)
+
+## Create an Azure Cache for Redis instance
+[!INCLUDE [redis-cache-create](../../includes/redis-cache-create.md)]
+
+[!INCLUDE [redis-cache-create](../../includes/redis-cache-access-keys.md)]
+
+## Review the code (Optional)
+
+If you're interested in learning how the code works, you can review the following snippets. Otherwise, feel free to skip ahead to [Run the application](#run-the-application).
+
+The open source [go-redis](https://github.com/go-redis/redis) library is used to interact with Azure Cache for Redis.
+
+The `main` function starts off by reading the host name and password (Access Key) for the Azure Cache for Redis instance.
+
+```go
+func main() {
+ redisHost := os.Getenv("REDIS_HOST")
+ redisPassword := os.Getenv("REDIS_PASSWORD")
+...
+```
+
+Then, we establish a connection with Azure Cache for Redis. Note that [tls.Config](https://golang.org/pkg/crypto/tls/#Config) is being used; Azure Cache for Redis only accepts secure connections with [TLS 1.2 as the minimum required version](cache-remove-tls-10-11.md).
+
+```go
+...
+op := &redis.Options{Addr: redisHost, Password: redisPassword, TLSConfig: &tls.Config{MinVersion: tls.VersionTLS12}}
+client := redis.NewClient(op)
+
+ctx := context.Background()
+err := client.Ping(ctx).Err()
+if err != nil {
+ log.Fatalf("failed to connect with redis instance at %s - %v", redisHost, err)
+}
+...
+```
+
+If the connection is successful, [HTTP handlers](https://golang.org/pkg/net/http/#HandleFunc) are configured to handle `POST` and `GET` operations and the HTTP server is started.
+
+> The [gorilla/mux library](https://github.com/gorilla/mux) is used for routing (although it's not strictly necessary; we could have used the standard library for this sample application).
+
+```go
+uh := userHandler{client: client}
+
+router := mux.NewRouter()
+router.HandleFunc("/users/", uh.createUser).Methods(http.MethodPost)
+router.HandleFunc("/users/{userid}", uh.getUser).Methods(http.MethodGet)
+
+log.Fatal(http.ListenAndServe(":8080", router))
+```
+
+The `userHandler` struct encapsulates a [redis.Client](https://pkg.go.dev/github.com/go-redis/redis/v8#Client), which is used by the `createUser` and `getUser` methods; code for these methods is omitted for brevity.
+
+- `createUser`: accepts a JSON payload (containing user information) and saves it as a `HASH` in Azure Cache for Redis.
+- `getUser`: fetches user info from `HASH` or returns an HTTP `404` response if not found.
+
+```go
+type userHandler struct {
+ client *redis.Client
+}
+...
+
+func (uh userHandler) createUser(rw http.ResponseWriter, r *http.Request) {
+ // details omitted
+}
+...
+
+func (uh userHandler) getUser(rw http.ResponseWriter, r *http.Request) {
+ // details omitted
+}
+```
+
+## Clone the sample application
+
+Start by cloning the application from GitHub.
+
+1. Open a command prompt and create a new folder named `git-samples`.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+1. Open a git terminal window, such as git bash. Use the `cd` command to change into the new folder where you will be cloning the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+1. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-redis-cache-go-quickstart.git
+ ```
+
+## Run the application
+
+The application accepts connectivity and credentials in the form of environment variables.
+
+1. Fetch the **Host name** and **Access Keys** for your Azure Cache for Redis instance in the [Azure portal](https://portal.azure.com/)
+
+1. Set them to the respective environment variables:
+
+ ```shell
+ set REDIS_HOST=<Host name>:<port> (e.g. <name of cache>.redis.cache.windows.net:6380)
+ set REDIS_PASSWORD=<Primary Access Key>
+ ```
+
+1. In the terminal window, change to the correct folder. For example:
+
+ ```shell
+ cd "C:\git-samples\azure-redis-cache-go-quickstart"
+ ```
+
+1. In the terminal, run the following command to start the application.
+
+ ```shell
+ go run main.go
+ ```
+
+The HTTP server will start on port `8080`.
+
+## Test the application
+
+1. Create a few user entries. The below example uses curl:
+
+ ```bash
+ curl -i -X POST -d '{"id":"1","name":"foo1", "email":"foo1@baz.com"}' localhost:8080/users/
+ curl -i -X POST -d '{"id":"2","name":"foo2", "email":"foo2@baz.com"}' localhost:8080/users/
+ curl -i -X POST -d '{"id":"3","name":"foo3", "email":"foo3@baz.com"}' localhost:8080/users/
+ ```
+
+1. Fetch an existing user with its `id`:
+
+ ```bash
+ curl -i localhost:8080/users/1
+ ```
+
+   You should get a JSON response like the following:
+
+ ```json
+ {
+        "email": "foo1@baz.com",
+ "id": "1",
+ "name": "foo1"
+ }
+ ```
+
+1. If you try to fetch a user that does not exist, you will get an HTTP `404`. For example:
+
+ ```bash
+ curl -i localhost:8080/users/100
+
+ #response
+
+ HTTP/1.1 404 Not Found
+ Date: Fri, 08 Jan 2021 13:43:39 GMT
+ Content-Length: 0
+ ```
+
+## Clean up resources
+
+If you're finished with the Azure resource group and resources you created in this quickstart, you can delete them to avoid charges.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible, and the resource group and all the resources in it are permanently deleted. If you created your Azure Cache for Redis instance in an existing resource group that you want to keep, you can delete just the cache by selecting **Delete** from the cache **Overview** page.
+
+To delete the resource group and its Azure Cache for Redis instance:
+
+1. From the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
+1. In the **Filter by name** text box, enter the name of the resource group that contains your cache instance, and then select it from the search results.
+1. On your resource group page, select **Delete resource group**.
+1. Type the resource group name, and then select **Delete**.
+
+ ![Delete your resource group for Azure Cache for Redis](./media/cache-python-get-started/delete-your-resource-group-for-azure-cache-for-redis.png)
+
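If you prefer the CLI over the portal for cleanup, a hedged equivalent (the resource group name is a placeholder):

```bash
# Deletes the resource group and everything in it; --yes skips the prompt, --no-wait returns immediately.
az group delete --name <resource-group-name> --yes --no-wait
```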
+## Next steps
+
+In this quickstart, you learned how to get started using Go with Azure Cache for Redis. You configured and ran a simple REST API based application to create and get user information backed by a Redis `HASH` data structure.
+
+> [!div class="nextstepaction"]
+> [Create a simple ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-agent.md
@@ -32,7 +32,7 @@ If you have [instrumented your Java web app with Application Insights SDK][java]
To use the Java agent, you install it on your server. Your web apps must be instrumented with the [Application Insights Java SDK][java]. ## Install the Application Insights agent for Java
-1. On the machine running your Java server, [download the agent](https://github.com/Microsoft/ApplicationInsights-Java/releases/latest). Please ensure to download the same version of Java Agent as Application Insights Java SDK core and web packages.
+1. On the machine running your Java server, [download the 2.x agent](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/2.6.2). Please make sure the version of the 2.x Java Agent that you use matches the version of the 2.x Application Insights Java SDK that you use.
2. Edit the application server startup script, and add the following JVM argument: `-javaagent:<full path to the agent JAR file>`
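For example, on Tomcat this is typically done in `bin/setenv.sh`. A hedged sketch; the agent path and version are placeholders:

```bash
# bin/setenv.sh: prepend the 2.x agent to the JVM options Tomcat uses at startup.
export CATALINA_OPTS="-javaagent:/opt/appinsights/applicationinsights-agent-2.6.2.jar $CATALINA_OPTS"
```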
@@ -85,10 +85,7 @@ For Azure App Services, do the following:
* Under App Settings, add a new key value pair: Key: `JAVA_OPTS`
-Value: `-javaagent:D:/home/site/wwwroot/applicationinsights-agent-2.5.0.jar`
-
-For the latest version of the Java agent, check the releases [here](https://github.com/Microsoft/ApplicationInsights-Java/releases
-).
+Value: `-javaagent:D:/home/site/wwwroot/applicationinsights-agent-2.6.2.jar`
The agent must be packaged as a resource in your project such that it ends up in the D:/home/site/wwwroot/ directory. You can confirm that your agent is in the correct App Service directory by going to **Development Tools** > **Advanced Tools** > **Debug Console** and examining the contents of the site directory.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-filter-telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-filter-telemetry.md
@@ -78,7 +78,7 @@ In ApplicationInsights.xml, add a `TelemetryProcessors` section like this exampl
```
-[Inspect the full set of built-in processors](https://github.com/Microsoft/ApplicationInsights-Java/tree/master/core/src/main/java/com/microsoft/applicationinsights/internal).
+[Inspect the full set of built-in processors](https://github.com/microsoft/ApplicationInsights-Java/tree/master/core/src/main/java/com/microsoft/applicationinsights/internal).
## Built-in filters
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-get-started.md
@@ -51,7 +51,7 @@ Then, refresh the project dependencies to get the binaries downloaded.
<artifactId>applicationinsights-web-auto</artifactId> <!-- or applicationinsights-web for manual web filter registration --> <!-- or applicationinsights-core for bare API -->
- <version>2.5.0</version>
+ <version>2.6.2</version>
</dependency> </dependencies> ```
@@ -64,16 +64,12 @@ Then refresh the project dependencies to get the binaries downloaded.
```gradle dependencies {
- compile group: 'com.microsoft.azure', name: 'applicationinsights-web-auto', version: '2.5.0'
+ compile group: 'com.microsoft.azure', name: 'applicationinsights-web-auto', version: '2.6.2'
// or applicationinsights-web for manual web filter registration // or applicationinsights-core for bare API } ```
-# [Other types](#tab/other)
-
-Download the [latest version](https://github.com/Microsoft/ApplicationInsights-Java/releases/latest) and copy the necessary files into your project, replacing any previous versions.
- --- ### Questions
@@ -85,10 +81,7 @@ Download the [latest version](https://github.com/Microsoft/ApplicationInsights-J
* `applicationinsights-core` gives you just the bare API, for example, if your application isn't servlet-based. * *How should I update the SDK to the latest version?*
- * If you're using Gradle or Maven...
- * Update your build file to specify the latest version.
- * If you're manually managing dependencies...
- * Download the latest [Application Insights SDK for Java](https://github.com/Microsoft/ApplicationInsights-Java/releases/latest) and replace the old ones. Changes are described in the [SDK release notes](https://github.com/Microsoft/ApplicationInsights-Java#release-notes).
+ * As of November 2020, for monitoring Java applications we recommend auto-instrumentation using the Azure Monitor Application Insights Java 3.0 agent. For more information on how to get started, see [Application Insights Java 3.0 agent](./java-in-process-agent.md).
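For reference, attaching the 3.0 agent is a single JVM flag plus a connection string; a hedged sketch with placeholder path, version, and key:

```bash
# The 3.0 agent needs no code changes; configuration can come from this environment variable.
export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=00000000-0000-0000-0000-000000000000"
java -javaagent:/path/to/applicationinsights-agent-3.0.2.jar -jar myapp.jar
```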
## Add an *ApplicationInsights.xml* file Add *ApplicationInsights.xml* to the resources folder in your project, or make sure it's added to your project's deployment class path. Copy the following XML into it.
@@ -167,10 +160,6 @@ Click through any chart to see more detailed aggregated metrics.
![Application Insights failures pane with charts](./media/java-get-started/006-barcharts.png)
-<!--
-[TODO update image with 2.5.0 operation naming provided by agent]
- ### Instance data Click through a specific request type to see individual instances.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-telemetry-processors-examples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md new file mode 100644
@@ -0,0 +1,483 @@
+---
+title: Telemetry processors examples - Azure Monitor Application Insights for Java
+description: Examples illustrating telemetry processors in Azure Monitor Application Insights for Java
+ms.topic: conceptual
+ms.date: 12/29/2020
+author: kryalama
+ms.custom: devx-track-java
+ms.author: kryalama
+---
+
+# Telemetry processors examples - Azure Monitor Application Insights for Java
+
+## Include/Exclude Samples
+
+### 1. Include Spans
+
+The following demonstrates including spans for this attributes processor. All other spans that do not match the properties are not processed by this processor.
+
+The following are conditions to be met for a match:
+* The span name must be equal to "spanA" or "spanB"
+
+The following are spans that match the include properties and the processor actions are applied.
+* Span1 Name: 'spanA' Attributes: {env: dev, test_request: 123, credit_card: 1234}
+* Span2 Name: 'spanB' Attributes: {env: dev, test_request: false}
+* Span3 Name: 'spanA' Attributes: {env: 1, test_request: dev, credit_card: 1234}
+
+The following span does not match the include properties and the processor actions are not applied.
+* Span4 Name: 'spanC' Attributes: {env: dev, test_request: false}
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "include": {
+ "matchType": "strict",
+ "spanNames": [
+ "spanA",
+ "spanB"
+ ]
+ },
+ "actions": [
+ {
+ "key": "credit_card",
+ "action": "delete"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### 2. Exclude Spans
+
+The following demonstrates excluding spans for this attributes processor. All spans that match the properties are not processed by this processor.
+
+The following are conditions to be met for a match:
+* The span name must be equal to "spanA" or "spanB"
+
+The following are spans that match the exclude properties and the processor actions are not applied.
+* Span1 Name: 'spanA' Attributes: {env: dev, test_request: 123, credit_card: 1234}
+* Span2 Name: 'spanB' Attributes: {env: dev, test_request: false}
+* Span3 Name: 'spanA' Attributes: {env: 1, test_request: dev, credit_card: 1234}
+
+The following span does not match the exclude properties, so the processor actions are applied.
+* Span4 Name: 'spanC' Attributes: {env: dev, test_request: false}
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "exclude": {
+ "matchType": "strict",
+ "spanNames": [
+ "spanA",
+ "spanB"
+ ]
+ },
+ "actions": [
+ {
+ "key": "credit_card",
+ "action": "delete"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### 3. ExcludeMulti Spans
+
+The following demonstrates excluding spans for this attributes processor. All spans that match the properties are not processed by this processor.
+
+The following are conditions to be met for a match:
+* An attribute ('env', 'dev') must exist in the span for a match.
+* As long as there is an attribute with key 'test_request' in the span there is a match.
+
+The following are spans that match the exclude properties and the processor actions are not applied.
+* Span1 Name: 'spanB' Attributes: {env: dev, test_request: 123, credit_card: 1234}
+* Span2 Name: 'spanA' Attributes: {env: dev, test_request: false}
+
+The following spans do not match the exclude properties, so the processor actions are applied.
+* Span3 Name: 'spanB' Attributes: {env: 1, test_request: dev, credit_card: 1234}
+* Span4 Name: 'spanC' Attributes: {env: dev, dev_request: false}
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "exclude": {
+ "matchType": "strict",
+ "spanNames": [
+ "spanA",
+ "spanB"
+ ],
+ "attributes": [
+ {
+ "key": "env",
+ "value": "dev"
+ },
+ {
+ "key": "test_request"
+ }
+ ]
+ },
+ "actions": [
+ {
+ "key": "credit_card",
+ "action": "delete"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### 4. Selective processing
+
+The following demonstrates specifying the set of span properties that
+indicate which spans this processor should be applied to. The `include`
+properties say which spans should be included, and the `exclude` properties
+further filter out spans that shouldn't be processed.
+
+With the below configuration, the following spans match the properties and processor actions are applied:
+
+* Span1 Name: 'spanB' Attributes: {env: production, test_request: 123, credit_card: 1234, redact_trace: "false"}
+* Span2 Name: 'spanA' Attributes: {env: staging, test_request: false, redact_trace: true}
+
+The following spans do not match the include properties and processor actions are not applied:
+* Span3 Name: 'spanB' Attributes: {env: production, test_request: true, credit_card: 1234, redact_trace: false}
+* Span4 Name: 'spanC' Attributes: {env: dev, test_request: false}
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "include": {
+ "matchType": "strict",
+ "spanNames": [
+ "spanA",
+ "spanB"
+ ]
+ },
+ "exclude": {
+ "matchType": "strict",
+ "attributes": [
+ {
+ "key": "redact_trace",
+ "value": "false"
+ }
+ ]
+ },
+ "actions": [
+ {
+ "key": "credit_card",
+ "action": "delete"
+ },
+ {
+ "key": "duplicate_key",
+ "action": "delete"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+## Attribute Processor Samples
+
+### Insert
+
+The following inserts a new attribute {"attribute1": "attributeValue1"} to spans where the key "attribute1" does not exist.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "attribute1",
+ "value": "attributeValue1",
+ "action": "insert"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### Insert from another key
+
+The following uses the value from the attribute "anotherKey" to insert a new attribute {"newKey": "value from attribute 'anotherKey'"} to spans where the key "newKey" does not exist. If the attribute "anotherKey" doesn't exist, no new attribute is inserted to spans.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "newKey",
+ "fromAttribute": "anotherKey",
+ "action": "insert"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### Update
+
+The following updates the attribute to { "db.secret": "redacted"} and updates the attribute 'boo' using the value from attribute 'foo'. Spans without the attribute 'boo' will not change.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "db.secret",
+ "value": "redacted",
+ "action": "update"
+ },
+ {
+ "key": "boo",
+ "fromAttribute": "foo",
+ "action": "update"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### Delete
+
+The following demonstrates deleting attribute with key 'credit_card'.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "credit_card",
+ "action": "delete"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### Hash
+
+The following demonstrates hashing existing attribute values.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "user.email",
+ "action": "hash"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### Extract
+
+The following example demonstrates using regex to create new attributes based on the value of another attribute.
+For example, given `http.url = http://example.com/path?queryParam1=value1,queryParam2=value2`, the following attributes will be inserted:
+* httpProtocol: http
+* httpDomain: example.com
+* httpPath: path
+* httpQueryParams: queryParam1=value1,queryParam2=value2
+* http.url value does NOT change.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "http.url",
+ "pattern": "^(?<httpProtocol>.*):\\/\\/(?<httpDomain>.*)\\/(?<httpPath>.*)(\\?|\\&)(?<httpQueryParams>.*)",
+ "action": "extract"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+The following example demonstrates how to process spans whose span name matches regexp patterns.
+This processor removes the "token" attribute and obfuscates the "password" attribute in spans where the span name matches "auth.\*"
+and where the span name does not match "login.\*".
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "include": {
+ "matchType": "regexp",
+ "spanNames": [
+ "auth.*"
+ ]
+ },
+ "exclude": {
+ "matchType": "regexp",
+ "spanNames": [
+ "login.*"
+ ]
+ },
+ "actions": [
+ {
+ "key": "password",
+ "value": "obfuscated",
+ "action": "update"
+ },
+ {
+ "key": "token",
+ "action": "delete"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+## Span Processor Samples
+
+### Name a span
+
+The following example specifies that the values of the attributes "db.svc", "operation", and "id" form the new name of the span, in that order, separated by the value "::".
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "span",
+ "name": {
+ "fromAttributes": [
+ "db.svc",
+ "operation",
+ "id"
+ ],
+ "separator": "::"
+ }
+ }
+ ]
+ }
+}
+```
+
+### Extract attributes from span name
+
+Let's assume the input span name is /api/v1/document/12345678/update. Applying the following results in the output span name /api/v1/document/{documentId}/update, and adds a new attribute "documentId"="12345678" to the span.
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "span",
+ "name": {
+ "toAttributes": {
+ "rules": [
+ "^/api/v1/document/(?<documentId>.*)/update$"
+ ]
+ }
+ }
+ }
+ ]
+ }
+}
+```
+
+### Extract attributes from span name with include and exclude
+
+The following example demonstrates renaming the span to "{operation_website}" and adding the attribute {Key: operation_website, Value: oldSpanName } when the span has the following properties:
+- The span name contains '/' anywhere in the string.
+- The span name is not 'donot/change'.
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "span",
+ "include": {
+ "matchType": "regexp",
+ "spanNames": [
+ "^(.*?)/(.*?)$"
+ ]
+ },
+ "exclude": {
+ "matchType": "strict",
+ "spanNames": [
+ "donot/change"
+ ]
+ },
+ "name": {
+ "toAttributes": {
+ "rules": [
+ "(?<operation_website>.*?)$"
+ ]
+ }
+ }
+ }
+ ]
+ }
+}
+```
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-telemetry-processors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-telemetry-processors.md
@@ -3,9 +3,9 @@ title: Telemetry processors (preview) - Azure Monitor Application Insights for J
description: How to configure telemetry processors in Azure Monitor Application Insights for Java ms.topic: conceptual ms.date: 10/29/2020
-author: MS-jgol
+author: kryalama
ms.custom: devx-track-java
-ms.author: jgol
+ms.author: kryalama
--- # Telemetry processors (preview) - Azure Monitor Application Insights for Java
@@ -15,12 +15,55 @@ ms.author: jgol
Java 3.0 Agent for Application Insights now has the capability to process telemetry data before the data is exported.
-### Some use cases:
+The following are some use cases of telemetry processors:
* Mask sensitive data * Conditionally add custom dimensions * Update the telemetry name used for aggregation and display
+ * Drop or filter span attributes to control ingestion cost
-### Supported processors:
+## Terminology
+
+Before we jump into telemetry processors, it is important to understand what traces and spans are.
+
+### Traces
+
+Traces track the progression of a single request, called a `trace`, as it is handled by the services that make up an application. The request may be initiated by a user or an application. Each unit of work in a `trace` is called a `span`; a `trace` is a tree of spans. A `trace` consists of a single root span and any number of child spans.
+
+### Span
+
+Spans are objects that represent the work being done by individual services or components involved in a request as it flows through a system. A `span` contains a `span context`, which is a set of globally unique identifiers that represent the unique request that each span is a part of.
+
+Spans encapsulate:
+
+* The span name
+* An immutable `SpanContext` that uniquely identifies the Span
+* A parent span in the form of a `Span`, `SpanContext`, or null
+* A `SpanKind`
+* A start timestamp
+* An end timestamp
+* [`Attributes`](#attributes)
+* A list of timestamped Events
+* A `Status`.
+
+Generally, the lifecycle of a span resembles the following:
+
+* A request is received by a service. The span context is extracted from the request headers, if it exists.
+* A new span is created as a child of the extracted span context; if none exists, a new root span is created.
+* The service handles the request. Additional attributes and events are added to the span that are useful for understanding the context of the request, such as the hostname of the machine handling the request, or customer identifiers.
+* New spans may be created to represent work being done by sub-components of the service.
+* When the service makes a remote call to another service, the current span context is serialized and forwarded to the next service by injecting the span context into the headers or message envelope.
+* The work being done by the service completes, successfully or not. The span status is appropriately set, and the span is marked finished.
+
+### Attributes
+
+`Attributes` are a list of zero or more key-value pairs which are encapsulated in a `span`. An attribute MUST have the following properties:
+
+* The attribute key, which MUST be a non-null and non-empty string.
+* The attribute value, which is either:
+  * A primitive type: string, boolean, double precision floating point (IEEE 754-1985) or signed 64 bit integer.
+  * An array of primitive type values. The array MUST be homogeneous, i.e. it MUST NOT contain values of different types. For protocols that do not natively support array values, such values SHOULD be represented as JSON strings.
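+
+As an illustrative sketch only (`http.method` and `http.status_code` appear in the attribute list later in this article; `test_request` and `customer.ids` are hypothetical custom attributes), the attributes on a single span could look like the following key-value pairs:
+
+```json
+{
+  "http.method": "GET",
+  "http.status_code": 200,
+  "test_request": false,
+  "customer.ids": [1001, 1002]
+}
+```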
+
+## Supported processors
* Attribute Processor * Span Processor
@@ -50,9 +93,9 @@ Create a configuration file named `applicationinsights.json`, and place it in th
} ```
-## Include/exclude spans
+## Include/Exclude spans
-The attribute processor and the span processor expose the option to provide a set of properties of a span to match against, to determine if the span should be included or excluded from the processor. To configure this option, under `include` and/or `exclude` at least one `matchType` and one of `spanNames` or `attributes` is required. The include/exclude configuration is supported to have more than one specified condition. All of the specified conditions must evaluate to true for a match to occur.
+The attribute processor and the span processor expose the option to provide a set of span properties to match against, to determine whether the span should be included in or excluded from the telemetry processor. To configure this option, at least one `matchType` and one of `spanNames` or `attributes` is required under `include` and/or `exclude`. The include/exclude configuration can specify more than one condition; all of the specified conditions must evaluate to true for a match to occur.
**Required field**: * `matchType` controls how items in `spanNames` and `attributes` arrays are interpreted. Possible values are `regexp` or `strict`.
@@ -64,187 +107,164 @@ The attribute processor and the span processor expose the option to provide a se
> [!NOTE] > If both `include` and `exclude` are specified, the `include` properties are checked before the `exclude` properties.
-#### Sample usage
-
-The following demonstrates specifying the set of span properties to
-indicate which spans this processor should be applied to. The `include` of
-properties say which ones should be included and the `exclude` properties
-further filter out spans that shouldn't be processed.
+#### Sample Usage
```json
-{
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
- "preview": {
- "processors": [
+
+"processors": [
+ {
+ "type": "attribute",
+ "include": {
+ "matchType": "strict",
+ "spanNames": [
+ "spanA",
+ "spanB"
+ ]
+ },
+ "exclude": {
+ "matchType": "strict",
+ "attributes": [
+ {
+ "key": "redact_trace",
+ "value": "false"
+ }
+ ]
+ },
+ "actions": [
{
- "type": "attribute",
- "include": {
- "matchType": "strict",
- "spanNames": [
- "svcA",
- "svcB"
- ]
- },
- "exclude": {
- "matchType": "strict",
- "attributes": [
- {
- "key": "redact_trace",
- "value": "false"
- }
- ]
- },
- "actions": [
- {
- "key": "credit_card",
- "action": "delete"
- },
- {
- "key": "duplicate_key",
- "action": "delete"
- }
- ]
+ "key": "credit_card",
+ "action": "delete"
+ },
+ {
+ "key": "duplicate_key",
+ "action": "delete"
} ] }
-}
+]
```
+For more information, see the [telemetry processor examples](./java-standalone-telemetry-processors-examples.md) documentation.
-With the above configuration, the following spans match the properties and processor actions are applied:
-
-* Span1 Name: 'svcB' Attributes: {env: production, test_request: 123, credit_card: 1234, redact_trace: "false"}
-
-* Span2 Name: 'svcA' Attributes: {env: staging, test_request: false, redact_trace: true}
-
-The following spans do not match the include properties and processor actions are not applied:
-
-* Span3 Name: 'svcB' Attributes: {env: production, test_request: true, credit_card: 1234, redact_trace: false}
+## Attribute processor
-* Span4 Name: 'svcC' Attributes: {env: dev, test_request: false}
+The attributes processor modifies attributes of a span. It optionally supports the ability to include/exclude spans. It takes a list of actions which are performed in the order specified in the configuration file. The supported actions are:
-## Attribute processor
+### `insert`
-The attributes processor modifies attributes of a span. It optionally supports the ability to include/exclude spans.
-It takes a list of actions which are performed in order specified in the configuration file. The supported actions are:
+Inserts a new attribute in spans where the key does not already exist.
-* `insert` : Inserts a new attribute in spans where the key does not already exist
-* `update` : Updates an attribute in spans where the key does exist
-* `delete` : Deletes an attribute from a span
-* `hash` : Hashes (SHA1) an existing attribute value
+```json
+"processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "attribute1",
+ "value": "value1",
+ "action": "insert"
+      }
+ ]
+ }
+]
+```
+For the `insert` action, the following are required:
+ * `key`
+ * one of `value` or `fromAttribute`
+ * `action`: `insert`
-For the actions `insert` and `update`
-* `key` is required
-* one of `value` or `fromAttribute` is required
-* `action` is required.
+### `update`
-For the `delete` action,
-* `key` is required
-* `action`: `delete` is required.
+Updates an attribute in spans where the key already exists.
-For the `hash` action,
-* `key` is required
-* `action` : `hash` is required.
+```json
+"processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "attribute1",
+ "value": "newValue",
+ "action": "update"
+      }
+ ]
+ }
+]
+```
+For the `update` action, the following are required:
+ * `key`
+ * one of `value` or `fromAttribute`
+ * `action`: `update`
-The list of actions can be composed to create rich scenarios, such as back filling attribute, copying values to a new key, redacting sensitive information.
-#### Sample usage
+### `delete`
-The following example demonstrates inserting keys/values into spans:
+Deletes an attribute from a span
```json
-{
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
- "preview": {
- "processors": [
+"processors": [
+ {
+ "type": "attribute",
+ "actions": [
{
- "type": "attribute",
- "actions": [
- {
- "key": "attribute1",
- "value": "value1",
- "action": "insert"
- },
- {
- "key": "key1",
- "fromAttribute": "anotherkey",
- "action": "insert"
- }
- ]
- }
+ "key": "attribute1",
+ "action": "delete"
+      }
] }
-}
+]
```
+For the `delete` action, the following are required:
+ * `key`
+ * `action`: `delete`
+
+### `hash`
-The following example demonstrates configuring the processor to only update existing keys in an attribute:
+Hashes (SHA1) an existing attribute value
```json
-{
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
- "preview": {
- "processors": [
+"processors": [
+ {
+ "type": "attribute",
+ "actions": [
{
- "type": "attribute",
- "actions": [
- {
- "key": "piiattribute",
- "value": "redacted",
- "action": "update"
- },
- {
- "key": "credit_card",
- "action": "delete"
- },
- {
- "key": "user.email",
- "action": "hash"
- }
- ]
- }
+ "key": "attribute1",
+ "action": "hash"
+      }
] }
-}
+]
```
+For the `hash` action, the following are required:
+* `key`
+* `action`: `hash`
-The following example demonstrates how to process spans that have a span name that match regexp patterns.
-This processor will remove "token" attribute and will obfuscate "password" attribute in spans where span name matches "auth.\*"
-and where span name does not match "login.\*".
+### `extract`
+
+> [!NOTE]
+> This feature is available only in version 3.0.1 and later
+
+Extracts values using a regular expression rule from the input key to target keys specified in the rule. If a target key already exists, it is overridden. This action behaves similarly to the [Span Processor](#extract-attributes-from-span-name) `toAttributes` setting, with the existing attribute as the source.
```json
-{
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
- "preview": {
- "processors": [
+"processors": [
+ {
+ "type": "attribute",
+ "actions": [
{
- "type": "attribute",
- "include": {
- "matchType": "regexp",
- "spanNames": [
- "auth.*"
- ]
- },
- "exclude": {
- "matchType": "regexp",
- "spanNames": [
- "login.*"
- ]
- },
- "actions": [
- {
- "key": "password",
- "value": "obfuscated",
- "action": "update"
- },
- {
- "key": "token",
- "action": "delete"
- }
- ]
- }
+ "key": "attribute1",
+ "pattern": "<regular pattern with named matchers>",
+ "action": "extract"
+      }
] }
-}
+]
```
+For the `extract` action, the following are required:
+* `key`
+* `pattern`
+* `action`: `extract`
+
+For more information, see the [telemetry processor examples](./java-standalone-telemetry-processors-examples.md) documentation.
## Span processors
@@ -262,28 +282,19 @@ The following setting can be optionally configured:
> [!NOTE] > If renaming is dependent on attributes being modified by the attributes processor, ensure the span processor is specified after the attributes processor in the pipeline specification.
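As a minimal sketch of that ordering (the attribute key `tenant` and value `contoso` are hypothetical, used only to illustrate the sequencing), the attribute processor appears first in the `processors` array and the span processor that renames from `tenant` comes after it:

```json
"processors": [
  {
    "type": "attribute",
    "actions": [
      {
        "key": "tenant",
        "value": "contoso",
        "action": "insert"
      }
    ]
  },
  {
    "type": "span",
    "name": {
      "fromAttributes": [
        "tenant"
      ],
      "separator": "::"
    }
  }
]
```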
-#### Sample usage
-
-The following example specifies the values of attribute "db.svc", "operation", and "id" will form the new name of the span, in that order, separated by the value "::".
```json
-{
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
- "preview": {
- "processors": [
- {
- "type": "span",
- "name": {
- "fromAttributes": [
- "db.svc",
- "operation",
- "id"
- ],
- "separator": "::"
- }
- }
- ]
+"processors": [
+ {
+ "type": "span",
+ "name": {
+ "fromAttributes": [
+ "attributeKey1",
+        "attributeKey2"
+ ],
+ "separator": "::"
+ }
}
-}
+]
``` ### Extract attributes from span name
@@ -294,60 +305,45 @@ The following settings are required:
`rules` : A list of rules to extract attribute values from the span name. The values in the span name are replaced by extracted attribute names. Each rule in the list is a regex pattern string. The span name is checked against the regex. If the regex matches, all named subexpressions of the regex are extracted as attributes and are added to the span. Each subexpression name becomes an attribute name, and the subexpression's matched portion becomes the attribute value. The matched portion in the span name is replaced by the extracted attribute name. If the attributes already exist in the span, they are overwritten. The process is repeated for all rules in the order they are specified. Each subsequent rule works on the span name that is the output after processing the previous rule.
-#### Sample usage
-
-Let's assume the input span name is /api/v1/document/12345678/update. Applying the following results in the output span name /api/v1/document/{documentId}/update will add a new attribute "documentId"="12345678" to the span.
```json
-{
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
- "preview": {
- "processors": [
- {
- "type": "span",
- "name": {
- "toAttributes": {
- "rules": [
- "^/api/v1/document/(?<documentId>.*)/update$"
- ]
- }
- }
+
+"processors": [
+ {
+ "type": "span",
+ "name": {
+ "toAttributes": {
+ "rules": [
+ "rule1",
+ "rule2",
+ "rule3"
+ ]
}
- ]
+ }
}
-}
+]
+ ```
-The following demonstrates renaming the span name to "{operation_website}" and adding the attribute {Key: operation_website, Value: oldSpanName } when the span has the following properties:
-- The span name contains '/' anywhere in the string.-- The span name is not 'donot/change'.
-```json
-{
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
- "preview": {
- "processors": [
- {
- "type": "span",
- "include": {
- "matchType": "regexp",
- "spanNames": [
- "^(.*?)/(.*?)$"
- ]
- },
- "exclude": {
- "matchType": "strict",
- "spanNames": [
- "donot/change"
- ]
- },
- "name": {
- "toAttributes": {
- "rules": [
- "(?<operation_website>.*?)$"
- ]
- }
- }
- }
- ]
- }
-}
-```
\ No newline at end of file
+## List of Attributes
+
+The following are some common span attributes that can be used in the telemetry processors.
+
+### HTTP Spans
+
+| Attribute | Type | Description |
+|---|---|---|
+| `http.method` | string | HTTP request method.|
+| `http.url` | string | Full HTTP request URL in the form `scheme://host[:port]/path?query[#fragment]`. Usually the fragment is not transmitted over HTTP, but if it is known, it should be included nevertheless.|
+| `http.status_code` | number | [HTTP response status code](https://tools.ietf.org/html/rfc7231#section-6).|
+| `http.flavor` | string | Kind of HTTP protocol used |
+| `http.user_agent` | string | Value of the [HTTP User-Agent](https://tools.ietf.org/html/rfc7231#section-5.5.3) header sent by the client. |
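+
+For example, a hedged sketch that uses one of these HTTP attributes (dropping `http.user_agent` from every span; whether you want to discard it depends on your own ingestion and privacy needs) might look like this:
+
+```json
+"processors": [
+  {
+    "type": "attribute",
+    "actions": [
+      {
+        "key": "http.user_agent",
+        "action": "delete"
+      }
+    ]
+  }
+]
+```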
+
+### JDBC Spans
+
+| Attribute | Type | Description |
+|---|---|---|
+| `db.system` | string | An identifier for the database management system (DBMS) product being used. |
+| `db.connection_string` | string | The connection string used to connect to the database. It is recommended to remove embedded credentials.|
+| `db.user` | string | Username for accessing the database. |
+| `db.name` | string | This attribute is used to report the name of the database being accessed. For commands that switch the database, this should be set to the target database (even if the command fails).|
+| `db.statement` | string | The database statement being executed.|
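+
+Similarly, a hedged sketch that masks potentially sensitive JDBC attributes (hashing `db.user` and deleting `db.statement`; adjust the keys and actions to your own data-handling requirements):
+
+```json
+"processors": [
+  {
+    "type": "attribute",
+    "actions": [
+      {
+        "key": "db.user",
+        "action": "hash"
+      },
+      {
+        "key": "db.statement",
+        "action": "delete"
+      }
+    ]
+  }
+]
+```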
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-troubleshoot.md
@@ -15,6 +15,11 @@ By default, the Java 3.0 agent for Application Insights produces a log file name
This log file is the first place to check for hints to any issues you might be experiencing.
+## JVM fails to start
+
+If the JVM fails to start with "Error opening zip file or JAR manifest missing",
+try re-downloading the agent jar file because it may have been corrupted during file transfer.
+ ## Upgrade from the Application Insights Java 2.x SDK If you're already using the Application Insights Java 2.x SDK in your application, you can keep using it. The Java 3.0 agent will detect it. For more information, see [Upgrade from the Java 2.x SDK](./java-standalone-upgrade-from-2x.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-troubleshoot.md
@@ -18,7 +18,7 @@ Questions or problems with [Azure Application Insights in Java][java]? Here are
## Build errors **In Eclipse or Intellij Idea, when adding the Application Insights SDK via Maven or Gradle, I get build or checksum validation errors.**
-* If the dependency `<version>` element is using a pattern with wildcard characters (e.g. (Maven) `<version>[2.0,)</version>` or (Gradle) `version:'2.0.+'`), try specifying a specific version instead like `2.0.1`. See the [release notes](https://github.com/Microsoft/ApplicationInsights-Java/releases) for the latest version.
+* If the dependency `<version>` element is using a pattern with wildcard characters (e.g. (Maven) `<version>[2.0,)</version>` or (Gradle) `version:'2.+'`), try specifying a specific version instead like `2.6.2`.
## No data **I added Application Insights successfully and ran my app, but I've never seen data in the portal.**
@@ -31,7 +31,7 @@ Questions or problems with [Azure Application Insights in Java][java]? Here are
* [Turn on logging](#debug-data-from-the-sdk) by adding an `<SDKLogger />` element under the root node in the ApplicationInsights.xml file (in the resources folder in your project), and check for entries prefaced with AI: INFO/WARN/ERROR for any suspicious logs. * Make sure that the correct ApplicationInsights.xml file has been successfully loaded by the Java SDK, by looking at the console's output messages for a "Configuration file has been successfully found" statement. * If the config file is not found, check the output messages to see where the config file is being searched for, and make sure that the ApplicationInsights.xml is located in one of those search locations. As a rule of thumb, you can place the config file near the Application Insights SDK JARs. For example: in Tomcat, this would mean the WEB-INF/classes folder. During development you can place ApplicationInsights.xml in resources folder of your web project.
-* Please also look at [GitHub issues page](https://github.com/Microsoft/ApplicationInsights-Java/issues) for known issues with the SDK.
+* Please also look at [GitHub issues page](https://github.com/microsoft/ApplicationInsights-Java/issues) for known issues with the SDK.
* Please ensure to use same version of Application Insights core, web, agent and logging appenders to avoid any version conflict issues. #### I used to see data, but it has stopped
@@ -189,7 +189,7 @@ Application Insights uses `org.apache.http`. This is relocated within Applicatio
## Get help * [Stack Overflow](https://stackoverflow.com/questions/tagged/ms-application-insights)
-* [File an issue on GitHub](https://github.com/Microsoft/ApplicationInsights-Java/issues)
+* [File an issue on GitHub](https://github.com/microsoft/ApplicationInsights-Java/issues)
<!--Link references-->
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/agents-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/agents-overview.md
@@ -7,7 +7,7 @@ ms.subservice:
ms.topic: conceptual author: bwren ms.author: bwren
-ms.date: 12/17/2020
+ms.date: 01/12/2021
--- # Overview of Azure Monitor agents
@@ -15,9 +15,9 @@ ms.date: 12/17/2020
Virtual machines and other compute resources require an agent to collect monitoring data required to measure the performance and availability of their guest operating system and workloads. This article describes the agents used by Azure Monitor and helps you determine which you need to meet the requirements for your particular environment. > [!NOTE]
-> Azure Monitor currently has multiple agents because of recent consolidation of Azure Monitor and Log Analytics. While there may be overlap in their features, each has unique capabilities. Depending on your requirements, you may need one or more of the agents on your virtual machines.
+> Azure Monitor currently has multiple agents because of recent consolidation of Azure Monitor and Log Analytics. While there may be overlap in their features, each has unique capabilities. Depending on your requirements, you may need one or more of the agents on your machines.
-You may have a specific set of requirements that can't be completely met with a single agent for a particular virtual machine. For example, you may want to use metric alerts which requires Azure diagnostics extension but also want to leverage the functionality of Azure Monitor for VMs which requires the Log Analytics agent and the Dependency agent. In cases such as this, you can use multiple agents, and this is a common scenario for customers who require functionality from each.
+You may have a specific set of requirements that can't be completely met with a single agent for a particular machine. For example, you may want to use metric alerts which requires Azure diagnostics extension but also want to leverage the functionality of Azure Monitor for VMs which requires the Log Analytics agent and the Dependency agent. In cases such as this, you can use multiple agents, and this is a common scenario for customers who require functionality from each.
## Summary of agents
@@ -40,7 +40,7 @@ The following tables provide a quick comparison of the Azure Monitor agents for
| | Azure Monitor agent (preview) | Diagnostics<br>extension (LAD) | Telegraf<br>agent | Log Analytics<br>agent | Dependency<br>agent | |:---|:---|:---|:---|:---|:---|
-| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Arc Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
+| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
| **Agent requirements** | None | None | None | None | Requires Log Analytics agent | | **Data collected** | Syslog<br>Performance | Syslog<br>Performance | Performance | Syslog<br>Performance| Process dependencies<br>Network connection metrics | | **Data sent to** | Azure Monitor Logs<br>Azure Monitor Metrics | Azure Storage<br>Event Hub | Azure Monitor Metrics | Azure Monitor Logs | Azure Monitor Logs<br>(through Log Analytics agent) |
@@ -48,15 +48,16 @@ The following tables provide a quick comparison of the Azure Monitor agents for
## Azure Monitor agent (preview)
-The [Azure Monitor agent](azure-monitor-agent-overview.md) is currently in preview and will replace the Log Analytics agent and Telegraf agent for both Windows and Linux virtual machines. It can to send data to both Azure Monitor Logs and Azure Monitor Metrics and uses [Data Collection Rules (DCR)](data-collection-rule-overview.md) which provide a more scalable method of configuring data collection and destinations for each agent.
+
+The [Azure Monitor agent](azure-monitor-agent-overview.md) is currently in preview and will replace the Log Analytics agent and Telegraf agent for both Windows and Linux machines. It can send data to both Azure Monitor Logs and Azure Monitor Metrics and uses [Data Collection Rules (DCR)](data-collection-rule-overview.md) which provide a more scalable method of configuring data collection and destinations for each agent.
Use the Azure Monitor agent if you need to: -- Collect guest logs and metrics from any virtual machine in Azure, in other clouds, or on-premises. (Azure Arc required for virtual machines outside of Azure.)
+- Collect guest logs and metrics from any machine in Azure, in other clouds, or on-premises. ([Azure Arc enabled servers](../../azure-arc/servers/overview.md) required for machines outside of Azure.)
- Send data to Azure Monitor Logs and Azure Monitor Metrics for analysis with Azure Monitor. - Send data to Azure Storage for archiving. - Send data to third-party tools using [Azure Event Hubs](diagnostics-extension-stream-event-hubs.md).-- Manage the security of your virtual machines using [Azure Security Center](../../security-center/security-center-introduction.md) or [Azure Sentinel](../../sentinel/overview.md). (Not available in preview.)
+- Manage the security of your machines using [Azure Security Center](../../security-center/security-center-introduction.md) or [Azure Sentinel](../../sentinel/overview.md). (Not available in preview.)
Limitations of the Azure Monitor agent include:
@@ -64,21 +65,18 @@ Limitations of the Azure Monitor agent include:
## Log Analytics agent
-The [Log Analytics agent](log-analytics-agent.md) collects monitoring data from the guest operating system and workloads of virtual machines in Azure, other cloud providers, and on-premises. It sends data to a Log Analytics workspace. The Log Analytics agent is the same agent used by System Center Operations Manager, and you can multihome agent computers to communicate with your management group and Azure Monitor simultaneously. This agent is also required by certain insights in Azure Monitor and other services in Azure.
-
+The [Log Analytics agent](log-analytics-agent.md) collects monitoring data from the guest operating system and workloads of virtual machines in Azure, other cloud providers, and on-premises machines. It sends data to a Log Analytics workspace. The Log Analytics agent is the same agent used by System Center Operations Manager, and you can multihome agent computers to communicate with your management group and Azure Monitor simultaneously. This agent is also required by certain insights in Azure Monitor and other services in Azure.
> [!NOTE] > The Log Analytics agent for Windows is often referred to as Microsoft Monitoring Agent (MMA). The Log Analytics agent for Linux is often referred to as OMS agent. -- Use the Log Analytics agent if you need to:
-* Collect logs and performance data from virtual or physical machines inside or outside of Azure.
+* Collect logs and performance data from Azure virtual machines or hybrid machines hosted outside of Azure.
* Send data to a Log Analytics workspace to take advantage of features supported by [Azure Monitor Logs](data-platform-logs.md) such as [log queries](../log-query/log-query-overview.md).
-* Use [Azure Monitor for VMs](../insights/vminsights-overview.md) which allows you to monitor your virtual machines at scale and monitors their processes and dependencies on other resources and external processes..
-* Manage the security of your virtual machines using [Azure Security Center](../../security-center/security-center-introduction.md) or [Azure Sentinel](../../sentinel/overview.md).
-* Use [Azure Automation Update management](../../automation/update-management/overview.md), [Azure Automation State Configuration](../../automation/automation-dsc-overview.md), or [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md) to deliver comprehensive management of your Azure VMs
+* Use [Azure Monitor for VMs](../insights/vminsights-overview.md) which allows you to monitor your machines at scale and monitors their processes and dependencies on other resources and external processes.
+* Manage the security of your machines using [Azure Security Center](../../security-center/security-center-introduction.md) or [Azure Sentinel](../../sentinel/overview.md).
+* Use [Azure Automation Update Management](../../automation/update-management/overview.md), [Azure Automation State Configuration](../../automation/automation-dsc-overview.md), or [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md) to deliver comprehensive management of your Azure and non-Azure machines.
* Use different [solutions](../monitor-reference.md#insights-and-core-solutions) to monitor a particular service or application. Limitations of the Log Analytics agent include:
@@ -109,13 +107,11 @@ The [InfluxData Telegraf agent](collect-custom-metrics-linux-telegraf.md) is use
Use Telegraf agent if you need to:
-* Send data to [Azure Monitor Metrics](data-platform-metrics.md) to analyze it with [metrics explorer](metrics-getting-started.md) and to take advantage of features such as near real-time [metric alerts](./alerts-metric-overview.md) and [autoscale](autoscale-overview.md) (Linux only).
--
+* Send data to [Azure Monitor Metrics](data-platform-metrics.md) to analyze it with [metrics explorer](metrics-getting-started.md) and to take advantage of features such as near real-time [metric alerts](./alerts-metric-overview.md) and [autoscale](autoscale-overview.md) (Linux only).
## Dependency agent
-The Dependency agent collects discovered data about processes running on the virtual machine and external process dependencies.
+The Dependency agent collects discovered data about processes running on the machine and external process dependencies.
Use the Dependency agent if you need to:
@@ -123,15 +119,17 @@ Use the Dependency agent if you need to:
Consider the following when using the Dependency agent: -- The Dependency agent requires the Log Analytics agent to be installed on the same virtual machine.-- On Linux VMs, the Log Analytics agent must be installed before the Azure Diagnostic Extension.
+- The Dependency agent requires the Log Analytics agent to be installed on the same machine.
+- On Linux computers, the Log Analytics agent must be installed before the Azure Diagnostic Extension.
## Virtual machine extensions The Log Analytics extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) install the Log Analytics agent on Azure virtual machines. The Azure Monitor Dependency extension for [Windows](../../virtual-machines/extensions/agent-dependency-windows.md) and [Linux](../../virtual-machines/extensions/agent-dependency-linux.md) install the Dependency agent on Azure virtual machines. These are the same agents described above but allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
+On hybrid machines, use [Azure Arc enabled servers](../../azure-arc/servers/manage-vm-extensions.md) to deploy the Log Analytics and Azure Monitor Dependency VM extensions.
## Supported operating systems+ The following tables list the operating systems that are supported by the Azure Monitor agents. See the documentation for each agent for unique considerations and for the installation process. See Telegraf documentation for its supported operating systems. All operating systems are assumed to be x64. x86 is not supported for any operating system. ### Windows
@@ -148,7 +146,6 @@ The following tables list the operating systems that are supported by the Azure
| Windows 8 Enterprise and Pro<br>(Server scenarios only) | | X | X | | | Windows 7 SP1<br>(Server scenarios only) | | X | X | | - ### Linux | Operations system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension |
@@ -178,8 +175,8 @@ The following tables list the operating systems that are supported by the Azure
| Ubuntu 16.04 LTS | X | X | X | X | | Ubuntu 14.04 LTS | | X | | X | - #### Dependency agent Linux kernel support+ Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent. | Distribution | OS version | Kernel version |
@@ -213,7 +210,6 @@ Since the Dependency agent works at the kernel level, support is also dependent
| | 12 SP2 | 4.4.* | | Debian | 9 | 4.9 | - ## Next steps Get more details on each of the agents at the following:
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/azure-data-explorer-monitor-cross-service-query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/azure-data-explorer-monitor-cross-service-query.md
@@ -14,8 +14,6 @@ Create cross service queries between [Azure Data Explorer](https://docs.microsof
## Azure Monitor and Azure Data Explorer cross-service querying This experience enables you to [create cross service queries between Azure Data Explorer and Azure Monitor](https://docs.microsoft.com/azure/data-explorer/query-monitor-data) and to [create cross service queries between Azure Monitor and Azure Data Explorer](https://docs.microsoft.com/azure/azure-monitor/platform/azure-monitor-data-explorer-proxy).
-:::image type="content" source="media\azure-data-explorer-monitor-proxy\azure-data-explorer-monitor-flow.png" alt-text="Azure data explorer proxy flow.":::
- For example, (querying Azure Data Explorer from Log Analytics): ```kusto CustomEvents | where aField == 1
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/azure-monitor-troubleshooting-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/azure-monitor-troubleshooting-logs.md deleted file mode 100644
@@ -1,139 +0,0 @@
-title: Azure Monitor Troubleshooting logs (Preview)
-description: Use Azure Monitor to quickly, or periodically investigate issues, troubleshoot code or configuration problems or address support cases, which often rely upon searching over high volume of data for specific insights.
-author: osalzberg
-ms.author: bwren
-ms.reviewer: bwren
-ms.subservice: logs
-ms.topic: conceptual
-ms.date: 12/29/2020
--
-# Azure Monitor Troubleshooting logs (Preview)
-Use Azure Monitor to quickly and/or periodically investigate issues, troubleshoot code or configuration problems or address support cases, which often rely upon searching over high volume of data for specific insights.
-
->[!NOTE]
-> * Troubleshooting Logs is in preview mode.
->* Contact the [Log Analytics team](mailto:orens@microsoft.com) with any questions or to apply the feature.
-## Troubleshoot and query your code or configuration issues
-Use Azure Monitor Troubleshooting Logs to fetch your records and investigate problems and issues in a simpler and cheaper way using KQL.
-Troubleshooting Logs decrees your charges by giving you basic capabilities for troubleshooting.
-
-> [!NOTE]
->* The decision for troubleshooting mode is configurable.
->* Troubleshooting Logs can be applied to specific tables, currently on "Container Logs" and "App Traces" tables.
->* There is a 4 days free retention period, can be extended in addition cost.
->* By default, the tables inherits the workspace retention. To avoid additional charges, it is recommended to change these tables retention. [Click here to learn how to change table retention](https://docs.microsoft.com//azure/azure-monitor/platform/manage-cost-storage).
-
-## Turn on Troubleshooting Logs on your tables
-
-To turn on Troubleshooting Logs in your workspace, you need to use the following API call.
-```http
-PUT https://PortalURL/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}
-
-(With body in the form of a GET single table request response)
-
-Response:
-
-{
- "properties": {
- "retentionInDays": 40,
- "isTroubleshootingAllowed": true,
- "isTroubleshootEnabled": true
- },
- "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}",
- "name": "{tableName}"
- }
-```
-## Check if the Troubleshooting logs feature is enabled for a given table
-To check whether the Troubleshooting Log is enabled for a given table, you can use the following API call.
-
-```http
-GET https://PortalURL/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}
-
-Response:
-"properties": {
- "retentionInDays": 30,
- "isTroubleshootingAllowed": true,
- "isTroubleshootEnabled": true,
- "lastTroubleshootDate": "Thu, 19 Nov 2020 07:40:51 GMT"
- },
- "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/tables/{tableName}",
- "name": " {tableName}"
-
-```
-## Check if the Troubleshooting logs feature is enabled for all of the tables in a workspace
-To check which tables have the Troubleshooting Log enabled, you can use the following API call.
-
-```http
-GET "https://PortalURL/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables"
-
-Response:
-{
- "properties": {
- "retentionInDays": 30,
- "isTroubleshootingAllowed": true,
- "isTroubleshootEnabled": true,
- "lastTroubleshootDate": "Thu, 19 Nov 2020 07:40:51 GMT"
- },
- "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/tables/table1",
- "name": "table1"
- },
- {
- "properties": {
- "retentionInDays": 7,
- "isTroubleshootingAllowed": true
- },
- "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/tables/table2",
- "name": "table2"
- },
- {
- "properties": {
- "retentionInDays": 7,
- "isTroubleshootingAllowed": false
- },
- "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/tables/table3",
- "name": "table3"
- }
-```
-## Turn off Troubleshooting Logs on your tables
-
-To turn off Troubleshooting Logs in your workspace, you need to use the following API call.
-```http
-PUT https://PortalURL/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}
-
-(With body in the form of a GET single table request response)
-
-Response:
-
-{
- "properties": {
- "retentionInDays": 40,
- "isTroubleshootingAllowed": true,
- "isTroubleshootEnabled": false
- },
- "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}",
- "name": "{tableName}"
- }
-```
->[!TIP]
->* You can use any REST API tool to run the commands. [Read More](https://docs.microsoft.com/rest/api/azure/)
->* You need to use the Bearer token for authentication. [Read More](https://social.technet.microsoft.com/wiki/contents/articles/51140.azure-rest-management-api-the-quickest-way-to-get-your-bearer-token.aspx)
-
->[!NOTE]
->* The "isTroubleshootingAllowed" flag ΓÇô describes if the table is allowed in the service
->* The "isTroubleshootEnabled" indicates if the feature is enabled for the table - can be switched on or off (true or false)
->* When disabling the "isTroubleshootEnabled" flag for a specific table, re-enabling it is possible only one week after the prior enable date.
->* Currently this is supported only for tables under (some other SKUs will also be supported in the future) - [Read more about pricing](https://docs.microsoft.com/services-hub/health/azure_pricing).
-
-## Query limitations for Troubleshooting
-There are few limitations for a table that is marked as "Troubleshooting Logs":
-* Will get less processing resources and therefore, will not be suitable for large dashboards, complex analytics, or many concurrent API calls.
-* Queries are limited to a time range of two days.
-* purging will not work ΓÇô [Read more about purge](https://docs.microsoft.com/rest/api/loganalytics/workspacepurge/purge).
-* Alerts are not supported through this service.
-## Next steps
-* [Write queries](https://docs.microsoft.com/azure/data-explorer/write-queries)
--
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/customer-managed-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/customer-managed-keys.md
@@ -1,6 +1,6 @@
--- title: Azure Monitor customer-managed key
-description: Information and steps to configure Customer-Managed key to encrypt data in your Log Analytics workspaces using an Azure Key Vault key.
+description: Information and steps to configure Customer-managed key to encrypt data in your Log Analytics workspaces using an Azure Key Vault key.
ms.subservice: logs ms.topic: conceptual author: yossi-y
@@ -21,25 +21,25 @@ We recommend you review [Limitations and constraints](#limitationsandconstraints
Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). Azure Monitor also provides an option for encryption using your own key that is stored in your [Azure Key Vault](../../key-vault/general/overview.md), which gives you the control to revoke the access to your data at any time. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-encryption) operates.
-Customer-Managed key is delivered on [dedicated clusters](../log-query/logs-dedicated-clusters.md) providing higher protection level and control. Data ingested to dedicated clusters is being encrypted twice ΓÇö once at the service level using Microsoft-managed keys or customer-Managed keys, and once at the infrastructure level using two different encryption algorithms and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox-preview) control.
+Customer-managed key is delivered on [dedicated clusters](../log-query/logs-dedicated-clusters.md) providing higher protection level and control. Data ingested to dedicated clusters is being encrypted twice: once at the service level using Microsoft-managed keys or customer-managed keys, and once at the infrastructure level using two different encryption algorithms and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox-preview) control.
-Data ingested in the last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This data remains encrypted with Microsoft keys regardless customer-Managed key configuration, but your control over SSD data adheres to [key revocation](#key-revocation). We are working to have SSD data encrypted with Customer-Managed key in the first half of 2021.
+Data ingested in the last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This data remains encrypted with Microsoft keys regardless of customer-managed key configuration, but your control over SSD data adheres to [key revocation](#key-revocation). We are working to have SSD data encrypted with Customer-managed key in the first half of 2021.
Log Analytics Dedicated Clusters use a Capacity Reservation [pricing model](../log-query/logs-dedicated-clusters.md#cluster-pricing-model) starting at 1000 GB/day. > [!IMPORTANT] > Due to temporary capacity constraints, we require you to pre-register before creating a cluster. Use your contacts at Microsoft, or open a support request, to register your subscription IDs.
-## How Customer-Managed key works in Azure Monitor
+## How Customer-managed key works in Azure Monitor
-Azure Monitor uses managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level. To allow Customer-Managed key protection on multiple workspaces, a new Log Analytics *Cluster* resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The cluster's storage uses the managed identity that\'s associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
+Azure Monitor uses managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level. To allow Customer-managed key protection on multiple workspaces, a new Log Analytics *Cluster* resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The cluster's storage uses the managed identity that\'s associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
After the Customer-managed key configuration, new ingested data to workspaces linked to your dedicated cluster gets encrypted with your key. You can unlink workspaces from the cluster at any time. New data then gets ingested to Log Analytics storage and encrypted with Microsoft key, while you can query your new and old data seamlessly. > [!IMPORTANT]
-> Customer-Managed key capability is regional. Your Azure Key Vault, cluster and linked Log Analytics workspaces must be in the same region, but they can be in different subscriptions.
+> Customer-managed key capability is regional. Your Azure Key Vault, cluster and linked Log Analytics workspaces must be in the same region, but they can be in different subscriptions.
-![Customer-Managed key overview](media/customer-managed-keys/cmk-overview.png)
+![Customer-managed key overview](media/customer-managed-keys/cmk-overview.png)
1. Key Vault 2. Log Analytics *Cluster* resource having managed identity with permissions to Key Vault -- The identity is propagated to the underlay dedicated Log Analytics cluster storage
@@ -50,7 +50,7 @@ After the Customer-managed key configuration, new ingested data to workspaces li
There are 3 types of keys involved in Storage data encryption: -- **KEK** - Key Encryption Key (your Customer-Managed key)
+- **KEK** - Key Encryption Key (your Customer-managed key)
- **AEK** - Account Encryption Key - **DEK** - Data Encryption Key
@@ -71,7 +71,7 @@ The following rules apply:
1. Updating cluster with key identifier details 1. Linking Log Analytics workspaces
-Customer-Managed key configuration isn't supported in Azure portal currently and provisioning can be performed via [PowerShell](/powershell/module/az.operationalinsights/), [CLI](/cli/azure/monitor/log-analytics) or [REST](/rest/api/loganalytics/) requests.
+Customer-managed key configuration isn't supported in Azure portal currently and provisioning can be performed via [PowerShell](/powershell/module/az.operationalinsights/), [CLI](/cli/azure/monitor/log-analytics) or [REST](/rest/api/loganalytics/) requests.
### Asynchronous operations and status check
@@ -121,11 +121,11 @@ These settings can be updated in Key Vault via CLI and PowerShell:
## Create cluster
-> [!INFORMATION]
-> Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). System-assigned managed identity is created with the cluster when you enter `SystemAssigned` identity type and this can be used later to grant access to your Key Vault. If you want to create a cluster that is configured for Customer-managed key at creation, create the cluster with User-assigned managed identity that is granted in your Key Vault -- Update the cluster with `UserAssigned` identity type, the identity's resource ID in `UserAssignedIdentities` and provide provide your key details in `keyVaultProperties`.
+> [!NOTE]
+> Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): System-assigned and User-assigned, and each can be used depending on your scenario. System-assigned managed identity is simpler and is created automatically with the cluster when the identity `type` is set to "*SystemAssigned*" -- this identity can be used later to grant the cluster access to your Key Vault. If you want to define a Customer-managed key at cluster creation time, you should have a key defined and a User-assigned identity granted in your Key Vault beforehand, then create the cluster with these settings: identity `type` set to "*UserAssigned*", `UserAssignedIdentities` with the identity's resource ID, and `keyVaultProperties` with the key details.
> [!IMPORTANT]
-> Currently you can't defined Customer-managed key with User-assigned managed identity if your Key Vault is located in Private-Link (vNet). This limitation isn't applied to System-assigned managed identity.
+> Currently you can't define a Customer-managed key with a User-assigned managed identity if your Key Vault is located in a Private Link (vNet); use a System-assigned managed identity in this case.
Follow the procedure illustrated in [Dedicated Clusters article](../log-query/logs-dedicated-clusters.md#creating-a-cluster).
@@ -252,20 +252,20 @@ The cluster's storage periodically polls your Key Vault to attempt to unwrap the
## Key rotation
-Customer-Managed key rotation requires an explicit update to the cluster with the new key version in Azure Key Vault. [Update cluster with Key identifier details](#update-cluster-with-key-identifier-details). If you don't update the new key version in the cluster, the Log Analytics cluster storage will keep using your previous key for encryption. If you disable or delete your old key before updating the new key in the cluster, you will get into [key revocation](#key-revocation) state.
+Customer-managed key rotation requires an explicit update to the cluster with the new key version in Azure Key Vault. [Update cluster with Key identifier details](#update-cluster-with-key-identifier-details). If you don't update the new key version in the cluster, the Log Analytics cluster storage will keep using your previous key for encryption. If you disable or delete your old key before updating the new key in the cluster, you will get into [key revocation](#key-revocation) state.
All your data remains accessible after the key rotation operation, since data is always encrypted with the Account Encryption Key (AEK), while the AEK is now encrypted with your new Key Encryption Key (KEK) version in Key Vault.
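As a hedged sketch only (the property names below follow the cluster resource's `keyVaultProperties` shape; verify them against the current Log Analytics clusters API reference before use, and the placeholder values are yours to fill in), the update essentially points the cluster at the new key version:

```json
{
  "properties": {
    "keyVaultProperties": {
      "keyVaultUri": "https://<key-vault-name>.vault.azure.net",
      "keyName": "<key-name>",
      "keyVersion": "<new-key-version>"
    }
  }
}
```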
-## Customer-Managed key for queries
+## Customer-managed key for queries
-The query language used in Log Analytics is expressive and can contain sensitive information in comments you add to queries or in the query syntax. Some organizations require that such information is kept protected under Customer-Managed key policy and you need save your queries encrypted with your key. Azure Monitor enables you to store *saved-searches* and *log-alerts* queries encrypted with your key in your own storage account when connected to your workspace.
+The query language used in Log Analytics is expressive and can contain sensitive information in comments you add to queries or in the query syntax. Some organizations require that such information is kept protected under Customer-managed key policy and you need save your queries encrypted with your key. Azure Monitor enables you to store *saved-searches* and *log-alerts* queries encrypted with your key in your own storage account when connected to your workspace.
> [!NOTE]
-> Log Analytics queries can be saved in various stores depending on the scenario used. Queries remain encrypted with Microsoft key (MMK) in the following scenarios regardless Customer-Managed key configuration: Workbooks in Azure Monitor, Azure dashboards, Azure Logic App, Azure Notebooks and Automation Runbooks.
+> Log Analytics queries can be saved in various stores depending on the scenario used. Queries remain encrypted with Microsoft key (MMK) in the following scenarios regardless Customer-managed key configuration: Workbooks in Azure Monitor, Azure dashboards, Azure Logic App, Azure Notebooks and Automation Runbooks.
When you Bring Your Own Storage (BYOS) and link it to your workspace, the service uploads *saved-searches* and *log-alerts* queries to your storage account. That means that you control the storage account and the [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md) either using the same key that you use to encrypt data in Log Analytics cluster, or a different key. You will, however, be responsible for the costs associated with that storage account.
-**Considerations before setting Customer-Managed key for queries**
+**Considerations before setting Customer-managed key for queries**
* You need to have 'write' permissions to both your workspace and Storage Account * Make sure to create your Storage Account in the same region as your Log Analytics workspace * The *saved searches* in storage are considered service artifacts and their format may change
@@ -383,7 +383,7 @@ Customer-Managed key is provided on dedicated cluster and these operations are r
## Limitations and constraints -- Customer-Managed key is supported on dedicated Log Analytics cluster and suitable for customers sending 1TB per day or more.
+- Customer-managed key is supported on a dedicated Log Analytics cluster and is suitable for customers sending 1 TB per day or more.
- The maximum number of clusters per region and subscription is 2
@@ -393,7 +393,7 @@ Customer-Managed key is provided on dedicated cluster and these operations are r
- Workspace link to cluster should be carried out ONLY after you have verified that the Log Analytics cluster provisioning was completed. Data sent to your workspace prior to the completion will be dropped and won't be recoverable. -- Customer-Managed key encryption applies to newly ingested data after the configuration time. Data that was ingested prior to the configuration, remains encrypted with Microsoft key. You can query data ingested before and after the Customer-Managed key configuration seamlessly.
+- Customer-managed key encryption applies to newly ingested data after the configuration time. Data that was ingested prior to the configuration remains encrypted with Microsoft key. You can query data ingested before and after the Customer-managed key configuration seamlessly.
- The Azure Key Vault must be configured as recoverable. These properties aren't enabled by default and should be configured using CLI or PowerShell:<br> - [Soft Delete](../../key-vault/general/soft-delete-overview.md)
@@ -413,7 +413,7 @@ Customer-Managed key is provided on dedicated cluster and these operations are r
- If your cluster is set with User-assigned managed identity, setting `UserAssignedIdentities` with `None` suspends the cluster and prevents access to your data, but you can't revert the revocation and activate the cluster without opening a support request. This limitation doesn't apply to System-assigned managed identity.
- - Currently you can't defined Customer-managed key with User-assigned managed identity if your Key Vault is located in Private-Link (vNet). This limitation isn't applied to System-assigned managed identity.
+ - Currently you can't define Customer-managed key with User-assigned managed identity if your Key Vault is located in Private-Link (vNet); you can use System-assigned managed identity in this case.
## Troubleshooting
@@ -422,7 +422,7 @@ Customer-Managed key is provided on dedicated cluster and these operations are r
- Transient connection errors -- Storage handles transient errors (timeouts, connection failures, DNS issues) by allowing keys to stay in cache for a short while longer and this overcomes any small blips in availability. The query and ingestion capabilities continue without interruption.
- - Live site -- unavailability of about 30 minutes will cause the Storage account to become unavailable. The query capability is unavailable and ingested data is cached for several hours using Microsoft key to avoid data loss. When access to Key Vault is restored, query becomes available and the temporary cached data is ingested to the data-store and encrypted with Customer-Managed key.
+ - Live site -- unavailability of about 30 minutes will cause the Storage account to become unavailable. The query capability is unavailable and ingested data is cached for several hours using Microsoft key to avoid data loss. When access to Key Vault is restored, query becomes available and the temporarily cached data is ingested to the data-store and encrypted with Customer-managed key.
- Key Vault access rate -- The frequency that Azure Monitor Storage accesses Key Vault for wrap and unwrap operations is between 6 and 60 seconds.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-definition.md
@@ -152,7 +152,7 @@ Use the following procedure to create action groups:
* If you select **Event** in the "Work Item" dropdown: ![Screenshot that shows the ITSM Event window.](media/itsmc-overview/itsm-action-configuration-event.png)
- * If you select **"Create individual work items for each Log Entry (Configuration item field is not filled. Can result in large number of work items.)"** in the radio buttons selection, a work item will be created per each row in the search results of the log search alert query. In the payload of the work item the description property will have the row from the search results.
+ * If you select **"Create individual work items for each Log Entry (Configuration item field is not filled. Can result in large number of work items.)"** in the radio button selection, a work item will be created for each row in the search results of the log search alert query. The description property in the payload of the work item will contain the row from the search results.
* If you select **"Create individual work items for each Configuration Item"** in the radio button selection, every configuration item in every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system. This is the same as checking the checkbox in the Incident/Alert section. 10. Select **OK**.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/log-analytics-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/log-analytics-agent.md
@@ -5,11 +5,12 @@ ms.subservice: logs
ms.topic: conceptual author: bwren ms.author: bwren
-ms.date: 08/21/2020
+ms.date: 01/12/2021
--- # Log Analytics agent overview+ The Azure Log Analytics agent collects telemetry from Windows and Linux virtual machines in any cloud, on-premises machines, and those monitored by [System Center Operations Manager](/system-center/scom/) and sends its collected data to your Log Analytics workspace in Azure Monitor. The Log Analytics agent also supports insights and other services in Azure Monitor such as [Azure Monitor for VMs](../insights/vminsights-enable-overview.md), [Azure Security Center](../../security-center/index.yml), and [Azure Automation](../../automation/automation-intro.md). This article provides a detailed overview of the agent, system and network requirements, and deployment methods. > [!NOTE]
@@ -25,14 +26,15 @@ The key differences to consider are:
- The Log Analytics agent is required for [solutions](../monitor-reference.md#insights-and-core-solutions), [Azure Monitor for VMs](../insights/vminsights-overview.md), and other services such as [Azure Security Center](../../security-center/index.yml). ## Costs+ There is no cost for Log Analytics agent, but you may incur charges for the data ingested. Check [Manage usage and costs with Azure Monitor Logs](manage-cost-storage.md) for detailed information on the pricing for data collected in a Log Analytics workspace. ## Supported operating systems See [Supported operating systems](agents-overview.md#supported-operating-systems) for a list of the Windows and Linux operating system versions that are supported by the Log Analytics agent. - ## Data collected+ The following table lists the types of data you can configure a Log Analytics workspace to collect from all connected agents. See [What is monitored by Azure Monitor?](../monitor-reference.md) for a list of insights, solutions, and other solutions that use the Log Analytics agent to collect other kinds of data. | Data Source | Description |
@@ -44,9 +46,11 @@ The following table lists the types of data you can configure a Log Analytics wo
| [Custom logs](data-sources-custom-logs.md) | Events from text files on both Windows and Linux computers. | ## Data destinations+ The Log Analytics agent sends data to a Log Analytics workspace in Azure Monitor. The Windows agent can be multihomed to send data to multiple workspaces and System Center Operations Manager management groups. The Linux agent can send to only a single destination, either a workspace or management group. ## Other services+ The agent for Linux and Windows isn't only for connecting to Azure Monitor. Other services such as Azure Security Center and Azure Sentinel rely on the agent and its connected Log Analytics workspace. The agent also supports Azure Automation to host the Hybrid Runbook worker role and other services such as [Change Tracking](../../automation/change-tracking/overview.md), [Update Management](../../automation/update-management/overview.md), and [Azure Security Center](../../security-center/security-center-introduction.md). For more information about the Hybrid Runbook Worker role, see [Azure Automation Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md). ## Workspace and management group limitations
@@ -56,15 +60,14 @@ See [Configure agent to report to an Operations Manager management group](agent-
* Windows agents can connect to up to four workspaces, even if they are connected to a System Center Operations Manager management group. * The Linux agent does not support multi-homing and can only connect to a single workspace or management group. - ## Security limitations * The Windows and Linux agents support the [FIPS 140 standard](/windows/security/threat-protection/fips-140-validation), but [other types of hardening may not be supported](agent-linux.md#supported-linux-hardening). - ## Installation options There are multiple methods to install the Log Analytics agent and connect your machine to Azure Monitor depending on your requirements. The following sections list the possible methods for different types of virtual machine.+ > [!NOTE] > It is not supported to clone a machine with the Log Analytics Agent already configured. If the agent has already been associated with a workspace this will not work for 'golden images'.
@@ -75,19 +78,21 @@ There are multiple methods to install the Log Analytics agent and connect your m
- Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md) can be installed with the Azure portal, Azure CLI, Azure PowerShell, or an Azure Resource Manager template. - Install for individual Azure virtual machines [manually from the Azure portal](../learn/quick-collect-azurevm.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+### Windows virtual machine on-premises or in another cloud
-### Windows virtual machine on-premises or in another cloud
-
+- Use [Azure Arc enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension.
- [Manually install](agent-windows.md) the agent from the command line. - Automate the installation with [Azure Automation DSC](agent-windows.md#install-agent-using-dsc-in-azure-automation). - Use a [Resource Manager template with Azure Stack](https://github.com/Azure/AzureStack-QuickStart-Templates/tree/master/MicrosoftMonitoringAgent-ext-win) ### Linux virtual machine on-premises or in another cloud
+- Use [Azure Arc enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension.
- [Manually install](../learn/quick-collect-linux-computer.md) the agent by calling a wrapper script hosted on GitHub.-- System Center Operations Manager|[Integrate Operations Manager with Azure Monitor](./om-agents.md) to forward collected data from Windows computers reporting to a management group.
+- Integrate [System Center Operations Manager](./om-agents.md) with Azure Monitor to forward collected data from Windows computers reporting to a management group.
## Workspace ID and key+ Regardless of the installation method used, you will require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then select **Agents management** in the **Settings** section. [![Workspace details](media/log-analytics-agent/workspace-details.png)](media/log-analytics-agent/workspace-details.png#lightbox)
@@ -97,6 +102,7 @@ Regardless of the installation method used, you will require the workspace ID an
To ensure the security of data in transit to Azure Monitor logs, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**. For additional information, review [Sending data securely using TLS 1.2](data-security.md#sending-data-securely-using-tls-12). ## Network requirements+ The agent for Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. If the machine connects through a firewall or proxy server to communicate over the Internet, review requirements below to understand the network configuration required. If your IT security policies do not allow computers on the network to connect to the Internet, you can set up a [Log Analytics gateway](gateway.md) and then configure the agent to connect through the gateway to Azure Monitor. The agent can then receive configuration information and send data collected. ![Log Analytics agent communication diagram](./media/log-analytics-agent/log-analytics-agent-01.png)
@@ -106,7 +112,7 @@ The following table lists the proxy and firewall configuration information requi
### Firewall requirements |Agent Resource|Ports |Direction |Bypass HTTPS inspection|
-|------|---------|--------|--------|
+|------|---------|--------|--------|
|*.ods.opinsights.azure.com |Port 443 |Outbound|Yes | |*.oms.opinsights.azure.com |Port 443 |Outbound|Yes | |*.blob.core.windows.net |Port 443 |Outbound|Yes |
@@ -114,13 +120,13 @@ The following table lists the proxy and firewall configuration information requi
For firewall information required for Azure Government, see [Azure Government management](../../azure-government/compare-azure-government-global-azure.md#azure-monitor).
-If you plan to use the Azure Automation Hybrid Runbook Worker to connect to and register with the Automation service to use runbooks or management solutions in your environment, it must have access to the port number and the URLs described in [Configure your network for the Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md#network-planning).
+If you plan to use the Azure Automation Hybrid Runbook Worker to connect to and register with the Automation service to use runbooks or management features in your environment, it must have access to the port number and the URLs described in [Configure your network for the Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md#network-planning).
### Proxy configuration The Windows and Linux agent supports communicating either through a proxy server or Log Analytics gateway to Azure Monitor using the HTTPS protocol. Both anonymous and basic authentication (username/password) are supported. For the Windows agent connected directly to the service, the proxy configuration is specified during installation or [after deployment](agent-manage.md#update-proxy-settings) from Control Panel or with PowerShell.
-For the Linux agent, the proxy server is specified during installation or [after installation](agent-manage.md#update-proxy-settings) by modifying the proxy.conf configuration file. The Linux agent proxy configuration value has the following syntax:
+For the Linux agent, the proxy server is specified during installation or [after installation](agent-manage.md#update-proxy-settings) by modifying the proxy.conf configuration file. The Linux agent proxy configuration value has the following syntax:
`[protocol://][user:password@]proxyhost[:port]`
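As an illustration of this syntax, a proxy value might look like the following; the host, port, and credentials are placeholders rather than values from the article:

```
https://user01:password@proxy01.contoso.com:30443
```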
@@ -138,8 +144,6 @@ For example:
> [!NOTE] > If you use special characters such as "\@" in your password, you receive a proxy connection error because value is parsed incorrectly. To work around this issue, encode the password in the URL using a tool such as [URLDecode](https://www.urldecoder.org/). -- ## Next steps * Review [data sources](agent-data-sources.md) to understand the data sources available to collect data from your Windows or Linux system.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/workbooks-link-actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/workbooks-link-actions.md new file mode 100644
@@ -0,0 +1,177 @@
+---
+title: Azure Monitor Workbooks link actions
+description: How to use link actions in Azure Monitor Workbooks
+ms.topic: conceptual
+ms.date: 01/07/2021
+
+author: lgayhardt
+ms.author: lagayhar
+---
+
+# Link actions
+
+Link actions can be accessed through Workbook link steps or through column settings of [grids](workbooks-grid-visualizations.md), [tiles](workbooks-tile-visualizations.md), or [graphs](workbooks-graph-visualizations.md).
+
+## General link actions
+
+| Link action | Action on click |
+|:------------- |:-------------|
+| `Generic Details` | Shows the row values in a property grid context view. |
+| `Cell Details` | Shows the cell value in a property grid context view. Useful when the cell contains a dynamic type with information (for example, json with request properties like location, role instance, etc.). |
+| `Url` | The value of the cell is expected to be a valid http url, and the cell will be a link that opens up that url in a new tab.|
+
+## Application Insights
+
+| Link action | Action on click |
+|:------------- |:-------------|
+| `Custom Event Details` | Opens the Application Insights search details with the custom event ID (`itemId`) in the cell. |
+| `* Details` | Similar to Custom Event Details, except for dependencies, exceptions, page views, requests, and traces. |
+| `Custom Event User Flows` | Opens the Application Insights User Flows experience pivoted on the custom event name in the cell. |
+| `* User Flows` | Similar to Custom Event User Flows except for exceptions, page views and requests. |
+| `User Timeline` | Opens the user timeline with the user ID (`user_Id`) in the cell. |
+| `Session Timeline` | Opens the Application Insights search experience for the value in the cell (for example, search for text 'abc' where abc is the value in the cell). |
+
+`*` denotes a wildcard for the above table
+
+## Azure resource
+
+| Link action | Action on click |
+|:------------- |:-------------|
+| `ARM Deployment` | Deploy an Azure Resource Manager template. When this item is selected, additional fields are displayed to let the author configure which Azure Resource Manager template to open, parameters for the template, etc. [See Azure Resource Manager deployment link settings](#azure-resource-manager-deployment-link-settings). |
+| `Create Alert Rule` | Creates an Alert rule for a resource. |
+| `Custom View` | Opens a custom View. When this item is selected, additional fields are displayed to let the author configure the View extension, View name, and any parameters used to open the View. [See custom view](#custom-view-link-settings). |
+| `Metrics` | Opens a metrics view. |
+| `Resource overview` | Open the resource's view in the portal based on the resource ID value in the cell. The author can also optionally set a `submenu` value that will open up a specific menu item in the resource view. |
+| `Workbook (template)` | Open a workbook template. When this item is selected, additional fields are displayed to let the author configure what template to open, etc. |
+
+## Link settings
+
+When using the link renderer, the following settings are available:
+
+![Screenshot of link settings](./media/workbooks-link-actions/link-settings.png)
+
+| Setting | Explanation |
+|:------------- |:-------------|
+| `View to open` | Allows the author to select one of the actions enumerated above. |
+| `Menu item` | If "Resource Overview" is selected, this is the menu item in the resource's overview to open. This can be used to open alerts or activity logs instead of the "overview" for the resource. Menu item values are different for each Azure `Resourcetype`.|
+| `Link label` | If specified, this value will be displayed in the grid column. If this value is not specified, the value of the cell will be displayed. If you want another value to be displayed, like a heatmap or icon, do not use the `Link` renderer, instead use the appropriate renderer and select the `Make this item a link` option. |
+| `Open link in Context Blade` | If specified, the link will be opened as a popup "context" view on the right side of the window instead of opening as a full view. |
+
+When using the `Make this item a link` option, the following settings are available:
+
+| Setting | Explanation |
+|:------------- |:-------------|
+| `Link value comes from` | When displaying a cell as a renderer with a link, this field specifies where the "link" value to be used in the link comes from, allowing the author to select from a dropdown of the other columns in the grid. For example, the cell may be a heatmap value, but you want the link to open up the Resource Overview for the resource ID in the row. In that case, you'd set the link value to come from the `Resource Id` field.
+| `View to open` | same as above. |
+| `Menu item` | same as above. |
+| `Open link in Context Blade` | same as above. |
+
+## Azure Resource Manager deployment link settings
+
+If the selected link type is `ARM Deployment` the author must specify additional settings to open an Azure Resource Manager deployment. There are two main tabs for configurations.
+
+### Template settings
+
+This section defines where the template should come from and the parameters used to run the Azure Resource Manager deployment.
+
+| Source | Explanation |
+|:------------- |:-------------|
+|`Resource group id comes from` | The resource ID is used to manage deployed resources. The subscription is used to manage deployed resources and costs. The resource groups are used like folders to organize and manage all your resources. If this value is not specified, the deployment will fail. Select from `Cell`, `Column`, `Static Value`, or `Parameter` in [link sources](#link-sources).|
+|`ARM template URI from` | The URI to the Azure Resource Manager template itself. The template URI needs to be accessible to the users who will deploy the template. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources). For starters, take a look at our [quickstart templates](https://azure.microsoft.com/resources/templates/).|
+|`ARM Template Parameters` | This section defines the template parameters used for the template URI defined above. These parameters will be used to deploy the template on the run page. The grid contains an expand toolbar button to help fill the parameters using the names defined in the template URI and set them to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select Refresh to update the preview with current changes. Parameters are typically values, whereas references are something that could point to keyvault secrets that the user has access to. <br/><br/> **Template Viewer blade limitation** - The Template Viewer blade does not render reference parameters correctly; they show up as null values, so users will not be able to correctly deploy reference parameters from the Template Viewer tab.|
+
+![Screenshot of Azure Resource Manager template settings](./media/workbooks-link-actions/template-settings.png)
+
+### UX settings
+
+This section configures what the users will see before they run the Azure Resource Manager deployment.
+
+| Source | Explanation |
+|:------------- |:-------------|
+|`Title from` | Title used on the run view. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources).|
+|`Description from` | This is the markdown text used to provide a helpful description to users when they want to deploy the template. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources). <br/><br/> **NOTE:** If `Static Value` is selected, a multi-line text box will appear. In this text box, you can resolve parameters using `{paramName}`. Also you can treat columns as parameters by appending `_column` after the column name like `{columnName_column}`. In the example image below, we can reference the column `VMName` by writing `{VMName_column}`. The value after the colon is the [parameter formatter](workbooks-parameters.md#parameter-options), in this case it is `value`.|
+|`Run button text from` | Label used on the run (execute) button to deploy the Azure Resource Manager template. This is what users will select to start deploying the Azure Resource Manager template.|
+
+![Screenshot of Azure Resource Manager UX settings](./media/workbooks-link-actions/ux-settings.png)
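+
+As an illustration of the substitution syntax described above, a `Static Value` description might look like the following; `VMName` and `Region` are hypothetical column and parameter names used only for this sketch:
+
+```
+Deploy the monitoring agent to {VMName_column:value} in region {Region}
+```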
+
+After these configurations are set, when users select the link, it will open up the view with the UX described in the UX settings. If the user selects the run button (labeled with the `Run button text from` value), it will deploy an Azure Resource Manager template using the values from [template settings](#template-settings). **View template** will open up the template viewer tab for the user to examine the template and the parameters before deploying.
+
+![Screenshot of run Azure Resource Manager view](./media/workbooks-link-actions/run-tab.png)
+
+## Custom view link settings
+
+Use this to open Custom Views in the Azure portal. Verify all of the configuration and settings. Incorrect values will cause errors in the portal or fail to open the views correctly. There are two ways to configure the settings: via the `Form` or via the `URL`.
+
+> [!NOTE]
+> Views with a menu cannot be opened in a context tab. If a view with a menu is configured to open in a context tab then no context tab will be shown when the link is selected.
+
+### Form
+
+| Source | Explanation |
+|:------------- |:-------------|
+|`Extension name` | The name of the extension that hosts the View.|
+|`View name` | The name of the View to open.|
+
+#### View inputs
+
+There are two types of inputs, grids and JSON. Use `Grid` for simple key and value tab inputs or select `JSON` to specify a nested JSON input.
+
+- Grid
+ - `Parameter Name`: The name of the View input parameter.
+ - `Parameter Comes From`: Where the value of the View parameter should come from. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources).
+ > [!NOTE]
+ > If `Static Value` is selected, the parameters can be resolved using brackets like `{paramName}` in the text box. Columns can be treated as parameters by appending `_column` after the column name, like `{columnName_column}`.
+
+ - `Parameter Value`: depending on `Parameter Comes From`, this will be a dropdown of available parameters, columns, or a static value.
+
+ ![Screenshot of edit column setting show Custom View settings from form.](./media/workbooks-link-actions/custom-tab-settings.png)
+- JSON
+ - Specify your tab input in JSON format in the editor. Like the `Grid` mode, parameters and columns may be referenced by using `{paramName}` for parameters and `{columnName_column}` for columns. Selecting `Show JSON Sample` will show the expected output of all resolved parameters and columns used for the view input.
+
+ ![Screenshot showing of open custom view settings with view input on JSON.](./media/workbooks-link-actions/custom-tab-json.png)
+
+### URL
+
+Paste a portal URL containing the extension, the name of the view, and any inputs needed to open the view. Selecting `Initialize Settings` populates the `Form` for the author to add, modify, or remove any of the view inputs.
+
+![Screenshot showing column setting show Custom View settings from URL. ](./media/workbooks-link-actions/custom-tab-settings-url.png)
+
+## Workbook (template) link settings
+
+If the selected link type is `Workbook (Template)`, the author must specify additional settings to open up the correct workbook template. The settings below have options for how the grid will find the appropriate value for each of the settings.
+
+| Setting | Explanation |
+|:------------- |:-------------|
+| `Workbook owner Resource Id` | This is the Resource ID of the Azure Resource that "owns" the workbook. Commonly it is an Application Insights resource, or a Log Analytics Workspace. Inside of Azure Monitor, this may also be the literal string `"Azure Monitor"`. When the workbook is Saved, this is what the workbook will be linked to. |
+| `Workbook resources` | An array of Azure Resource Ids that specify the default resource used in the workbook. For example, if the template being opened shows Virtual Machine metrics, the values here would be Virtual Machine resource IDs. Many times, the owner, and resources are set to the same settings. |
+| `Template Id` | Specify the ID of the template to be opened. If this is a community template from the gallery (the most common case), prefix the path to the template with `Community-`, like `Community-Workbooks/Performance/Apdex` for the `Workbooks/Performance/Apdex` template. If this is a link to a saved workbook/template, it is the full Azure resource ID of that item. |
+| `Workbook Type` | Specify the kind of workbook template to open. The most common cases use the `Default` or `Workbook` option to use the value in the current workbook. |
+| `Gallery Type` | This specifies the gallery type that will be displayed in the "Gallery" view of the template that opens. The most common cases use the `Default` or `Workbook` option to use the value in the current workbook. |
+|`Location comes from` | The location field should be specified if you are opening a specific Workbook resource. If location is not specified, finding the workbook content is much slower. If you know the location, specify it. If you do not know the location or are opening a template with no specific location, leave this field as "Default".|
+|`Pass specific parameters to template` | Select to pass specific parameters to the template. If selected, only the specified parameters are passed to the template. Otherwise, all the parameters in the current workbook are passed to the template; in that case, the parameter *names* must be the same in both workbooks for the parameter values to work.|
+|`Workbook Template Parameters` | This section defines the parameters that are passed to the target template. The name should match with the name of the parameter in the target template. Select value from `Cell`, `Column`, `Parameter`, and `Static Value`. Name and value must not be empty to pass that parameter to the target template.|
+
+For each of the above settings, the author must pick where the value in the linked workbook will come from. See [link sources](#link-sources).
+
+When the workbook link is opened, the new workbook view will be passed all of the values configured from the settings above.
+
+![Screenshot of workbook link settings](./media/workbooks-link-actions/workbook-link-settings.png)
+
+![Screenshot of workbook template parameters settings](./media/workbooks-link-actions/workbook-template-link-settings-parameter.png)
+
+## Link sources
+
+| Source | Explanation |
+|:------------- |:-------------|
+| `Cell` | This will use the value in that cell in the grid as the link value |
+| `Column` | When selected, another field will be displayed to let the author select another column in the grid. The value of that column for the row will be used in the link value. This is commonly used to enable each row of a grid to open a different template, by setting `Template Id` field to `column`, or to open up the same workbook template for different resources, if the `Workbook resources` field is set to a column that contains an Azure Resource ID |
+| `Parameter` | When selected, another field will be displayed to let the author select a parameter. The value of that parameter will be used for the value when the link is clicked |
+| `Static value` | When selected, another field will be displayed to let the author type in a static value that will be used in the linked workbook. This is commonly used when all of the rows in the grid will use the same value for a field. |
+| `Step` | Use the value set in the current step of the workbook. This is common in query and metrics steps to set the workbook resources in the linked workbook to those used *in the query/metrics step*, not the current workbook |
+| `Workbook` | Use the value set in the current workbook. |
+| `Default` | Use the default value that would be used if no value was specified. This is common for Gallery Type, where the default gallery would be set by the type of the owner resource |
+
+## Next steps
+
+- [Control](workbooks-access-control.md) and share access to your workbook resources.
+- Learn how to use [groups in workbooks](workbooks-groups.md).
\ No newline at end of file
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/create-volumes-dual-protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: how-to
-ms.date: 01/05/2020
+ms.date: 01/12/2020
ms.author: b-juche --- # Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
@@ -34,7 +34,6 @@ Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
* Create a reverse lookup zone on the DNS server and then add a pointer (PTR) record of the AD host machine in that reverse lookup zone. Otherwise, the dual-protocol volume creation will fail. * Ensure that the NFS client is up to date and running the latest updates for the operating system. * Ensure that the Active Directory (AD) LDAP server is up and running on the AD. You can do so by installing and configuring the [Active Directory Lightweight Directory Services (AD LDS)](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/hh831593(v=ws.11)) role on the AD machine.
-* Ensure that a certificate authority (CA) is created for the AD using the [Active Directory Certificate Services (AD CS)](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) role to generate and export the self-signed root CA certificate.
* Dual-protocol volumes do not currently support Azure Active Directory Domain Services (AADDS). * The NFS version used by a dual-protocol volume is NFSv3. As such, the following considerations apply: * Dual protocol does not support the Windows ACLS extended attributes `set/get` from NFS clients.
@@ -100,9 +99,6 @@ Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
3. Click **Protocol**, and then complete the following actions: * Select **dual-protocol (NFSv3 and SMB)** as the protocol type for the volume.
- * Select the **Active Directory** connection from the drop-down list.
- The Active Directory you use must have a server root CA certificate.
- * Specify the **Volume path** for the volume. This volume path is the name of the shared volume. The name must start with an alphabetical character, and it must be unique within each subscription and each region.
@@ -118,32 +114,6 @@ Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
A volume inherits subscription, resource group, location attributes from its capacity pool. To monitor the volume deployment status, you can use the Notifications tab.
-## Upload Active Directory Certificate Authority public root certificate
-
-1. Follow [Install the Certification Authority](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) to install and configure ADDS Certificate Authority.
-
-2. Follow [View certificates with the MMC snap-in](/dotnet/framework/wcf/feature-details/how-to-view-certificates-with-the-mmc-snap-in) to use the MMC snap-in and the Certificate Manager tool.
- Use the Certificate Manager snap-in to locate the root or issuing certificate for the local device. You should run the Certificate Management snap-in commands from one of the following settings:
- * A Windows-based client that has joined the domain and has the root certificate installed
- * Another machine in the domain containing the root certificate
-
-3. Export the root CA certificate.
- Root CA certificates can be exported from the Personal or Trusted Root Certification Authorities directory, as shown in the following examples:
- ![screenshot that shows personal certificates](../media/azure-netapp-files/personal-certificates.png)
- ![screenshot that shows trusted root certification authorities](../media/azure-netapp-files/trusted-root-certification-authorities.png)
-
- Ensure that the certificate is exported in the Base-64 encoded X.509 (.CER) format:
-
- ![Certificate Export Wizard](../media/azure-netapp-files/certificate-export-wizard.png)
-
-4. Go to the NetApp account of the dual-protocol volume, click **Active Directory connections**, and upload the root CA certificate by using the **Join Active Directory** window:
-
- ![Server root CA certificate](../media/azure-netapp-files/server-root-ca-certificate.png)
-
- Ensure that the certificate authority name can be resolved by DNS. This name is the "Issued By" or "Issuer" field on the certificate:
-
- ![Certificate information](../media/azure-netapp-files/certificate-information.png)
- ## Manage LDAP POSIX Attributes You can manage POSIX attributes such as UID, Home Directory, and other values by using the Active Directory Users and Computers MMC snap-in. The following example shows the Active Directory Attribute Editor:
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/snapshots-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/snapshots-introduction.md new file mode 100644
@@ -0,0 +1,162 @@
+---
+title: How Azure NetApp Files snapshots work | Microsoft Docs
+description: Explains how Azure NetApp Files snapshots work, including ways to create snapshots, ways to restore snapshots, and how to use snapshots in cross-region replication settings.
+services: azure-netapp-files
+documentationcenter: ''
+author: b-juche
+manager: ''
+editor: ''
+
+ms.assetid:
+ms.service: azure-netapp-files
+ms.workload: storage
+ms.tgt_pltfrm: na
+ms.devlang: na
+ms.topic: conceptual
+ms.date: 01/12/2021
+ms.author: b-juche
+---
+# How Azure NetApp Files snapshots work
+
+This article explains how Azure NetApp Files snapshots work. Azure NetApp Files snapshot technology delivers stability, scalability, and faster recoverability, with no impact on performance. Azure NetApp Files snapshot technology provides the foundation for data protection solutions, including single file restores, volume restores and clones, and cross-region replication.
+
+For steps about using volume snapshots, see [Manage snapshots by using Azure NetApp Files](azure-netapp-files-manage-snapshots.md). For considerations about snapshot management in cross-region replication, see [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md).
+
+## What volume snapshots are
+
+An Azure NetApp Files snapshot is a point-in-time file system (volume) image. It can serve as an ideal online backup. You can use a snapshot to [create a new volume](azure-netapp-files-manage-snapshots.md#restore-a-snapshot-to-a-new-volume), [restore a file](azure-netapp-files-manage-snapshots.md#restore-a-file-from-a-snapshot-using-a-client), or [revert a volume](azure-netapp-files-manage-snapshots.md#revert-a-volume-using-snapshot-revert). For specific applications that store data on Azure NetApp Files volumes, extra steps might be required to ensure application consistency.
+
+Low-overhead snapshots are made possible by the unique features of the underlying volume virtualization technology that is part of Azure NetApp Files. Like a database, this layer uses pointers to the actual data blocks on disk. But, unlike a database, it doesn't rewrite existing blocks; it writes updated data to a new block and changes the pointer. An Azure NetApp Files snapshot simply manipulates block pointers, creating a "frozen", read-only view of a volume that lets applications access older versions of files and directory hierarchies without special programming. Actual data blocks aren't copied. As such, snapshots are efficient in the time needed to create them; they are near-instantaneous, regardless of volume size. Snapshots are also efficient in storage space. They themselves consume no space, and only delta blocks between snapshots and the active volume are kept.
+
+The following diagrams illustrate the concepts:
+
+![Diagrams that show the key concepts of snapshots](../media/azure-netapp-files/snapshot-concepts.png)
+
+In the diagrams, a snapshot is taken in Figure 1a. In Figure 1b, changed data is written to a *new block* and the pointer is updated. But the snapshot pointer still points to the *previously written block*, giving you a live and a historical view of the data. Another snapshot is taken in Figure 1c. Now you have access to three generations of data (the live data, Snapshot 2, and Snapshot 1, in order of age), without taking up the volume space that three full copies would require.
+
+A snapshot takes only a copy of the volume metadata (*inode table*). It takes just a few seconds to create, regardless of the volume size, the capacity used, or the level of activity on the volume. So taking a snapshot of a 100-TiB volume takes the same (next to zero) time as taking a snapshot of a 100-GiB volume. After a snapshot is created, changes to data files are reflected in the active version of the files, as normal.
+
+Meanwhile, the data blocks that are pointed to from a snapshot remain stable and immutable. Because of the "Redirect on Write" nature of Azure NetApp Files volumes, a snapshot incurs no performance overhead and in itself does not consume any space. You can store up to 255 snapshots per volume over time, all of which are accessible as read-only and online versions of the data, consuming as little capacity as the number of changed blocks between each snapshot. Modified blocks are stored in the active volume. Blocks pointed to in snapshots are kept (as read-only) in the volume for safekeeping, to be repurposed only when all pointers (in the active volume and snapshots) have been cleared. Therefore, volume utilization will increase over time, by either new data blocks or (modified) data blocks kept in snapshots.
+
+The following diagram shows a volume's snapshots and used space over time:
+
+![Diagram that shows a volume's snapshots and used space over time](../media/azure-netapp-files/snapshots-used-space-over-time.png)
+
+Because a volume snapshot records only the block changes since the latest snapshot, it provides the following key benefits:
+
+* Snapshots are ***storage efficient***.
+ Snapshots consume minimal storage space because they don't copy the data blocks of the entire volume. Two snapshots taken in sequence differ only by the blocks added or changed in the time interval between the two. This block-incremental behavior limits associated storage capacity consumption. Many alternative snapshot implementations consume storage volumes equal to the active file system, raising storage capacity requirements. Depending on application daily *block-level* change rates, Azure NetApp Files snapshots will consume more or less capacity, but on changed data only. Average daily snapshot consumption ranges from only 1-5% of used volume capacity for many application volumes, or up to 20-30% for volumes such as SAP HANA database volumes. Be sure to [monitor your volume and snapshot usage](azure-netapp-files-metrics.md#volumes) for snapshot capacity consumption relative to the number of created and maintained snapshots.
+
+* Snapshots are ***quick to create, replicate, restore, or clone***.
+ It takes only a few seconds to create, replicate, restore, or clone a snapshot, regardless of the volume size and level of activities. You can create a volume snapshot [on-demand](azure-netapp-files-manage-snapshots.md#create-an-on-demand-snapshot-for-a-volume). You can also use [snapshot policies](azure-netapp-files-manage-snapshots.md#manage-snapshot-policies) to specify when Azure NetApp Files should automatically create a snapshot and how many snapshots to keep for a volume. Application consistency can be achieved by orchestrating snapshots with the application layer, for example, by using the [AzAcSnap tool](azacsnap-introduction.md) for SAP HANA.
+
+* Snapshots have no impact on volume ***performance***.
+ Because of the "Redirect on Write" nature of the underlying technology, storing or retaining Azure NetApp Files snapshots has no performance impact, even with heavy data activity. Deleting a snapshot also has little to no performance impact in most cases.
+
+* Snapshots provide ***scalability*** because they can be created frequently and many can be retained.
+ Azure NetApp Files volumes support up to 255 snapshots. The ability to store a large number of low-impact, frequently created snapshots increases the likelihood that the desired version of data can be successfully recovered.
+
+* Snapshots provide ***user visibility*** and ***file recoverability***.
+The high performance, scalability, and stability of Azure NetApp Files snapshot technology means it provides an ideal online backup for user-driven recovery. Snapshots can be made user-accessible for file, directory, or volume restore purposes. Additional solutions allow you to copy backups to offline storage or [replicate cross-region](cross-region-replication-introduction.md) for retention or disaster recovery purposes.
+
+## Ways to create snapshots
+
+You can use several methods to create and maintain snapshots:
+
+* Manually (on-demand), by using:
+ * The [Azure portal](azure-netapp-files-manage-snapshots.md#create-an-on-demand-snapshot-for-a-volume), [REST API](/rest/api/netapp/snapshots), [Azure CLI](/cli/azure/netappfiles/snapshot), or [PowerShell](/powershell/module/az.netappfiles/new-aznetappfilessnapshot) tools
+ * Scripts (see [examples](azure-netapp-files-solution-architectures.md#sap-tech-community-and-blog-posts))
+
+* Automated, by using:
+ * Snapshot policies, via the [Azure portal](azure-netapp-files-manage-snapshots.md#manage-snapshot-policies), [REST API](/rest/api/netapp/snapshotpolicies), [Azure CLI](/cli/azure/netappfiles/snapshot/policy), or [PowerShell](/powershell/module/az.netappfiles/new-aznetappfilessnapshotpolicy) tools
+ * Application consistent snapshot tooling, like [AzAcSnap](azacsnap-introduction.md)
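+
+For instance, the Azure CLI option listed above can be scripted. This is only a sketch with placeholder resource names; confirm the parameters with `az netappfiles snapshot create --help` before use:
+
+```azurecli
+# Sketch: create an on-demand snapshot of an existing volume (all names are placeholders).
+az netappfiles snapshot create \
+  --resource-group myRG \
+  --account-name myaccount \
+  --pool-name mypool \
+  --volume-name myvol \
+  --name mysnapshot \
+  --location eastus
+```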
+
+## How volumes and snapshots are replicated cross-region for DR
+
+Azure NetApp Files supports [cross-region replication](cross-region-replication-introduction.md) for disaster recovery (DR) purposes. Azure NetApp Files cross-region replication uses SnapMirror technology. Only changed blocks are sent over the network in a compressed, efficient format. After a cross-region replication is initiated between volumes, the entire volume contents (that is, the actual stored data blocks) are transferred only once. This operation is called a *baseline transfer*. After the initial transfer, only changed blocks (as captured in snapshots) are transferred. The result is an asynchronous 1:1 replica of the source volume, including all snapshots. This behavior follows a full and incremental-forever replication mechanism. This technology minimizes the amount of data required to replicate across the regions, therefore saving data transfer costs. It also shortens the replication time. You can achieve a smaller Recovery Point Objective (RPO), because more snapshots can be created and transferred more frequently with limited data transfers. Further, it takes away the need for host-based replication mechanisms, avoiding virtual machine and software license cost.
+
+The following diagram shows snapshot traffic in cross-region replication scenarios:
+
+![Diagram that shows snapshot traffic in cross-region replication scenarios](../media/azure-netapp-files/snapshot-traffic-cross-region-replication.png)
+
+## Ways to restore data from snapshots
+
+The Azure NetApp Files snapshot technology greatly improves the frequency and reliability of backups. It incurs minimal performance overhead and can be safely created on an active volume. Azure NetApp Files snapshots allow near-instantaneous, secure, and optionally user-managed restores. This section describes various ways in which data can be accessed or restored from Azure NetApp Files snapshots.
+
+### Restoring files or directories from snapshots
+
+If the [Snapshot Path visibility](azure-netapp-files-manage-snapshots.md#edit-the-hide-snapshot-path-option) is not set to `hidden`, users can directly access snapshots to recover from accidental deletion, corruption, or modification of their data. The security of files and directories is retained in the snapshot, and snapshots are read-only by design. As such, the restoration is secure and simple.
+
+The following diagram shows file or directory access to a snapshot:
+
+![Diagram that shows file or directory access to a snapshot](../media/azure-netapp-files/snapshot-file-directory-access.png)
+
+In the diagram, Snapshot 1 consumes only the delta blocks between the active volume and the moment of snapshot creation. But when you access the snapshot via the volume snapshot path, the data will *appear* as if it's the full volume capacity at the time of the snapshot creation. By accessing the snapshot folders, users can self-restore data by copying files and directories out of a snapshot of choice.
+
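+For example, on an NFS client this self-service restore can be a simple copy out of the snapshot directory at the root of the mounted volume. This is only a sketch; the snapshot and file names are placeholders, and the snapshot directory is visible only when the Hide snapshot path option is not enabled:
+
+```bash
+# Sketch: list the available snapshots, then copy a file back into the active file system.
+# Run from the root of the mounted Azure NetApp Files volume; names are placeholders.
+ls .snapshot
+cp .snapshot/daily-2021-01-12/important-file.txt ./important-file.txt
+```
+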
+Similarly, snapshots in target cross-region replication volumes can be accessed read-only for data recovery in the DR region.
+
+The following diagram shows snapshot access in cross-region replication scenarios:
+
+![Diagram that shows snapshot access in cross-region replication](../media/azure-netapp-files/snapshot-access-cross-region-replication.png)
+
+See [Restore a file from a snapshot using a client](azure-netapp-files-manage-snapshots.md#restore-a-file-from-a-snapshot-using-a-client) about restoring individual files or directories from snapshots.
+
+### Restoring (cloning) a snapshot to a new volume
+
+You can restore Azure NetApp Files snapshots to a separate, independent volume. This operation is near-instantaneous, regardless of the volume size and the capacity consumed. The newly created volume is almost immediately available for access, while the actual volume and snapshot data blocks are being copied over. Depending on volume size and capacity, this process can take considerable time during which the parent volume and snapshot cannot be deleted. However, the volume can already be accessed after initial creation, while the copy process is in progress in the background. This capability enables fast volume creation for data recovery or volume cloning for test and development. By nature of the data copy process, storage capacity pool consumption will double when the restore completes, and the new volume will show the full active capacity of the original snapshot. After this process is completed, the volume will be independent and disassociated from the original volume, and the source volume and snapshot can be managed or removed independently from the new volume.
+
+The following diagram shows a new volume created by restoring (cloning) a snapshot:
+
+![Diagram that shows a new volume created by restoring a snapshot](../media/azure-netapp-files/snapshot-restore-clone-new-volume.png)
+
+The same operations can be performed on replicated snapshots to a disaster recovery (DR) volume. Any snapshot can be restored to a new volume, even when cross-region replication remains active or in progress. This capability enables non-disruptive creation of test and dev environments in a DR region, putting the data to use, whereas the replicated volumes would otherwise be used only for DR purposes. This use case enables test and development to be isolated from production, eliminating potential impact on production environments.
+
+The following diagram shows volume restoration (cloning) by using DR target volume snapshot while cross-region replication is taking place:
+
+![Diagram that shows volume restoration using DR target volume snapshot](../media/azure-netapp-files/snapshot-restore-clone-target-volume.png)
+
+See [Restore a snapshot to a new volume](azure-netapp-files-manage-snapshots.md#restore-a-snapshot-to-a-new-volume) about volume restore operations.
+
+### Restoring (reverting) a snapshot in-place
+
+In some cases, because the new volume will consume storage capacity, creating a new volume from a snapshot might not be needed or appropriate. To recover from data corruption quickly (for example, database corruptions or ransomware attacks), it might be more appropriate to restore a snapshot within the volume itself. This operation can be done using the Azure NetApp Files snapshot revert functionality. This functionality enables you to quickly revert a volume to the state it was in when a particular snapshot was taken. In most cases, reverting a volume is much faster than restoring individual files from a snapshot to the active file system, especially in large, multi-TiB volumes.
+
+Reverting a volume snapshot is near-instantaneous and takes only a few seconds to complete, even for the largest volumes. The active volume metadata (*inode table*) is replaced with the snapshot metadata from the time of snapshot creation, thus rolling back the volume to that specific point in time. No data blocks need to be copied for the revert to take effect. As such, it's more space efficient than restoring a snapshot to a new volume.
+
+The following diagram shows a volume reverting to an earlier snapshot:
+
+![Diagram that shows a volume reverting to an earlier snapshot](../media/azure-netapp-files/snapshot-volume-revert.png)
+
+> [!IMPORTANT]
+> Active filesystem data that was written and snapshots that were taken after the selected snapshot was taken will be lost. The snapshot revert operation will replace all the data in the targeted volume with the data in the selected snapshot. You should pay attention to the snapshot contents and creation date when you select a snapshot. You cannot undo the snapshot revert operation.
+
+See [Revert a volume using snapshot revert](azure-netapp-files-manage-snapshots.md#revert-a-volume-using-snapshot-revert) about how to use this feature.
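+
+If your Azure CLI version includes an `az netappfiles volume revert` command, the revert can also be scripted. Treat the command and parameter names below as assumptions to verify, and all resource names as placeholders:
+
+```azurecli
+# Sketch only: revert a volume to a selected snapshot.
+# Assumed command; verify it exists in your CLI version (`az netappfiles volume revert --help`).
+az netappfiles volume revert \
+  --resource-group myRG \
+  --account-name myaccount \
+  --pool-name mypool \
+  --name myvol \
+  --snapshot-id "<snapshot-resource-id>"
+```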
+
+## How snapshots are deleted
+
+Snapshots do consume storage capacity. As such, they are not typically kept indefinitely. For data protection, retention, and recoverability, a number of snapshots (created at various points in time) are usually kept online for a certain duration depending on RPO, RTO, and retention SLA requirements. However, older snapshots often do not have to be kept on the storage service and might need to be deleted to free up space. Any snapshot can be deleted (not necessarily in order of creation) by an administrator at any time.
+
+> [!IMPORTANT]
+> The snapshot deletion operation cannot be undone. You should retain offline copies of the volume for data protection and retention purposes.
+
+When a snapshot is deleted, all pointers from that snapshot to existing data blocks will be removed. When a data block has no more pointers pointing at it (by the active volume, or other snapshots in the volume), the data block is returned to the volume free space for future use. Therefore, removing snapshots usually frees up more capacity in a volume than deleting data from the active volume, because data blocks are often captured in previously created snapshots.
+
+The following diagram shows the effect on storage consumption of Snapshot 3 deletion from a volume:
+
+![Diagram that shows storage consumption effect of snapshot deletion](../media/azure-netapp-files/snapshot-delete-storage-consumption.png)
+
+Be sure to [monitor volume and snapshot consumption](azure-netapp-files-metrics.md#volumes) and understand how the application, active volume, and snapshot consumption interact.
+
+See [Delete snapshots](azure-netapp-files-manage-snapshots.md#delete-snapshots) about how to manage snapshot deletion. See [Manage snapshot policies](azure-netapp-files-manage-snapshots.md#manage-snapshot-policies) about how to automate this process.
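+
+As a sketch, a single snapshot can also be deleted from the command line; the resource names are placeholders, and the parameters should be confirmed with `az netappfiles snapshot delete --help`:
+
+```azurecli
+# Sketch: delete one snapshot by name (all names are placeholders). This operation cannot be undone.
+az netappfiles snapshot delete \
+  --resource-group myRG \
+  --account-name myaccount \
+  --pool-name mypool \
+  --volume-name myvol \
+  --name mysnapshot
+```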
+
+## Next steps
+
+* [Manage snapshots by using Azure NetApp Files](azure-netapp-files-manage-snapshots.md)
+* [Monitor volume and snapshot metrics](azure-netapp-files-metrics.md#volumes)
+* [Troubleshoot snapshot policies](troubleshoot-snapshot-policies.md)
+* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
+* [Azure NetApp Files Snapshots 101 video](https://www.youtube.com/watch?v=uxbTXhtXCkw)
+* [NetApp Snapshot - NetApp Video Library](https://tv.netapp.com/detail/video/2579133646001/snapshot)
+++
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/troubleshoot-dual-protocol-volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-dual-protocol-volumes.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: troubleshooting
-ms.date: 10/01/2020
+ms.date: 01/12/2021
ms.author: b-juche --- # Troubleshoot dual-protocol volumes
@@ -25,7 +25,7 @@ This article describes resolutions to error conditions you might have when creat
| Error conditions | Resolution | |-|-| | Dual-protocol volume creation fails with the error `This Active Directory has no Server root CA Certificate`. | If this error occurs when you are creating a dual-protocol volume, make sure that the root CA certificate is uploaded in your NetApp account. |
-| Dual-protocol volume creation fails with the error `Failed to validate LDAP configuration, try again after correcting LDAP configuration`. | Consider the following resolutions: <ul><li>Make sure that the required root certificate is added when you join Active Directory (AD) to the NetApp account. See [Upload Active Directory Certificate Authority public root certificate](create-volumes-dual-protocol.md#upload-active-directory-certificate-authority-public-root-certificate). </li><li>The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `1.1.1.1`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `myDomain.com`. The PTR record added to the reverse lookup zone should be `1.1.1.1` -> `AD1.myDomain.com`. </li></ul> |
+| Dual-protocol volume creation fails with the error `Failed to validate LDAP configuration, try again after correcting LDAP configuration`. | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `1.1.1.1`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `1.1.1.1` -> `AD1.contoso.com`. (See the PowerShell sketch after this table.) |
| Dual-protocol volume creation fails with the error `Failed to create the Active Directory machine account \\\"TESTAD-C8DD\\\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\\n [ 434] Loaded the preliminary configuration.\\n [ 537] Successfully connected to ip 1.1.1.1, port 88 using TCP\\n**[ 950] FAILURE`. | This error indicates that the AD password is incorrect when Active Directory is joined to the NetApp account. Update the AD connection with the correct password and try again. | | Dual-protocol volume creation fails with the error `Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available`. | This error indicates that DNS is not reachable. The reason might be that the DNS IP is incorrect, or there is a networking issue. Check the DNS IP entered in the AD connection and make sure that the IP is correct. <br> Also, make sure that the AD and the volume are in the same region and the same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets.| | Permission is denied error when mounting a dual-protocol volume. | A dual-protocol volume supports both the NFS and SMB protocols. When you try to access the mounted volume on the UNIX system, the system attempts to map the UNIX user you use to a Windows user. If no mapping is found, the "Permission denied" error occurs. <br> This situation also applies when you use the 'root' user for the access. <br> To avoid the "Permission denied" issue, make sure that Windows Active Directory includes `pcuser` before you access the mount point. If you add `pcuser` after encountering the "Permission denied" issue, wait 24 hours for the cache entry to clear before trying the access again. |
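
As an illustration of the PTR record fix described in the table above, the following sketch uses the Windows Server DnsServer PowerShell module on the DNS server. The zone and host names follow the `AD1.contoso.com` example in the table and are placeholders for your environment:

```azurepowershell
# Create the reverse lookup zone for the 1.1.1.x network (skip if it already exists).
Add-DnsServerPrimaryZone -NetworkId "1.1.1.0/24" -ReplicationScope "Domain"

# Add the PTR record so that 1.1.1.1 resolves back to the AD host's FQDN.
Add-DnsServerResourceRecordPtr -ZoneName "1.1.1.in-addr.arpa" -Name "1" -PtrDomainName "AD1.contoso.com"
```
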
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/troubleshoot-nfsv41-kerberos-volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-nfsv41-kerberos-volumes.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: troubleshooting
-ms.date: 01/05/2020
+ms.date: 01/12/2021
ms.author: b-juche --- # Troubleshoot NFSv4.1 Kerberos volume issues
@@ -27,9 +27,9 @@ This article describes resolutions to error conditions you might have when creat
|`Error allocating volume - Export policy rules does not match kerberosEnabled flag` | Azure NetApp Files does not support Kerberos for NFSv3 volumes. Kerberos is supported only for the NFSv4.1 protocol. | |`This NetApp account has no configured Active Directory connections` | Configure Active Directory for the NetApp account with fields **KDC IP** and **AD Server Name**. See [Configure the Azure portal](configure-kerberos-encryption.md#configure-the-azure-portal) for instructions. | |`Mismatch between KerberosEnabled flag value and ExportPolicyRule's access type parameter values.` | Azure NetApp Files does not support converting a plain NFSv4.1 volume to Kerberos NFSv4.1 volume, and vice-versa. |
-|`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.xyz.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.xyz.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.xyz.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.xyz.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to less than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpcgssd` service as follows. The command might vary depending on the OS.<br> RHEL 7: <br> `service nfs restart` <br> `service rpcgssd restart` <br> CentOS 8: <br> `systemctl enable nfs-client.target && systemctl start nfs-client.target` <br> Ubuntu: <br> (Restart the `rpc-gssd` service.) <br> `sudo systemctl start rpc-gssd.service` </ul>|
+|`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.contoso.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.contoso.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.contoso.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.contoso.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to less than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpcgssd` service as follows. The command might vary depending on the OS.<br> RHEL 7: <br> `service nfs restart` <br> `service rpcgssd restart` <br> CentOS 8: <br> `systemctl enable nfs-client.target && systemctl start nfs-client.target` <br> Ubuntu: <br> (Restart the `rpc-gssd` service.) <br> `sudo systemctl start rpc-gssd.service` </ul>|
|`mount.nfs: an incorrect mount option was specified` | The issue might be related to the NFS client issue. Reboot the NFS client. |
-|`Hostname lookup failed` | You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.1.1.4`, the hostname of the AD machine (as found by using the hostname command) is `AD1`, and the domain name is `myDomain.com`. The PTR record added to the reverse lookup zone should be `10.1.1.4 -> AD1.myDomain.com`. |
+|`Hostname lookup failed` | You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.1.1.4`, the hostname of the AD machine (as found by using the hostname command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.1.1.4 -> AD1.contoso.com`. |
|`Volume creation fails due to unreachable DNS server` | Two possible solutions are available: <br> <ul><li> This error indicates that DNS is not reachable. The reason might be an incorrect DNS IP or a networking issue. Check the DNS IP entered in AD connection and make sure that the IP is correct. </li> <li> Make sure that the AD and the volume are in same region and in same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets. </li></ul> | |NFSv4.1 Kerberos volume creation fails with an error similar to the following example: <br> `Failed to enable NFS Kerberos on LIF "svm_e719cde8d6d0413fbd6adac0636cdecb_7ad0b82e_73349613". Failed to bind service principal name on LIF "svm_e719cde8d6d0413fbd6adac0636cdecb_7ad0b82e_73349613". SecD Error: server create fail join user auth.` |The KDC IP is wrong and the Kerberos volume has been created. Update the KDC IP with a correct address. <br> After you update the KDC IP, the error will not go away. You need to re-create the volume. |
azure-relay https://docs.microsoft.com/en-us/azure/azure-relay/network-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/network-security.md
@@ -9,7 +9,7 @@ ms.date: 06/23/2020
This article describes how to use the following security features with Azure Relay: - IP firewall rules (preview)-- Private endpoints (preview)
+- Private endpoints
> [!NOTE] > Azure Relay doesn't support network service endpoints.
@@ -24,15 +24,15 @@ The IP firewall rules are applied at the Relay namespace level. Therefore, the r
For more information, see [How to configure IP firewall for a Relay namespace](ip-firewall-virtual-networks.md)
+> [!NOTE]
+> This feature is currently in **preview**.
+ ## Private endpoints
-Azure **Private Link Service** enables you to access Azure services (for example, Azure Relay, Azure Service Bus, Azure Event Hubs, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a private endpoint in your virtual network. For more information, see [What is Azure Private Link (Preview)?](../private-link/private-link-overview.md)
+Azure **Private Link Service** enables you to access Azure services (for example, Azure Relay, Azure Service Bus, Azure Event Hubs, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a private endpoint in your virtual network. For more information, see [What is Azure Private Link?](../private-link/private-link-overview.md)
A **private endpoint** is a network interface that allows your workloads running in a virtual network to connect privately and securely to a service that has a **private link resource** (for example, a Relay namespace). The private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute, VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network eliminating exposure from the public Internet. You can provide a level of granularity in access control by allowing connections to specific Azure Relay namespaces.
-> [!NOTE]
-> This feature is currently in **preview**.
- For more information, see [How to configure private endpoints](private-link-service.md)
azure-relay https://docs.microsoft.com/en-us/azure/azure-relay/private-link-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/private-link-service.md
@@ -5,8 +5,8 @@ ms.date: 09/24/2020
ms.topic: article ---
-# Integrate Azure Relay with Azure Private Link (Preview)
-Azure **Private Link Service** enables you to access Azure services (for example, Azure Relay, Azure Service Bus, Azure Event Hubs, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a private endpoint in your virtual network. For more information, see [What is Azure Private Link (Preview)?](../private-link/private-link-overview.md)
+# Integrate Azure Relay with Azure Private Link
+Azure **Private Link Service** enables you to access Azure services (for example, Azure Relay, Azure Service Bus, Azure Event Hubs, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a private endpoint in your virtual network. For more information, see [What is Azure Private Link?](../private-link/private-link-overview.md)
A **private endpoint** is a network interface that allows your workloads running in a virtual network to connect privately and securely to a service that has a **private link resource** (for example, a Relay namespace). The private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute, VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network eliminating exposure from the public Internet. You can provide a level of granularity in access control by allowing connections to specific Azure Relay namespaces.
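
For automated deployments, a private endpoint for an existing Relay namespace can also be created with the Az.Network cmdlets. The following is a minimal sketch; the `namespace` group ID and all resource names are assumptions that you should verify against the steps in this article:

```azurepowershell
# Sketch: create a private endpoint that targets a Relay namespace.
$relay  = Get-AzResource -ResourceGroupName "<resource-group>" -ResourceType "Microsoft.Relay/namespaces" -Name "<namespace>"
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "<resource-group>" -Name "<vnet>"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "<subnet>" }

$connection = New-AzPrivateLinkServiceConnection -Name "relay-connection" -PrivateLinkServiceId $relay.ResourceId -GroupId "namespace"
New-AzPrivateEndpoint -ResourceGroupName "<resource-group>" -Name "relay-private-endpoint" -Location "<region>" -Subnet $subnet -PrivateLinkServiceConnection $connection
```
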
@@ -14,7 +14,7 @@ A **private endpoint** is a network interface that allows your workloads running
## Add a private endpoint using Azure portal ### Prerequisites
-To integrate an Azure Relay namespace with Azure Private Link (Preview), you'll need the following entities or permissions:
+To integrate an Azure Relay namespace with Azure Private Link, you'll need the following entities or permissions:
- An Azure Relay namespace. - An Azure virtual network.
@@ -32,7 +32,7 @@ For step-by-step instructions on creating a new Azure Relay namespace and entiti
2. In the search bar, type in **Relays**. 3. Select the **namespace** from the list to which you want to add a private endpoint. 4. Select the **Networking** tab under **Settings**.
-5. Select the **Private endpoint connections (preview)** tab at the top of the page
+5. Select the **Private endpoint connections** tab at the top of the page
6. Select the **+ Private Endpoint** button at the top of the page. ![Add private endpoint button](./media/private-link-service/add-private-endpoint-button.png)
@@ -76,7 +76,7 @@ For step-by-step instructions on creating a new Azure Relay namespace and entiti
12. On the **Private endpoint** page, you can see the status of the private endpoint connection. If you are the owner of the Relay namespace or have the manage access over it and had selected **Connect to an Azure resource in my directory** option for the **Connection method**, the endpoint connection should be **auto-approved**. If it's in the **pending** state, see the [Manage private endpoints using Azure portal](#manage-private-endpoints-using-azure-portal) section. ![Private endpoint page](./media/private-link-service/private-endpoint-page.png)
-13. Navigate back to the **Networking** page of the **namespace**, and switch to the **Private endpoint connections (preview)** tab. You should see the private endpoint that you created.
+13. Navigate back to the **Networking** page of the **namespace**, and switch to the **Private endpoint connections** tab. You should see the private endpoint that you created.
![Private endpoint created](./media/private-link-service/private-endpoint-created.png)
@@ -225,8 +225,7 @@ Aliases: <namespace-name>.servicebus.windows.net
## Limitations and Design Considerations ### Design considerations-- Private Endpoint for Azure Relay is in **public preview**. -- For pricing information, see [Azure Private Link (preview) pricing](https://azure.microsoft.com/pricing/details/private-link/).
+- For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
### Limitations - Maximum number of private endpoints per Azure Relay namespace: 64.
@@ -235,5 +234,5 @@ Aliases: <namespace-name>.servicebus.windows.net
## Next Steps -- Learn more about [Azure Private Link (Preview)](../private-link/private-link-service-overview.md)
+- Learn more about [Azure Private Link](../private-link/private-link-service-overview.md)
- Learn more about [Azure Relay](relay-what-is-it.md)
azure-relay https://docs.microsoft.com/en-us/azure/azure-relay/relay-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-faq.md
@@ -45,7 +45,6 @@ Here are three example billing scenarios for Hybrid Connections:
* You send 6 GB of data across connection B during the month. * Your total charge is $10.50. That's $5 for connection A + $5 for connection B + $0.50 (for the sixth gigabyte on connection B).
-Note that the prices used in the examples are applicable only during the Hybrid Connections preview period. Prices are subject to change upon general availability of Hybrid Connections.
### How are hours calculated for Relay?
azure-relay https://docs.microsoft.com/en-us/azure/azure-relay/service-bus-relay-security-controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/service-bus-relay-security-controls.md
@@ -15,7 +15,7 @@ This article documents the security controls built into Azure Relay.
| Security control | Yes/No | Notes | Documentation | |---|---|--|--|
-| Service endpoint support| No | | |
+| Private endpoint support| No | | |
| Network isolation and firewalling support| No | | | | Forced tunneling support| N/A | Relay is the TLS tunnel | |
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
@@ -78,7 +78,7 @@ In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters | > | --- | --- | --- | --- | > | locks | scope of assignment | 1-90 | Alphanumerics, periods, underscores, hyphens, and parenthesis.<br><br>Can't end in period. |
-> | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't include `%` and can't end with period or space. |
+> | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't include `%` and can't end with period or space. |
> | policyDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't include `%` and can't end with period or space. | > | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't include `%` and can't end with period or space. |
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-tutorial-linked-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-tutorial-linked-template.md
@@ -1,7 +1,7 @@
--- title: Tutorial - Deploy a linked template description: Learn how to deploy a linked template
-ms.date: 03/13/2020
+ms.date: 01/12/2021
ms.topic: tutorial ms.author: jgao ms.custom:
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-tutorial-local-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-tutorial-local-template.md
@@ -1,7 +1,7 @@
--- title: Tutorial - Deploy a local Azure Resource Manager template description: Learn how to deploy an Azure Resource Manager template (ARM template) from your local computer
-ms.date: 05/20/2020
+ms.date: 01/12/2021
ms.topic: tutorial ms.author: jgao ms.custom:
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/data-discovery-and-classification-overview.md
@@ -22,7 +22,7 @@ Data Discovery & Classification is built into Azure SQL Database, Azure SQL Mana
Your most sensitive data might include business, financial, healthcare, or personal information. Discovering and classifying this data can play a pivotal role in your organization's information-protection approach. It can serve as infrastructure for: - Helping to meet standards for data privacy and requirements for regulatory compliance.-- Various security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data.
+- Various security scenarios, such as monitoring (auditing) access to sensitive data.
- Controlling access to and hardening the security of databases that contain highly sensitive data. > [!NOTE]
@@ -181,4 +181,4 @@ You can use the REST API to programmatically manage classifications and recommen
## <a id="next-steps"></a>Next steps - Consider configuring [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md) for monitoring and auditing access to your classified sensitive data.-- For a presentation that includes data Discovery & Classification, see [Discovering, classifying, labeling & protecting SQL data | Data Exposed](https://www.youtube.com/watch?v=itVi9bkJUNc).\ No newline at end of file
+- For a presentation that includes data Discovery & Classification, see [Discovering, classifying, labeling & protecting SQL data | Data Exposed](https://www.youtube.com/watch?v=itVi9bkJUNc).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-jobs-powershell-create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-jobs-powershell-create.md
@@ -117,14 +117,6 @@ $db2 = New-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $targ
$db2 ```
-## Use Elastic Jobs
-
-To use Elastic Jobs, register the feature in your Azure subscription by running the following command. Run this command once for the subscription in which you intend to provision the Elastic Job agent. Subscriptions that only contain databases that are job targets don't need to be registered.
-
-```powershell
-Register-AzProviderFeature -FeatureName sqldb-JobAccounts -ProviderNamespace Microsoft.Sql
-```
- ### Create the Elastic Job agent An Elastic Job agent is an Azure resource for creating, running, and managing jobs. The agent executes jobs based on a schedule or as a one-time job.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/sql-database-vulnerability-assessment-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-vulnerability-assessment-storage.md
@@ -5,8 +5,8 @@ services: sql-database
ms.service: sql-db-mi ms.subservice: security ms.topic: how-to
-author: barmichal
-ms.author: mibar
+author: davidtrigano
+ms.author: datrigan
ms.reviewer: vanto ms.date: 12/01/2020 ---
@@ -18,11 +18,10 @@ If you are limiting access to your storage account in Azure for certain VNets or
## Enable Azure SQL Database VA scanning access to the storage account
-If you have configured your VA storage account to only be accessible by certain networks or services, you'll need to ensure that VA scans for your Azure SQL Database are able to store the scans on the storage account. To find out which storage account is being used, go to your **SQL server** pane in the [Azure portal](https://portal.azure.com), under **Security**, select **Security Center**.
+If you have configured your VA storage account to be accessible only by certain networks or services, you'll need to ensure that VA scans for your Azure SQL Database can store their results in the storage account. You can use the existing storage account, or create a new storage account to store VA scan results for all databases on your [logical SQL server](logical-servers.md).
-:::image type="content" source="../database/media/azure-defender-for-sql/va-storage.png" alt-text="set up vulnerability assessment":::
-
-You can use the existing storage account, or create a new storage account to store VA scan results for all databases on your [logical SQL server](logical-servers.md).
+> [!NOTE]
+> The vulnerability assessment service can't access storage accounts protected with firewalls or VNets if they require storage access keys.
Go to your **Resource group** that contains the storage account and access the **Storage account** pane. Under **Settings**, select **Firewall and virtual networks**.
@@ -30,6 +29,10 @@ Ensure that **Allow trusted Microsoft services access to this storage account**
:::image type="content" source="media/sql-database-vulnerability-assessment-storage/storage-allow-microsoft-services.png" alt-text="Screenshot shows Firewall and virtual networks dialog box, with Allow trusted Microsoft services to access this storage account selected.":::
+To find out which storage account is being used, go to your **SQL server** pane in the [Azure portal](https://portal.azure.com), under **Security**, select **Security Center**.
+
+:::image type="content" source="../database/media/azure-defender-for-sql/va-storage.png" alt-text="set up vulnerability assessment":::
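+
+If you prefer to configure this exception from PowerShell instead of the portal, the following is a minimal sketch (the resource names are placeholders):
+
+```azurepowershell
+# Allow trusted Microsoft services through the storage account firewall while keeping other traffic denied.
+Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "<resource-group>" -Name "<storage-account>" -Bypass AzureServices -DefaultAction Deny
+```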
+ ## Store VA scan results for Azure SQL Managed Instance in a storage account that can be accessed behind a firewall or VNet Since Managed Instance is not a trusted Microsoft Service and has a different VNet from the storage account, executing a VA scan will result in an error.
@@ -58,4 +61,4 @@ You should now be able to store your VA scans for Managed Instances in your stor
- [Vulnerability Assessment](sql-vulnerability-assessment.md) - [Create an Azure Storage account](../../storage/common/storage-account-create.md)-- [Azure Defender for SQL](azure-defender-for-sql.md)\ No newline at end of file
+- [Azure Defender for SQL](azure-defender-for-sql.md)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/availability-group-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-overview.md
@@ -61,7 +61,7 @@ To get started, see [configure a load balancer](availability-group-vnn-azure-loa
### DNN listener
-SQL Server 2019 CU8 introduces support for the distributed network name (DNN) listener. The DNN listener replaces the traditional availability group listener, negating the need for an Azure Loud Balancer to route traffic on the Azure network.
+SQL Server 2019 CU8 introduces support for the distributed network name (DNN) listener. The DNN listener replaces the traditional availability group listener, negating the need for an Azure Load Balancer to route traffic on the Azure network.
The DNN listener is the recommended HADR connectivity solution in Azure as it simplifies deployment, reduces maintenance and cost, and reduces failover time in the event of a failure.
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/ecosystem-back-up-vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/ecosystem-back-up-vms.md
@@ -2,18 +2,18 @@
title: Backup solutions for Azure VMware Solution virtual machines description: Learn about leading backup and restore solutions for your Azure VMware Solution virtual machines. ms.topic: how-to
-ms.date: 09/22/2020
+ms.date: 01/11/2021
--- # Backup solutions for Azure VMware Solution virtual machines (VMs) A key principle of Azure VMware Solution is to enable you to continue to use your investments and your favorite VMware solutions running on Azure. Independent software vendor (ISV) technology support, validated with Azure VMware Solution, is an important part of this strategy.
-Commvault, Veritas, and Veeam have leading backup and restore solutions in VMware-based environments. Customers have widely adopted these solutions for their on-premises deployments. Now these partners have extended their solutions to Azure VMware Solution, using Azure to provide a back-up repository and a storage target for long-term retention and archival.
+Our backup partners have industry-leading backup and restore solutions in VMware-based environments. Customers have widely adopted these solutions for their on-premises deployments. Now these partners have extended their solutions to Azure VMware Solution, using Azure to provide a backup repository and a storage target for long-term retention and archival.
Backup network traffic between Azure VMware Solution VMs and the backup repository in Azure travels over a high-bandwidth, low-latency link. Replication traffic across regions travels over the internal Azure backplane network, which lowers bandwidth costs for users.
-You can find more information on these back-up solutions here:
+You can find more information on these backup solutions here:
- [Commvault](https://documentation.commvault.com/11.21/essential/128997_support_for_azure_vmware_solution.html) - [Veritas](https://vrt.as/nb4avs) - [Veeam](https://www.veeam.com/kb4012)
backup https://docs.microsoft.com/en-us/azure/backup/selective-disk-backup-restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/selective-disk-backup-restore.md
@@ -184,14 +184,25 @@ When you execute these commands, you'll see `"diskExclusionProperties": null`.
Ensure you're using Azure PowerShell version 3.7.0 or higher.
+During the configure protection operation, you need to specify the disk list setting with an inclusion/exclusion parameter, giving the LUN numbers of the disks to include in or exclude from the backup.
+ ### Enable backup with PowerShell
+For example:
+
+```azurepowershell
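+# LUN numbers of the data disks to include in (or exclude from) the backup; 0 and 1 are example values.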
+$disks = ("0","1")
+$targetVault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-p-recovery_vaults" -Name "rsv-p-servers"
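+# (Optional) List the available backup protection policies before picking one by name below.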
+Get-AzRecoveryServicesBackupProtectionPolicy
+$pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "P-Servers"
+```
+ ```azurepowershell
-Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGroupName "RGName1" -InclusionDisksList[Strings] -VaultId $targetVault.ID
+Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGroupName "RGName1" -InclusionDisksList $disks -VaultId $targetVault.ID
``` ```azurepowershell
-Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGroupName "RGName1" -ExclusionDisksList[Strings] -VaultId $targetVault.ID
+Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGroupName "RGName1" -ExclusionDisksList $disks -VaultId $targetVault.ID
``` ### Backup only OS disk during configure backup with PowerShell
bastion https://docs.microsoft.com/en-us/azure/bastion/troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/troubleshoot.md
@@ -19,11 +19,7 @@ This article shows you how to troubleshoot Azure Bastion.
**Q:** When I try to create an NSG on the Azure Bastion subnet, I get the following error: *'Network security group <NSG name> does not have necessary rules for Azure Bastion Subnet AzureBastionSubnet"*.
-**A:** If you create and apply an NSG to *AzureBastionSubnet*, make sure you have added the following rules in your NSG. If you do not add these rules, the NSG creation/update will fail.
-
-1. Control plane connectivity – Inbound on 443 from GatewayManager
-2. Diagnostics logging and others – Outbound on 443 to AzureCloud (Regional tags within this service tag are not supported yet.)
-3. Target VM – Outbound for 3389 and 22 to VirtualNetwork
+**A:** If you create and apply an NSG to *AzureBastionSubnet*, make sure you have added the required rules to the NSG. For a list of required rules, see [Working with NSG access and Azure Bastion](./bastion-nsg.md). If you do not add these rules, the NSG creation/update will fail.
An example of the NSG rules is available for reference in the [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-azure-bastion-nsg). For more information, see [NSG guidance for Azure Bastion](bastion-nsg.md).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/get-started-build-detector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
@@ -100,10 +100,14 @@ After training has completed, the model's performance is calculated and displaye
![The training results show the overall precision and recall, and mean average precision.](./media/get-started-build-detector/trained-performance.png)
-### Probability Threshold
+### Probability threshold
[!INCLUDE [probability threshold](includes/probability-threshold.md)]
+### Overlap threshold
+
+The **Overlap Threshold** slider determines how closely a predicted bounding box must match the user-entered bounding box for the prediction to be counted as correct during training. It sets the minimum required overlap between the predicted object bounding box and the actual user-entered bounding box. If the bounding boxes don't overlap to at least this degree, the prediction isn't considered correct.
+ ## Manage training iterations Each time you train your detector, you create a new _iteration_ with its own updated performance metrics. You can view all of your iterations in the left pane of the **Performance** tab. In the left pane you will also find the **Delete** button, which you can use to delete an iteration if it's obsolete. When you delete an iteration, you delete any images that are uniquely associated with it.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
@@ -101,7 +101,7 @@ After training has completed, the model's performance is estimated and displayed
![The training results show the overall precision and recall, and the precision and recall for each tag in the classifier.](./media/getting-started-build-a-classifier/train03.png)
-### Probability Threshold
+### Probability threshold
[!INCLUDE [probability threshold](includes/probability-threshold.md)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/developer-reference-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/developer-reference-resource.md
@@ -4,7 +4,7 @@ description: SDKs, REST APIs, CLI, help you develop Language Understanding (LUIS
ms.service: cognitive-services ms.subservice: language-understanding ms.topic: reference
-ms.date: 05/19/2020
+ms.date: 01/12/2021
ms.custom: "devx-track-js, devx-track-csharp" ---
@@ -114,18 +114,14 @@ Importing and exporting these formats is available from the APIs and from the LU
The bot framework is available as [an SDK](https://github.com/Microsoft/botframework) in a variety of languages and as a service using [Azure Bot Service](https://dev.botframework.com/). Bot framework provides [several tools](https://github.com/microsoft/botbuilder-tools) to help with Language Understanding, including:-
-* [LUDown](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown) - Build LUIS language understanding models using markdown files
-* [LUIS CLI](https://github.com/microsoft/botbuilder-tools/blob/master/packages/LUIS) - Create and manage your LUIS.ai applications
-* [Dispatch](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Dispatch)- manage parent and child apps
-* [LUISGen](https://github.com/microsoft/botbuilder-tools/blob/master/packages/LUISGen) - Auto generate backing C#/Typescript classes for your LUIS intents and entities.
* [Bot Framework emulator](https://github.com/Microsoft/BotFramework-Emulator/releases) - a desktop application that allows bot developers to test and debug bots built using the Bot Framework SDK * [Bot Framework Composer](https://github.com/microsoft/BotFramework-Composer/blob/stable/README.md) - an integrated development tool for developers and multi-disciplinary teams to build bots and conversational experiences with the Microsoft Bot Framework * [Bot Framework Samples](https://github.com/microsoft/botbuilder-samples) - in C#, JavaScript, TypeScript, and Python+ ## Next steps * Learn about the common [HTTP error codes](luis-reference-response-codes.md) * [Reference documentation](../../index.yml) for all APIs and SDKs * [Bot framework](https://github.com/Microsoft/botbuilder-dotnet) and [Azure Bot Service](https://dev.botframework.com/) * [LUDown](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown)
-* [Cognitive Containers](../cognitive-services-container-support.md)
\ No newline at end of file
+* [Cognitive Containers](../cognitive-services-container-support.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/How-To/add-sharepoint-datasources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/add-sharepoint-datasources.md
@@ -122,12 +122,16 @@ The Active Directory manager will get a pop-up window requesting permissions to
-<!--
## Add SharePoint data source with APIs
-You need to get the SharePoint file's URI before adding it to QnA Maker.
+There is a workaround to add the latest SharePoint content via the API by using Azure Blob storage. The steps are as follows (a PowerShell sketch follows this list):
+1. Download the SharePoint files locally. The user calling the API needs to have access to SharePoint.
+1. Upload them to Azure Blob storage. This creates secure shared access by [using a SAS token](https://docs.microsoft.com/azure/storage/common/storage-sas-overview#how-a-shared-access-signature-works).
+1. Pass the blob URL generated with the SAS token to the QnA Maker API. To allow question-and-answer extraction from the files, append the file type suffix, such as `&ext=pdf` or `&ext=doc`, to the end of the URL before passing it to the QnA Maker API.
+
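+The following PowerShell sketch illustrates steps 2 and 3. The storage account, container, file names, and expiry are placeholders; `$fileUrl` is what you would pass to the QnA Maker API:
+
+```azurepowershell
+# Upload a file that was downloaded from SharePoint, then build a read-only SAS URL for QnA Maker.
+$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
+Set-AzStorageBlobContent -File ".\faq.pdf" -Container "qnamaker-files" -Blob "faq.pdf" -Context $ctx
+
+$sasUrl = New-AzStorageBlobSASToken -Container "qnamaker-files" -Blob "faq.pdf" -Permission r -ExpiryTime (Get-Date).AddHours(4) -FullUri -Context $ctx
+
+# Append the file-type suffix before passing the URL to the QnA Maker API.
+$fileUrl = "$sasUrl&ext=pdf"
+```
+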
+<!--
## Get SharePoint File URI Use the following steps to transform the SharePoint URL into a sharing token.
@@ -183,4 +187,4 @@ Use the **@microsoft.graph.downloadUrl** from the previous section as the `fileu
## Next steps > [!div class="nextstepaction"]
-> [Collaborate on your knowledge base](../index.yml)
\ No newline at end of file
+> [Collaborate on your knowledge base](../index.yml)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions.md
@@ -17,7 +17,10 @@ ms.author: erhopf
If you're looking to improve recognition accuracy, especially for issues that are caused when words are deleted or incorrectly substituted, you'll want to use human-labeled transcriptions along with your audio data. What are human-labeled transcriptions? That's easy: they're word-by-word, verbatim transcriptions of an audio file.
-A large sample of transcription data is required to improve recognition, we suggest providing between 10 and 1,000 hours of transcription data. On this page, we'll review guidelines designed to help you create high-quality transcriptions. This guide is broken up by locale, with sections for US English, Mandarin Chinese, and German.
+A large sample of transcription data is required to improve recognition; we suggest providing between 10 and 20 hours of transcription data. On this page, we'll review guidelines designed to help you create high-quality transcriptions. This guide is broken up by locale, with sections for US English, Mandarin Chinese, and German.
+
+> [!NOTE]
+> Not all base models support customization with audio files. If a base model does not support it, training will just use the text of the transcriptions in the same way as related text is used.
## US English (en-US)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/speech-to-text-basics/speech-to-text-basics-go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/speech-to-text-basics/speech-to-text-basics-go.md
@@ -94,6 +94,20 @@ func main() {
} ```
+Run the following commands to create a go.mod file that links to components hosted on GitHub.
+
+```cmd
+go mod init quickstart
+go get github.com/Microsoft/cognitive-services-speech-sdk-go
+```
+
+Now build and run the code.
+
+```cmd
+go build
+go run quickstart
+```
+ See the reference docs for detailed information on the [`SpeechConfig`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.14.0/speech#SpeechConfig) and [`SpeechRecognizer`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.14.0/speech#SpeechRecognizer) classes. ## Speech-to-text from audio file
@@ -160,4 +174,18 @@ func main() {
} ```
+Run the following commands to create a go.mod file that links to components hosted on GitHub.
+
+```cmd
+go mod init quickstart
+go get github.com/Microsoft/cognitive-services-speech-sdk-go
+```
+
+Now build and run the code.
+
+```cmd
+go build
+go run quickstart
+```
+ See the reference docs for detailed information on the [`SpeechConfig`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.14.0/speech#SpeechConfig) and [`SpeechRecognizer`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.14.0/speech#SpeechRecognizer) classes.\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/concept-business-cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-business-cards.md
@@ -13,32 +13,36 @@ ms.date: 08/17/2019
ms.author: pafarley ---
-# Business card concepts
+# Form Recognizer prebuilt business cards model
-Azure Form Recognizer can analyze and extract contact information from business cards using one of its prebuilt models. The Business Card API combines powerful Optical Character Recognition (OCR) capabilities with our Business Card Understanding model to extract key information from business cards in English. It extracts personal contact info, company name, job title, and more. The Prebuilt Business Card API is publicly available in the Form Recognizer v2.1 preview.
+Azure Form Recognizer can analyze and extract contact information from business cards using its prebuilt business cards model. It combines powerful Optical Character Recognition (OCR) capabilities with our business card understanding model to extract key information from business cards in English. It extracts personal contact info, company name, job title, and more. The Prebuilt Business Card API is publicly available in the Form Recognizer v2.1 preview.
-## What does the Business Card API do?
+## What does the Business Card service do?
+
+The prebuilt Business Card API extracts key fields from business cards and returns them in an organized JSON response.
+
+![Contoso itemized image from FOTT + JSON output](./media/business-card-example.jpg)
-The Business Card API extracts key fields from business cards and returns them in an organized JSON response.
-![Contoso itemized image from FOTT + JSON output](./media/business-card-english.jpg)
### Fields extracted:
-* Contact names
- * First names
- * Last names
-* Company names
-* Departments
-* Job titles
-* Emails
-* Websites
-* Addresses
-* Phone numbers
- * Mobile phones
- * Faxes
- * Work phones
- * Other phones
+|Name| Type | Description | Text |
+|:-----|:----|:----|:----|
+| ContactNames | array of objects | Contact name extracted from business card | [{ "FirstName": "John", "LastName": "Doe" }] |
+| FirstName | string | First (given) name of contact | "John" |
+| LastName | string | Last (family) name of contact | "Doe" |
+| CompanyNames | array of strings | Company name extracted from business card | ["Contoso"] |
+| Departments | array of strings | Department or organization of contact | ["R&D"] |
+| JobTitles | array of strings | Listed Job title of contact | ["Software Engineer"] |
+| Emails | array of strings | Contact email extracted from business card | ["johndoe@contoso.com"] |
+| Websites | array of strings | Website extracted from business card | ["https://www.contoso.com"] |
+| Addresses | array of strings | Address extracted from business card | ["123 Main Street, Redmond, WA 98052"] |
+| MobilePhones | array of phone numbers | Mobile phone number extracted from business card | ["+19876543210"] |
+| Faxes | array of phone numbers | Fax phone number extracted from business card | ["+19876543211"] |
+| WorkPhones | array of phone numbers | Work phone number extracted from business card | ["+19876543231"] |
+| OtherPhones | array of phone numbers | Other phone number extracted from business card | ["+19876543233"] |
+ The Business Card API can also return all recognized text from the Business Card. This OCR output is included in the JSON response.
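+
+As a rough illustration of calling the prebuilt business card API, the following PowerShell sketch uses the asynchronous analyze-then-poll pattern. The endpoint path and api-version are taken from the v2.1 preview REST reference (see the REST API reference link in this article) and should be verified there; the resource name, key, and image URL are placeholders:
+
+```azurepowershell
+# Submit a business card image by URL, then poll the Operation-Location URL for the result.
+$endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+$key      = "<your-key>"
+$body     = @{ source = "https://<public-url-to-business-card>.jpg" } | ConvertTo-Json
+
+$submit = Invoke-WebRequest -Method Post -Uri "$endpoint/formrecognizer/v2.1-preview.2/prebuilt/businessCard/analyze" -Headers @{ "Ocp-Apim-Subscription-Key" = $key } -ContentType "application/json" -Body $body
+$resultUrl = $submit.Headers["Operation-Location"] | Select-Object -First 1
+
+do {
+    Start-Sleep -Seconds 2
+    $result = Invoke-RestMethod -Uri $resultUrl -Headers @{ "Ocp-Apim-Subscription-Key" = $key }
+} while ($result.status -in @("notStarted", "running"))
+
+# The extracted fields (ContactNames, CompanyNames, Emails, and so on) are under documentResults.
+$result.analyzeResult.documentResults.fields
+```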
@@ -391,4 +395,4 @@ The Business Card API also powers the [AI Builder Business Card Processing featu
## See also * [What is Form Recognizer?](./overview.md)
-* [REST API reference docs](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeBusinessCardAsync)
\ No newline at end of file
+* [REST API reference docs](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeBusinessCardAsync)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/concept-receipts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-receipts.md
@@ -13,9 +13,9 @@ ms.date: 08/17/2019
ms.author: pafarley ---
-# Receipt concepts
+# Form Recognizer prebuilt receipt model
-Azure Form Recognizer can analyze receipts using one of its prebuilt models. The Receipt API extracts key information from sales receipts in English, such as merchant name, transaction date, transaction total, line items, and more.
+Azure Form Recognizer can analyze and extract information from sales receipts using its prebuilt receipt model. It combines our powerful [Optical Character Recognition (OCR)](https://docs.microsoft.com/azure/cognitive-services/computer-vision/concept-recognizing-text) capabilities with deep learning models for receipt understanding to extract key information from sales receipts in English, such as merchant name, transaction date, transaction total, line items, and more.
## Understanding Receipts
@@ -23,32 +23,39 @@ Many businesses and individuals still rely on manually extracting data from thei
Automatically extracting data from these Receipts can be complicated. Receipts may be crumpled and hard to read, printed or handwritten parts and smartphone images of receipts may be low quality. Also, receipt templates and fields can vary greatly by market, region, and merchant. These challenges in both data extraction and field detection make receipt processing a unique problem.
-Using Optical Character Recognition (OCR) and our prebuilt receipt model, the Receipt API enables these receipt processing scenarios and extract data from the receipts e.g merchant name, tip, total, line items and more. With this API there is no need to train a model you just send the receipt to the Analyze Receipt API and the data is extracted.
+Using Optical Character Recognition (OCR) and our prebuilt receipt model, the Receipt API enables these receipt processing scenarios and extracts data from receipts, such as merchant name, tip, total, line items, and more. With this API, there is no need to train a model; just send the receipt image to the Analyze Receipt API and the data is extracted.
-![sample receipt](./media/contoso-receipt-small.png)
+![sample receipt](./media/receipts-example.jpg)
-## What does the Receipt API do?
-The prebuilt Receipt API extracts the contents of sales receipts&mdash;the type of receipt you would commonly get at a restaurant, retailer, or grocery store.
+## What does the Receipt service do?
+
+The prebuilt Receipt service extracts the contents of sales receipts&mdash;the type of receipt you would commonly get at a restaurant, retailer, or grocery store.
### Fields extracted
-* Merchant Name
-* Merchant Address
-* Merchant Phone Number
-* Transaction Date
-* Transaction Time
-* Subtotal
-* Tax
-* Total
-* Tip
-* Line-item extraction (for example item quantity, item price, item name)
+|Name| Type | Description | Text | Value (standardized output) |
+|:-----|:----|:----|:----| :----|
+| ReceiptType | string | Type of sales receipt | Itemized | |
+| MerchantName | string | Name of the merchant issuing the receipt | Contoso | |
+| MerchantPhoneNumber | phoneNumber | Listed phone number of merchant | 987-654-3210 | +19876543210 |
+| MerchantAddress | string | Listed address of merchant | 123 Main St Redmond WA 98052 | |
+| TransactionDate | date | Date the receipt was issued | June 06, 2019 | 2019-06-26 |
+| TransactionTime | time | Time the receipt was issued | 4:49 PM | 16:49:00 |
+| Total | number | Full transaction total of receipt | $14.34 | 14.34 |
+| Subtotal | number | Subtotal of receipt, often before taxes are applied | $12.34 | 12.34 |
+| Tax | number | Tax on receipt, often sales tax or equivalent | $2.00 | 2.00 |
+| Tip | number | Tip included by buyer | $1.00 | 1.00 |
+| Items | array of objects | Extracted line items, with name, quantity, unit price, and total price extracted | | |
+| Name | string | Item name | Surface Pro 6 | |
+| Quantity | number | Quantity of each item | 1 | |
+| Price | number | Individual price of each item unit | $999.00 | 999.00 |
+| Total Price | number | Total price of line item | $999.00 | 999.00 |
### Additional features The Receipt API also returns the following information:
-* Receipt Type (such as itemized, credit card, and so on)
* Field confidence level (each field returns an associated confidence value) * OCR raw text (OCR-extracted text output for the entire receipt) * Bounding box for each value, line and word
@@ -462,4 +469,4 @@ The Receipt API also powers the [AI Builder Receipt Processing feature](/ai-buil
## See also * [What is Form Recognizer?](./overview.md)
-* [REST API reference docs](./index.yml)
\ No newline at end of file
+* [REST API reference docs](./index.yml)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/csharp-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/csharp-sdk.md
@@ -108,8 +108,8 @@ With Form Recognizer, you can create two different client types. The first, `For
`FormRecognizerClient` provides operations for: - Recognizing form fields and content, using custom models trained to recognize your custom forms. These values are returned in a collection of `RecognizedForm` objects. See example [Analyze custom forms](#analyze-forms-with-a-custom-model).
- - Recognizing form content, including tables, lines and words, without the need to train a model. Form content is returned in a collection of `FormPage` objects. See example [Recognize form content](#recognize-form-content).
- - Recognizing common fields from US receipts, using a pre-trained receipt model on the Form Recognizer service. These fields and meta-data are returned in a collection of `RecognizedForm` objects. See example [Recognize receipts](#recognize-receipts).
+ - Recognizing form content, including tables, lines and words, without the need to train a model. Form content is returned in a collection of `FormPage` objects. See example [Analyze layout](#analyze-layout).
+ - Recognizing common fields from US receipts, using a pre-trained receipt model on the Form Recognizer service. These fields and meta-data are returned in a collection of `RecognizedForm` objects. See example [Analyze receipts](#analyze-receipts).
### FormTrainingClient
@@ -132,8 +132,8 @@ These code snippets show you how to do the following tasks with the Form Recogni
#### [version 2.0](#tab/ga) * [Authenticate the client](#authenticate-the-client)
-* [Recognize form content](#recognize-form-content)
-* [Recognize receipts](#recognize-receipts)
+* [Analyze layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
* [Train a custom model](#train-a-custom-model) * [Analyze forms with a custom model](#analyze-forms-with-a-custom-model) * [Manage your custom models](#manage-your-custom-models)
@@ -141,10 +141,10 @@ These code snippets show you how to do the following tasks with the Form Recogni
#### [version 2.1 preview](#tab/preview) * [Authenticate the client](#authenticate-the-client)
-* [Recognize form content](#recognize-form-content)
-* [Recognize receipts](#recognize-receipts)
-* [Recognize business cards](#recognize-business-cards)
-* [Recognize invoices](#recognize-invoices)
+* [Analyze layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
+* [Analyze business cards](#analyze-business-cards)
+* [Analyze invoices](#analyze-invoices)
* [Train a custom model](#train-a-custom-model) * [Analyze forms with a custom model](#analyze-forms-with-a-custom-model) * [Manage your custom models](#manage-your-custom-models)
@@ -184,7 +184,7 @@ You'll also need to add references to the URLs for your training and testing dat
---
-## Recognize form content
+## Analyze layout
You can use Form Recognizer to recognize tables, lines, and words in documents, without needing to train a model. The returned value is a collection of **FormPage** objects: one for each page in the submitted document.
@@ -234,7 +234,7 @@ Table 0 has 2 rows and 6 columns.
Cell (1, 5) contains text: 'PT'. ```
-## Recognize receipts
+## Analyze receipts
This section demonstrates how to recognize and extract common fields from US receipts, using a pre-trained receipt model.
@@ -293,7 +293,7 @@ Item:
Total: '1203.39', with confidence '0.774' ```
-## Recognize business cards
+## Analyze business cards
#### [version 2.0](#tab/ga)
@@ -318,7 +318,7 @@ The returned value is a collection of `RecognizedForm` objects: one for each car
---
-## Recognize invoices
+## Analyze invoices
#### [version 2.0](#tab/ga)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/java-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/java-sdk.md
@@ -151,8 +151,8 @@ With Form Recognizer, you can create two different client types. The first, `For
`FormRecognizerClient` provides operations for: - Recognizing form fields and content, using custom models trained to recognize your custom forms. These values are returned in a collection of `RecognizedForm` objects. See example [Analyze custom forms](#analyze-forms-with-a-custom-model).-- Recognizing form content, including tables, lines and words, without the need to train a model. Form content is returned in a collection of `FormPage` objects. See example [Recognize form content](#recognize-form-content).-- Recognizing common fields from US receipts, using a pre-trained receipt model on the Form Recognizer service. These fields and meta-data are returned in a collection of `RecognizedForm` objects. See example [Recognize receipts](#recognize-receipts).
+- Recognizing form content, including tables, lines and words, without the need to train a model. Form content is returned in a collection of `FormPage` objects. See example [Analyze layout](#analyze-layout).
+- Recognizing common fields from US receipts, using a pre-trained receipt model on the Form Recognizer service. These fields and meta-data are returned in a collection of `RecognizedForm` objects. See example [Analyze receipts](#analyze-receipts).
### FormTrainingClient
@@ -172,17 +172,17 @@ These code snippets show you how to do the following tasks with the Form Recogni
#### [version 2.0](#tab/ga) * [Authenticate the client](#authenticate-the-client)
-* [Recognize form content](#recognize-form-content)
-* [Recognize receipts](#recognize-receipts)
+* [Analyze layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
* [Train a custom model](#train-a-custom-model) * [Analyze forms with a custom model](#analyze-forms-with-a-custom-model) * [Manage your custom models](#manage-your-custom-models) #### [version 2.1 preview](#tab/preview) * [Authenticate the client](#authenticate-the-client)
-* [Recognize form content](#recognize-form-content)
-* [Recognize receipts](#recognize-receipts)
-* [Recognize business cards](#recognize-business-cards)
-* [Recognize invoices](#recognize-invoices)
+* [Analyze layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
+* [Analyze business cards](#analyze-business-cards)
+* [Analyze invoices](#analyze-invoices)
* [Train a custom model](#train-a-custom-model) * [Analyze forms with a custom model](#analyze-forms-with-a-custom-model) * [Manage your custom models](#manage-your-custom-models)
@@ -195,7 +195,7 @@ At the top of your **main** method, add the following code. Here, you'll authent
[!code-java[](~/cognitive-services-quickstart-code/java/FormRecognizer/FormRecognizer.java?name=snippet_auth)]
-## Recognize form content
+## Analyze layout
You can use Form Recognizer to recognize tables, lines, and words in documents, without needing to train a model.
@@ -228,7 +228,7 @@ Cell has text $89,024.34.
Cell has text ET. ```
-## Recognize receipts
+## Analyze receipts
This section demonstrates how to recognize and extract common fields from US receipts, using a pre-trained receipt model.
@@ -264,7 +264,7 @@ Quantity: null, confidence: 0.927s]
Total Price: null, confidence: 0.93 ```
-## Recognize business cards
+## Analyze business cards
#### [version 2.0](#tab/ga)
@@ -288,7 +288,7 @@ The returned value is a collection of **RecognizedForm** objects: one for each c
---
-## Recognize invoices
+## Analyze invoices
#### [version 2.0](#tab/ga)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/javascript-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/javascript-sdk.md
@@ -97,8 +97,8 @@ With Form Recognizer, you can create two different client types. The first, `For
These code snippets show you how to do the following tasks with the Form Recognizer client library for JavaScript: * [Authenticate the client](#authenticate-the-client)
-* [Recognize form content](#recognize-form-content)
-* [Recognize receipts](#recognize-receipts)
+* [Analyze layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
* [Train a custom model](#train-a-custom-model) * [Analyze forms with a custom model](#analyze-forms-with-a-custom-model) * [Manage your custom models](#manage-your-custom-models)
@@ -121,7 +121,7 @@ You'll also need to add references to the URLs for your training and testing dat
* Use the sample form and receipt images included in the samples below (also available on [GitHub](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/formrecognizer/ai-form-recognizer/test-assets)), or use the above steps to get the SAS URL of an individual document in blob storage.
-## Recognize form content
+## Analyze layout
You can use Form Recognizer to recognize tables, lines, and words in documents, without needing to train a model. To recognize the content of a file at a given URI, use the `beginRecognizeContentFromUrl` method.
@@ -147,7 +147,7 @@ cell [1,3] has text $56,651.49
cell [1,5] has text PT ```
-## Recognize receipts
+## Analyze receipts
This section demonstrates how to recognize and extract common fields from US receipts, using a pre-trained receipt model.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/python-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/python-sdk.md
@@ -95,8 +95,8 @@ These code snippets show you how to do the following tasks with the Form Recogni
#### [version 2.0](#tab/ga) * [Authenticate the client](#authenticate-the-client)
-* [Recognize form content](#recognize-form-content)
-* [Recognize receipts](#recognize-receipts)
+* [Analyze layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
* [Train a custom model](#train-a-custom-model) * [Analyze forms with a custom model](#analyze-forms-with-a-custom-model) * [Manage your custom models](#manage-your-custom-models)
@@ -104,10 +104,10 @@ These code snippets show you how to do the following tasks with the Form Recogni
#### [version 2.1 preview](#tab/preview) * [Authenticate the client](#authenticate-the-client)
-* [Recognize form content](#recognize-form-content)
-* [Recognize receipts](#recognize-receipts)
-* [Recognize business cards](#recognize-business-cards)
-* [Recognize invoices](#recognize-invoices)
+* [Analyze layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
+* [Analyze business cards](#analyze-business-cards)
+* [Analyze invoices](#analyze-invoices)
* [Train a custom model](#train-a-custom-model) * [Analyze forms with a custom model](#analyze-forms-with-a-custom-model) * [Manage your custom models](#manage-your-custom-models)
@@ -132,7 +132,7 @@ You'll need to add references to the URLs for your training and testing data.
> [!NOTE] > The code snippets in this guide use remote forms accessed by URLs. If you want to process local form documents instead, see the related methods in the [reference documentation](/python/api/azure-ai-formrecognizer) and [samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/formrecognizer/azure-ai-formrecognizer/samples).
-## Recognize form content
+## Analyze layout
You can use Form Recognizer to recognize tables, lines, and words in documents, without needing to train a model.
@@ -166,7 +166,7 @@ Confidence score: 1.0
```
-## Recognize receipts
+## Analyze receipts
This section demonstrates how to recognize and extract common fields from US receipts, using a pre-trained receipt model. To recognize receipts from a URL, use the `begin_recognize_receipts_from_url` method.
@@ -198,7 +198,7 @@ Total: 1203.39 has confidence 0.774
```
-## Recognize business cards
+## Analyze business cards
#### [version 2.0](#tab/ga)
@@ -216,7 +216,7 @@ This section demonstrates how to recognize and extract common fields from Englis
---
-## Recognize invoices
+## Analyze invoices
#### [version 2.0](#tab/ga)
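For a concrete feel of the Python methods referenced above (`begin_recognize_content_from_url` for layout and `begin_recognize_receipts_from_url` for receipts), here is a minimal sketch assuming the `azure-ai-formrecognizer` package and placeholder endpoint, key, and document URLs.

```python
# Minimal sketch of the layout and receipt calls named above. Assumes the
# azure-ai-formrecognizer package is installed and that the endpoint, key,
# and document URLs below are replaced with real values.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"  # placeholder
key = "<your-form-recognizer-key>"                                      # placeholder
client = FormRecognizerClient(endpoint, AzureKeyCredential(key))

# Analyze layout: tables, lines, and words, with no trained model required.
layout_poller = client.begin_recognize_content_from_url("<form-document-url>")
for page in layout_poller.result():
    print(f"Page has {len(page.tables)} table(s) and {len(page.lines)} line(s).")

# Analyze a receipt with the pre-built receipt model.
receipt_poller = client.begin_recognize_receipts_from_url("<receipt-image-url>")
for receipt in receipt_poller.result():
    total = receipt.fields.get("Total")
    if total:
        print(f"Total: {total.value} (confidence {total.confidence})")
```

The training-related tasks in the lists above go through the separate `FormTrainingClient` in the same package.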
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/rest-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/rest-api.md
@@ -27,7 +27,7 @@ ms.author: pafarley
* A URL for an image of an invoice. You can use a [sample document](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/forms/Invoice_1.pdf) for this quickstart.
-## Recognize form content
+## Analyze layout
You can use Form Recognizer to recognize and extract tables, lines, and words in documents, without needing to train a model. Before you run the command, make these changes:
@@ -314,7 +314,7 @@ See the following invoice image and its corresponding JSON output. The output ha
---
-## Recognize receipts
+## Analyze receipts
To start analyzing a receipt, call the **[Analyze Receipt](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeReceiptAsync)** API using the cURL command below. Before you run the command, make these changes:
@@ -694,7 +694,7 @@ The `"readResults"` node contains all of the recognized text (if you set the opt
} ```
-## Recognize business cards
+## Analyze business cards
# [v2.0](#tab/v2-0)
@@ -856,7 +856,7 @@ The script will print responses to the console until the **Analyze Business Card
---
-## Recognize invoices
+## Analyze invoices
# [version 2.0](#tab/v2-0)
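For the REST calls described above, the pattern is an asynchronous POST followed by polling the `Operation-Location` URL. The following is a hedged Python sketch using `requests`; the v2.1-preview.2 path, header, and body shape are assumptions drawn from the API links above and should be checked against the current API reference.

```python
# Hedged sketch of the analyze-then-poll pattern used by the Form Recognizer REST
# API. The v2.1-preview.2 path, header, and body shape are assumptions; verify
# them against the current API reference.
import time

import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
key = "<your-form-recognizer-key>"                                     # placeholder

# Start layout analysis for a document at a publicly reachable URL.
start = requests.post(
    f"{endpoint}/formrecognizer/v2.1-preview.2/layout/analyze",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"source": "<form-document-url>"},
)
start.raise_for_status()
result_url = start.headers["Operation-Location"]  # URL to poll for the result

# Poll until the operation finishes, then print the final status.
while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result.get("status") in ("succeeded", "failed"):
        break
    time.sleep(1)
print(result.get("status"))
```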
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/client-library.md
@@ -1,7 +1,7 @@
---
-title: "Quickstart: Form Recognizer client library"
+title: "Quickstart: Form Recognizer client library or REST API"
titleSuffix: Azure Cognitive Services
-description: Use the Form Recognizer client library to create a forms processing app that extracts key/value pairs and table data from your custom documents.
+description: Use the Form Recognizer client library or REST API to create a forms processing app that extracts key/value pairs and table data from your custom documents.
services: cognitive-services author: PatrickFarley manager: nitinme
@@ -16,16 +16,16 @@ ms.custom: "devx-track-js, devx-track-csharp, cog-serv-seo-aug-2020"
keywords: forms processing, automated data processing ---
-# Quickstart: Use the Form Recognizer client library
+# Quickstart: Use the Form Recognizer client library or REST API
-Get started with the Form Recognizer using the language of your choice. Azure Form Recognizer is a cognitive service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs and table data from your form documents&mdash;the service outputs structured data that includes the relationships in the original file. Follow these steps to install the SDK package and try out the example code for basic tasks. The Form Recognizer client library currently targets v2.0 of the Form Recognizer service.
+Get started with the Form Recognizer using the language of your choice. Azure Form Recognizer is a cognitive service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, table data and more from your form documents&mdash;the service outputs structured data that includes the relationships in the original file. You can use Form Recognizer via the REST API or SDK. Follow these steps to install the SDK package and try out the example code for basic tasks.
-Use the Form Recognizer client library to:
+Use Form Recognizer to:
-* [Recognize form content](#recognize-form-content)
-* [Recognize receipts](#recognize-receipts)
-* [Recognize business cards](#recognize-business-cards)
-* [Recognize invoices](#recognize-invoices)
+* [Analyze layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
+* [Analyze business cards](#analyze-business-cards)
+* [Analyze invoices](#analyze-invoices)
* [Train a custom model](#train-a-custom-model) * [Analyze forms with a custom model](#analyze-forms-with-a-custom-model) * [Manage your custom models](#manage-your-custom-models)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/includes/entity-types/health-entities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/includes/entity-types/health-entities.md
@@ -18,15 +18,15 @@ ms.author: aahi
| Category | Description | |---------|---------|
-| ANATOMY | concepts that capture information about body and anatomic systems, sites, locations or regions. |
- | DEMOGRAPHICS | concepts that capture information about gender and age. |
- | EXAMINATION | concepts that capture information about diagnostic procedures and tests. |
- | GENOMICS | concepts that capture information about genes and variants. |
- | HEALTHCARE | concepts that capture information about administrative events, care environments and healthcare professions. |
- | MEDICAL CONDITION | concepts that capture information about diagnoses, symptoms or signs. |
- | MEDICATION | concepts that capture information about medication including medication names, classes, dosage and route of administration. |
- | SOCIAL | concepts that capture information about medically relevant social aspects such as family relation. |
- | TREATMENT | concepts that capture information about therapeutic procedures. |
+| [ANATOMY](#anatomy) | concepts that capture information about body and anatomic systems, sites, locations or regions. |
+ | [DEMOGRAPHICS](#demographics) | concepts that capture information about gender and age. |
+ | [EXAMINATION](#examinations) | concepts that capture information about diagnostic procedures and tests. |
+ | [GENOMICS](#genomics) | concepts that capture information about genes and variants. |
+ | [HEALTHCARE](#healthcare) | concepts that capture information about administrative events, care environments and healthcare professions. |
+ | [MEDICAL CONDITION](#medical-condition) | concepts that capture information about diagnoses, symptoms or signs. |
+ | [MEDICATION](#medication) | concepts that capture information about medication including medication names, classes, dosage and route of administration. |
+ | [SOCIAL](#social) | concepts that capture information about medically relevant social aspects such as family relation. |
+ | [TREATMENT](#treatment) | concepts that capture information about therapeutic procedures. |
Each category may include two concept groups:
@@ -262,7 +262,15 @@ Additionally, the service recognizes relations between the different concepts in
+ **FREQUENCY_OF_MEDICATION** + **ROUTE_OF_MEDICATION** + **TIME_OF_MEDICATION**
-
+
+## Social
+
+### Entities
+
+**FAMILY_RELATION** – Mentions of family relatives of the subject. For example, father, daughter, siblings, parents.
+
+:::image type="content" source="../../media/ta-for-health/family-relation.png" alt-text="Screenshot shows an example of a family relation entity.":::
+ ## Treatment ### Entities
@@ -289,17 +297,8 @@ Additionally, the service recognizes relations between the different concepts in
:::image type="content" source="../../media/ta-for-health/treatment-time.png" alt-text="Screenshot shows an example of a treatment time attribute."::: - ### Supported relations + **DIRECTION_OF_TREATMENT** + **TIME_OF_TREATMENT** + **FREQUENCY_OF_TREATMENT**-
-## Social
-
-### Entities
-
-**FAMILY_RELATION** – Mentions of family relatives of the subject. For example, father, daughter, siblings, parents.
-
-:::image type="content" source="../../media/ta-for-health/family-relation.png" alt-text="Screenshot shows another example of a treatment time attribute.":::
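As a rough illustration of how the categories listed above surface to client code, here is a hedged Python sketch that prints each detected entity's category; it assumes an `azure-ai-textanalytics` version that exposes `begin_analyze_healthcare_entities`, so treat the method name as an assumption for your SDK version.

```python
# Hedged sketch: print the category assigned to each Text Analytics for health
# entity (ANATOMY, MEDICATION, SOCIAL, and so on). Assumes an azure-ai-textanalytics
# version that exposes begin_analyze_healthcare_entities; check your SDK version.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource-name>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-text-analytics-key>"),            # placeholder
)

documents = ["The patient's father has diabetes and takes 50 mg of metformin daily."]
poller = client.begin_analyze_healthcare_entities(documents)

for doc in poller.result():
    if not doc.is_error:
        for entity in doc.entities:
            print(f"{entity.text} -> {entity.category} ({entity.confidence_score})")
```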
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/includes/quickstarts/csharp-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/includes/quickstarts/csharp-sdk.md
@@ -263,7 +263,7 @@ static void SentimentAnalysisWithOpinionMiningExample(TextAnalyticsClient client
AnalyzeSentimentResultCollection reviews = client.AnalyzeSentimentBatch(documents, options: new AnalyzeSentimentOptions() {
- AdditionalSentimentAnalyses = AdditionalSentimentAnalyses.OpinionMining
+ IncludeOpinionMining = true
}); foreach (AnalyzeSentimentResult review in reviews)
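The C# change above switches to the `IncludeOpinionMining` option. For comparison, a minimal Python sketch of the same opinion-mining request follows; it assumes an `azure-ai-textanalytics` version where the analogous flag is named `show_opinion_mining`, so verify the parameter name for your SDK version.

```python
# Hedged sketch of the equivalent opinion-mining request in Python. Assumes an
# azure-ai-textanalytics version whose analyze_sentiment call accepts
# show_opinion_mining; verify the flag name against your SDK version.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource-name>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-text-analytics-key>"),            # placeholder
)

documents = ["The food and service were unacceptable, but the concierge was nice."]
results = client.analyze_sentiment(documents, show_opinion_mining=True)

for review in results:
    if not review.is_error:
        print(f"Document sentiment: {review.sentiment}")
        for sentence in review.sentences:
            print(f"  '{sentence.text}' -> {sentence.sentiment}, "
                  f"{len(sentence.mined_opinions)} mined opinion(s)")
```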
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/language-support.md
@@ -80,11 +80,9 @@ ms.author: aahi
#### [Key phrase extraction](#tab/key-phrase-extraction)
-> [!NOTE]
-> Model versions of Key Phrase Extraction prior to 2020-07-01 have a 64 character limit. This limit is not present in later model versions.
- | Language | Language code | v2 support | v3 support | Available starting with v3 model version: | Notes | |:----------------------|:-------------:|:----------:|:----------:|:-----------------------------------------:|:------------------:|
+| Danish | `da` | ✓ | ✓ | 2019-10-01 | |
| Dutch                 |     `nl`      |     ✓      |     ✓      |                2019-10-01                 |                    | | English               |     `en`      |     ✓      |     ✓      |                2019-10-01                 |                    | | Finnish               |     `fi`      |     ✓      |     ✓      |                2019-10-01                 |                    |
container-registry https://docs.microsoft.com/en-us/azure/container-registry/zone-redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/zone-redundancy.md
@@ -1,15 +1,15 @@
--- title: Zone-redundant registry for high availability
-description: Learn about enabling zone redundancy in Azure Container Registry by creating a container registry or replication in an Azure availability zone. Zone redundancy is a feature of the Premium service tier.
+description: Learn about enabling zone redundancy in Azure Container Registry. Create a container registry or replication in an Azure availability zone. Zone redundancy is a feature of the Premium service tier.
ms.topic: article
-ms.date: 12/11/2020
+ms.date: 01/07/2021
--- # Enable zone redundancy in Azure Container Registry for resiliency and high availability In addition to [geo-replication](container-registry-geo-replication.md), which replicates registry data across one or more Azure regions to provide availability and reduce latency for regional operations, Azure Container Registry supports optional *zone redundancy*. [Zone redundancy](../availability-zones/az-overview.md#availability-zones) provides resiliency and high availability to a registry or replication resource (replica) in a specific region.
-This article shows how to set up a zone-redundant container registry or zone-redundant replica by using the Azure portal or an Azure Resource Manager template.
+This article shows how to set up a zone-redundant container registry or replica by using the Azure CLI, Azure portal, or Azure Resource Manager template.
Zone redundancy is a **preview** feature of the Premium container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md).
@@ -19,7 +19,6 @@ Zone redundancy is a **preview** feature of the Premium container registry servi
* Region conversions to availability zones aren't currently supported. To enable availability zone support in a region, the registry must either be created in the desired region, with availability zone support enabled, or a replicated region must be added with availability zone support enabled. * Zone redundancy can't be disabled in a region. * [ACR Tasks](container-registry-tasks-overview.md) doesn't yet support availability zones.
-* Currently supported through Azure Resource Manager templates or the Azure portal. Azure CLI support will be enabled in a future release.
## About zone redundancy
@@ -29,6 +28,61 @@ Azure Container Registry also supports [geo-replication](container-registry-geo-
Availability zones are unique physical locations within an Azure region. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. Each zone has one or more datacenters equipped with independent power, cooling, and networking. When configured for zone redundancy, a registry (or a registry replica in a different region) is replicated across all availability zones in the region, keeping it available if there are datacenter failures.
+## Create a zone-redundant registry - CLI
+
+To use the Azure CLI to enable zone redundancy, you need Azure CLI version 2.17.0 or later, or Azure Cloud Shell. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+### Create a resource group
+
+If needed, run the [az group create](/cli/azure/group#az_group_create) command to create a resource group for the registry.
+
+```azurecli
+az group create --name <resource-group-name> --location <location>
+```
+
+### Create zone-enabled registry
+
+Run the [az acr create](/cli/azure/acr#az_acr_create) command to create a zone-redundant registry in the Premium service tier. Choose a region that [supports availability zones](../availability-zones/az-region.md) for Azure Container Registry. In the following example, zone redundancy is enabled in the *eastus* region. See the `az acr create` command help for more registry options.
+
+```azurecli
+az acr create \
+ --resource-group <resource-group-name> \
+ --name <container-registry-name> \
+ --location eastus \
+ --zone-redundancy enabled \
+ --sku Premium
+```
+
+In the command output, note the `zoneRedundancy` property for the registry. When enabled, the registry is zone redundant:
+
+```JSON
+{
+ [...]
+"zoneRedundancy": "Enabled",
+}
+```
+
+### Create zone-redundant replication
+
+Run the [az acr replication create](/cli/azure/acr/replication#az_acr_replication_create) command to create a zone-redundant registry replica in a region that [supports availability zones](../availability-zones/az-region.md) for Azure Container Registry, such as *westus2*.
+
+```azurecli
+az acr replication create \
+ --location westus2 \
+ --resource-group <resource-group-name> \
+ --registry <container-registry-name> \
+ --zone-redundancy enabled
+```
+
+In the command output, note the `zoneRedundancy` property for the replica. When enabled, the replica is zone redundant:
+
+```JSON
+{
+ [...]
+"zoneRedundancy": "Enabled",
+}
+```
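If you want to script a check of the property shown above, here is a small hedged Python sketch that shells out to the Azure CLI; it assumes that `az acr show` returns the same `zoneRedundancy` property in its JSON output.

```python
# Hedged sketch: confirm that a registry reports zoneRedundancy as "Enabled".
# Assumes az acr show surfaces the same zoneRedundancy property shown above and
# that the Azure CLI is installed and signed in.
import json
import subprocess

registry = "<container-registry-name>"    # placeholder
resource_group = "<resource-group-name>"  # placeholder

output = subprocess.run(
    ["az", "acr", "show", "--name", registry, "--resource-group", resource_group,
     "--output", "json"],
    check=True, capture_output=True, text=True,
).stdout

print(json.loads(output).get("zoneRedundancy"))  # expected: "Enabled"
```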
+ ## Create a zone-redundant registry - portal 1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
@@ -45,22 +99,24 @@ Availability zones are unique physical locations within an Azure region. To ensu
To create a zone-redundant replication: 1. Navigate to your Premium tier container registry, and select **Replications**.
-1. On the map that appears, select a green hexagon in a region that supports zone redundancy for Azure Container Registry, such as **West US 2**. Then select **Create**.
-1. In the **Create replication** window, in **Availability zones**, select **Enabled**, and then select **Create**.
+1. On the map that appears, select a green hexagon in a region that supports zone redundancy for Azure Container Registry, such as **West US 2**. Or select **+ Add**.
+1. In the **Create replication** window, confirm the **Location**. In **Availability zones**, select **Enabled**, and then select **Create**.
+
+ :::image type="content" source="media/zone-redundancy/enable-availability-zones-replication-portal.png" alt-text="Enable zone-redundant replication in Azure portal":::
## Create a zone-redundant registry - template ### Create a resource group
-If needed, run the [az group create](/cli/azure/group) command to create a resource group for the registry in a region that [supports availability zones](../availability-zones/az-region.md) for Azure Container Registry, such as *eastus*.
+If needed, run the [az group create](/cli/azure/group#az_group_create) command to create a resource group for the registry in a region that [supports availability zones](../availability-zones/az-region.md) for Azure Container Registry, such as *eastus*. This region is used by the template to set the registry location.
```azurecli
-az group create --name <resource-group-name> --location <location>
+az group create --name <resource-group-name> --location eastus
``` ### Deploy the template
-You can use the following Resource Manager template to create a zone-redundant, geo-replicated registry. The template by default enables zone redundancy in the registry and an additional regional replica.
+You can use the following Resource Manager template to create a zone-redundant, geo-replicated registry. The template by default enables zone redundancy in the registry and a regional replica.
Copy the following contents to a new file and save it using a filename such as `registryZone.json`.
@@ -158,7 +214,7 @@ Copy the following contents to a new file and save it using a filename such as `
} ```
-Run the following [az deployment group create](/cli/azure/deployment?view=azure-cli-latest) command to create the registry using the preceding template file. Where indicated, provide:
+Run the following [az deployment group create](/cli/azure/deployment/group#az_deployment_group_create) command to create the registry using the preceding template file. Where indicated, provide:
* a unique registry name, or deploy the template without parameters and it will create a unique name for you * a location for the replica that supports availability zones, such as *westus2*
@@ -182,4 +238,4 @@ In the command output, note the `zoneRedundancy` property for the registry and t
## Next steps * Learn more about [regions that support availability zones](../availability-zones/az-region.md).
-* Learn more about building for [reliability](/azure/architecture/framework/resiliency/overview) in Azure.
+* Learn more about building for [reliability](/azure/architecture/framework/resiliency/overview) in Azure.
\ No newline at end of file
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cassandra-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-support.md
@@ -79,10 +79,11 @@ Azure Cosmos DB Cassandra API supports the following CQL functions:
| Token * | Yes | | ttl | Yes | | writetime | Yes |
-| cast | No |
+| cast ** | Yes |
-> [!NOTE]
-> \* Cassandra API supports token as a projection/selector, and only allows token(pk) on the left-hand side of a where clause. For example, `WHERE token(pk) > 1024` is supported, but `WHERE token(pk) > token(100)` is **not** supported.
+> [!NOTE]
+> \* Cassandra API supports token as a projection/selector, and only allows token(pk) on the left-hand side of a where clause. For example, `WHERE token(pk) > 1024` is supported, but `WHERE token(pk) > token(100)` is **not** supported.
+> \*\* The `cast()` function is not nestable in Cassandra API. For example, `SELECT cast(count as double) FROM myTable` is supported, but `SELECT avg(cast(count as double)) FROM myTable` is **not** supported.
@@ -179,6 +180,30 @@ Azure Cosmos DB supports the following database commands on Cassandra API accoun
| TRUNCATE | No | | USE | Yes |
+## CQL Shell commands
+
+Azure Cosmos DB supports the following CQL shell commands on Cassandra API accounts.
+
+|Command |Supported |
+|---------|---------|
+| CAPTURE | Yes |
+| CLEAR | Yes |
+| CONSISTENCY * | N/A |
+| COPY | No |
+| DESCRIBE | Yes |
+| EXPAND | No |
+| EXIT | Yes |
+| LOGIN | N/A (CQL function `USER` is not supported, hence `LOGIN` is redundant) |
+| PAGING | Yes |
+| SERIAL CONSISTENCY * | N/A |
+| SHOW | Yes |
+| SOURCE | Yes |
+| TRACING | N/A (Cassandra API is backed by Azure Cosmos DB - use [diagnostic logging](cosmosdb-monitor-resource-logs.md) for troubleshooting) |
+
+> [!NOTE]
+> \* Consistency works differently in Azure Cosmos DB, see [here](cassandra-consistency.md) for more information.
++ ## JSON Support |Command |Supported | |---------|---------|
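To show the `cast()` support note above in client code, here is a hedged Python sketch using the DataStax `cassandra-driver`; the connection settings (port 10350, TLS, username/password auth) follow the usual Cosmos DB Cassandra API pattern, but treat the exact values as assumptions for your account.

```python
# Hedged sketch: run a top-level cast() query (supported) against a Cosmos DB
# Cassandra API account with the DataStax Python driver. The connection details
# (port 10350, TLS, password auth) are assumptions; adjust them for your account.
import ssl

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.check_hostname = False   # simplified for the sketch; tighten in production
ssl_context.verify_mode = ssl.CERT_NONE

cluster = Cluster(
    ["<cosmos-account-name>.cassandra.cosmos.azure.com"],           # placeholder
    port=10350,
    auth_provider=PlainTextAuthProvider(
        username="<cosmos-account-name>",                           # placeholder
        password="<cosmos-account-primary-key>",                    # placeholder
    ),
    ssl_context=ssl_context,
)
session = cluster.connect("<keyspace>")                             # placeholder

# cast() at the top level is supported; nesting it inside avg() is not.
for row in session.execute("SELECT cast(count as double) FROM myTable"):
    print(row)
```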
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-troubleshoot.md
@@ -26,7 +26,7 @@ The following article describes common errors and solutions for deployments usin
| 13 | Unauthorized | The request lacks the permissions to complete. | Ensure that you set proper permissions for your database and collection. | | 16 | InvalidLength | The request specified has an invalid length. | If you are using the explain() function, ensure that you supply only one operation. | | 26 | NamespaceNotFound | The database or collection being referenced in the query cannot be found. | Ensure your database/collection name precisely matches the name in your query.|
-| 50 | ExceededTimeLimit | The request has exceeded the timeout of 60 seconds of execution. | There can be many causes for this error. One of the causes is when the currently allocated request units capacity is not sufficient to complete the request. This can be solved by increasing the request units of that collection or database. In other cases, this error can be worked-around by splitting a large request into smaller ones.|
+| 50 | ExceededTimeLimit | The request has exceeded the timeout of 60 seconds of execution. | There can be many causes for this error. One of the causes is when the currently allocated request units capacity is not sufficient to complete the request. This can be solved by increasing the request units of that collection or database. In other cases, this error can be worked around by splitting a large request into smaller ones. Retrying a write operation that has received this error may result in a duplicate write.|
| 61 | ShardKeyNotFound | The document in your request did not contain the collection's shard key (Azure Cosmos DB partition key). | Ensure the collection's shard key is being used in the request.| | 66 | ImmutableField | The request is attempting to change an immutable field | "id" fields are immutable. Ensure that your request does not attempt to update that field. | | 67 | CannotCreateIndex | The request to create an index cannot be completed. | Up to 500 single field indexes can be created in a container. Up to eight fields can be included in a compound index (compound indexes are supported in version 3.6+). |
@@ -36,6 +36,7 @@ The following article describes common errors and solutions for deployments usin
| 16501 | ExceededMemoryLimit | As a multi-tenant service, the operation has gone over the client's memory allotment. | Reduce the scope of the operation through more restrictive query criteria or contact support from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). Example: `db.getCollection('users').aggregate([{$match: {name: "Andy"}}, {$sort: {age: -1}}]))` | | 40324 | Unrecognized pipeline stage name. | The stage name in your aggregation pipeline request was not recognized. | Ensure that all aggregation pipeline names are valid in your request. | | - | MongoDB wire version issues | The older versions of MongoDB drivers are unable to detect the Azure Cosmos account's name in the connection strings. | Append *appName=@**accountName**@* at the end of your Cosmos DB's API for MongoDB connection string, where ***accountName*** is your Cosmos DB account name. |
+| - | MongoDB client networking issues (such as socket or endOfStream exceptions)| The network request has failed. This is often caused by an inactive TCP connection that the MongoDB client is attempting to use. MongoDB drivers often utilize connection pooling, which results in a random connection chosen from the pool being used for a request. Inactive connections typically time out on the Azure Cosmos DB end after four minutes. | You can either retry these failed requests in your application code, change your MongoDB client (driver) settings to tear down inactive TCP connections before the four-minute timeout window, or configure your OS keepalive settings to maintain the TCP connections in an active state. |
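For the client networking row above, one way to retire idle connections before the four-minute window is to cap the pool's idle time on the client. The following is a hedged Python sketch using `pymongo`; `maxIdleTimeMS` is a standard MongoDB client option, but confirm the equivalent setting for your driver.

```python
# Hedged sketch: cap pymongo's idle connection lifetime at two minutes so pooled
# connections are retired before the roughly four-minute idle timeout noted above.
# The connection string is a placeholder; maxIdleTimeMS is a standard client option.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://<account>:<key>@<account>.mongo.cosmos.azure.com:10255/"
    "?ssl=true&retrywrites=false&appName=@<account>@",  # placeholder connection string
    maxIdleTimeMS=120_000,  # close connections idle for more than 2 minutes
)

print(client.admin.command("ping"))
```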
## Next steps
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/mosp-new-customer-experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/mosp-new-customer-experience.md
@@ -6,7 +6,7 @@ ms.reviewer: amberbhargava
ms.service: cost-management-billing ms.subservice: billing ms.topic: conceptual
-ms.date: 08/03/2020
+ms.date: 01/11/2021
ms.author: banders ---
@@ -35,7 +35,7 @@ A billing profile is used to manage your invoice and payment methods. A monthly
When your account is updated, a billing profile is automatically created for each subscription. Subscription's charges are billed to its respective billing profile and displayed on its invoice.
-Roles on the billing profiles have permissions to view and manage invoices and payment methods. These roles should be assigned to users who pay invoices like members of the accounting team in an organization. For more information, see [billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks).
+Roles on the billing profiles have permissions to view and manage invoices and payment methods. These roles should be assigned to users who pay invoices like members of the accounting team in an organization. For more information, see [billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks).
When your account is updated, for each subscription on which you've given others permission to [view invoices](download-azure-invoice.md#allow-others-to-download-the-your-subscription-invoice), users who have an owner, a contributor, a reader, or a billing reader Azure role are given the reader role on the respective billing profile.
@@ -43,7 +43,7 @@ When your account is updated, for each subscription on which you've given others
An invoice section is used to organize the costs on your invoice. For example, you might need a single invoice but want to organize costs by department, team, or project. For this scenario, you have a single billing profile where you create an invoice section for each department, team, or project.
-When your account is updated, an invoice section is created for each billing profile and the related subscription is assigned to the invoice section. When you add more subscriptions, you can create additional sections and assign the subscriptions to the invoice sections. You'll see the sections on the billing profile's invoice reflecting the usage of each subscription you've assigned to them.
+When your account is updated, an invoice section is created for each billing profile and the related subscription is assigned to the invoice section. When you add more subscriptions, you can create more sections and assign the subscriptions to the invoice sections. You'll see the sections on the billing profile's invoice reflecting the usage of each subscription you've assigned to them.
Roles on the invoice section have permissions to control who creates Azure subscriptions. The roles should be assigned to users who set up the Azure environment for teams in an organization like engineering leads and technical architects. For more information, see [invoice section roles and tasks](../manage/understand-mca-roles.md#invoice-section-roles-and-tasks).
@@ -53,7 +53,7 @@ Your new experience includes the following cost management and billing capabilit
#### Invoice management
-**More predictable monthly billing period** - In your new account, the billing period begins from the first day of the month and ends at the last day of the month, regardless of when you sign up to use Azure. An invoice will be generated at the beginning of each month, and will contain all charges from the previous month.
+**More predictable monthly billing period** - In your new account, the billing period begins from the first day of the month and ends at the last day of the month, no matter when you sign up to use Azure. An invoice will be generated at the beginning of each month, and will contain all charges from the previous month.
**Get a single monthly invoice for multiple subscriptions** - You have the flexibility of either getting one monthly invoice for each of your subscriptions or a single invoice for multiple subscriptions.
@@ -75,15 +75,15 @@ Your new experience includes the following cost management and billing capabilit
#### Account and subscription management
-**Assign multiple administrators to perform billing operations** - Assign billing permissions to multiple users to manage billing for your account. Get flexibility by providing read, write, or both permissions to others.
+**Assign multiple administrators to perform billing operations** - Assign billing permissions to multiple users to manage billing for your account. Get flexibility by giving read, write, or both permissions to others.
-**Create additional subscriptions directly in the Azure portal** - Create all your subscriptions with a single click in the Azure portal.
+**Create more subscriptions directly in the Azure portal** - Create all your subscriptions with a single selection in the Azure portal.
#### API support
-**Perform billing and cost management operations through APIs, SDK, and PowerShell** - Use cost management, billing, and consumption APIs to pull billing and cost data into your preferred data analysis tools.
+**Do billing and cost management operations through APIs, SDK, and PowerShell** - Use cost management, billing, and consumption APIs to pull billing and cost data into your preferred data analysis tools.
-**Perform all subscription operations through APIs, SDK, and PowerShell** - Use Azure subscription APIs to automate the management of your Azure subscriptions, including creating, renaming, and canceling a subscription.
+**Do all subscription operations through APIs, SDK, and PowerShell** - Use Azure subscription APIs to automate the management of your Azure subscriptions, including creating, renaming, and canceling a subscription.
## Get prepared for your new experience
@@ -95,13 +95,64 @@ In the new experience, your invoice will be generated around the ninth day of ea
**New billing and cost management APIs**
-If you are using Cost Management or Billing APIs to query and update your billing or cost data, then you must use new APIs. The table below lists the APIs that won't work with the new billing account and the changes that you need to make in your new billing account.
+If you're using Cost Management or Billing APIs to query and update your billing or cost data, then you must use new APIs. The table below lists the APIs that won't work with the new billing account and the changes that you need to make in your new billing account.
|API | Changes | |---------|---------|
-|[Billing Accounts - List](/rest/api/billing/2019-10-01-preview/billingaccounts/list) | In the Billing Accounts - List API, your old billing account has agreementType **MicrosoftOnlineServiceProgram**, your new billing account would have agreementType **MicrosoftCustomerAgreement**. If you take a dependency on agreementType, please update it. |
+|[Billing Accounts - List](/rest/api/billing/2019-10-01-preview/billingaccounts/list) | In the Billing Accounts - List API, your old billing account has agreementType **MicrosoftOnlineServiceProgram**, your new billing account would have agreementType **MicrosoftCustomerAgreement**. If you take a dependency on agreementType, update it. |
|[Invoices - List By Billing Subscription](/rest/api/billing/2019-10-01-preview/invoices/listbybillingsubscription) | This API will only return invoices that were generated before your account was updated. You would have to use [Invoices - List By Billing Account](/rest/api/billing/2019-10-01-preview/invoices/listbybillingaccount) API to get invoices that are generated in your new billing account. |
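As an illustration of the agreementType check called out in the table above, here is a hedged Python sketch that calls the Billing Accounts - List REST API with `requests`; the URL, api-version, and response shape are assumptions drawn from the linked reference, and the Azure AD bearer token is assumed to be acquired separately.

```python
# Hedged sketch: call the Billing Accounts - List REST API and print each account's
# agreementType (MicrosoftOnlineServiceProgram for the old account,
# MicrosoftCustomerAgreement for the new one). The URL, api-version, and response
# shape are assumptions from the linked reference; acquire the bearer token separately.
import requests

token = "<azure-ad-access-token-for-management.azure.com>"  # placeholder

response = requests.get(
    "https://management.azure.com/providers/Microsoft.Billing/billingAccounts",
    params={"api-version": "2019-10-01-preview"},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

for account in response.json().get("value", []):
    properties = account.get("properties", {})
    print(account.get("name"), properties.get("agreementType"))
```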
+## Cost Management updates after account update
+
+Your updated Azure billing account for your Microsoft Customer Agreement gives you access to new and expanded Cost Management experiences in the Azure portal that you didn't have with your pay-as-you-go account.
+
+### New capabilities
+
+The following new capabilities are available with your Azure billing account.
+
+#### New billing scopes
+
+As part of your updated account, you have new scopes in Cost Management + Billing. Besides aiding with hierarchical organization and invoicing, they are also a way to view combined charges from multiple underlying subscriptions. For more information about billing scopes, see [Microsoft Customer Agreement scopes](../costs/understand-work-scopes.md#microsoft-customer-agreement-scopes).
+
+You can also access Cost Management APIs to get combined cost views at higher scopes. All Cost Management APIs that use the subscription scope are still available with some minor changes in the schema. For more information about the APIs, see [Azure Cost Management APIs](/rest/api/cost-management/) and [Azure Consumption APIs](/rest/api/consumption/).
+
+#### Cost allocation
+
+With your updated account, you can use cost allocation capabilities to distribute costs from shared services in your organization. For more information about allocating costs, see [Create and manage Azure cost allocation rules](../costs/allocate-costs.md).
+
+#### Power BI
+
+The Azure Cost Management connector for Power BI Desktop helps you build custom visualizations and reports of your Azure usage and spending. You access your cost and usage data after you connect to your updated account. For more information about the Azure Cost Management connector for Power BI Desktop, see [Create visuals and reports with the Azure Cost Management connector in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
+
+### Updated capabilities
+
+The following updated capabilities are available with your Azure billing account.
+
+#### Cost analysis
+
+You can continue to view and trace your month-over-month consumption costs, and you can now also view reservation and Marketplace purchase costs in Cost analysis.
+
+With your updated account, you receive a single invoice for all Azure charges. You also now have a simplified single monthly calendar view to replace the billing periods view you had earlier.
+
+For example, if your billing period was November 24 to December 23 for your old account, then after the upgrade the period becomes November 1 to November 30, December 1 to December 31 and so on.
+
+:::image type="content" source="./media/mosp-new-customer-experience/billing-periods.png" alt-text="Image showing a comparison between old and new billing periods " lightbox="./media/mosp-new-customer-experience/billing-periods.png" :::
+
+#### Budgets
+
+You can now create budgets for the billing account, allowing you to track costs across subscriptions. You can also stay on top of your purchase charges using budgets. For more information about budgets, see [Create and manage Azure budgets](../costs/tutorial-acm-create-budgets.md).
+
+#### Exports
+
+Your new billing account provides improved export functionality. For example, you can create exports for actual costs that include purchases or amortized costs (reservation purchase costs spread across the purchase term). You can also create an export for the billing account to get usage and charges data across all subscriptions in the billing account. For more information about exports, see [Create and manage exported data](../costs/tutorial-export-acm-data.md).
+
+> [!NOTE]
+> Exports created before your account update with the **Monthly export of last month's costs** type will export data for the last calendar month, not the last billing period.
+
+For example, for a billing period from December 23 to January 22, the exported CSV file would have cost and usage data for that period. After the update, the export will contain data for the calendar month. For example, January 1 to January 31 and so on.
+
+:::image type="content" source="./media/mosp-new-customer-experience/export-amortized-costs.png" alt-text="Image showing a comparison between old and new export details" lightbox="./media/mosp-new-customer-experience/export-amortized-costs.png" :::
+ ## Additional information The following sections provide additional information about your new experience.
@@ -116,7 +167,7 @@ Access to Azure resources that were set using Azure role-based access control (A
Invoices generated before your account was updated are still available in the Azure portal. **Invoices for account updated in the middle of the month**
-If your account is updated in the middle of the month, you'll get one invoice for charges accumulated until the day your account is updated. You'll get another invoice for the remainder of the month. For example, your account has one subscription and it is updated on 15 September. You will get one invoice for charges accumulated until 15 September. You'll get another invoice for the period between 15 September through 30 September. After September, you'll get one invoice per month.
+If your account is updated in the middle of the month, you'll get one invoice for charges accumulated until the day your account is updated. You'll get another invoice for the rest of the month. For example, your account has one subscription and it's updated on 15 September. You'll get one invoice for charges accumulated until 15 September. You'll get another invoice for the period between 15 September through 30 September. After September, you'll get one invoice per month.
## Need help? Contact support.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-system-variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
@@ -23,29 +23,43 @@ These system variables can be referenced anywhere in the pipeline JSON.
| Variable Name | Description | | --- | --- |
-| @pipeline().DataFactory |Name of the data factory the pipeline run is running within |
+| @pipeline().DataFactory |Name of the data factory the pipeline run is running in |
| @pipeline().Pipeline |Name of the pipeline |
-| @pipeline().RunId | ID of the specific pipeline run |
-| @pipeline().TriggerType | Type of the trigger that invoked the pipeline (Manual, Scheduler) |
-| @pipeline().TriggerId| ID of the trigger that invokes the pipeline |
-| @pipeline().TriggerName| Name of the trigger that invokes the pipeline |
-| @pipeline().TriggerTime| Time when the trigger that invoked the pipeline. The trigger time is the actual fired time, not the scheduled time. For example, `13:20:08.0149599Z` is returned instead of `13:20:00.00Z` |
+| @pipeline().RunId |ID of the specific pipeline run |
+| @pipeline().TriggerType |The type of trigger that invoked the pipeline (for example, `ScheduleTrigger`, `BlobEventsTrigger`). For a list of supported trigger types, see [Pipeline execution and triggers in Azure Data Factory](concepts-pipeline-execution-triggers.md). A trigger type of `Manual` indicates that the pipeline was triggered manually. |
+| @pipeline().TriggerId|ID of the trigger that invoked the pipeline |
+| @pipeline().TriggerName|Name of the trigger that invoked the pipeline |
+| @pipeline().TriggerTime|Time of the trigger run that invoked the pipeline. This is the time at which the trigger **actually** fired to invoke the pipeline run, and it may differ slightly from the trigger's scheduled time. |
-## Schedule Trigger scope
-These system variables can be referenced anywhere in the trigger JSON if the trigger is of type: "ScheduleTrigger."
+>[!NOTE]
+>Trigger-related date/time system variables (in both pipeline and trigger scopes) return UTC dates in ISO 8601 format, for example, `2017-06-01T22:20:00.4061448Z`.
+
+## Schedule trigger scope
+These system variables can be referenced anywhere in the trigger JSON for triggers of type [ScheduleTrigger](concepts-pipeline-execution-triggers.md#schedule-trigger).
+
+| Variable Name | Description |
+| --- | --- |
+| @trigger().scheduledTime |Time at which the trigger was scheduled to invoke the pipeline run. |
+| @trigger().startTime |Time at which the trigger **actually** fired to invoke the pipeline run. This may differ slightly from the trigger's scheduled time. |
+
+## Tumbling window trigger scope
+These system variables can be referenced anywhere in the trigger JSON for triggers of type [TumblingWindowTrigger](concepts-pipeline-execution-triggers.md#tumbling-window-trigger).
| Variable Name | Description | | --- | --- |
-| @trigger().scheduledTime |Time when the trigger was scheduled to invoke the pipeline run. For example, for a trigger that fires every 5 min, this variable would return `2017-06-01T22:20:00Z`, `2017-06-01T22:25:00Z`, `2017-06-01T22:30:00Z` respectively.|
-| @trigger().startTime |Time when the trigger **actually** fired to invoke the pipeline run. For example, for a trigger that fires every 5 min, this variable might return something like this `2017-06-01T22:20:00.4061448Z`, `2017-06-01T22:25:00.7958577Z`, `2017-06-01T22:30:00.9935483Z` respectively. (Note: The timestamp is by default in ISO 8601 format)|
+| @trigger().outputs.windowStartTime |Start of the window associated with the trigger run. |
+| @trigger().outputs.windowEndTime |End of the window associated with the trigger run. |
+| @trigger().scheduledTime |Time at which the trigger was scheduled to invoke the pipeline run. |
+| @trigger().startTime |Time at which the trigger **actually** fired to invoke the pipeline run. This may differ slightly from the trigger's scheduled time. |
-## Tumbling Window Trigger scope
-These system variables can be referenced anywhere in the trigger JSON if the trigger is of type: "TumblingWindowTrigger."
-(Note: The timestamp is by default in ISO 8601 format)
+## Event-based trigger scope
+These system variables can be referenced anywhere in the trigger JSON for triggers of type [BlobEventsTrigger](concepts-pipeline-execution-triggers.md#event-based-trigger).
| Variable Name | Description | | --- | --- |
-| @trigger().outputs.windowStartTime |Start of the window when the trigger was scheduled to invoke the pipeline run. If the tumbling window trigger has a frequency of "hourly" this would be the time at the beginning of the hour.|
-| @trigger().outputs.windowEndTime |End of the window when the trigger was scheduled to invoke the pipeline run. If the tumbling window trigger has a frequency of "hourly" this would be the time at the end of the hour.|
+| @triggerBody().fileName |Name of the file whose creation or deletion caused the trigger to fire. |
+| @triggerBody().folderName |Path to the folder that contains the file specified by `@triggerBody().fileName`. The first segment of the folder path is the name of the Azure Blob Storage container. |
+| @trigger().startTime |Time at which the trigger fired to invoke the pipeline run. |
+ ## Next steps For information about how these variables are used in expressions, see [Expression language & functions](control-flow-expression-language-functions.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
@@ -6,11 +6,11 @@ documentationcenter: ''
ms.service: data-factory ms.workload: data-services ms.topic: conceptual
-author: nabhishek
-ms.author: abnarain
-manager: anandsub
+author: lrtoyou1223
+ms.author: lle
+manager: shwang
ms.custom: seo-lt-2019
-ms.date: 11/25/2020
+ms.date: 12/25/2020
--- # Create and configure a self-hosted integration runtime
@@ -25,6 +25,54 @@ This article describes how you can create and configure a self-hosted IR.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] +
+## Considerations for using a self-hosted IR
+
+- You can use a single self-hosted integration runtime for multiple on-premises data sources. You can also share it with another data factory within the same Azure Active Directory (Azure AD) tenant. For more information, see [Sharing a self-hosted integration runtime](./create-shared-self-hosted-integration-runtime-powershell.md).
+- You can install only one instance of a self-hosted integration runtime on any single machine. If you have two data factories that need to access on-premises data sources, either use the [self-hosted IR sharing feature](./create-shared-self-hosted-integration-runtime-powershell.md) to share the self-hosted IR, or install the self-hosted IR on two on-premises computers, one for each data factory.
+- The self-hosted integration runtime doesn't need to be on the same machine as the data source. However, having the self-hosted integration runtime close to the data source reduces the time for the self-hosted integration runtime to connect to the data source. We recommend that you install the self-hosted integration runtime on a machine that differs from the one that hosts the on-premises data source. When the self-hosted integration runtime and data source are on different machines, the self-hosted integration runtime doesn't compete with the data source for resources.
+- You can have multiple self-hosted integration runtimes on different machines that connect to the same on-premises data source. For example, if you have two self-hosted integration runtimes that serve two data factories, the same on-premises data source can be registered with both data factories.
+- Use a self-hosted integration runtime to support data integration within an Azure virtual network.
+- Treat your data source as an on-premises data source that is behind a firewall, even when you use Azure ExpressRoute. Use the self-hosted integration runtime to connect the service to the data source.
+- Use the self-hosted integration runtime even if the data store is in the cloud on an Azure Infrastructure as a Service (IaaS) virtual machine.
+- Tasks might fail in a self-hosted integration runtime that you installed on a Windows server for which FIPS-compliant encryption is enabled. To work around this problem, you have two options: store credentials/secret values in an Azure Key Vault or disable FIPS-compliant encryption on the server. To disable FIPS-compliant encryption, change the following registry subkey's value from 1 (enabled) to 0 (disabled): `HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled`. If you use the [self-hosted integration runtime as a proxy for SSIS integration runtime](./self-hosted-integration-runtime-proxy-ssis.md), FIPS-compliant encryption can be enabled and will be used when moving data from on premises to Azure Blob Storage as a staging area.
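If you script the FIPS workaround described in the last consideration above, the change is a single DWORD value under the listed registry subkey. The following is a hedged Python sketch using the standard-library `winreg` module; it assumes you are running elevated on the self-hosted IR machine and that disabling FIPS-compliant encryption is acceptable in your environment.

```python
# Hedged sketch: set the registry value named in the consideration above to 0
# (FIPS-compliant encryption disabled). Run elevated on the self-hosted IR machine,
# and only after weighing your organization's FIPS requirements.
import winreg

KEY_PATH = r"System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)  # 1 = enabled, 0 = disabled

print("FIPS-compliant encryption disabled; restart the integration runtime service.")
```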
++
+## Command flow and data flow
+
+When you move data between on-premises and the cloud, the activity uses a self-hosted integration runtime to transfer the data between an on-premises data source and the cloud.
+
+Here is a high-level summary of the data-flow steps for copying with a self-hosted IR:
+
+![The high-level overview of data flow](media/create-self-hosted-integration-runtime/high-level-overview.png)
+
+1. A data developer creates a self-hosted integration runtime within an Azure data factory by using a PowerShell cmdlet. Currently, the Azure portal doesn't support this feature.
+2. The data developer creates a linked service for an on-premises data store. The developer does so by specifying the self-hosted integration runtime instance that the service should use to connect to data stores.
+3. The self-hosted integration runtime node encrypts the credentials by using Windows Data Protection Application Programming Interface (DPAPI) and saves the credentials locally. If multiple nodes are set for high availability, the credentials are further synchronized across other nodes. Each node encrypts the credentials by using DPAPI and stores them locally. Credential synchronization is transparent to the data developer and is handled by the self-hosted IR.
+4. Azure Data Factory communicates with the self-hosted integration runtime to schedule and manage jobs. Communication is via a control channel that uses a shared [Azure Relay](../azure-relay/relay-what-is-it.md#wcf-relay) connection. When an activity job needs to be run, Data Factory queues the request along with any credential information. It does so in case credentials aren't already stored on the self-hosted integration runtime. The self-hosted integration runtime starts the job after it polls the queue.
+5. The self-hosted integration runtime copies data between an on-premises store and cloud storage. The direction of the copy depends on how the copy activity is configured in the data pipeline. For this step, the self-hosted integration runtime directly communicates with cloud-based storage services like Azure Blob storage over a secure HTTPS channel.
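As a rough sketch of step 1 (the resource names below are placeholders, not values from this article), the Az.DataFactory PowerShell module can be used to create the self-hosted integration runtime:

```powershell
# Placeholder names - replace with your own resource group, data factory, and IR name.
$resourceGroupName = "myResourceGroup"
$dataFactoryName   = "myDataFactory"
$selfHostedIrName  = "mySelfHostedIR"

# Create (or update) a self-hosted integration runtime in the data factory.
Set-AzDataFactoryV2IntegrationRuntime `
    -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -Name $selfHostedIrName `
    -Type SelfHosted `
    -Description "Self-hosted IR for on-premises data sources"
```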
++
+## Prerequisites
+
+- The supported versions of Windows are:
+ + Windows 8.1
+ + Windows 10
+ + Windows Server 2012
+ + Windows Server 2012 R2
+ + Windows Server 2016
+ + Windows Server 2019
+
+Installation of the self-hosted integration runtime on a domain controller isn't supported.
+- Self-hosted integration runtime requires a 64-bit Operating System with .NET Framework 4.7.2 or above. See [.NET Framework System Requirements](/dotnet/framework/get-started/system-requirements) for details.
+- The recommended minimum configuration for the self-hosted integration runtime machine is a 2-GHz processor with 4 cores, 8 GB of RAM, and 80 GB of available hard drive space. For the details of system requirements, see [Download](https://www.microsoft.com/download/details.aspx?id=39717).
+- If the host machine hibernates, the self-hosted integration runtime doesn't respond to data requests. Configure an appropriate power plan on the computer before you install the self-hosted integration runtime. If the machine is configured to hibernate, the self-hosted integration runtime installer prompts with a message.
+- You must be an administrator on the machine to successfully install and configure the self-hosted integration runtime.
+- Copy-activity runs happen with a specific frequency. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is moved. When multiple copy jobs are in progress, you see resource usage go up during peak times.
+- Tasks might fail during extraction of data in Parquet, ORC, or Avro formats. For more on Parquet, see [Parquet format in Azure Data Factory](./format-parquet.md#using-self-hosted-integration-runtime). File creation runs on the self-hosted integration machine. To work as expected, file creation requires the following prerequisites:
+ - [Visual C++ 2010 Redistributable](https://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe) Package (x64)
+ - Java Runtime (JRE) version 8 from a JRE provider such as [Adopt OpenJDK](https://adoptopenjdk.net/). Ensure that the `JAVA_HOME` environment variable is set.
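As a small optional sketch for the `JAVA_HOME` prerequisite above (the JRE path below is a placeholder), you can verify and, if needed, set the variable at machine scope from PowerShell:

```powershell
# Check the machine-scoped JAVA_HOME value.
[Environment]::GetEnvironmentVariable('JAVA_HOME', 'Machine')

# If it is empty, point it at your JRE installation folder (placeholder path),
# then restart the integration runtime service so it picks up the change.
[Environment]::SetEnvironmentVariable('JAVA_HOME', 'C:\Program Files\AdoptOpenJDK\jre-8', 'Machine')
```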
## Setting up a self-hosted integration runtime

To create and set up a self-hosted integration runtime, use the following procedures.
@@ -121,84 +169,49 @@ Here are details of the application's actions and arguments:
|-ssa,<br/>-SwitchServiceAccount|"`<domain\user>`" ["`<password>`"]|Set DIAHostService to run as a new account. Use the empty password "" for system accounts and virtual accounts.|
-## Command flow and data flow
-
-When you move data between on-premises and the cloud, the activity uses a self-hosted integration runtime to transfer the data between an on-premises data source and the cloud.
+## Install and register a self-hosted IR from Microsoft Download Center
-Here is a high-level summary of the data-flow steps for copying with a self-hosted IR:
+1. Go to the [Microsoft integration runtime download page](https://www.microsoft.com/download/details.aspx?id=39717).
+2. Select **Download**, select the 64-bit version, and select **Next**. The 32-bit version isn't supported.
+3. Run the Managed Identity file directly, or save it to your hard drive and run it.
+4. On the **Welcome** window, select a language and select **Next**.
+5. Accept the Microsoft Software License Terms and select **Next**.
+6. Select **folder** to install the self-hosted integration runtime, and select **Next**.
+7. On the **Ready to install** page, select **Install**.
+8. Select **Finish** to complete installation.
+9. Get the authentication key by using PowerShell. Here's a PowerShell example for retrieving the authentication key:
-![The high-level overview of data flow](media/create-self-hosted-integration-runtime/high-level-overview.png)
+ ```powershell
+ Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName -Name $selfHostedIntegrationRuntime
+ ```
-1. A data developer creates a self-hosted integration runtime within an Azure data factory by using a PowerShell cmdlet. Currently, the Azure portal doesn't support this feature.
-1. The data developer creates a linked service for an on-premises data store. The developer does so by specifying the self-hosted integration runtime instance that the service should use to connect to data stores.
-1. The self-hosted integration runtime node encrypts the credentials by using Windows Data Protection Application Programming Interface (DPAPI) and saves the credentials locally. If multiple nodes are set for high availability, the credentials are further synchronized across other nodes. Each node encrypts the credentials by using DPAPI and stores them locally. Credential synchronization is transparent to the data developer and is handled by the self-hosted IR.
-1. Azure Data Factory communicates with the self-hosted integration runtime to schedule and manage jobs. Communication is via a control channel that uses a shared [Azure Service Bus Relay](../azure-relay/relay-what-is-it.md#wcf-relay) connection. When an activity job needs to be run, Data Factory queues the request along with any credential information. It does so in case credentials aren't already stored on the self-hosted integration runtime. The self-hosted integration runtime starts the job after it polls the queue.
-1. The self-hosted integration runtime copies data between an on-premises store and cloud storage. The direction of the copy depends on how the copy activity is configured in the data pipeline. For this step, the self-hosted integration runtime directly communicates with cloud-based storage services like Azure Blob storage over a secure HTTPS channel.
+10. On the **Register Integration Runtime (Self-hosted)** window of Microsoft Integration Runtime Configuration Manager running on your machine, take the following steps:
-## Considerations for using a self-hosted IR
+ 1. Paste the authentication key in the text area.
-- You can use a single self-hosted integration runtime for multiple on-premises data sources. You can also share it with another data factory within the same Azure Active Directory (Azure AD) tenant. For more information, see [Sharing a self-hosted integration runtime](#create-a-shared-self-hosted-integration-runtime-in-azure-data-factory).
-- You can install only one instance of a self-hosted integration runtime on any single machine. If you have two data factories that need to access on-premises data sources, either use the [self-hosted IR sharing feature](#create-a-shared-self-hosted-integration-runtime-in-azure-data-factory) to share the self-hosted IR, or install the self-hosted IR on two on-premises computers, one for each data factory.
-- The self-hosted integration runtime doesn't need to be on the same machine as the data source. However, having the self-hosted integration runtime close to the data source reduces the time for the self-hosted integration runtime to connect to the data source. We recommend that you install the self-hosted integration runtime on a machine that differs from the one that hosts the on-premises data source. When the self-hosted integration runtime and data source are on different machines, the self-hosted integration runtime doesn't compete with the data source for resources.
-- You can have multiple self-hosted integration runtimes on different machines that connect to the same on-premises data source. For example, if you have two self-hosted integration runtimes that serve two data factories, the same on-premises data source can be registered with both data factories.
-- Use a self-hosted integration runtime to support data integration within an Azure virtual network.
-- Treat your data source as an on-premises data source that is behind a firewall, even when you use Azure ExpressRoute. Use the self-hosted integration runtime to connect the service to the data source.
-- Use the self-hosted integration runtime even if the data store is in the cloud on an Azure Infrastructure as a Service (IaaS) virtual machine.
-- Tasks might fail in a self-hosted integration runtime that you installed on a Windows server for which FIPS-compliant encryption is enabled. To work around this problem, you have two options: store credentials/secret values in an Azure Key Vault or disable FIPS-compliant encryption on the server. To disable FIPS-compliant encryption, change the following registry subkey's value from 1 (enabled) to 0 (disabled): `HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled`. If you use the [self-hosted integration runtime as a proxy for SSIS integration runtime](./self-hosted-integration-runtime-proxy-ssis.md), FIPS-compliant encryption can be enabled and will be used when moving data from on premises to Azure Blob Storage as a staging area.
+ 2. Optionally, select **Show authentication key** to see the key text.
-## Prerequisites
+ 3. Select **Register**.
-- The supported versions of Windows are:
- + Windows 7 Service Pack 1
- + Windows 8.1
- + Windows 10
- + Windows Server 2008 R2 SP1
- + Windows Server 2012
- + Windows Server 2012 R2
- + Windows Server 2016
- + Windows Server 2019
-
- Installation of the self-hosted integration runtime on a domain controller isn't supported.
-- .NET Framework 4.6.1 or later is required. If you're installing the self-hosted integration runtime on a Windows 7 machine, install .NET Framework 4.6.1 or later. See [.NET Framework System Requirements](/dotnet/framework/get-started/system-requirements) for details.
-- The recommended minimum configuration for the self-hosted integration runtime machine is a 2-GHz processor with 4 cores, 8 GB of RAM, and 80 GB of available hard drive space.
-- If the host machine hibernates, the self-hosted integration runtime doesn't respond to data requests. Configure an appropriate power plan on the computer before you install the self-hosted integration runtime. If the machine is configured to hibernate, the self-hosted integration runtime installer prompts with a message.
-- You must be an administrator on the machine to successfully install and configure the self-hosted integration runtime.
-- Copy-activity runs happen with a specific frequency. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is moved. When multiple copy jobs are in progress, you see resource usage go up during peak times.
-- Tasks might fail during extraction of data in Parquet, ORC, or Avro formats. For more on Parquet, see [Parquet format in Azure Data Factory](./format-parquet.md#using-self-hosted-integration-runtime). File creation runs on the self-hosted integration machine. To work as expected, file creation requires the following prerequisites:
- - [Visual C++ 2010 Redistributable](https://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe) Package (x64)
- - Java Runtime (JRE) version 8 from a JRE provider such as [Adopt OpenJDK](https://adoptopenjdk.net/). Ensure that the `JAVA_HOME` environment variable is set.
+## Service account for Self-hosted integration runtime
+The default logon service account of the self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
-## Installation best practices
+![Service account for Self-hosted integration runtime](media/create-self-hosted-integration-runtime/shir-service-account.png)
-You can install the self-hosted integration runtime by downloading a Managed Identity setup package from [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=39717). See the article [Move data between on-premises and cloud](tutorial-hybrid-copy-powershell.md) for step-by-step instructions.
+Make sure the account has the **Log on as a service** permission. Otherwise, the self-hosted integration runtime can't start successfully. You can check the permission in **Local Security Policy -> Security Settings -> Local Policies -> User Rights Assignment -> Log on as a service**.
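As a quick, optional check (a sketch; `DIAHostService` is the service name shown above), you can confirm from PowerShell which account the service runs under and whether it is running:

```powershell
# Show the logon account and state of the integration runtime service.
Get-CimInstance -ClassName Win32_Service -Filter "Name='DIAHostService'" |
    Select-Object Name, StartName, State
```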
-- Configure a power plan on the host machine for the self-hosted integration runtime so that the machine doesn't hibernate. If the host machine hibernates, the self-hosted integration runtime goes offline.
-- Regularly back up the credentials associated with the self-hosted integration runtime.
-- To automate self-hosted IR setup operations, please refer to [Set up an existing self hosted IR via PowerShell](#setting-up-a-self-hosted-integration-runtime).
+![Service account permission](media/create-self-hosted-integration-runtime/shir-service-account-permission.png)
-## Install and register a self-hosted IR from Microsoft Download Center
+![Service account permission](media/create-self-hosted-integration-runtime/shir-service-account-permission-2.png)
-1. Go to the [Microsoft integration runtime download page](https://www.microsoft.com/download/details.aspx?id=39717).
-1. Select **Download**, select the 64-bit version, and select **Next**. The 32-bit version isn't supported.
-1. Run the Managed Identity file directly, or save it to your hard drive and run it.
-1. On the **Welcome** window, select a language and select **Next**.
-1. Accept the Microsoft Software License Terms and select **Next**.
-1. Select **folder** to install the self-hosted integration runtime, and select **Next**.
-1. On the **Ready to install** page, select **Install**.
-1. Select **Finish** to complete installation.
-1. Get the authentication key by using PowerShell. Here's a PowerShell example for retrieving the authentication key:
- ```powershell
- Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName -Name $selfHostedIntegrationRuntime
- ```
+## Notification area icons and notifications
-1. On the **Register Integration Runtime (Self-hosted)** window of Microsoft Integration Runtime Configuration Manager running on your machine, take the following steps:
+If you move your cursor over the icon or message in the notification area, you can see details about the state of the self-hosted integration runtime.
- 1. Paste the authentication key in the text area.
+![Notifications in the notification area](media/create-self-hosted-integration-runtime/system-tray-notifications.png)
- 1. Optionally, select **Show authentication key** to see the key text.
- 1. Select **Register**.
## High availability and scalability
@@ -248,90 +261,6 @@ Here are the requirements for the TLS/SSL certificate that you use to secure com
>
> Data movement in transit from a self-hosted IR to other data stores always happens within an encrypted channel, regardless of whether or not this certificate is set.
-## Create a shared self-hosted integration runtime in Azure Data Factory
-
-You can reuse an existing self-hosted integration runtime infrastructure that you already set up in a data factory. This reuse lets you create a linked self-hosted integration runtime in a different data factory by referencing an existing shared self-hosted IR.
-
-To see an introduction and demonstration of this feature, watch the following 12-minute video:
-
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player]
-
-### Terminology
-
-- **Shared IR**: An original self-hosted IR that runs on a physical infrastructure.
-- **Linked IR**: An IR that references another shared IR. The linked IR is a logical IR and uses the infrastructure of another shared self-hosted IR.
-
-### Methods to share a self-hosted integration runtime
-
-To share a self-hosted integration runtime with multiple data factories, see [Create a shared self-hosted integration runtime](create-shared-self-hosted-integration-runtime-powershell.md) for details.
-
-### Monitoring
-
-#### Shared IR
-
-![Selections to find a shared integration runtime](media/create-self-hosted-integration-runtime/Contoso-shared-IR.png)
-
-![Monitor a shared integration runtime](media/create-self-hosted-integration-runtime/contoso-shared-ir-monitoring.png)
-
-#### Linked IR
-
-![Selections to find a linked integration runtime](media/create-self-hosted-integration-runtime/Contoso-linked-ir.png)
-
-![Monitor a linked integration runtime](media/create-self-hosted-integration-runtime/Contoso-linked-ir-monitoring.png)
-
-### Known limitations of self-hosted IR sharing
-
-* The data factory in which a linked IR is created must have an [Managed Identity](../active-directory/managed-identities-azure-resources/overview.md). By default, the data factories created in the Azure portal or PowerShell cmdlets have an implicitly created Managed Identity. But when a data factory is created through an Azure Resource Manager template or SDK, you must set the **Identity** property explicitly. This setting ensures that Resource Manager creates a data factory that contains a Managed Identity.
-
-* The Data Factory .NET SDK that supports this feature must be version 1.1.0 or later.
-
-* To grant permission, you need the Owner role or the inherited Owner role in the data factory where the shared IR exists.
-
-* The sharing feature works only for data factories within the same Azure AD tenant.
-
-* For Azure AD [guest users](../active-directory/governance/manage-guest-access-with-access-reviews.md), the search functionality in the UI, which lists all data factories by using a search keyword, [doesn't work](/previous-versions/azure/ad/graph/howto/azure-ad-graph-api-permission-scopes#SearchLimits). But as long as the guest user is the owner of the data factory, you can share the IR without the search functionality. For the Managed Identity of the data factory that needs to share the IR, enter that Managed Identity in the **Assign Permission** box and select **Add** in the Data Factory UI.
-
- > [!NOTE]
- > This feature is available only in Data Factory V2.
-
-## Notification area icons and notifications
-
-If you move your cursor over the icon or message in the notification area, you can see details about the state of the self-hosted integration runtime.
-
-![Notifications in the notification area](media/create-self-hosted-integration-runtime/system-tray-notifications.png)
-
-## Ports and firewalls
-
-There are two firewalls to consider:
-
-- The *corporate firewall* that runs on the central router of the organization
-- The *Windows firewall* that is configured as a daemon on the local machine where the self-hosted integration runtime is installed
-
-![The firewalls](media/create-self-hosted-integration-runtime/firewall.png)
-
-At the corporate firewall level, you need to configure the following domains and outbound ports:
-
-[!INCLUDE [domain-and-outbound-port-requirements](../../includes/domain-and-outbound-port-requirements.md)]
--
-At the Windows firewall level or machine level, these outbound ports are normally enabled. If they aren't, you can configure the domains and ports on a self-hosted integration runtime machine.
-
-> [!NOTE]
-> Based on your source and sinks, you might need to allow additional domains and outbound ports in your corporate firewall or Windows firewall.
->
-> For some cloud databases, such as Azure SQL Database and Azure Data Lake, you might need to allow IP addresses of self-hosted integration runtime machines on their firewall configuration.
-
-### Copy data from a source to a sink
-
-Ensure that you properly enable firewall rules on the corporate firewall, the Windows firewall of the self-hosted integration runtime machine, and the data store itself. Enabling these rules lets the self-hosted integration runtime successfully connect to both source and sink. Enable rules for each data store that is involved in the copy operation.
-
-For example, to copy from an on-premises data store to a SQL Database sink or an Azure Synapse Analytics sink, take the following steps:
-
-1. Allow outbound TCP communication on port 1433 for both the Windows firewall and the corporate firewall.
-1. Configure the firewall settings of the SQL Database to add the IP address of the self-hosted integration runtime machine to the list of allowed IP addresses.
-
-> [!NOTE]
-> If your firewall doesn't allow outbound port 1433, the self-hosted integration runtime can't access the SQL database directly. In this case, you can use a [staged copy](copy-activity-performance.md) to SQL Database and Azure Synapse Analytics. In this scenario, you require only HTTPS (port 443) for the data movement.
## Proxy server considerations
@@ -433,6 +362,66 @@ msiexec /q /i IntegrationRuntime.msi NOFIREWALL=1
If you choose not to open port 8060 on the self-hosted integration runtime machine, use mechanisms other than the Setting Credentials application to configure data-store credentials. For example, you can use the **New-AzDataFactoryV2LinkedServiceEncryptCredential** PowerShell cmdlet.
+
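As a hedged sketch of that approach (the file and resource names below are placeholders), the cmdlet is run on the self-hosted integration runtime machine against a linked service JSON definition:

```powershell
# Placeholder names - replace with your own values.
$resourceGroupName = "myResourceGroup"
$dataFactoryName   = "myDataFactory"
$selfHostedIrName  = "mySelfHostedIR"

# Encrypt the credential inside the linked service definition so it never has to be
# entered through the Setting Credentials application over port 8060.
New-AzDataFactoryV2LinkedServiceEncryptCredential `
    -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -IntegrationRuntimeName $selfHostedIrName `
    -DefinitionFile ".\onPremSqlLinkedService.json"
```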
+## Ports and firewalls
+
+There are two firewalls to consider:
+
+- The *corporate firewall* that runs on the central router of the organization
+- The *Windows firewall* that is configured as a daemon on the local machine where the self-hosted integration runtime is installed
+
+![The firewalls](media/create-self-hosted-integration-runtime/firewall.png)
+
+At the corporate firewall level, you need to configure the following domains and outbound ports:
+
+[!INCLUDE [domain-and-outbound-port-requirements](./includes/domain-and-outbound-port-requirements-internal.md)]
++
+At the Windows firewall level or machine level, these outbound ports are normally enabled. If they aren't, you can configure the domains and ports on a self-hosted integration runtime machine.
+
+> [!NOTE]
+> Because Azure Relay doesn't currently support service tags, you have to use the **AzureCloud** or **Internet** service tag in NSG rules for communication to Azure Relay.
+> For communication to Azure Data Factory, you can use the **DataFactoryManagement** service tag in the NSG rule setup.
+
+Based on your source and sinks, you might need to allow additional domains and outbound ports in your corporate firewall or Windows firewall.
+
+[!INCLUDE [domain-and-outbound-port-requirements](./includes/domain-and-outbound-port-requirements-external.md)]
+
+For some cloud databases, such as Azure SQL Database and Azure Data Lake, you might need to allow IP addresses of self-hosted integration runtime machines on their firewall configuration.
+
+### Get URL of Azure Relay
+One required domain and port to put in the allow list of your firewall is for communication to Azure Relay. The self-hosted integration runtime uses it for interactive authoring, such as testing a connection, browsing folder and table lists, getting a schema, and previewing data. If you don't want to allow **.servicebus.windows.net** and would like to allow more specific URLs, you can get all of the FQDNs required by your self-hosted integration runtime from the ADF portal.
+1. Go to the ADF portal and select your self-hosted integration runtime.
+2. On the **Edit** page, select **Nodes**.
+3. Select **View Service URLs** to get all FQDNs.
+
+![Azure Relay URLs](media/create-self-hosted-integration-runtime/Azure-relay-url.png)
+
+4. Add these FQDNs to the allow list of your firewall rules.
+
+### Copy data from a source to a sink
+
+Ensure that you properly enable firewall rules on the corporate firewall, the Windows firewall of the self-hosted integration runtime machine, and the data store itself. Enabling these rules lets the self-hosted integration runtime successfully connect to both source and sink. Enable rules for each data store that is involved in the copy operation.
+
+For example, to copy from an on-premises data store to a SQL Database sink or an Azure Synapse Analytics sink, take the following steps:
+
+1. Allow outbound TCP communication on port 1433 for both the Windows firewall and the corporate firewall.
+2. Configure the firewall settings of the SQL Database to add the IP address of the self-hosted integration runtime machine to the list of allowed IP addresses.
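For step 2, one possible sketch using Azure PowerShell (the server, rule, and IP values are placeholders for illustration only):

```powershell
# Placeholder values - use your own server, resource group, and the public IP address
# of the self-hosted integration runtime machine.
New-AzSqlServerFirewallRule `
    -ResourceGroupName "myResourceGroup" `
    -ServerName "my-sql-logical-server" `
    -FirewallRuleName "AllowSelfHostedIR" `
    -StartIpAddress "203.0.113.10" `
    -EndIpAddress "203.0.113.10"
```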
+
+> [!NOTE]
+> If your firewall doesn't allow outbound port 1433, the self-hosted integration runtime can't access the SQL database directly. In this case, you can use a [staged copy](copy-activity-performance.md) to SQL Database and Azure Synapse Analytics. In this scenario, you require only HTTPS (port 443) for the data movement.
++
+## Installation best practices
+
+You can install the self-hosted integration runtime by downloading a Managed Identity setup package from [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=39717). See the article [Move data between on-premises and cloud](tutorial-hybrid-copy-powershell.md) for step-by-step instructions.
+
+- Configure a power plan on the host machine for the self-hosted integration runtime so that the machine doesn't hibernate. If the host machine hibernates, the self-hosted integration runtime goes offline.
+- Regularly back up the credentials associated with the self-hosted integration runtime.
+- To automate self-hosted IR setup operations, please refer to [Set up an existing self hosted IR via PowerShell](#setting-up-a-self-hosted-integration-runtime).
+
+
+
## Next steps

For step-by-step instructions, see [Tutorial: Copy on-premises data to cloud](tutorial-hybrid-copy-powershell.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/create-shared-self-hosted-integration-runtime-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
@@ -20,6 +20,19 @@ ms.date: 06/10/2020
This guide shows you how to create a shared self-hosted integration runtime in Azure Data Factory. Then you can use the shared self-hosted integration runtime in another data factory.
+## Create a shared self-hosted integration runtime in Azure Data Factory
+
+You can reuse an existing self-hosted integration runtime infrastructure that you already set up in a data factory. This reuse lets you create a linked self-hosted integration runtime in a different data factory by referencing an existing shared self-hosted IR.
+
+To see an introduction and demonstration of this feature, watch the following 12-minute video:
+
+> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player]
+
+### Terminology
+
+- **Shared IR**: An original self-hosted IR that runs on a physical infrastructure.
+- **Linked IR**: An IR that references another shared IR. The linked IR is a logical IR and uses the infrastructure of another shared self-hosted IR.
+
## Create a shared self-hosted IR using Azure Data Factory UI

To create a shared self-hosted IR using Azure Data Factory UI, you can take the following steps:
@@ -210,6 +223,37 @@ Remove-AzDataFactoryV2IntegrationRuntime `
-LinkedDataFactoryName $LinkedDataFactoryName ```
+### Monitoring
+
+#### Shared IR
+
+![Selections to find a shared integration runtime](media/create-self-hosted-integration-runtime/Contoso-shared-IR.png)
+
+![Monitor a shared integration runtime](media/create-self-hosted-integration-runtime/contoso-shared-ir-monitoring.png)
+
+#### Linked IR
+
+![Selections to find a linked integration runtime](media/create-self-hosted-integration-runtime/Contoso-linked-ir.png)
+
+![Monitor a linked integration runtime](media/create-self-hosted-integration-runtime/Contoso-linked-ir-monitoring.png)
++
+### Known limitations of self-hosted IR sharing
+
+* The data factory in which a linked IR is created must have a [Managed Identity](../active-directory/managed-identities-azure-resources/overview.md). By default, the data factories created in the Azure portal or PowerShell cmdlets have an implicitly created Managed Identity. But when a data factory is created through an Azure Resource Manager template or SDK, you must set the **Identity** property explicitly. This setting ensures that Resource Manager creates a data factory that contains a Managed Identity.
+
+* The Data Factory .NET SDK that supports this feature must be version 1.1.0 or later.
+
+* To grant permission, you need the Owner role or the inherited Owner role in the data factory where the shared IR exists.
+
+* The sharing feature works only for data factories within the same Azure AD tenant.
+
+* For Azure AD [guest users](../active-directory/governance/manage-guest-access-with-access-reviews.md), the search functionality in the UI, which lists all data factories by using a search keyword, doesn't work. But as long as the guest user is the owner of the data factory, you can share the IR without the search functionality. For the Managed Identity of the data factory that needs to share the IR, enter that Managed Identity in the **Assign Permission** box and select **Add** in the Data Factory UI.
+
+ > [!NOTE]
+ > This feature is available only in Data Factory V2.
+
+
### Next steps

- Review [integration runtime concepts in Azure Data Factory](./concepts-integration-runtime.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-movement-security-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-movement-security-considerations.md
@@ -27,9 +27,9 @@ In a Data Factory solution, you create one or more data [pipelines](concepts-pip
Even though Data Factory is only available in few regions, the data movement service is [available globally](concepts-integration-runtime.md#integration-runtime-location) to ensure data compliance, efficiency, and reduced network egress costs.
-Azure Data Factory including Integration Runtime does not store any data except for linked service credentials for cloud data stores, which are encrypted by using certificates. With Data Factory, you create data-driven workflows to orchestrate movement of data between [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats), and processing of data by using [compute services](compute-linked-services.md) in other regions or in an on-premises environment. You can also monitor and manage workflows by using SDKs and Azure Monitor.
+Azure Data Factory, including Azure Integration Runtime and Self-hosted Integration Runtime, does not store any temporary data, cache data, or logs except for linked service credentials for cloud data stores, which are encrypted by using certificates. With Data Factory, you create data-driven workflows to orchestrate movement of data between [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats), and processing of data by using [compute services](compute-linked-services.md) in other regions or in an on-premises environment. You can also monitor and manage workflows by using SDKs and Azure Monitor.
-Data Factory has been certified for:
+Data Factory has been certified for:
| **[CSA STAR Certification](https://www.microsoft.com/trustcenter/compliance/csa-star-certification)** |
| :----------------------------------------------------------- |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/includes/domain-and-outbound-port-requirements-external https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/includes/domain-and-outbound-port-requirements-external.md new file mode 100644
@@ -0,0 +1,15 @@
+---
+title: include file
+description: include file
+services: data-factory
+author: lrtoyou1223
+ms.service: data-factory
+ms.topic: include
+ms.date: 10/09/2019
+ms.author: lle
+---
+| Domain names | Outbound ports | Description |
+| ----------------------------- | -------------- | ---------------------------------------- |
+| `*.core.windows.net` | 443 | Used by the self-hosted integration runtime to connect to the Azure storage account when you use the staged copy feature. |
+| `*.database.windows.net` | 1433 | Required only when you copy from or to Azure SQL Database or Azure Synapse Analytics and optional otherwise. Use the staged-copy feature to copy data to SQL Database or Azure Synapse Analytics without opening port 1433. |
+| `*.azuredatalakestore.net`<br>`login.microsoftonline.com/<tenant>/oauth2/token` | 443 | Required only when you copy from or to Azure Data Lake Store and optional otherwise. |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/includes/domain-and-outbound-port-requirements-internal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/includes/domain-and-outbound-port-requirements-internal.md new file mode 100644
@@ -0,0 +1,16 @@
+---
+title: include file
+description: include file
+services: data-factory
+author: lrtoyou1223
+ms.service: data-factory
+ms.topic: include
+ms.date: 10/09/2019
+ms.author: lle
+---
+| Domain names | Outbound ports | Description |
+| ----------------------------------------------------- | -------------- | ---------------------------|
+| Public Cloud: `*.servicebus.windows.net` <br> Azure Government: `*.servicebus.usgovcloudapi.net` <br> China: `*.servicebus.chinacloudapi.cn` | 443 | Required by the self-hosted integration runtime for interactive authoring. |
+| Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Factory service. <br>For a newly created data factory in the public cloud, find the FQDN from your self-hosted integration runtime key, which is in the format `{datafactory}.{region}.datafactory.azure.net`. For an older data factory, if you don't see the FQDN in your self-hosted integration runtime key, use `*.frontend.clouddatahub.net` instead. |
+| `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled auto-update, you can skip configuring this domain. |
+| Key Vault URL | 443 | Required by Azure Key Vault if you store the credential in Key Vault. |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/self-hosted-integration-runtime-auto-update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-auto-update.md new file mode 100644
@@ -0,0 +1,43 @@
+---
+title: Self-hosted integration runtime auto-update and expire notification
+description: Learn about self-hosted integration runtime auto-update and expire notification
+services: data-factory
+documentationcenter: ''
+ms.service: data-factory
+ms.workload: data-services
+ms.topic: conceptual
+author: lrtoyou1223
+ms.author: lle
+manager: shwang
+ms.custom: seo-lt-2019
+ms.date: 12/25/2020
+---
+
+# Self-hosted integration runtime auto-update and expire notification
+
+[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
+
+This article describes how to let the self-hosted integration runtime auto-update to the latest version and how ADF manages the versions of the self-hosted integration runtime.
+
+## Self-hosted Integration Runtime Auto-update
+Generally, when you install a self-hosted integration runtime on your local machine or an Azure VM, you have two options to manage its version: auto-update or manual maintenance. Typically, ADF releases two new versions of the self-hosted integration runtime every month, which include new features, bug fixes, and enhancements. We recommend that you update to the latest version to get the newest features and enhancements.
+
+The most convenient way is to enable auto-update when you create or edit the self-hosted integration runtime. The runtime will then update automatically to the latest version. You can also schedule the update for the time slot that suits you best.
+
+![Enable auto-update](media/create-self-hosted-integration-runtime/shir-auto-update.png)
+
+You can check the last update datetime in your self-hosted integration runtime client.
+
+![Check the last update time](media/create-self-hosted-integration-runtime/shir-auto-update-2.png)
+
+> [!NOTE]
+> To ensure the stability of the self-hosted integration runtime, although we release two versions each month, we only update it automatically once a month. So you might sometimes find that the auto-updated version is one version behind the actual latest version. If you want the latest version, you can go to the [download center](https://www.microsoft.com/download/details.aspx?id=39717).
+
+## Self-hosted Integration Runtime Expire Notification
+If you want to control manually which version of the self-hosted integration runtime you run, you can disable auto-update and install the runtime manually. Each version of the self-hosted integration runtime expires one year after release. The expiration message is shown in the ADF portal and in the self-hosted integration runtime client **90 days** before expiration.
+
+## Next steps
+
+- Review [integration runtime concepts in Azure Data Factory](./concepts-integration-runtime.md).
+
+- Learn how to [create a self-hosted integration runtime in the Azure portal](./create-self-hosted-integration-runtime.md).
\ No newline at end of file
data-factory https://docs.microsoft.com/en-us/azure/data-factory/tutorial-bulk-copy-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-bulk-copy-portal.md
@@ -10,7 +10,7 @@ ms.service: data-factory
ms.workload: data-services ms.topic: tutorial ms.custom: seo-lt-2019; seo-dt-2019
-ms.date: 12/09/2020
+ms.date: 01/12/2021
--- # Copy multiple tables in bulk by using Azure Data Factory in the Azure portal
@@ -46,20 +46,8 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites

* **Azure Storage account**. The Azure Storage account is used as staging blob storage in the bulk copy operation.
-* **Azure SQL Database**. This database contains the source data.
-* **Azure Synapse Analytics**. This data warehouse holds the data copied over from the SQL Database.
-
-### Prepare SQL Database and Azure Synapse Analytics
-
-**Prepare the source Azure SQL Database**:
-
-Create a database in SQL Database with Adventure Works LT sample data following [Create a database in Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) article. This tutorial copies all the tables from this sample database to an Azure Synapse Analytics.
-
-**Prepare the sink Azure Synapse Analytics**:
-
-1. If you don't have an Azure Synapse Analytics workspace, see the [Get started with Azure Synapse Analytics](..\synapse-analytics\get-started.md) article for steps to create one.
-
-1. Create corresponding table schemas in Azure Synapse Analytics. You use Azure Data Factory to migrate/copy data in a later step.
+* **Azure SQL Database**. This database contains the source data. Create a database in SQL Database with the Adventure Works LT sample data by following the [Create a database in Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) article. This tutorial copies all the tables from this sample database to Azure Synapse Analytics.
+* **Azure Synapse Analytics**. This data warehouse holds the data copied over from the SQL Database. If you don't have an Azure Synapse Analytics workspace, see the [Get started with Azure Synapse Analytics](..\synapse-analytics\get-started.md) article for steps to create one.
## Azure services to access SQL server
@@ -236,6 +224,7 @@ The **IterateAndCopySQLTables** pipeline takes a list of tables as a parameter.
![Foreach parameter builder](./media/tutorial-bulk-copy-portal/for-each-parameter-builder.png)

d. Switch to **Activities** tab, click the **pencil icon** to add a child activity to the **ForEach** activity.
+
![Foreach activity builder](./media/tutorial-bulk-copy-portal/for-each-activity-builder.png)

1. In the **Activities** toolbox, expand **Move & Transfer**, and drag-drop **Copy data** activity into the pipeline designer surface. Notice the breadcrumb menu at the top. The **IterateAndCopySQLTable** is the pipeline name and **IterateSQLTables** is the ForEach activity name. The designer is in the activity scope. To switch back to the pipeline editor from the ForEach editor, you can click the link in the breadcrumb menu.
@@ -252,7 +241,6 @@ The **IterateAndCopySQLTables** pipeline takes a list of tables as a parameter.
SELECT * FROM [@{item().TABLE_SCHEMA}].[@{item().TABLE_NAME}]
```
-
1. Switch to the **Sink** tab, and do the following steps:
    1. Select **AzureSqlDWDataset** for **Sink Dataset**.
@@ -260,6 +248,7 @@ The **IterateAndCopySQLTables** pipeline takes a list of tables as a parameter.
1. Click the input box for the VALUE of DWSchema parameter -> select the **Add dynamic content** below, enter `@item().TABLE_SCHEMA` expression as script, -> select **Finish**.
1. For Copy method, select **PolyBase**.
1. Clear the **Use type default** option.
+ 1. For the Table option, the default setting is "None". If you don't have tables pre-created in the sink Azure Synapse Analytics, enable the **Auto create table** option; the copy activity will then automatically create tables for you based on the source data. For details, refer to [Auto create sink tables](copy-activity-overview.md#auto-create-sink-tables).
1. Click the **Pre-copy Script** input box -> select the **Add dynamic content** below -> enter the following expression as script -> select **Finish**.

   ```sql
@@ -267,6 +256,8 @@ The **IterateAndCopySQLTables** pipeline takes a list of tables as a parameter.
```

![Copy sink settings](./media/tutorial-bulk-copy-portal/copy-sink-settings.png)
+
+
1. Switch to the **Settings** tab, and do the following steps:
    1. Select the checkbox for **Enable Staging**.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/concepts-models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
@@ -141,7 +141,7 @@ While designing models to reflect the entities in your environment, it can be us
Using models that are based on industry standards or use standard ontology representation, such as RDF or OWL, provides a rich starting point when designing your Azure Digital Twins models. Using industry models also helps with standardization and information sharing.
-To be used with Azure Digital Twins, a model must be represented in the JSON-LD-based [**Digital Twins Definition Language (DTDL)**](concepts-models.md). Therefore, this article describes how to represent your industry-standard models in DTDL, integrating the existing industry concepts with DTDL semantics so that Azure Digital Twins can use them. The DTDL model then serves as the source of truth for the model within Azure Digital Twins.
+To be used with Azure Digital Twins, a model must be represented in the JSON-LD-based [**Digital Twins Definition Language (DTDL)**](concepts-models.md). Therefore, to use an industry-standard model, you must first convert it to DTDL so that Azure Digital Twins can use it. The DTDL model then serves as the source of truth for the model within Azure Digital Twins.
There are two main paths to integrating industry-standard models with DTDL, depending on your situation: * If you have yet to create your models, you can design them around **existing starter DTDL ontologies** that contain language specific to your industry.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-maps.md
@@ -2,7 +2,7 @@
# Mandatory fields. title: Integrate with Azure Maps titleSuffix: Azure Digital Twins
-description: See how to create an Azure function that can use the twin graph and Azure Digital Twins notifications to update an Azure Maps indoor map.
+description: See how to use Azure Functions to create a function that can use the twin graph and Azure Digital Twins notifications to update an Azure Maps indoor map.
author: alexkarcher-msft ms.author: alkarche # Microsoft employees only ms.date: 6/3/2020
@@ -22,7 +22,7 @@ This article walks through the steps required to use Azure Digital Twins data to
This how-to will cover: 1. Configuring your Azure Digital Twins instance to send twin update events to a function in [Azure Functions](../azure-functions/functions-overview.md).
-2. Creating an Azure function to update an Azure Maps indoor maps feature stateset.
+2. Creating a function to update an Azure Maps indoor maps feature stateset.
3. How to store your maps ID and feature stateset ID in the Azure Digital Twins graph. ### Prerequisites
@@ -41,7 +41,7 @@ The image below illustrates where the indoor maps integration elements in this t
## Create a function to update a map when twins update
-First, you'll create a route in Azure Digital Twins to forward all twin update events to an event grid topic. Then, you'll use an Azure function to read those update messages and update a feature stateset in Azure Maps.
+First, you'll create a route in Azure Digital Twins to forward all twin update events to an event grid topic. Then, you'll use a function to read those update messages and update a feature stateset in Azure Maps.
## Create a route and filter to twin update notifications
@@ -70,7 +70,7 @@ This pattern reads from the room twin directly, rather than the IoT device, whic
az dt route create -n <your-Azure-Digital-Twins-instance-name> --endpoint-name <Event-Grid-endpoint-name> --route-name <my_route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'" ```
-## Create an Azure function to update maps
+## Create a function to update maps
You're going to create an Event Grid-triggered function inside your function app from the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)). This function will unpack those notifications and send updates to an Azure Maps feature stateset to update the temperature of one room.
@@ -114,4 +114,4 @@ Depending on the configuration of your topology, you will be able to store these
To read more about managing, upgrading, and retrieving information from the twins graph, see the following references: * [*How-to: Manage digital twins*](./how-to-manage-twin.md)
-* [*How-to: Query the twin graph*](./how-to-query-graph.md)
\ No newline at end of file
+* [*How-to: Query the twin graph*](./how-to-query-graph.md)
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-time-series-insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
@@ -85,13 +85,13 @@ The Azure Digital Twins [*Tutorial: Connect an end-to-end solution*](./tutorial-
Before moving on, take note of your *Event Hubs namespace* and *resource group*, as you will use them again to create another event hub later in this article.
-## Create an Azure function
+## Create a function in Azure
-Next, you'll create an Event Hubs-triggered function inside a function app. You can use the function app created in the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)), or your own.
+Next, you'll use Azure Functions to create an Event Hubs-triggered function inside a function app. You can use the function app created in the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)), or your own.
This function will convert those twin update events from their original form as JSON Patch documents to JSON objects, containing only updated and added values from your twins.
-For more information about using Event Hubs with Azure functions, see [*Azure Event Hubs trigger for Azure Functions*](../azure-functions/functions-bindings-event-hubs-trigger.md).
+For more information about using Event Hubs with Azure Functions, see [*Azure Event Hubs trigger for Azure Functions*](../azure-functions/functions-bindings-event-hubs-trigger.md).
Inside your published function app, replace the function code with the following code.
@@ -203,4 +203,4 @@ The digital twins are stored by default as a flat hierarchy in Time Series Insig
You can write custom logic to automatically provide this information using the model and graph data already stored in Azure Digital Twins. To read more about managing, upgrading, and retrieving information from the twins graph, see the following references: * [*How-to: Manage a digital twin*](./how-to-manage-twin.md)
-* [*How-to: Query the twin graph*](./how-to-query-graph.md)
\ No newline at end of file
+* [*How-to: Query the twin graph*](./how-to-query-graph.md)
dms https://docs.microsoft.com/en-us/azure/dms/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/faq.md
@@ -104,7 +104,7 @@ During a typical, simple database migration, you:
## Troubleshooting and optimization

**Q. I'm setting up a migration project in DMS, and I'm having difficulty connecting to my source database. What should I do?**
-If you have trouble connecting to your source database system while working on migration, create a virtual machine in the virtual network with which you set up your DMS instance. In the virtual machine, you should be able to run a connect test, such as using a UDL file to test a connection to SQL Server or downloading Robo 3T to test MongoDB connections. If the connection test succeeds, you shouldn't have an issue with connecting to your source database. If the connection test doesn't succeed, contact your network administrator.
+If you have trouble connecting to your source database system while working on migration, create a virtual machine in the same subnet of the virtual network with which you set up your DMS instance. In the virtual machine, you should be able to run a connect test, such as using a UDL file to test a connection to SQL Server or downloading Robo 3T to test MongoDB connections. If the connection test succeeds, you shouldn't have an issue with connecting to your source database. If the connection test doesn't succeed, contact your network administrator.
**Q. Why is my Azure Database Migration Service unavailable or stopped?**

If the user explicitly stops Azure Database Migration Service (DMS) or if the service is inactive for a period of 24 hours, the service will be in a stopped or auto paused state. In each case, the service will be unavailable and in a stopped status. To resume active migrations, restart the service.
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-sql-server-to-azure-sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-azure-sql.md
@@ -1,5 +1,5 @@
---
-title: "Tutorial: Migrate SQL Server offline to a SQL single database"
+title: "Tutorial: Migrate SQL Server offline to a SQL single database"
titleSuffix: Azure Database Migration Service description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service. services: dms
@@ -11,23 +11,23 @@ ms.service: dms
ms.workload: data-services ms.custom: "seo-lt-2019" ms.topic: tutorial
-ms.date: 01/08/2020
+ms.date: 01/03/2021
--- # Tutorial: Migrate SQL Server to Azure SQL Database offline using DMS
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the **Adventureworks2012** database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.
+You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the [Adventureworks2016](https://docs.microsoft.com/sql/samples/adventureworks-install-configure?view=sql-server-ver15&tabs=ssms#download-backup-files) database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.
-In this tutorial, you learn how to:
+You will learn how to:
> [!div class="checklist"]
>
-> - Assess your on-premises database by using the Data Migration Assistant.
-> - Migrate the sample schema by using the Data Migration Assistant.
+> - Assess and evaluate your on-premises database for any blocking issues by using the Data Migration Assistant.
+> - Use the Data Migration Assistant to migrate the database sample schema.
+> - Register the Azure DataMigration resource provider.
> - Create an instance of Azure Database Migration Service.
> - Create a migration project by using Azure Database Migration Service.
> - Run the migration.
> - Monitor the migration.
-> - Download a migration report.
[!INCLUDE [online-offline](../../includes/database-migration-service-offline-online.md)]
@@ -39,12 +39,12 @@ To complete this tutorial, you need to:
- Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).
- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-- Create a database in Azure SQL Database, which you do by following the detail in the article [Create a database in Azure SQL Database using the Azure portal](../azure-sql/database/single-database-create-quickstart.md).
+- Create a database in Azure SQL Database, which you do by following the details in the article [Create a database in Azure SQL Database using the Azure portal](../azure-sql/database/single-database-create-quickstart.md). For purposes of this tutorial, the name of the Azure SQL Database is assumed to be **AdventureWorksAzure**, but you can provide whatever name you wish.
> [!NOTE] > If you use SQL Server Integration Services (SSIS) and want to migrate the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to Azure SQL Database, the destination SSISDB will be created and managed automatically on your behalf when you provision SSIS in Azure Data Factory (ADF). For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md). -- Download and install the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later.
+- Download and install the latest version of the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595).
- Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details. > [!NOTE]
@@ -56,23 +56,23 @@ To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity. >
- >If you don't have site-to-site connectivity between the on-premises network and Azure or if there is limited site-to-site connectivity bandwidth, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode leverages an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-portal.md).
+ >If you don't have site-to-site connectivity between the on-premises network and Azure or if there is limited site-to-site connectivity bandwidth, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode leverages an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-hybrid-portal.md).
-- Ensure that your virtual network Network Security Group rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+- Ensure that your virtual network Network Security Group outbound security rules don't block the following communication ports required for the Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
- Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access). - Open your Windows firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall. - If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server. - When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration. - Create a server-level IP [firewall rule](../azure-sql/database/firewall-configure.md) for Azure SQL Database to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service. - Ensure that the credentials used to connect to source SQL Server instance have [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permissions.-- Ensure that the credentials used to connect to target Azure SQL Database instance have CONTROL DATABASE permission on the target databases.
+- Ensure that the credentials used to connect to target Azure SQL Database instance have [CONTROL DATABASE](/sql/t-sql/statements/grant-database-permissions-transact-sql) permission on the target databases.
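The server-level firewall rule prerequisite above can also be scripted. The following is a minimal sketch using the Az.Sql PowerShell module; the server name, resource group, rule name, and IP range are placeholder assumptions, not values defined elsewhere in this tutorial.

```azurepowershell-interactive
# Sketch: allow the Azure Database Migration Service subnet range through the
# Azure SQL logical server firewall. All names and addresses are placeholders.
New-AzSqlServerFirewallRule `
    -ResourceGroupName 'myResourceGroup' `
    -ServerName 'my-azure-sql-server' `
    -FirewallRuleName 'AllowDmsSubnet' `
    -StartIpAddress '10.0.0.0' `
    -EndIpAddress '10.0.0.255'
```

Supplying the start and end of the DMS subnet range keeps the rule scoped to the migration service rather than opening the server broadly.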
## Assess your on-premises database
-Before you can migrate data from a SQL Server instance to a single database or pooled database in Azure SQL Database, you need to assess the SQL Server database for any blocking issues that might prevent migration. Using the Data Migration Assistant v3.3 or later, follow the steps described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem) to complete the on-premises database assessment. A summary of the required steps follows:
+Before you can migrate data from a SQL Server instance to a single database or pooled database in Azure SQL Database, you need to assess the SQL Server database for any blocking issues that might prevent migration. Using the Data Migration Assistant, follow the steps described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem) to complete the on-premises database assessment. A summary of the required steps follows:
1. In the Data Migration Assistant, select the New (+) icon, and then select the **Assessment** project type.
-2. Specify a project name, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then select **Create** to create the project.
+2. Specify a project name. From the **Assessment type** drop-down list, select **Database Engine**. In the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then select **Create** to create the project.
When you're assessing the source SQL Server database migrating to a single database or pooled database in Azure SQL Database, you can choose one or both of the following assessment report types:
@@ -83,7 +83,7 @@ Before you can migrate data from a SQL Server instance to a single database or p
3. In the Data Migration Assistant, on the **Options** screen, select **Next**. 4. On the **Select sources** screen, in the **Connect to a server** dialog box, provide the connection details to your SQL Server, and then select **Connect**.
-5. In the **Add sources** dialog box, select **AdventureWorks2012**, select **Add**, and then select **Start Assessment**.
+5. In the **Add sources** dialog box, select **Adventureworks2016**, select **Add**, and then select **Start Assessment**.
> [!NOTE] > If you use SSIS, DMA does not currently support the assessment of the source SSISDB. However, SSIS projects/packages will be assessed/validated as they are redeployed to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
@@ -104,12 +104,12 @@ Before you can migrate data from a SQL Server instance to a single database or p
After you're comfortable with the assessment and satisfied that the selected database is a viable candidate for migration to a single database or pooled database in Azure SQL Database, use DMA to migrate the schema to Azure SQL Database. > [!NOTE]
-> Before you create a migration project in Data Migration Assistant, be sure that you have already provisioned a database in Azure as mentioned in the prerequisites. For purposes of this tutorial, the name of the Azure SQL Database is assumed to be **AdventureWorksAzure**, but you can provide whatever name you wish.
+> Before you create a migration project in Data Migration Assistant, be sure that you have already provisioned a database in Azure as mentioned in the prerequisites.
> [!IMPORTANT] > If you use SSIS, DMA does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-To migrate the **AdventureWorks2012** schema to a single database or pooled database Azure SQL Database, perform the following steps:
+To migrate the **Adventureworks2016** schema to a single database or pooled database Azure SQL Database, perform the following steps:
1. In the Data Migration Assistant, select the New (+) icon, and then under **Project type**, select **Migration**. 2. Specify a project name, in the **Source server type** text box, select **SQL Server**, and then in the **Target server type** text box, select **Azure SQL Database**.
@@ -120,7 +120,7 @@ To migrate the **AdventureWorks2012** schema to a single database or pooled data
![Create Data Migration Assistant Project](media/tutorial-sql-server-to-azure-sql/dma-create-project.png) 4. Select **Create** to create the project.
-5. In the Data Migration Assistant, specify the source connection details for your SQL Server, select **Connect**, and then select the **AdventureWorks2012** database.
+5. In the Data Migration Assistant, specify the source connection details for your SQL Server, select **Connect**, and then select the **Adventureworks2016** database.
![Data Migration Assistant Source Connection Details](media/tutorial-sql-server-to-azure-sql/dma-source-connect.png)
@@ -128,7 +128,7 @@ To migrate the **AdventureWorks2012** schema to a single database or pooled data
![Data Migration Assistant Target Connection Details](media/tutorial-sql-server-to-azure-sql/dma-target-connect.png)
-7. Select **Next** to advance to the **Select objects** screen, on which you can specify the schema objects in the **AdventureWorks2012** database that need to be deployed to Azure SQL Database.
+7. Select **Next** to advance to the **Select objects** screen, on which you can specify the schema objects in the **Adventureworks2016** database that need to be deployed to Azure SQL Database.
By default, all objects are selected.
@@ -166,23 +166,26 @@ To migrate the **AdventureWorks2012** schema to a single database or pooled data
![Create Azure Database Migration Service instance](media/tutorial-sql-server-to-azure-sql/dms-create1.png)
-3. On the **Create Migration Service** screen, specify a name for the service, the subscription, and a new or existing resource group.
+3. On the **Create Migration Service** basics screen:
-4. Select the location in which you want to create the instance of Azure Database Migration Service.
+ - Select the subscription.
+ - Create a new resource group or choose an existing one.
+ - Specify a name for the instance of the Azure Database Migration Service.
+ - Select the location in which you want to create the instance of Azure Database Migration Service.
+ - Choose **Azure** as the service mode.
+ - Select a pricing tier. For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
-5. Select an existing virtual network or create a new one.
+ ![Configure Azure Database Migration Service instance basics settings](media/tutorial-sql-server-to-azure-sql/dms-settings2.png)
- The virtual network provides Azure Database Migration Service with access to the source SQL Server and the target Azure SQL Database instance.
+ - Select **Next: Networking**.
- For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+4. On the **Create Migration Service** networking screen:
-6. Select a pricing tier.
+ - Select an existing virtual network or create a new one. The virtual network provides Azure Database Migration Service with access to the source SQL Server and the target Azure SQL Database instance. For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
- For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
+ ![Configure Azure Database Migration Service instance networking settings](media/tutorial-sql-server-to-azure-sql/dms-settings3.png)
- ![Configure Azure Database Migration Service instance settings](media/tutorial-sql-server-to-azure-sql/dms-settings2.png)
-
-7. Select **Create** to create the service.
+ - Select **Review + Create** to create the service.
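If you prefer to script the portal steps above, a rough PowerShell equivalent with the Az.DataMigration module might look like the following sketch. The resource group, virtual network, service name, SKU, and region are placeholder assumptions.

```azurepowershell-interactive
# Sketch: create a Database Migration Service instance in an existing subnet.
# Requires the Az.DataMigration module; all names here are placeholders.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'default'

New-AzDataMigrationService `
    -ResourceGroupName 'myResourceGroup' `
    -ServiceName 'my-dms-service' `
    -Location 'westus2' `
    -Sku 'Standard_1vCores' `
    -VirtualSubnetId $subnet.Id
```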
## Create a migration project
@@ -206,7 +209,7 @@ After the service is created, locate it within the Azure portal, open it, and th
## Specify source details
-1. On the **Migration source detail** screen, specify the connection details for the source SQL Server instance.
+1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server instance name. You can also use the IP Address for situations in which DNS name resolution isn't possible.
@@ -217,42 +220,38 @@ After the service is created, locate it within the Azure portal, open it, and th
> [!CAUTION] > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
- ![Source Details](media/tutorial-sql-server-to-azure-sql/dms-source-details2.png)
- > [!IMPORTANT] > If you use SSIS, DMS does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
+ ![Source Details](media/tutorial-sql-server-to-azure-sql/dms-source-details2.png)
+
+3. Select **Next: Select target**.
+ ## Specify target details
-1. Select **Save**, and then on the **Migration target details** screen, specify the connection details for the target Azure SQL Database, which is the pre-provisioned Azure SQL Database to which the **AdventureWorks2012** schema was deployed by using the Data Migration Assistant.
+1. On the **Select target** screen, specify the connection details for the target Azure SQL Database, which is the pre-provisioned Azure SQL Database to which the **Adventureworks2016** schema was deployed by using the Data Migration Assistant.
![Select Target](media/tutorial-sql-server-to-azure-sql/dms-select-target2.png)
-2. Select **Save**, and then on the **Map to target databases** screen, map the source and the target database for migration.
+2. Select **Next: Map to target databases**, and then on the **Map to target databases** screen, map the source and the target database for migration.
If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default. ![Map to target databases](media/tutorial-sql-server-to-azure-sql/dms-map-targets-activity2.png)
-3. Select **Save**, on the **Select tables** screen, expand the table listing, and then review the list of affected fields.
+3. Select **Next: Configure migration settings**, expand the table listing, and then review the list of affected fields.
Azure Database Migration Service auto selects all the empty source tables that exist on the target Azure SQL Database instance. If you want to remigrate tables that already include data, you need to explicitly select the tables on this blade. ![Select tables](media/tutorial-sql-server-to-azure-sql/dms-configure-setting-activity2.png)
-4. Select **Save**, on the **Migration summary** screen, in the **Activity name** text box, specify a name for the migration activity.
-
-5. Expand the **Validation option** section to display the **Choose validation option** screen, and then specify whether to validate the migrated databases for **Schema comparison**, **Data consistency**, and **Query correctness**.
+4. Select **Next: Summary**, review the migration configuration and in the **Activity name** text box, specify a name for the migration activity.
![Choose validation option](media/tutorial-sql-server-to-azure-sql/dms-configuration2.png)
-6. Select **Save**, review the summary to ensure that the source and target details match what you previously specified.
-
- ![Migration Summary](media/tutorial-sql-server-to-azure-sql/dms-run-migration2.png)
- ## Run the migration -- Select **Run migration**.
+- Select **Start migration**.
The migration activity window appears, and the **Status** of the activity is **Pending**.
@@ -264,9 +263,7 @@ After the service is created, locate it within the Azure portal, open it, and th
![Activity Status Completed](media/tutorial-sql-server-to-azure-sql/dms-completed-activity1.png)
-2. After the migration completes, select **Download report** to get a report listing the details associated with the migration process.
-
-3. Verify the target database(s) on the target Azure SQL Database.
+2. Verify the target database(s) on the target **Azure SQL Database**.
### Additional resources
dns https://docs.microsoft.com/en-us/azure/dns/dns-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-faq.md
@@ -5,7 +5,7 @@ services: dns
author: rohinkoul ms.service: dns ms.topic: article
-ms.date: 6/15/2019
+ms.date: 01/11/2021
ms.author: rohink ---
@@ -190,6 +190,10 @@ Internationalized domain names (IDNs) encode each DNS name by using [punycode](h
To configure IDNs in Azure DNS, convert the zone name or record set name to punycode. Azure DNS doesn't currently support built-in conversion to or from punycode.
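Because Azure DNS doesn't convert names for you, one way to produce the punycode form yourself is the .NET `IdnMapping` class, shown here from PowerShell as an illustrative sketch; the sample zone name is made up.

```azurepowershell-interactive
# Convert an internationalized domain name to punycode before creating the zone
$idn = New-Object System.Globalization.IdnMapping
$idn.GetAscii('bücher.example.com')   # returns xn--bcher-kva.example.com
```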
+### Does Azure DNS private zones store any customer content?
+
+No, Azure DNS private zones doesn't store any customer content.
+ ## Next steps - [Learn more about Azure DNS](dns-overview.md).
@@ -198,4 +202,4 @@ To configure IDNs in Azure DNS, convert the zone name or record set name to puny
- [Learn more about DNS zones and records](dns-zones-records.md). -- [Get started with Azure DNS](dns-getstarted-portal.md).\ No newline at end of file
+- [Get started with Azure DNS](dns-getstarted-portal.md).
education-hub https://docs.microsoft.com/en-us/azure/education-hub/azure-dev-tools-teaching/enroll-renew-subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/education-hub/azure-dev-tools-teaching/enroll-renew-subscription.md
@@ -6,19 +6,19 @@ ms.author: rymend
ms.topic: quickstart ms.service: azure-education ms.subservice: education-hub
-ms.date: 06/30/2020
+ms.date: 01/07/2021
--- # Enroll or renew an Azure Dev Tools for Teaching subscription This article describes the process for enrolling in Azure Dev Tools for Teaching and creating a subscription.
-## Purchase a new subscription
+## Enroll a new subscription
1. Navigate to the [Azure Dev Tools for Teaching webpage](https://azure.microsoft.com/education/institutions/). 1. Select the **Sign up** button. 1. Select **Enroll or Renew** on the Azure Dev Tools for Teaching banner.
-1. Select the type of subscription you're purchasing:
+1. Select the type of subscription you're enrolling in:
- Apply for a new subscription - Continue an application you started
@@ -29,16 +29,12 @@ This article describes the process for enrolling in Azure Dev Tools for Teaching
1. Complete your **Institution Information**, if enrolling for the first time. If renewing, this information will autofill. :::image type="content" source="media/enroll-renew-subscription/application-institution-information.png" alt-text="Enter institution information." border="false":::
-
-1. Fill out your **Billing Information**. If your institution is part of a Volume Licensing agreement, you can input your Volume Licensing agreement number. If you sign up for a new subscription and are paying by anything other than credit card, there may be a delay in getting access to your subscription while the payment processes. You'll receive emails updating payment progress.
- :::image type="content" source="media/enroll-renew-subscription/application-billing-information.png" alt-text="Enter billing information." border="false":::
-
1. Select the **Subscription Plan** and confirm the **Subscription Administrator** for the subscription. The email domain of the Subscription Administrator will enable students on the same domain to get easy access to download their software benefits. :::image type="content" source="media/enroll-renew-subscription/application-select-subscription-plan.png" alt-text="Select subscription plan." border="false":::
-1. Confirm all purchase information and click **Place Order**. Confirmation emails will be sent to your inbox, with updates on payment status and any possible next steps.
+1. Confirm all enrollment information and click **Place Order**. Confirmation emails will be sent to your inbox, with updates on enrollment status and any possible next steps.
:::image type="content" source="media/enroll-renew-subscription/application-confirm-place-order.png" alt-text="Confirm your order." border="false":::
@@ -62,7 +58,7 @@ You can complete the renewal process as early as 90 days before the expiration d
1. Select the **Subscription Plan** and confirm the **Subscription Administrator** for the subscription.
-1. Confirm all purchase information and click **Place Order**. Confirmation emails will be sent to your inbox with updates on payment status and any possible next steps.
+1. Confirm all enrollment information and click **Place Order**. Confirmation emails will be sent to your inbox with updates on enrollment status and any possible next steps.
## Next steps
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/dynamically-add-partitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/dynamically-add-partitions.md
@@ -66,7 +66,7 @@ Event Hubs provides three sender options:
- **Partition sender** – In this scenario, clients send events directly to a partition. Although partitions are identifiable and events can be sent directly to them, we don't recommend this pattern. Adding partitions doesn't impact this scenario. We recommend that you restart applications so that they can detect newly added partitions. - **Partition key sender** – in this scenario, clients send the events with a key so that all events belonging to that key end up in the same partition. In this case, the service hashes the key and routes to the corresponding partition. The partition count update can cause out-of-order issues because of the hashing change. So, if you care about ordering, ensure that your application consumes all events from existing partitions before you increase the partition count.-- **Round-robin sender (default)** – In this scenario, the Event Hubs service round robins the events across partitions. Event Hubs service is aware of partition count changes and will send to new partitions within seconds of altering partition count.
+- **Round-robin sender (default)** – In this scenario, the Event Hubs service round robins the events across partitions, and also uses a load-balancing algorithm. Event Hubs service is aware of partition count changes and will send to new partitions within seconds of altering partition count.
### Receiver/consumer clients Event Hubs provides direct receivers and an easy consumer library called the [Event Processor Host (old SDK)](event-hubs-event-processor-host.md) or [Event Processor (new SDK)](event-processor-balance-partition-load.md).
@@ -94,7 +94,7 @@ When a consumer group member performs a metadata refresh and picks up the newly
> [!IMPORTANT] > While the existing data preserves ordering, partition hashing will be broken for messages hashed after the partition count changes due to addition of partitions. - Adding partition to an existing topic or event hub instance is recommended in the following cases:
- - When you use the round robin (default) method of sending events
+ - When you use the default method of sending events
 - Kafka default partitioning strategies, for example, the Sticky Assignor strategy
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-diagnostic-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-diagnostic-logs.md
@@ -40,7 +40,7 @@ Event Hubs captures diagnostic logs for the following categories:
| Category | Description | | -------- | ----------- | | Archive Logs | Captures information about [Event Hubs Capture](event-hubs-capture-overview.md) operations, specifically, logs related to capture errors. |
-| Operational Logs | Capture all management operations that are performed on the Azure Event Hubs namespace. Data operations are not captured, because of the high volume of data operations that are conducted on Azure Event Hubs. |
+| Operational Logs | Capture all management operations that are performed on the Azure Event Hubs namespace. Data operations aren't captured, because of the high volume of data operations that are conducted on Azure Event Hubs. |
| Auto scale logs | Captures auto-inflate operations done on an Event Hubs namespace. | | Kafka coordinator logs | Captures Kafka coordinator operations related to Event Hubs. | | Kafka user error logs | Captures information about Kafka APIs called on Event Hubs. |
@@ -95,12 +95,12 @@ Operational log JSON strings include elements listed in the following table:
Name | Description ------- | ------- `ActivityId` | Internal ID, used for tracking purposes |
-`EventName` | Operation name |
+`EventName` | Operation name. For a list of values for this element, see the [Event names](#event-names) |
`resourceId` | Azure Resource Manager resource ID | `SubscriptionId` | Subscription ID | `EventTimeString` | Operation time |
-`EventProperties` | Operation properties |
-`Status` | Operation status |
+`EventProperties` | Properties for the operation. This element provides more information about the event as shown in the following example. |
+`Status` | Operation status. The value can be either **Succeeded** or **Failed**. |
`Caller` | Caller of operation (Azure portal or management client) | `Category` | OperationalLogs |
@@ -121,6 +121,13 @@ Example:
} ```
+### Event names
+Event name is populated as operation type + resource type from the following enumerations. For example, `Create Queue`, `Retrieve Event Hub`, or `Delete Rule`.
+
+| Operation type | Resource type |
+| -------------- | ------------- |
+| <ul><li>Create</li><li>Update</li><li>Delete</li><li>Retrieve</li><li>Unknown</li></ul> | <ul><li>Namespace</li><li>Queue</li><li>Topic</li><li>Subscription</li><li>EventHub</li><li>EventHubSubscription</li><li>NotificationHub</li><li>NotificationHubTier</li><li>SharedAccessPolicy</li><li>UsageCredit</li><li>NamespacePnsCredentials</li><li>Rule</li><li>ConsumerGroup</li></ul> |
+ ## Autoscale logs schema Autoscale log JSON includes elements listed in the following table:
@@ -195,7 +202,7 @@ Event Hubs virtual network (VNet) connection event JSON includes elements listed
| `Count` | Number of occurrences for the given action | | `ResourceId` | Azure Resource Manager resource ID. |
-Virtual network logs are generated only if the namespace allows access from **selected networks** or from **specific IP addresses** (IP filter rules). If you don't want to restrict access to your namespace using these features and still want to get virtual network logs to track IP addresses of clients connecting to the Event Hubs namespace, you could use the following workaround. Enable IP filtering, and add the total addressable IPv4 range (1.0.0.0/1 - 255.0.0.0/1). Event Hubs doesn't support IPv6 ranges.
+Virtual network logs are generated only if the namespace allows access from **selected networks** or from **specific IP addresses** (IP filter rules). If you don't want to restrict the access to your namespace using these features and still want to get virtual network logs to track IP addresses of clients connecting to the Event Hubs namespace, you could use the following workaround. Enable IP filtering, and add the total addressable IPv4 range (1.0.0.0/1 - 255.0.0.0/1). Event Hubs doesn't support IPv6 ranges.
### Example
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-metrics-azure-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-metrics-azure-monitor.md
@@ -41,58 +41,8 @@ The following metrics give you an overview of the health of your service.
All metrics values are sent to Azure Monitor every minute. The time granularity defines the time interval for which metrics values are presented. The supported time interval for all Event Hubs metrics is 1 minute.
-## Request metrics
-
-Counts the number of data and management operations requests.
-
-| Metric Name | Description |
-| ------------------- | ----------------- |
-| Incoming Requests | The number of requests made to the Azure Event Hubs service over a specified period. <br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName |
-| Successful Requests | The number of successful requests made to the Azure Event Hubs service over a specified period. <br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName |
-| Server Errors | The number of requests not processed due to an error in the Azure Event Hubs service over a specified period. <br/><br/>Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName |
-|User Errors |The number of requests not processed due to user errors over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-|Quota Exceeded Errors |The number of requests exceeded the available quota. See [this article](event-hubs-quotas.md) for more information about Event Hubs quotas.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-
-## Throughput metrics
-
-| Metric Name | Description |
-| ------------------- | ----------------- |
-|Throttled Requests |The number of requests that were throttled because the throughput unit usage was exceeded.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-
-## Message metrics
-
-| Metric Name | Description |
-| ------------------- | ----------------- |
-|Incoming Messages |The number of events or messages sent to Event Hubs over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-|Outgoing Messages |The number of events or messages retrieved from Event Hubs over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-|Incoming Bytes |The number of bytes sent to the Azure Event Hubs service over a specified period.<br/><br/> Unit: Bytes <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-|Outgoing Bytes |The number of bytes retrieved from the Azure Event Hubs service over a specified period.<br/><br/> Unit: Bytes <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-
-## Connection metrics
-
-| Metric Name | Description |
-| ------------------- | ----------------- |
-|ActiveConnections |The number of active connections on a namespace as well as on an entity.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-|Connections Opened |The number of open connections.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-|Connections Closed |The number of closed connections.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-
-## Event Hubs Capture metrics
-
-You can monitor Event Hubs Capture metrics when you enable the Capture feature for your event hubs. The following metrics describe what you can monitor with Capture enabled.
-
-| Metric Name | Description |
-| ------------------- | ----------------- |
-|Capture Backlog |The number of bytes that are yet to be captured to the chosen destination.<br/><br/> Unit: Bytes <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-|Captured Messages |The number of messages or events that are captured to the chosen destination over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-|Captured Bytes |The number of bytes that are captured to the chosen destination over a specified period.<br/><br/> Unit: Bytes <br/> Aggregation Type: Total <br/> Dimension: EntityName|
-
-## Metrics dimensions
-
-Azure Event Hubs supports the following dimensions for metrics in Azure Monitor. Adding dimensions to your metrics is optional. If you do not add dimensions, metrics are specified at the namespace level.
-
-| Metric Name | Description |
-| ------------------- | ----------------- |
-|EntityName| Event Hubs supports the event hub entities under the namespace.|
+## Azure Event Hubs metrics
+For a list of metrics supported by the service, see [Azure Event Hubs](../azure-monitor/platform/metrics-supported.md#microsofteventhubnamespaces)
## Azure Monitor integration with SIEM tools Routing your monitoring data (activity logs, diagnostics logs, etc.) to an event hub with Azure Monitor enables you to easily integrate with Security Information and Event Management (SIEM) tools. For more information, see the following articles/blog posts:
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-programming-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-programming-guide.md
@@ -72,7 +72,7 @@ When sending event data, you can specify a value that is hashed to produce a par
### Availability considerations
-Using a partition key is optional, and you should consider carefully whether or not to use one. If you don't specify a partition key when publishing an event, a round-robin assignment is used. In many cases, using a partition key is a good choice if event ordering is important. When you use a partition key, these partitions require availability on a single node, and outages can occur over time; for example, when compute nodes reboot and patch. As such, if you set a partition ID and that partition becomes unavailable for some reason, an attempt to access the data in that partition will fail. If high availability is most important, do not specify a partition key; in that case events are sent to partitions using the round-robin model described previously. In this scenario, you are making an explicit choice between availability (no partition ID) and consistency (pinning events to a partition ID).
+Using a partition key is optional, and you should consider carefully whether or not to use one. If you don't specify a partition key when publishing an event, Event Hubs balances the load among partitions. In many cases, using a partition key is a good choice if event ordering is important. When you use a partition key, these partitions require availability on a single node, and outages can occur over time; for example, when compute nodes reboot and patch. As such, if you set a partition ID and that partition becomes unavailable for some reason, an attempt to access the data in that partition will fail. If high availability is most important, don't specify a partition key. In that case, events are sent to partitions using an internal load-balancing algorithm. In this scenario, you are making an explicit choice between availability (no partition ID) and consistency (pinning events to a partition ID).
Another consideration is handling delays in processing events. In some cases, it might be better to drop data and retry than to try to keep up with processing, which can potentially cause further downstream processing delays. For example, with a stock ticker it's better to wait for complete up-to-date data, but in a live chat or VOIP scenario you'd rather have the data quickly, even if it isn't complete.
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-howto-set-global-reach-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-set-global-reach-portal.md new file mode 100644
@@ -0,0 +1,103 @@
+---
+title: 'Azure ExpressRoute: Configure Global Reach using the Azure portal'
+description: This article helps you link ExpressRoute circuits together to make a private network between your on-premises networks and enable Global Reach using the Azure portal.
+services: expressroute
+author: duongau
+
+ms.service: expressroute
+ms.topic: how-to
+ms.date: 01/11/2021
+ms.author: duau
+
+---
+
+# Configure ExpressRoute Global Reach using the Azure portal
+
+This article helps you configure ExpressRoute Global Reach using the Azure portal. For more information, see [ExpressRoute Global Reach](expressroute-global-reach.md).
+
+## Before you begin
+
+Before you start configuration, confirm the following criteria:
+
+* You understand ExpressRoute circuit provisioning [workflows](expressroute-workflows.md).
+* Your ExpressRoute circuits are in a provisioned state.
+* Azure private peering is configured on your ExpressRoute circuits.
+* If you want to run PowerShell locally, verify that the latest version of Azure PowerShell is installed on your computer.
+
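A quick way to confirm the prerequisites above (provisioned circuits with Azure private peering) is with Azure PowerShell. The circuit and resource group names in this sketch are placeholders.

```azurepowershell-interactive
# Sketch: confirm a circuit is provisioned and has Azure private peering
$ckt = Get-AzExpressRouteCircuit -Name 'MyCircuit1' -ResourceGroupName 'MyResourceGroup'
$ckt.ServiceProviderProvisioningState                                     # expect 'Provisioned'
$ckt.Peerings | Where-Object { $_.PeeringType -eq 'AzurePrivatePeering' } # expect one entry
```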
+## Identify circuits
+
+1. From a browser, navigate to the [Azure portal](https://portal.azure.com) and sign in with your Azure account.
+
+2. Identify the ExpressRoute circuits that you want to use. You can enable ExpressRoute Global Reach between the private peering of any two ExpressRoute circuits, as long as they're located in the supported countries/regions. The circuits are required to be created at different peering locations.
+
+ * If your subscription owns both circuits, you can choose either circuit to run the configuration in the following sections.
+ * If the two circuits are in different Azure subscriptions, you need authorization from one Azure subscription. Then you pass in the authorization key when you run the configuration command in the other Azure subscription.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/expressroute-circuit-global-reach-list.png" alt-text="List of ExpressRoute circuits":::
+
+## Enable connectivity
+
+Enable connectivity between your on-premises networks. There are separate sets of instructions for circuits that are in the same Azure subscription and circuits that are in different subscriptions.
+
+### ExpressRoute circuits in the same Azure subscription
+
+1. Select the **Azure private** peering configuration.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/expressroute-circuit-private-peering.png" alt-text="ExpressRoute peering overview":::
+
+1. Select the **Enable Global Reach** checkbox and then select **Add Global Reach** to open the *Add Global Reach* configuration page.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/private-peering-enable-global-reach.png" alt-text="Enable global reach from private peering":::
+
+1. On the *Add Global Reach* configuration page, give a name to this configuration. Select the *ExpressRoute circuit* you want to connect this circuit to and enter a **/29 IPv4** subnet for the *Global Reach subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. Don't use the addresses in this subnet in your Azure virtual networks, or in your on-premises network. Select **Add** to add the circuit to the private peering configuration.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration.png" alt-text="Global Reach configuration page":::
+
+1. Select **Save** to complete the Global Reach configuration. When the operation completes, you'll have connectivity between your two on-premises networks through both ExpressRoute circuits.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/save-private-peering-configuration.png" alt-text="Saving private peering configuration":::
+
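For reference, the same-subscription configuration above can also be scripted. The following is a rough PowerShell sketch rather than part of the portal flow; the circuit names, resource group, and /29 prefix are placeholder assumptions.

```azurepowershell-interactive
# Sketch: connect the private peerings of two circuits in the same subscription
$ckt1 = Get-AzExpressRouteCircuit -Name 'Circuit1' -ResourceGroupName 'MyResourceGroup'
$ckt2 = Get-AzExpressRouteCircuit -Name 'Circuit2' -ResourceGroupName 'MyResourceGroup'

# The /29 prefix must not overlap with your virtual networks or on-premises ranges
Add-AzExpressRouteCircuitConnectionConfig `
    -Name 'GlobalReachConnection' `
    -ExpressRouteCircuit $ckt1 `
    -PeerExpressRouteCircuitPeering $ckt2.Peerings[0].Id `
    -AddressPrefix '10.0.0.0/29'

# Save the change on circuit 1
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt1
```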
+### ExpressRoute circuits in different Azure subscriptions
+
+If the two circuits aren't in the same Azure subscription, you'll need authorization. In the following configuration, authorization is generated from circuit 2's subscription. The authorization key is then passed to circuit 1.
+
+1. Generate an authorization key.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/create-authorization-expressroute-circuit.png" alt-text="Generate authorization key":::
+
+ Make a note of the private peering ID of circuit 2 and the authorization key.
+
+1. Select the **Azure private** peering configuration.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/expressroute-circuit-private-peering.png" alt-text="Circuit 1 peering overview":::
+
+1. Select the **Enable Global Reach** checkbox and then select **Add Global Reach** to open the *Add Global Reach* configuration page.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/private-peering-enable-global-reach.png" alt-text="Enable global reach from circuit 1":::
+
+1. On the *Add Global Reach* configuration page, give a name to this configuration. Check the **Redeem authorization** box. Enter the **Authorization Key** and the **ExpressRoute circuit ID** generated and obtained in Step 1. Then provide a **/29 IPv4** subnet for the *Global Reach subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. Don't use the addresses in this subnet in your Azure virtual networks, or in your on-premises network. Select **Add** to add the circuit to the private peering configuration.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration-with-authorization.png" alt-text="Add Global Reach with authorization key":::
+
+1. Select **Save** to complete the Global Reach configuration. When the operation completes, you'll have connectivity between your two on-premises networks through both ExpressRoute circuits.
+
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/save-private-peering-configuration.png" alt-text="Saving private peering configuration on circuit 1":::
+
+## Verify the configuration
+
+Verify the Global Reach configuration by selecting *Private peering* under the ExpressRoute circuit configuration. When configured correctly, your configuration should look as follows:
+
+:::image type="content" source="./media/expressroute-howto-set-global-reach-portal/verify-global-reach-configuration.png" alt-text="Verify Global Reach configuration":::
+
+## Disable connectivity
+
+You have two options for disabling Global Reach. To disable connectivity between all circuits, uncheck **Enable Global Reach**. To disable connectivity for an individual circuit, select the delete button next to the *Global Reach name* to remove connectivity between that pair of circuits. Then select **Save** to complete the operation.
+
+:::image type="content" source="./media/expressroute-howto-set-global-reach-portal/disable-global-reach-configuration.png" alt-text="Disable Global Reach configuration":::
+
+After the operation is complete, you no longer have connectivity between your on-premises networks through your ExpressRoute circuits.
+
+## Next steps
+1. [Learn more about ExpressRoute Global Reach](expressroute-global-reach.md)
+2. [Verify ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md)
+3. [Link an ExpressRoute circuit to an Azure virtual network](expressroute-howto-linkvnet-arm.md)
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations-providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
@@ -93,6 +93,7 @@ The following table shows connectivity locations and the service providers for e
| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, Megaport, Orange, Orixcom | | **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | 10G, 100G | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport | | **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GEANT, InterCloud, Interxion, Megaport, Orange, Telia Carrier, T-Systems |
+| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | |
| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Equinix, Megaport | | **Hong Kong** | [Equinix HK1](https://www.equinix.com/locations/asia-colocation/hong-kong-colocation/hong-kong-data-center/hk1/) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon | | **Hong Kong2** | [MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, PCCW Global Limited, SingTel |
firewall-manager https://docs.microsoft.com/en-us/azure/firewall-manager/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/overview.md
@@ -5,7 +5,7 @@ author: vhorne
ms.service: firewall-manager services: firewall-manager ms.topic: overview
-ms.date: 11/23/2020
+ms.date: 01/12/2021
ms.author: victorh ---
@@ -81,7 +81,6 @@ Azure Firewall Manager has the following known issues:
|Branch to branch traffic with private traffic filtering enabled|Branch to branch traffic isn't supported when private traffic filtering is enabled. |Investigating.<br><br>Don't secure private traffic if branch to branch connectivity is critical.| |All Secured Virtual Hubs sharing the same virtual WAN must be in the same resource group.|This behavior is aligned with Virtual WAN Hubs today.|Create multiple Virtual WANs to allow Secured Virtual Hubs to be created in different resource groups.| |Bulk IP address addition fails|The secure hub firewall goes into a failed state if you add multiple public IP addresses.|Add smaller public IP address increments. For example, add 10 at a time.|
-|Application rules fail in a secure hub with custom DNS (preview) configured.|Custom DNS (preview) doesnΓÇÖt work in secure hub deployments and Hub virtual network deployments where forced tunneling is enabled.|Fix under investigation.|
|DDoS Protection Standard not supported with secured virtual hubs|DDoS Protection Standard is not integrated with vWANs.|Investigating| |Activity logs not fully supported|Firewall policy does not currently support Activity logs.|Investigating| |Configuring SNAT private IP address ranges|[Private IP range settings](../firewall/snat-private-range.md) are ignored if Azure Firewall policy is configured. The default Azure Firewall behavior is used, where it doesnΓÇÖt SNAT Network rules when the destination IP address is in a private IP address range per [IANA RFC 1918](https://tools.ietf.org/html/rfc1918).|Investigating|
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/shared-query-azure-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/shared-query-azure-powershell.md new file mode 100644
@@ -0,0 +1,110 @@
+---
+title: "Quickstart: Create a shared query with Azure PowerShell"
+description: In this quickstart, you follow the steps to create a Resource Graph shared query using Azure PowerShell.
+ms.date: 01/11/2021
+ms.topic: quickstart
+ms.custom: devx-track-azurepowershell
+---
+# Quickstart: Create a Resource Graph shared query using Azure PowerShell
+
+This article describes how you can create an Azure Resource Graph shared query using the
+[Az.ResourceGraph](/powershell/module/az.resourcegraph) PowerShell module.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
+before you begin.
+
+[!INCLUDE [azure-powershell-requirements-no-header.md](../../../includes/azure-powershell-requirements-no-header.md)]
+
+ > [!IMPORTANT]
+ > While the **Az.ResourceGraph** PowerShell module is in preview, you must install it separately
+ > using the `Install-Module` cmdlet.
+
+ ```azurepowershell-interactive
+ Install-Module -Name Az.ResourceGraph
+ ```
+
+- If you have multiple Azure subscriptions, choose the appropriate subscription in which the
+ resources should be billed. Select a specific subscription using the
+ [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+
+ ```azurepowershell-interactive
+ Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+ ```
+
+## Create a Resource Graph shared query
+
+With the `Az.ResourceGraph` PowerShell module added to your environment of choice, it's time to create
+a Resource Graph shared query. The shared query is an Azure Resource Manager object that you can
+grant permission to or run in Azure Resource Graph Explorer. The query summarizes the count of all
+resources grouped by _location_.
+
+1. Create a resource group with
+ [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) to store the Azure
+ Resource Graph shared query. This resource group is named `resource-graph-queries` and the
+ location is `westus2`.
+
+ ```azurepowershell-interactive
+ # Login first with `Connect-AzAccount` if not using Cloud Shell
+
+ # Create the resource group
+ New-AzResourceGroup -Name resource-graph-queries -Location westus2
+ ```
+
+1. Create the Azure Resource Graph shared query using the `Az.ResourceGraph` PowerShell module and
+ [New-AzResourceGraphQuery](/powershell/module/az.resourcegraph/new-azresourcegraphquery)
+ cmdlet:
+
+ ```azurepowershell-interactive
+ # Create the Azure Resource Graph shared query
+ $Params = @{
+ Name = 'Summarize resources by location'
+ ResourceGroupName = 'resource-graph-queries'
+ Location = 'westus2'
+ Description = 'This shared query summarizes resources by location for a pinnable map graphic.'
+ Query = 'Resources | summarize count() by location'
+ }
+ New-AzResourceGraphQuery @Params
+ ```
+
+1. List the shared queries in the new resource group. The
+ [Get-AzResourceGraphQuery](/powershell/module/az.resourcegraph/get-azresourcegraphquery)
+ cmdlet returns an array of values.
+
+ ```azurepowershell-interactive
+ # List all the Azure Resource Graph shared queries in a resource group
+ Get-AzResourceGraphQuery -ResourceGroupName resource-graph-queries
+ ```
+
+1. To get just a single shared query result, use `Get-AzResourceGraphQuery` with its `Name` parameter.
+
+ ```azurepowershell-interactive
+ # Show a specific Azure Resource Graph shared query
+ Get-AzResourceGraphQuery -ResourceGroupName resource-graph-queries -Name 'Summarize resources by location'
+ ```
+
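As an optional check that isn't part of the original quickstart, you can preview what the shared query returns by running the same query text directly with the `Search-AzGraph` cmdlet from the Az.ResourceGraph module.

```azurepowershell-interactive
# Preview the results the shared query would return
Search-AzGraph -Query 'Resources | summarize count() by location'
```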
+## Clean up resources
+
+If you wish to remove the Resource Graph shared query and resource group from your Azure
+environment, you can do so by using the following commands:
+
+- [Remove-AzResourceGraphQuery](/powershell/module/az.resourcegraph/remove-azresourcegraphquery)
+- [Remove-AzResourceGroup](/cli/azure/group#az_group_delete)
+
+```azurepowershell-interactive
+# Delete the Azure Resource Graph shared query
+Remove-AzResourceGraphQuery -ResourceGroupName resource-graph-queries -Name 'Summarize resources by location'
+
+# Remove the resource group
+# WARNING: This command deletes ALL resources you've added to this resource group
+Remove-AzResourceGroup -Name resource-graph-queries
+```
+
+## Next steps
+
+In this quickstart, you've created a Resource Graph shared query using Azure PowerShell. To learn
+more about the Resource Graph language, continue to the query language details page.
+
+> [!div class="nextstepaction"]
+> [Get more information about the query language](./concepts/query-language.md)
\ No newline at end of file
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/apache-ambari-usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/apache-ambari-usage.md
@@ -6,7 +6,7 @@ ms.author: hrasheed
ms.reviewer: jasonh ms.service: hdinsight ms.topic: conceptual
-ms.date: 02/05/2020
+ms.date: 01/12/2021
--- # Apache Ambari usage in Azure HDInsight
@@ -61,9 +61,18 @@ Never manually start/stop ambari-server or ambari-agent services, unless you're
Never manually modify any configuration files on any cluster node, let Ambari UI do the job for you.
+## Property values in ESP clusters
+
+In HDInsight 4.0 Enterprise Security Package clusters, use pipes `|` rather than commas as variable delimiters. An example is shown below:
+
+```
+Property Key: hive.security.authorization.sqlstd.confwhitelist.append
+Property Value: environment|env|dl_data_dt
+```
+ ## Next steps * [Manage HDInsight clusters by using the Apache Ambari Web UI](hdinsight-hadoop-manage-ambari.md) * [Manage HDInsight clusters by using the Apache Ambari REST API](hdinsight-hadoop-manage-ambari-rest-api.md)
-[!INCLUDE [troubleshooting next steps](../../includes/hdinsight-troubleshooting-next-steps.md)]
\ No newline at end of file
+[!INCLUDE [troubleshooting next steps](../../includes/hdinsight-troubleshooting-next-steps.md)]
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/troubleshoot-invalidnetworkconfigurationerrorcode-cluster-creation-fails https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/troubleshoot-invalidnetworkconfigurationerrorcode-cluster-creation-fails.md
@@ -6,7 +6,7 @@ ms.author: hrasheed
ms.reviewer: jasonh ms.service: hdinsight ms.topic: troubleshooting
-ms.date: 01/22/2020
+ms.date: 01/12/2021
--- # Cluster creation fails with InvalidNetworkConfigurationErrorCode in Azure HDInsight
@@ -142,6 +142,13 @@ hostname -f
nslookup <headnode_fqdn> (e.g.nslookup hn1-hditest.5h6lujo4xvoe1kprq3azvzmwsd.hx.internal.cloudapp.net) dig @168.63.129.16 <headnode_fqdn> (e.g. dig @168.63.129.16 hn0-hditest.5h6lujo4xvoe1kprq3azvzmwsd.hx.internal.cloudapp.net) ```
+### Cause
+
+Another cause for this `InvalidNetworkConfigurationErrorCode` error code could be the use of the deprecated parameter `EnableVmProtection` in PowerShell or an Azure Runbook.
+
+### Resolution
+
+Use the valid parameters for `Get-AzVirtualNetwork` as documented in the [Az PowerShell SDK](https://docs.microsoft.com/powershell/module/az.network/get-azvirtualnetwork?view=azps-5.3.0&viewFallbackFrom=azps-4.2.0).
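As an illustration only, a supported call looks like the sketch below; the virtual network and resource group names are placeholders.

```azurepowershell-interactive
# Valid usage: no deprecated EnableVmProtection parameter
Get-AzVirtualNetwork -Name 'hdinsight-vnet' -ResourceGroupName 'hdinsight-rg'
```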
---
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-custom-ambari-db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-custom-ambari-db.md
@@ -6,7 +6,7 @@ ms.reviewer: jasonh
ms.service: hdinsight ms.custom: hdinsightactive ms.topic: how-to
-ms.date: 06/24/2019
+ms.date: 01/12/2021
ms.author: hrasheed --- # Set up HDInsight clusters with a custom Ambari DB
@@ -59,6 +59,20 @@ az deployment group create --name HDInsightAmbariDBDeployment \
--parameters azuredeploy.parameters.json ```
+## Database sizing
+
+The following table provides guidelines on which Azure SQL DB tier to select based on the size of your HDInsight cluster.
+
+| Number of worker nodes | Required DB tier |
+|---|---|
+| <=4 | S0 |
+| >4 && <=8 | S1 |
+| >8 && <=16 | S2 |
+| >16 && <=32 | S3 |
+| >32 && <=64 | S4 |
+| >64 && <=128 | P2 |
+| >128 | Contact Support |
+ ## Next steps - [Use external metadata stores in Azure HDInsight](hdinsight-use-external-metadata-stores.md)
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-managed-identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-managed-identities.md
@@ -50,6 +50,7 @@ If you have already created a long running cluster with multiple different manag
* In ESP clusters, when changing AAD-DS LDAPS cert, the LDAPS certificate does not automatically get updated and therefore LDAP sync and scale ups start failing. * MSI access to ADLS Gen2 starts failing. * Encryption Keys cannot be rotated in the CMK scenario.+ then you should assign the required roles and permissions for the above scenarios to all of those managed identities used in the cluster. For example, if you used different managed identities for ADLS Gen2 and ESP clusters then both of them should have the "Storage blob data Owner" and "HDInsight Domain Services Contributor" roles assigned to them to avoid running into these issues. ## FAQ
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-overview.md
@@ -49,7 +49,7 @@ Extract, transform, and load (ETL) is a process where unstructured or structured
### Data warehousing
-You can use HDInsight to perform interactive queries at petabyte scales over structured or unstructured data in any format. You can also build models connecting them to BI tools. For more information, [read this customer story](https://customers.microsoft.com/story/milliman).
+You can use HDInsight to perform interactive queries at petabyte scales over structured or unstructured data in any format. You can also build models connecting them to BI tools.
![HDInsight architecture: Data warehousing](./hadoop/media/apache-hadoop-introduction/hdinsight-architecture-data-warehouse.png "HDInsight Data warehousing architecture")
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-plan-virtual-network-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-plan-virtual-network-deployment.md
@@ -7,7 +7,7 @@ ms.reviewer: jasonh
ms.service: hdinsight ms.topic: conceptual ms.custom: hdinsightactive,seoapr2020
-ms.date: 05/04/2020
+ms.date: 01/12/2021
--- # Plan a virtual network for Azure HDInsight
@@ -46,7 +46,8 @@ The following are the questions that you must answer when planning to install HD
Use the steps in this section to discover how to add a new HDInsight to an existing Azure Virtual Network. > [!NOTE]
-> You cannot add an existing HDInsight cluster into a virtual network.
+> - You cannot add an existing HDInsight cluster into a virtual network.
+> - The VNET and the cluster being created must be in the same subscription.
1. Are you using a classic or Resource Manager deployment model for the virtual network?
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-resource-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-resource-manager.md
@@ -7,7 +7,7 @@ ms.reviewer: jasonh
ms.service: hdinsight ms.topic: how-to ms.custom: hdinsightactive
-ms.date: 12/06/2019
+ms.date: 01/12/2021
--- # Manage resources for Apache Spark cluster on Azure HDInsight
@@ -80,6 +80,9 @@ The following command is an example of how to change the configuration parameter
curl -k -v -H 'Content-Type: application/json' -X POST -d '{"file":"<location of application jar file>", "className":"<the application class to execute>", "args":[<application parameters>], "numExecutors":10, "executorMemory":"2G", "executorCores":5}' localhost:8998/batches ```
+> [!Note]
+> Copy the JAR file to your cluster storage account. Do not copy the JAR file directly to the head node.
+ ### Change these parameters on a Spark Thrift Server Spark Thrift Server provides JDBC/ODBC access to a Spark cluster and is used to service Spark SQL queries. Tools like Power BI, Tableau, and so on, use ODBC protocol to communicate with Spark Thrift Server to execute Spark SQL queries as a Spark Application. When a Spark cluster is created, two instances of the Spark Thrift Server are started, one on each head node. Each Spark Thrift Server is visible as a Spark application in the YARN UI.
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/spark-dotnet-version-update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/spark-dotnet-version-update.md new file mode 100644
@@ -0,0 +1,56 @@
+---
+title: Updating .NET for Apache Spark to version v1.0 in HDI
+description: Learn about updating .NET for Apache Spark version to 1.0 in HDI and how that affects your existing code and clusters.
+author: Niharikadutta
+ms.author: nidutta
+ms.service: hdinsight
+ms.topic: how-to
+ms.date: 01/05/2021
+---
+
+# Updating .NET for Apache Spark to version v1.0 in HDInsight
+
+This document describes the first major version of [.NET for Apache Spark](https://github.com/dotnet/spark) and how it might affect your current production pipelines in HDInsight clusters.
+
+## About .NET for Apache Spark version 1.0.0
+
+This is the first [major official release](https://github.com/dotnet/spark/releases/tag/v1.0.0) of .NET for Apache Spark and provides DataFrame API completeness for Spark 2.4.x as well as Spark 3.0.x along with other features. For a complete list of all features, improvements and bug fixes, see the official [v1.0.0 release notes](https://github.com/dotnet/spark/blob/master/docs/release-notes/1.0.0/release-1.0.0.md).
+Another important thing to note is that this version is **not** compatible with prior versions of `Microsoft.Spark` and `Microsoft.Spark.Worker`. Check out the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) if you are planning to upgrade your .NET for Apache Spark application to be compatible with v1.0.0.
+
+## Using .NET for Apache Spark v1.0 in HDInsight
+
+While current HDI clusters will not be affected (that is, they will still have the same version as before), newly created HDI clusters will carry the latest v1.0.0 version of .NET for Apache Spark. What this means is:
+
+- **You have an older HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), you will have to update the `Microsoft.Spark.Worker` version on your HDI cluster. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](#changing-net-for-apache-spark-version-on-hdinsight).
+If you don't want to update the current version of .NET for Apache Spark in your application, no further steps are necessary.
+
+- **You have a new HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), no steps are needed to change the worker on HDI, however you will have to refer to the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) to understand the steps needed to update your code and pipelines.
+If you don't want to change the current version of .NET for Apache Spark in your application, you would have to change the version on your HDI cluster from v1.0 (default on new clusters) to whichever version you are using. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](spark-dotnet-version-update.md#changing-net-for-apache-spark-version-on-hdinsight).
+
+## Changing .NET for Apache Spark version on HDInsight
+
+### Deploy Microsoft.Spark.Worker
+
+`Microsoft.Spark.Worker` is a backend component that lives on the individual worker nodes of your Spark cluster. When you want to execute a C# UDF (user-defined function), Spark needs to understand how to launch the .NET CLR to execute this UDF. `Microsoft.Spark.Worker` provides a collection of classes to Spark that enable this functionality. Select the worker version depending on the version of .NET for Apache Spark you want to deploy on the HDI cluster.
+
+1. Download the Microsoft.Spark.Worker Linux release of your particular version. For example, if you want `.NET for Apache Spark v1.0.0`, you'd download [Microsoft.Spark.Worker.netcoreapp3.1.linux-x64-1.0.0.tar.gz](https://github.com/dotnet/spark/releases/tag/v1.0.0).
+
+2. Download [install-worker.sh](https://github.com/dotnet/spark/blob/master/deployment/install-worker.sh) script to install the worker binaries downloaded in Step 1 to all the worker nodes of your HDI cluster.
+
+3. Upload the above-mentioned files to the Azure Storage account your cluster has access to. For more information, see [the .NET for Apache Spark HDI deployment article](https://docs.microsoft.com/dotnet/spark/tutorials/hdinsight-deployment#upload-files-to-azure).
+
+4. Run the `install-worker.sh` script on all worker nodes of your cluster, using Script actions. Refer to [the .NET for Apache Spark HDI deployment article](https://docs.microsoft.com/dotnet/spark/tutorials/hdinsight-deployment#run-the-hdinsight-script-action) for more information.
+
+### Update your application to use specific version
+
+You can update your .NET for Apache Spark application to use a specific version by choosing the required version of the [Microsoft.Spark NuGet package](https://www.nuget.org/packages/Microsoft.Spark/) in your project. Be sure to check out the release notes of the particular version and the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) as mentioned above, if choosing to update your application to v1.0.0.
+
+## FAQs
+
+### Will my existing HDI cluster with version < 1.0.0 start failing with the new release?
+
+Existing HDI clusters will continue to have the same version of .NET for Apache Spark as before, and your existing applications (built against that previous version) will not be affected.
+
+## Next steps
+
+[Deploy your .NET for Apache Spark application on HDInsight](https://docs.microsoft.com/dotnet/spark/tutorials/hdinsight-deployment)
hpc-cache https://docs.microsoft.com/en-us/azure/hpc-cache/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/security-baseline.md
@@ -106,7 +106,7 @@ Azure HPC Cache is not intended to run web applications, and does not require yo
- [Manage Azure DDoS Protection Standard using the Azure portal](../ddos-protection/manage-ddos-protection.md) -- [Azure Security Center recommendations](../security-center/recommendations-reference.md#recs-network)
+- [Azure Security Center recommendations](../security-center/recommendations-reference.md#recs-networking)
**Azure Security Center monitoring**: Yes
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/concepts-telemetry-properties-commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-telemetry-properties-commands.md
@@ -183,6 +183,9 @@ The following snippet from a device model shows the definition of a `geopoint` t
} ```
+> [!NOTE]
+> The **geopoint** schema type is not part of the [Digital Twins Definition Language specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility.
+ A device client should send the telemetry as JSON that looks like the following example. IoT Central displays the value as a pin on a map: ```json
@@ -571,6 +574,9 @@ The following snippet from a device model shows the definition of a `geopoint` p
} ```
+> [!NOTE]
+> The **geopoint** schema type is not part of the [Digital Twins Definition Language specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility.
+ A device client should send a JSON payload that looks like the following example as a reported property in the device twin: ```json
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/how-to-move-device-to-iot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-move-device-to-iot.md new file mode 100644
@@ -0,0 +1,39 @@
+---
+title: How to move a device to Azure IoT Central from IoT Hub
+description: How to move a device to Azure IoT Central from IoT Hub
+author: TheRealJasonAndrew
+ms.author: v-anjaso
+ms.date: 12/20/2020
+ms.topic: how-to
+ms.service: iot-central
+services: iot-central
+---
+
+# How to transfer a device to Azure IoT Central from IoT Hub
+
+*This article applies to operators and device developers.*
+
+This article describes how to transfer a device to an Azure IoT Central application from an IoT Hub.
+
+A device first connects to a DPS endpoint to retrieve the information it needs to connect to your application. Internally, your IoT Central application uses an IoT hub to handle device connectivity.
+
+A device can connect to an IoT hub directly by using a connection string, or through DPS. [Azure IoT Hub Device Provisioning Service (DPS)](../../iot-dps/about-iot-dps.md) is the route for IoT Central.
+
+## To move the device
+
+To connect a device to IoT Central from IoT Hub, the device needs to be updated with:
+
+* The [Scope ID](../../iot-dps/concepts-service.md) of the IoT Central application.
+* A key derived either from the [group SAS](concepts-get-connected.md) key or from [the X.509 cert](../../iot-hub/iot-hub-x509ca-overview.md). A sketch of the SAS key derivation follows this list.
+
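One common way to derive the per-device key from the group SAS key is an HMAC-SHA256 of the device ID, keyed with the group's primary key (the derivation used for DPS symmetric key group enrollments). A minimal Node.js sketch; the group key and device ID values below are placeholders, not taken from this article:

```javascript
const crypto = require('crypto');

// Derive a per-device key from an enrollment group SAS key (base64) and a device ID.
function deriveDeviceKey(groupSasKey, deviceId) {
  return crypto
    .createHmac('sha256', Buffer.from(groupSasKey, 'base64'))
    .update(deviceId, 'utf8')
    .digest('base64');
}

// Placeholder values for illustration only.
console.log(deriveDeviceKey('<enrollment-group-primary-key>', 'my-device-01'));
```

The device then uses this derived key, together with the Scope ID, to provision through DPS and connect to the IoT Central application.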
+To interact with IoT Central, there must be a device template that models the properties, telemetry, and commands that the device implements. For more information, see [Get connected to IoT Central](concepts-get-connected.md) and [What are device templates?](concepts-device-templates.md)
+
+## Next steps
+
+If you're a device developer, some suggested next steps are to:
+
+- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
+- Learn how to [connect devices with X.509 certificates using the Node.js device SDK for IoT Central](how-to-connect-devices-x509.md)
+- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
+- Learn how to [Define a new IoT device type in your Azure IoT Central application](./howto-set-up-template.md)
+- Read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md)
\ No newline at end of file
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-use-commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-commands.md new file mode 100644
@@ -0,0 +1,249 @@
+---
+title: How to use device commands in an Azure IoT Central solution
+description: How to use device commands in an Azure IoT Central solution. This tutorial shows you how, as a device developer, to use device commands in a client app connected to your Azure IoT Central application.
+author: dominicbetts
+ms.author: dobett
+ms.date: 01/07/2021
+ms.topic: how-to
+ms.service: iot-central
+services: iot-central
+---
+
+# How to use commands in an Azure IoT Central solution
+
+This how-to guide shows you how, as a device developer, to use commands that are defined in a device template.
+
+An operator can use the IoT Central UI to call a command on a device. Commands control the behavior of a device. For example, an operator might call a command to reboot a device or collect diagnostics data.
+
+A device can:
+
+* Respond to a command immediately.
+* Respond to IoT Central when it receives the command and then later notify IoT Central when the *long-running command* is complete.
+
+By default, commands expect a device to be connected and fail if the device can't be reached. If you select the **Queue if offline** option in the device template UI a command can be queued until a device comes online. These *offline commands* are described in a separate section later in this article.
+
+## Define your commands
+
+Standard commands are sent to a device to instruct the device to do something. A command can include parameters with additional information. For example, a command to open a valve on a device could have a parameter that specifies how much to open the valve. Commands can also receive a return value when the device completes the command. For example, a command that asks a device to run some diagnostics could receive a diagnostics report as a return value.
+
+Commands are defined as part of a device template. The following screenshot shows the **Get Max-Min report** command definition in the **Thermostat** device template. This command has both request and response parameters:
+
+:::image type="content" source="media/howto-use-commands/command-definition.png" alt-text="Screenshot showing Get Max Min Report command in Thermostat device template":::
+
+The following table shows the configuration settings for a command capability:
+
+| Field |Description|
+|-------------------|-----------|
+|Display Name |The command value used on dashboards and forms.|
+| Name | The name of the command. IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. This field needs to be alphanumeric. The device code uses this **Name** value.|
+| Capability Type | Command.|
+| Queue if offline | Whether to make this command an *offline* command. |
+| Description | A description of the command capability.|
+| Comment | Any comments about the command capability.|
+| Request | The payload for the device command.|
+| Response | The payload of the device command response.|
+
+The following snippet shows the JSON representation of the command in the device model. In this example, the response value is a complex **Object** type with multiple fields:
+
+```json
+{
+ "@type": "Command",
+ "name": "getMaxMinReport",
+ "displayName": "Get Max-Min report.",
+ "description": "This command returns the max, min and average temperature from the specified time to the current time.",
+ "request": {
+ "name": "since",
+ "displayName": "Since",
+ "description": "Period to return the max-min report.",
+ "schema": "dateTime"
+ },
+ "response": {
+ "name" : "tempReport",
+ "displayName": "Temperature Report",
+ "schema": {
+ "@type": "Object",
+ "fields": [
+ {
+ "name": "maxTemp",
+ "displayName": "Max temperature",
+ "schema": "double"
+ },
+ {
+ "name": "minTemp",
+ "displayName": "Min temperature",
+ "schema": "double"
+ },
+ {
+ "name" : "avgTemp",
+ "displayName": "Average Temperature",
+ "schema": "double"
+ },
+ {
+ "name" : "startTime",
+ "displayName": "Start Time",
+ "schema": "dateTime"
+ },
+ {
+ "name" : "endTime",
+ "displayName": "End Time",
+ "schema": "dateTime"
+ }
+ ]
+ }
+ }
+}
+```
+
+> [!TIP]
+> You can export a device model from the device template page.
+
+You can relate this command definition to the screenshot of the UI using the following fields:
+
+* `@type` to specify the type of capability: `Command`
+* `name` for the command value.
+
+Optional fields, such as display name and description, let you add more details to the interface and capabilities.
+
+## Standard commands
+
+This section shows you how a device sends a response value as soon as it receives the command.
+
+The following code snippet shows how a device can respond to a command immediately by sending a success code:
+
+> [!NOTE]
+> This article uses Node.js for simplicity. For other language examples, see the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial.
+
+```javascript
+client.onDeviceMethod('getMaxMinReport', commandHandler);
+
+// ...
+
+const commandHandler = async (request, response) => {
+ switch (request.methodName) {
+ case 'getMaxMinReport': {
+ console.log('MaxMinReport ' + request.payload);
+ await sendCommandResponse(request, response, 200, deviceTemperatureSensor.getMaxMinReportObject());
+ break;
+ }
+ default:
+ await sendCommandResponse(request, response, 404, 'unknown method');
+ break;
+ }
+};
+
+const sendCommandResponse = async (request, response, status, payload) => {
+ try {
+ await response.send(status, payload);
+ console.log('Response to method \'' + request.methodName +
+ '\' sent successfully.' );
+ } catch (err) {
+    console.error('An error occurred when sending a method response:\n' +
+ err.toString());
+ }
+};
+```
+
+The call to `onDeviceMethod` sets up the `commandHandler` method. This command handler:
+
+1. Checks the name of the command.
+1. For the `getMaxMinReport` command, it calls `getMaxMinReportObject` to retrieve the values to include in the return object.
+1. Calls `sendCommandResponse` to send the response back to IoT Central. This response includes the `200` response code to indicate success.
+
+The following screenshot shows how the successful command response displays in the IoT Central UI:
+
+:::image type="content" source="media/howto-use-commands/simple-command-ui.png" alt-text="Screenshot showing how to view command payload for a standard command":::
+
+## Long-running commands
+
+This section shows you how a device can delay sending a confirmation that the command completed.
+
+The following code snippet shows how a device can implement a long-running command:
+
+> [!NOTE]
+> This article uses Node.js for simplicity. For other language examples, see the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial.
+
+```javascript
+client.onDeviceMethod('rundiagnostics', commandHandler);
+
+// ...
+
+const commandHandler = async (request, response) => {
+ switch (request.methodName) {
+ case 'rundiagnostics': {
+ console.log('Starting long-running diagnostics run ' + request.payload);
+ await sendCommandResponse(request, response, 202, 'Diagnostics run started');
+
+ // Long-running operation here
+ // ...
+
+ const patch = {
+ rundiagnostics: {
+ value: 'Diagnostics run complete at ' + new Date().toLocaleString()
+ }
+ };
+
+ deviceTwin.properties.reported.update(patch, function (err) {
+ if (err) throw err;
+ console.log('Properties have been reported for component');
+ });
+ break;
+ }
+ default:
+ await sendCommandResponse(request, response, 404, 'unknown method');
+ break;
+ }
+};
+```
+
+The call to `onDeviceMethod` sets up the `commandHandler` method. This command handler:
+
+1. Checks the name of the command.
+1. Calls `sendCommandResponse` to send the response back to IoT Central. This response includes the `202` response code to indicate pending results.
+1. Completes the long-running operation.
+1. Uses a reported property with the same name as the command to tell IoT Central that the command completed.
+
+The following screenshot shows how the command response displays in the IoT Central UI when it receives the 202 response code:
+
+:::image type="content" source="media/howto-use-commands/long-running-start.png" alt-text="Screenshot that shows immediate response from device":::
+
+The following screenshot shows the IoT Central UI when it receives the property update that indicates the command is complete:
+
+:::image type="content" source="media/howto-use-commands/long-running-finish.png" alt-text="Screenshot that shows long-running command finished":::
+
+## Offline commands
+
+This section shows you how a device handles an offline command. If a device is online, it can handle the offline command as soon as it's received. If a device is offline, it handles the offline command when it next connects to IoT Central. Devices can't send a return value in response to an offline command.
+
+> [!NOTE]
+> This article uses Node.js for simplicity.
+
+The following screenshot shows an offline command called **GenerateDiagnostics**. The request parameter is an object with a datetime property called **StartTime** and an integer enumeration property called **Bank**:
+
+:::image type="content" source="media/howto-use-commands/offline-command.png" alt-text="Screenshot that shows the UI for an offline command":::
+
+The following code snippet shows how a client can listen for offline commands and display the message contents:
+
+```javascript
+client.on('message', function (msg) {
+ console.log('Body: ' + msg.data);
+ console.log('Properties: ' + JSON.stringify(msg.properties));
+ client.complete(msg, function (err) {
+ if (err) {
+ console.error('complete error: ' + err.toString());
+ } else {
+ console.log('complete sent');
+ }
+ });
+});
+```
+
+The output from the previous code snippet shows the payload with the **StartTime** and **Bank** values. The property list includes the command name in the **method-name** list item:
+
+```output
+Body: {"StartTime":"2021-01-06T06:00:00.000Z","Bank":2}
+Properties: {"propertyList":[{"key":"iothub-ack","value":"none"},{"key":"method-name","value":"GenerateDiagnostics"}]}
+```
+
+## Next steps
+
+Now that you've learned how to use commands in your Azure IoT Central application, see [Payloads](concepts-telemetry-properties-commands.md) to learn more about command parameters and [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) to see complete code samples in different languages.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-use-location-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-location-data.md new file mode 100644
@@ -0,0 +1,109 @@
+---
+title: Use location data in an Azure IoT Central solution
+description: Learn how to use location data sent from a device connected to your IoT Central application. Plot location data on a map or create geofencing rules.
+author: dominicbetts
+ms.author: dobett
+ms.date: 01/08/2021
+ms.topic: how-to
+ms.service: iot-central
+services: iot-central
+
+---
+
+# Use location data in an Azure IoT Central solution
+
+This article shows you how to use location data in an IoT Central application. A device connected to IoT Central can send location data as a telemetry stream or use a device property to report location data.
+
+A solution builder can use the location data to:
+
+* Plot the reported location on a map.
+* Plot the telemetry location history on a map.
+* Create geofencing rules to notify an operator when a device enters or leaves a specific area.
+
+## Add location capabilities to a device template
+
+The following screenshot shows a device template with examples of a device property and telemetry type that use location data. The definitions use the **location** semantic type and the **geopoint** schema type:
+
+:::image type="content" source="media/howto-use-location-data/location-device-template.png" alt-text="Screenshot showing location property definition in device template":::
+
+For reference, the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) definitions for these capabilities look like the following snippet:
+
+```json
+{
+ "@type": [
+ "Property",
+ "Location"
+ ],
+ "displayName": {
+ "en": "DeviceLocation"
+ },
+ "name": "DeviceLocation",
+ "schema": "geopoint",
+ "writable": false
+},
+{
+ "@type": [
+ "Telemetry",
+ "Location"
+ ],
+ "displayName": {
+ "en": "Tracking"
+ },
+ "name": "Tracking",
+ "schema": "geopoint"
+}
+```
+
+> [!NOTE]
+> The **geopoint** schema type is not part of the [DTDL specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility.
+
+## Send location data from a device
+
+When a device sends data for the **DeviceLocation** property shown in the previous section, the payload looks like the following JSON snippet:
+
+```json
+{
+ "DeviceLocation": {
+ "lat": 47.64263,
+ "lon": -122.13035,
+ "alt": 0
+ }
+}
+```
+
+When a device sends data for the **Tracking** telemetry shown in the previous section, the payload looks like the following JSON snippet:
+
+```json
+{
+ "Tracking": {
+ "lat": 47.64263,
+ "lon": -122.13035,
+ "alt": 0
+ }
+}
+```
+
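For reference, the following is a minimal Node.js sketch of a device sending both payloads. It assumes the `azure-iot-device` and `azure-iot-device-mqtt` packages and a device connection string in a `DEVICE_CONNECTION_STRING` environment variable; these names are illustrative and not part of the article:

```javascript
const { Client, Message } = require('azure-iot-device');
const { Mqtt } = require('azure-iot-device-mqtt');

const client = Client.fromConnectionString(process.env.DEVICE_CONNECTION_STRING, Mqtt);

client.open(err => {
  if (err) throw err;

  // Send the Tracking location value as a telemetry message.
  const telemetry = new Message(JSON.stringify({
    Tracking: { lat: 47.64263, lon: -122.13035, alt: 0 }
  }));
  client.sendEvent(telemetry, err =>
    console.log(err ? err.toString() : 'Tracking telemetry sent'));

  // Report the DeviceLocation value as a property in the device twin.
  client.getTwin((err, twin) => {
    if (err) throw err;
    const patch = { DeviceLocation: { lat: 47.64263, lon: -122.13035, alt: 0 } };
    twin.properties.reported.update(patch, err =>
      console.log(err ? err.toString() : 'DeviceLocation property reported'));
  });
});
```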
+## Display device location
+
+You can display location data in multiple places in your IoT Central application. For example, on views associated with an individual device or on dashboards.
+
+When you create a view for a device, you can choose to plot the location on a map, or show the individual values:
+
+:::image type="content" source="media/howto-use-location-data/location-views.png" alt-text="Screenshot showing example view with location data":::
+
+You can add map tiles to a dashboard to plot the location of one or more devices. When you add a map tile to show location telemetry, you can plot the location over a time period. The following screenshot shows the location reported by a simulated device over the last 30 minutes:
+
+:::image type="content" source="media/howto-use-location-data/location-dashboard.png" alt-text="Screenshot showing example dashboard with location data":::
+
+## Create a geofencing rule
+
+You can use location telemetry to create a geofencing rule that generates an alert when a device moves into or out of a rectangular area. The following screenshot shows a rule that uses four conditions to define a rectangular area using latitude and longitude values. The rule generates an email when the device moves into the rectangular area:
+
+:::image type="content" source="media/howto-use-location-data/geofence-rule.png" alt-text="Screenshot that shows a geofencing rule definition":::
+
+## Next steps
+
+Now that you've learned how to use location data in your Azure IoT Central application, see:
+
+* [Payloads](concepts-telemetry-properties-commands.md)
+* [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
\ No newline at end of file
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-use-properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-properties.md
@@ -12,7 +12,7 @@ services: iot-central
# Use properties in an Azure IoT Central solution
-This article shows you how to use device properties that are defined in a device template in your Azure IoT Central application.
+This how-to guide shows you how, as a device developer, to use device properties that are defined in a device template in your Azure IoT Central application.
Properties represent point-in-time values. For example, a device can use a property to report the target temperature it's trying to reach. By default, device properties are read-only in IoT Central. Writable properties let you synchronize state between your device and your Azure IoT Central application.
@@ -31,7 +31,7 @@ The following table shows the configuration settings for a property capability.
| Field | Description | |-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Display name | The display name for the property value used on dashboards and forms. |
-| Name | The name of the property. Azure IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. This field must be alphanumeric. |
+| Name | The name of the property. Azure IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. This field must be alphanumeric. The device code uses this **Name** value (see the sketch after this table). |
| Capability type | Property. | | Semantic type | The semantic type of the property, such as temperature, state, or event. The choice of semantic type determines which of the following fields are available. | | Schema | The property data type, such as double, string, or vector. The available choices are determined by the semantic type. Schema isn't available for the event and state semantic types. |
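As a minimal sketch of how device code uses that **Name** value, the following Node.js snippet reports a hypothetical property named `targetTemperature` through the device twin. It assumes an already connected `hubClient` from the `azure-iot-device` SDK:

```javascript
// Report a device property; the key in the patch must match the capability's Name value.
hubClient.getTwin((err, twin) => {
  if (err) throw err;
  const patch = { targetTemperature: 21.5 };
  twin.properties.reported.update(patch, err => {
    if (err) throw err;
    console.log('targetTemperature property reported');
  });
});
```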
@@ -156,7 +156,7 @@ hubClient.getTwin((err, twin) => {
}); ```
-This article uses Node.js for simplicity. For complete information about device application examples, see the following [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial.
+This article uses Node.js for simplicity. For other language examples, see the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial.
The following view in Azure IoT Central application shows the properties you can see. The view automatically makes the **Device model** property a _read-only device property_.
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/tutorial-machine-learning-edge-04-train-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-04-train-model.md
@@ -13,19 +13,18 @@ services: iot-edge
In this article, we do the following tasks:
-* Use Azure Notebooks to train a machine learning model.
+* Use Azure Machine Learning Studio to train a machine learning model.
* Package the trained model as a container image. * Deploy the container image as an Azure IoT Edge module.
-The Azure Notebooks take advantage of an Azure Machine Learning workspace, a foundational block used to experiment, train, and deploy machine learning models.
+Azure Machine Learning Studio is a foundational block used to experiment, train, and deploy machine learning models.
The steps in this article might be typically performed by data scientists. In this section of the tutorial, you learn how to: > [!div class="checklist"]
->
-> * Create an Azure Notebooks project to train a machine learning model.
+> * Create Jupyter Notebooks in Azure Machine Learning Workspace to train a machine learning model.
> * Containerize the trained machine learning model. > * Create an Azure IoT Edge module from the containerized machine learning model.
@@ -33,49 +32,49 @@ In this section of the tutorial, you learn how to:
This article is part of a series for a tutorial about using Azure Machine Learning on IoT Edge. Each article in the series builds on the work in the previous article. If you have arrived at this article directly, visit the [first article](tutorial-machine-learning-edge-01-intro.md) in the series.
-## Set up Azure Notebooks
+## Set up Azure Machine Learning
-We use Azure Notebooks to host the two Jupyter Notebooks and supporting files. Here we create and configure an Azure Notebooks project. If you have not used Jupyter and/or Azure Notebooks, here are a couple of introductory documents:
+We use Azure Machine Learning Studio to host the two Jupyter Notebooks and supporting files. Here we create and configure an Azure Machine Learning project. If you have not used Jupyter and/or Azure Machine Learning Studio, here are a couple of introductory documents:
-* **Quickstart:** [Create and share a notebook](../notebooks/quickstart-create-share-jupyter-notebook.md)
-* **Tutorial:** [Create and run a Jupyter notebook with Python](../notebooks/tutorial-create-run-jupyter-notebook.md)
+* **Jupyter Notebooks:** [Working with Jupyter Notebooks in Visual Studio Code](https://code.visualstudio.com/docs/python/jupyter-support)
+* **Azure Machine Learning:** [Get Started with Azure Machine Learning in Jupyter Notebooks](../machine-learning/tutorial-1st-experiment-sdk-setup.md)
-Using Azure Notebooks ensures a consistent environment for the exercise.
> [!NOTE]
-> Once set up, the Azure Notebooks service can be accessed from any machine. During setup, you should use the development VM, which has all of the files that you will need.
+> Once set up, the Azure Machine Learning service can be accessed from any machine. During setup, you should use the development VM, which has all of the files that you will need.
-### Create an Azure Notebooks account
+### Install Azure Machine Learning Visual Studio Code extension
+VS Code on the development VM should have this extension installed. If you are running on a different instance, reinstall the extension as described [here](../machine-learning/tutorial-setup-vscode-extension.md).
-To use Azure Notebooks, you need to create an account. Azure Notebook accounts are independent from Azure subscriptions.
+### Create an Azure Machine Learning account
+In order to provision resources and run workloads on Azure, you have to sign in with your Azure account credentials.
-1. Navigate to [Azure Notebooks](https://notebooks.azure.com).
+1. In Visual Studio Code, open the command palette by selecting **View** > **Command Palette** from the menu bar.
-1. Click **Sign In** in the upper right corner of the page.
+1. Enter the command `Azure: Sign In` into the command palette to start the sign in process. Follow the instructions to complete sign in.
-1. Sign in with either your work or school account (Azure Active Directory) or your personal account (Microsoft Account).
+1. Create an Azure ML Compute instance to run your workload. Using the Command Palette, enter the command `Azure ML: Create Compute`.
+1. Select your Azure subscription.
+1. Select **+ Create new Azure ML workspace** and enter the name `turbofandemo`.
+1. Select the resource group that you have been using for this demo.
+1. You should be able to see the progress of workspace creation in the lower right-hand corner of your VS Code window: **Creating Workspace: turbofandemo** (this can take a minute or two).
+1. Please wait for the workspace to be created successfully. It should say **Azure ML workspace turbofandemo created**.
-1. If you have not used Azure Notebooks before, you will be prompted to grant access for the Azure Notebooks app.
-1. Create a user ID for Azure Notebooks.
+### Upload Jupyter Notebook files
-### Upload Jupyter notebook files
+We will upload sample notebook files into a new Azure ML workspace.
-We will upload sample notebook files into a new Azure Notebooks project.
+1. Navigate to ml.azure.com and sign in.
+1. Select your Microsoft directory, Azure subscription, and the newly created Azure ML workspace.
-1. From the user page of your new account, select **My Projects** from the top menu bar.
+ :::image type="content" source="media/tutorial-machine-learning-edge-04-train-model/select-studio-workspace.png" alt-text="Select your Azure ML workspace." :::
-1. Add a new project by selecting the **+** button.
+1. Once logged into your Azure ML workspace, navigate to the **Notebooks** section using the left side menu.
+1. Select the **My files** tab.
-1. On the **Create New Project** dialog box, provide a **Project Name**.
+1. Select **Upload** (the up arrow icon)
-1. Leave **Public** and **README** unchecked as there is no need for the project to be public or to have a readme.
-
-1. Select **Create**.
-
-1. Select **Upload** (the up arrow icon) and choose **From Computer**.
-
-1. Select **Choose files**.
1. Navigate to **C:\source\IoTEdgeAndMlSample\AzureNotebooks**. Select all the files in the list and click **Open**.
@@ -83,9 +82,9 @@ We will upload sample notebook files into a new Azure Notebooks project.
1. Select **Upload** to begin uploading and then select **Done** once the process is complete.
-### Azure Notebook files
+### Jupyter Notebook files
-Let's review the files you uploaded into your Azure Notebooks project. The activities in this portion of the tutorial span across two notebook files, which use a few supporting files.
+Let's review the files you uploaded into your Azure ML workspace. The activities in this portion of the tutorial span across two notebook files, which use a few supporting files.
* **01-turbofan\_regression.ipynb:** This notebook uses the Machine Learning service workspace to create and run a machine learning experiment. Broadly, the notebook does the following steps:
@@ -109,13 +108,13 @@ Let's review the files you uploaded into your Azure Notebooks project. The activ
* **README.md:** Readme describing the use of the notebooks.
-## Run Azure Notebooks
+## Run Jupyter Notebooks
-Now that the project is created, you can run the notebooks.
+Now that the workspace is created, you can run the notebooks.
-1. From your project page, select **01-turbofan\_regression.ipynb**.
+1. From your **My files** page, select **01-turbofan\_regression.ipynb**.
- ![Select first notebook to run](media/tutorial-machine-learning-edge-04-train-model/select-turbofan-regression-notebook.png)
+ :::image type="content" source="media/tutorial-machine-learning-edge-04-train-model/select-turbofan-notebook.png" alt-text="Select first notebook to run. ":::
1. If the notebook is listed as **Not Trusted**, click on the **Not Trusted** widget in the top right of the notebook. When the dialog comes up, select **Trust**.
@@ -156,7 +155,7 @@ Now that the project is created, you can run the notebooks.
To verify that the notebooks have completed successfully, verify that a few items were created.
-1. On your Azure Notebooks project page, select **Show hidden items** so that item names that begin with a period appear.
+1. On the **My files** tab of your Azure ML Notebooks, select **Refresh**.
1. Verify that the following files were created:
@@ -188,9 +187,9 @@ This tutorial is part of a set where each article builds on the work done in the
## Next steps
-In this article, we used two Jupyter Notebooks running in Azure Notebooks to use the data from the turbofan devices to train a remaining useful life (RUL) classifier, to save the classifier as a model, to create a container image, and to deploy and test the image as a web service.
+In this article, we used two Jupyter Notebooks running in Azure ML Studio to use the data from the turbofan devices to train a remaining useful life (RUL) classifier, to save the classifier as a model, to create a container image, and to deploy and test the image as a web service.
Continue to the next article to create an IoT Edge device. > [!div class="nextstepaction"]
-> [Configure an IoT Edge device](tutorial-machine-learning-edge-05-configure-edge-device.md)
+> [Configure an IoT Edge device](tutorial-machine-learning-edge-05-configure-edge-device.md)
\ No newline at end of file
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/tutorial-machine-learning-edge-05-configure-edge-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-05-configure-edge-device.md
@@ -75,7 +75,7 @@ In this section, we create the self-signed certificates using a Docker image tha
## Upload certificates to Azure Key Vault
-To store our certificates securely and to make them accessible from multiple devices, we will upload the certificates into Azure Key Vault. As you can see from the list above, we have two types of certificate files: PFX and PEM. We will treat the PFX as Key Vault certificates to be uploaded to Key Vault. The PEM files are plain text and we will treat them as Key Vault secrets. We will use the Key Vault associated with the Azure Machine Learning workspace we created by running the [Azure Notebooks](tutorial-machine-learning-edge-04-train-model.md#run-azure-notebooks).
+To store our certificates securely and to make them accessible from multiple devices, we will upload the certificates into Azure Key Vault. As you can see from the list above, we have two types of certificate files: PFX and PEM. We will treat the PFX as Key Vault certificates to be uploaded to Key Vault. The PEM files are plain text and we will treat them as Key Vault secrets. We will use the Key Vault associated with the Azure Machine Learning workspace we created by running the [Jupyter Notebooks](tutorial-machine-learning-edge-04-train-model.md#run-jupyter-notebooks).
1. From the [Azure portal](https://portal.azure.com), navigate to your Azure Machine Learning workspace.
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-send-telemetry-android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-android.md
@@ -56,7 +56,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyAndroidDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyAndroidDevice --output table
``` Make a note of the device connection string, which looks like:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-send-telemetry-c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-c.md
@@ -123,7 +123,7 @@ A device must be registered with your IoT hub before it can connect. In this sec
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyCDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyCDevice --output table
``` Make a note of the device connection string, which looks like:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-send-telemetry-dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-dotnet.md
@@ -70,7 +70,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyDotnetDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyDotnetDevice --output table
``` Make a note of the device connection string, which looks like:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-send-telemetry-ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-ios.md
@@ -59,7 +59,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id myiOSdevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id myiOSdevice --output table
``` Make a note of the device connection string, which looks like:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-send-telemetry-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-java.md
@@ -70,7 +70,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyJavaDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyJavaDevice --output table
``` Make a note of the device connection string, which looks like:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-send-telemetry-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-python.md
@@ -56,7 +56,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyPythonDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyPythonDevice --output table
``` Make a note of the device connection string, which looks like:
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/key-vault-integrate-kubernetes.md
@@ -12,7 +12,8 @@ ms.date: 09/25/2020
# Tutorial: Configure and run the Azure Key Vault provider for the Secrets Store CSI driver on Kubernetes > [!IMPORTANT]
-> CSI Driver is an open source project that is not supported by Azure technical support. Please report all feedback and issues related to CSI Driver Key Vault integration on the github link at the bottom of the page. This tool is provided for users to self-install into clusters and gather feedback from our community.
+> CSI Driver is an open source project that is not supported by Azure technical support. Please report all feedback and issues related to CSI Driver Key Vault integration on [GitHub](https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues). This tool is provided for users to self-install into clusters and gather feedback from our community.
+ In this tutorial, you access and retrieve secrets from your Azure key vault by using the Secrets Store Container Storage Interface (CSI) driver to mount the secrets into Kubernetes pods.
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/logging.md
@@ -178,6 +178,7 @@ The following table lists the **operationName** values and corresponding REST AP
| **CertificatePendingDelete** |Delete pending certificate | | **CertificateNearExpiryEventGridNotification** |Certificate near expiry event published | | **CertificateExpiredEventGridNotification** |Certificate expired event published |+ --- ## Use Azure Monitor logs
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/whats-new.md
@@ -8,7 +8,7 @@ tags: azure-resource-manager
ms.service: key-vault ms.subservice: general ms.topic: reference
-ms.date: 10/01/2020
+ms.date: 01/12/2021
ms.author: mbaldwin #Customer intent: As an Azure Key Vault administrator, I want to react to soft-delete being turned on for all key vaults.
@@ -36,7 +36,7 @@ To support [soft delete now on by default](#soft-delete-on-by-default), two chan
### Soft delete on by default
-By the end of 2020, the **soft-delete will be on by default for all key vaults**, both new and pre-existing. For full details on this potentially breaking change, as well as steps to find affected key vaults and update them beforehand, see the article [Soft-delete will be enabled on all key vaults](soft-delete-change.md).
+**Soft-delete is required to be enabled for all key vaults**, both new and pre-existing. Over the next few months the ability to opt out of soft delete will be deprecated. For full details on this potentially breaking change, as well as steps to find affected key vaults and update them beforehand, see the article [Soft-delete will be enabled on all key vaults](soft-delete-change.md).
### Azure TLS certificate changes
@@ -108,4 +108,4 @@ First preview version (version 2014-12-08-preview) was announced on January 8, 2
## Next steps
-If you have additional questions, please contact us through [support](https://azure.microsoft.com/support/options/).
\ No newline at end of file
+If you have additional questions, please contact us through [support](https://azure.microsoft.com/support/options/).
key-vault https://docs.microsoft.com/en-us/azure/key-vault/keys/about-keys-details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/about-keys-details.md
@@ -22,7 +22,7 @@ Following table shows a summary of key types and supported algorithms.
| --- | --- | --- | |EC-P256, EC-P256K, EC-P384, EC-521|NA|ES256<br>ES256K<br>ES384<br>ES512| |RSA 2K, 3K, 4K| RSA1_5<br>RSA-OAEP<br>RSA-OAEP-256|PS256<br>PS384<br>PS512<br>RS256<br>RS384<br>RS512<br>RSNULL|
-|AES 128-bit, 256-bit| AES-KW<br>AES-GCM<br>AES-CBC| NA|
+|AES 128-bit, 256-bit <br/>(Managed HSM only)| AES-KW<br>AES-GCM<br>AES-CBC| NA|
||| ## EC algorithms
@@ -61,7 +61,7 @@ Following table shows a summary of key types and supported algorithms.
- **RS512** - RSASSA-PKCS-v1_5 using SHA-512. The application supplied digest value must be computed using SHA-512 and must be 64 bytes in length. - **RSNULL** - See [RFC2437](https://tools.ietf.org/html/rfc2437), a specialized use-case to enable certain TLS scenarios.
-## Symmetric key algorithms
+## Symmetric key algorithms (Managed HSM only)
- **AES-KW** - AES Key Wrap ([RFC3394](https://tools.ietf.org/html/rfc3394)). - **AES-GCM** - AES encryption in Galois Counter Mode ([NIST SP 800-38d](https://csrc.nist.gov/publications/sp800)) - **AES-CBC** - AES encryption in Cipher Block Chaining Mode ([NIST SP 800-38a](https://csrc.nist.gov/publications/sp800))
lighthouse https://docs.microsoft.com/en-us/azure/lighthouse/concepts/cross-tenant-management-experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/cross-tenant-management-experience.md
@@ -1,7 +1,7 @@
--- title: Cross-tenant management experiences description: Azure delegated resource management enables a cross-tenant management experience.
-ms.date: 12/16/2020
+ms.date: 01/07/2021
ms.topic: conceptual ---
@@ -57,7 +57,7 @@ Most tasks and services can be performed on delegated resources across managed t
[Azure Automation](../../automation/index.yml): -- Use automation accounts to access and work with delegated resources
+- Use Automation accounts to access and work with delegated resources
[Azure Backup](../../backup/index.yml):
@@ -92,7 +92,7 @@ Most tasks and services can be performed on delegated resources across managed t
- View alerts for delegated subscriptions, with the ability to view and refresh alerts across all subscriptions - View activity log details for delegated subscriptions-- Log analytics: Query data from remote workspaces in multiple tenants
+- Log analytics: Query data from remote workspaces in multiple tenants (note that automation accounts used to access data from workspaces in customer tenants must be created in the same tenant)
- Create alerts in customer tenants that trigger automation, such as Azure Automation runbooks or Azure Functions, in the managing tenant through webhooks - Create [diagnostic settings](../..//azure-monitor/platform/diagnostic-settings.md) in customer tenants to send resource logs to workspaces in the managing tenant - For SAP workloads, [monitor SAP Solutions metrics with an aggregated view across customer tenants](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-lighthouse-and-azure-monitor-for-sap-solutions-to/ba-p/1537293)
lighthouse https://docs.microsoft.com/en-us/azure/lighthouse/how-to/monitor-at-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/monitor-at-scale.md
@@ -1,7 +1,7 @@
--- title: Monitor delegated resources at scale description: Learn how to effectively use Azure Monitor Logs in a scalable way across the customer tenants you're managing.
-ms.date: 12/14/2020
+ms.date: 01/07/2021
ms.topic: how-to ---
@@ -20,6 +20,9 @@ In order to collect data, you'll need to create Log Analytics workspaces. These
We recommend creating these workspaces directly in the customer tenants. This way their data remains in their tenants rather than being exported into yours. This also allows centralized monitoring of any resources or services supported by Log Analytics, giving you more flexibility on what types of data you monitor.
+> [!TIP]
+> Any automation account used to access data from a Log Analytics workspace must be created in the same tenant as the workspace.
+ You can create a Log Analytics workspace by using the [Azure portal](../../azure-monitor/learn/quick-create-workspace.md), by using [Azure CLI](../../azure-monitor/learn/quick-create-workspace-cli.md), or by using [Azure PowerShell](../../azure-monitor/platform/powershell-workspace-configuration.md). > [!IMPORTANT]
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-standard-diagnostics.md
@@ -34,15 +34,18 @@ The various Standard Load Balancer configurations provide the following metrics:
| --- | --- | --- | --- | | Data path availability | Public and internal load balancer | Standard Load Balancer continuously exercises the data path from within a region to the load balancer front end, all the way to the SDN stack that supports your VM. As long as healthy instances remain, the measurement follows the same path as your application's load-balanced traffic. The data path that your customers use is also validated. The measurement is invisible to your application and does not interfere with other operations.| Average | | Health probe status | Public and internal load balancer | Standard Load Balancer uses a distributed health-probing service that monitors your application endpoint's health according to your configuration settings. This metric provides an aggregate or per-endpoint filtered view of each instance endpoint in the load balancer pool. You can see how Load Balancer views the health of your application, as indicated by your health probe configuration. | Average |
-| SYN (synchronize) packets | Public and internal load balancer | Standard Load Balancer does not terminate Transmission Control Protocol (TCP) connections or interact with TCP or UDP packet flows. Flows and their handshakes are always between the source and the VM instance. To better troubleshoot your TCP protocol scenarios, you can make use of SYN packets counters to understand how many TCP connection attempts are made. The metric reports the number of TCP SYN packets that were received.| Average |
-| SNAT connections | Public load balancer |Standard Load Balancer reports the number of outbound flows that are masqueraded to the Public IP address front end. Source network address translation (SNAT) ports are an exhaustible resource. This metric can give an indication of how heavily your application is relying on SNAT for outbound originated flows. Counters for successful and failed outbound SNAT flows are reported and can be used to troubleshoot and understand the health of your outbound flows.| Average |
+| SYN (synchronize) count | Public and internal load balancer | Standard Load Balancer does not terminate Transmission Control Protocol (TCP) connections or interact with TCP or UDP packet flows. Flows and their handshakes are always between the source and the VM instance. To better troubleshoot your TCP protocol scenarios, you can make use of SYN packets counters to understand how many TCP connection attempts are made. The metric reports the number of TCP SYN packets that were received.| Sum |
+| SNAT connection count | Public load balancer |Standard Load Balancer reports the number of outbound flows that are masqueraded to the Public IP address front end. Source network address translation (SNAT) ports are an exhaustible resource. This metric can give an indication of how heavily your application is relying on SNAT for outbound originated flows. Counters for successful and failed outbound SNAT flows are reported and can be used to troubleshoot and understand the health of your outbound flows.| Sum |
| Allocated SNAT ports | Public load balancer | Standard Load Balancer reports the number of SNAT ports allocated per backend instance | Average. | | Used SNAT ports | Public load balancer | Standard Load Balancer reports the number of SNAT ports that are utilized per backend instance. | Average |
-| Byte counters | Public and internal load balancer | Standard Load Balancer reports the data processed per front end. You may notice that the bytes are not distributed equally across the backend instances. This is expected as Azure's Load Balancer algorithm is based on flows | Average |
-| Packet counters | Public and internal load balancer | Standard Load Balancer reports the packets processed per front end.| Average |
+| Byte count | Public and internal load balancer | Standard Load Balancer reports the data processed per front end. You may notice that the bytes are not distributed equally across the backend instances. This is expected as Azure's Load Balancer algorithm is based on flows | Sum |
+| Packet count | Public and internal load balancer | Standard Load Balancer reports the packets processed per front end.| Sum |
>[!NOTE]
- >When using distributing traffic from an internal load balancer through an NVA or firewall Syn Packet, Byte Counter, and Packet Counter metrics are not be available and will show as zero.
+ >When distributing traffic from an internal load balancer through an NVA or firewall, the SYN Packet, Byte Count, and Packet Count metrics are not available and will show as zero.
+
+ >[!NOTE]
+ >Max and min aggregations are not available for the SYN count, packet count, SNAT connection count, and byte count metrics.
### View your load balancer metrics in the Azure portal
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-add-run-inline-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-add-run-inline-code.md
@@ -27,7 +27,7 @@ When you want to run a piece of code inside your logic app, you can add the buil
> [!NOTE] > The `require()` function isn't supported by the Inline Code action for running JavaScript.
-This action runs the code snippet and returns the output from that snippet as a token that's named `Result`. You can use this token with subsequent actions in your logic app's workflow. For other scenarios where you want to create a function for your code, try [creating and calling an Azure function instead](../logic-apps/logic-apps-azure-functions.md) in your logic app.
+This action runs the code snippet and returns the output from that snippet as a token that's named `Result`. You can use this token with subsequent actions in your logic app's workflow. For other scenarios where you want to create a function for your code, try [creating and calling a function through Azure Functions instead](../logic-apps/logic-apps-azure-functions.md) in your logic app.
In this article, the example logic app triggers when a new email arrives in a work or school account. The code snippet extracts and returns any email addresses that appear in the email body.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/execute-r-script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/execute-r-script.md
@@ -45,10 +45,13 @@ azureml_main <- function(dataframe1, dataframe2){
To install additional R packages, use the `install.packages()` method. Packages are installed for each Execute R Script module. They aren't shared across other Execute R Script modules. > [!NOTE]
+> Installing R packages from the script bundle is **NOT** recommended. Install packages directly in the script editor instead.
> Specify the CRAN repository when you're installing packages, such as `install.packages("zoo",repos = "http://cran.us.r-project.org")`. > [!WARNING] > The Execute R Script module does not support installing packages that require native compilation, like the `qdap` package which requires Java and the `drc` package which requires C++. This is because this module is executed in a pre-installed environment with non-admin permission.
+> Do not install packages that are pre-built for Windows, because the designer modules run on Ubuntu. To check whether a package is pre-built for Windows, go to [CRAN](https://cran.r-project.org/), search for your package, download the binary file for your OS, and check the **Built:** field in the **DESCRIPTION** file. The following is an example:
+> :::image type="content" source="media/module/r-package-description.png" alt-text="R package description" lightbox="media/module/r-package-page.png":::
This sample shows how to install Zoo: ```R
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/azure-machine-learning-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
@@ -22,8 +22,6 @@ In this article, learn about Azure Machine Learning releases. For the full SDK
+ **Bug fixes and improvements** + **azure-cli-ml** + framework_version added in OptimizationConfig. It will be used when model is registered with framework MULTI.
- + **azureml-automl-runtime**
- + In this update, we added holt winters exponential smoothing to forecasting toolbox of AutoML SDK. Given a time series, the best model is selected by [AICc (Corrected Akaike's Information Criterion)](https://otexts.com/fpp3/selecting-predictors.html#selecting-predictors) and returned.
+ **azureml-contrib-optimization** + framework_version added in OptimizationConfig. It will be used when model is registered with framework MULTI. + **azureml-pipeline-steps**
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-ml-pipelines.md
@@ -8,7 +8,7 @@ ms.subservice: core
ms.topic: conceptual ms.author: laobri author: lobrien
-ms.date: 08/17/2020
+ms.date: 01/11/2021
ms.custom: devx-track-python ---
@@ -36,7 +36,7 @@ The Azure cloud provides several other pipelines, each with a different purpose.
| -------- | --------------- | -------------- | ------------ | -------------- | --------- | | Model orchestration (Machine learning) | Data scientist | Azure Machine Learning Pipelines | Kubeflow Pipelines | Data -> Model | Distribution, caching, code-first, reuse | | Data orchestration (Data prep) | Data engineer | [Azure Data Factory pipelines](../data-factory/concepts-pipelines-activities.md) | Apache Airflow | Data -> Data | Strongly-typed movement, data-centric activities |
-| Code & app orchestration (CI/CD) | App Developer / Ops | [Azure DevOps Pipelines](https://azure.microsoft.com/services/devops/pipelines/) | Jenkins | Code + Model -> App/Service | Most open and flexible activity support, approval queues, phases with gating |
+| Code & app orchestration (CI/CD) | App Developer / Ops | [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/) | Jenkins | Code + Model -> App/Service | Most open and flexible activity support, approval queues, phases with gating |
## What can Azure ML pipelines do?
@@ -102,15 +102,18 @@ experiment = Experiment(ws, 'MyExperiment')
input_data = Dataset.File.from_files( DataPath(datastore, '20newsgroups/20news.pkl'))
-output_data = PipelineData("output_data", datastore=blob_store)
-
+dataprep_output = OutputFileDatasetConfig()
+dataprep_step = PythonScriptStep(
+    name="prep_data",
+    script_name="dataprep.py",
+    compute_target=compute_target,
+    arguments=[input_data.as_named_input('raw_data').as_mount(), dataprep_output]
+    )
+output_data = OutputFileDatasetConfig()
input_named = input_data.as_named_input('input') steps = [ PythonScriptStep( script_name="train.py", arguments=["--input", input_named.as_download(), "--output", output_data],
- inputs=[input_data],
- outputs=[output_data],
compute_target=compute_target, source_directory="myfolder" ) ]
@@ -121,7 +124,9 @@ pipeline_run = experiment.submit(pipeline)
pipeline_run.wait_for_completion() ```
-The snippet starts with common Azure Machine Learning objects, a `Workspace`, a `Datastore`, a [ComputeTarget](/python/api/azureml-core/azureml.core.computetarget?preserve-view=true&view=azure-ml-py), and an `Experiment`. Then, the code creates the objects to hold `input_data` and `output_data`. The array `steps` holds a single element, a `PythonScriptStep` that will use the data objects and run on the `compute_target`. Then, the code instantiates the `Pipeline` object itself, passing in the workspace and steps array. The call to `experiment.submit(pipeline)` begins the Azure ML pipeline run. The call to `wait_for_completion()` blocks until the pipeline is finished.
+The snippet starts with common Azure Machine Learning objects, a `Workspace`, a `Datastore`, a [ComputeTarget](/python/api/azureml-core/azureml.core.computetarget?preserve-view=true&view=azure-ml-py), and an `Experiment`. Then, the code creates the objects to hold `input_data` and `output_data`. The `input_data` is an instance of [FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py&preserve-view=true) and the `output_data` is an instance of [OutputFileDatasetConfig](https://docs.microsoft.com/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig?view=azure-ml-py&preserve-view=true). For `OutputFileDatasetConfig` the default behavior is to copy the output to the `workspaceblobstore` datastore under the path `/dataset/{run-id}/{output-name}`, where `run-id` is the Run's ID and `output-name` is an auto-generated value if not specified by the developer.
+
+The array `steps` holds a single element, a `PythonScriptStep` that will use the data objects and run on the `compute_target`. Then, the code instantiates the `Pipeline` object itself, passing in the workspace and steps array. The call to `experiment.submit(pipeline)` begins the Azure ML pipeline run. The call to `wait_for_completion()` blocks until the pipeline is finished.
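If the default `workspaceblobstore` location isn't what you want, the destination can be overridden when the output object is created. The following is a minimal sketch; `datastore` is assumed to be a `Datastore` registered in the workspace, and the name and path are placeholders rather than values from the article.

```python
# Minimal sketch: override the default output location (placeholder names).
from azureml.data import OutputFileDatasetConfig

custom_output = OutputFileDatasetConfig(
    name="processed_data",
    destination=(datastore, "pipeline_outputs/processed")
)
```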
To learn more about connecting your pipeline to your data, see the articles [Data access in Azure Machine Learning](concept-data.md) and [Moving data into and between ML pipeline steps (Python)](how-to-move-data-in-out-of-pipelines.md).
@@ -157,4 +162,4 @@ Azure ML pipelines are a powerful facility that begins delivering value in the e
+ See the SDK reference docs for [pipeline core](/python/api/azureml-pipeline-core/?preserve-view=true&view=azure-ml-py) and [pipeline steps](/python/api/azureml-pipeline-steps/?preserve-view=true&view=azure-ml-py).
-+ Try out example Jupyter notebooks showcasing [Azure Machine Learning pipelines](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines). Learn how to [run notebooks to explore this service](samples-notebooks.md).
\ No newline at end of file++ Try out example Jupyter notebooks showcasing [Azure Machine Learning pipelines](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines). Learn how to [run notebooks to explore this service](samples-notebooks.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-forecast https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-auto-train-forecast.md
@@ -223,6 +223,9 @@ Supported customizations for `forecasting` tasks include:
To customize featurizations with the SDK, specify `"featurization": FeaturizationConfig` in your `AutoMLConfig` object. Learn more about [custom featurizations](how-to-configure-auto-features.md#customize-featurization).
+>[!NOTE]
+> The **drop columns** functionality is deprecated as of SDK version 1.19. Drop columns from your dataset as part of data cleansing, prior to consuming it in your automated ML experiment.
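As an example of that data-cleansing approach, the following minimal sketch drops a column with pandas before registering the cleaned data; the file, column, datastore, and dataset names are placeholders, not values from this article.

```python
# Minimal sketch: drop columns during data cleansing instead of using
# the deprecated FeaturizationConfig drop-columns setting (placeholder names).
import pandas as pd
from azureml.core import Dataset

df = pd.read_csv("raw_data.csv")
df = df.drop(columns=["logQuantity"])

clean_dataset = Dataset.Tabular.register_pandas_dataframe(
    dataframe=df, target=datastore, name="clean_data")
```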
+ ```python featurization_config = FeaturizationConfig()
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-features.md
@@ -121,6 +121,9 @@ Supported customizations include:
|**Drop columns** |Specifies columns to drop from being featurized.| |**Block transformers**| Specifies block transformers to be used in the featurization process.|
+>[!NOTE]
+> The **drop columns** functionality is deprecated as of SDK version 1.19. Drop columns from your dataset as part of data cleansing, prior to consuming it in your automated ML experiment.
+ Create the `FeaturizationConfig` object by using API calls: ```python
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
@@ -152,7 +152,6 @@ Some examples include:
time_series_settings = { 'time_column_name': time_column_name, 'time_series_id_column_names': time_series_id_column_names,
- 'drop_column_names': ['logQuantity'],
'forecast_horizon': n_test_periods }
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-pipelines.md
@@ -44,6 +44,112 @@ If you perform a management operation on a compute target from a remote job, you
``` For example, you will receive an error if you try to create or attach a compute target from an ML Pipeline that is submitted for remote execution.
+## Troubleshooting `ParallelRunStep`
+
+The script for a `ParallelRunStep` *must contain* two functions:
+- `init()`: Use this function for any costly or common preparation for later inference. For example, use it to load the model into a global object. This function will be called only once, at the beginning of the process.
+- `run(mini_batch)`: The function will run for each `mini_batch` instance.
+  - `mini_batch`: `ParallelRunStep` will invoke the `run()` method and pass either a list or a pandas `DataFrame` as an argument. Each entry in `mini_batch` is a file path if the input is a `FileDataset`, or a pandas `DataFrame` if the input is a `TabularDataset`.
+  - `response`: The `run()` method should return a pandas `DataFrame` or an array. For the `append_row` output_action, these returned elements are appended into the common output file. For `summary_only`, the contents of the elements are ignored. For all output actions, each returned output element indicates one successful run of an input element in the input mini-batch. Make sure the run result includes enough data to map each input to its output. Run output is written to the output file and isn't guaranteed to be in order, so use some key in the output to map it back to the input.
+
+```python
+%%writefile digit_identification.py
+# Snippets from a sample script.
+# Refer to the accompanying digit_identification.py
+# (https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines/parallel-run)
+# for the implementation script.
+
+import os
+import numpy as np
+import tensorflow as tf
+from PIL import Image
+from azureml.core import Model
+
+def init():
+ global g_tf_sess
+
+ # Pull down the model from the workspace
+ model_path = Model.get_model_path("mnist")
+
+ # Construct a graph to execute
+ tf.reset_default_graph()
+ saver = tf.train.import_meta_graph(os.path.join(model_path, 'mnist-tf.model.meta'))
+ g_tf_sess = tf.Session()
+ saver.restore(g_tf_sess, os.path.join(model_path, 'mnist-tf.model'))
+
+def run(mini_batch):
+ print(f'run method start: {__file__}, run({mini_batch})')
+ resultList = []
+ in_tensor = g_tf_sess.graph.get_tensor_by_name("network/X:0")
+ output = g_tf_sess.graph.get_tensor_by_name("network/output/MatMul:0")
+
+ for image in mini_batch:
+ # Prepare each image
+ data = Image.open(image)
+ np_im = np.array(data).reshape((1, 784))
+ # Perform inference
+ inference_result = output.eval(feed_dict={in_tensor: np_im}, session=g_tf_sess)
+ # Find the best probability, and add it to the result list
+ best_result = np.argmax(inference_result)
+ resultList.append("{}: {}".format(os.path.basename(image), best_result))
+
+ return resultList
+```
+
+If you have another file or folder in the same directory as your inference script, you can reference it by finding the current working directory.
+
+```python
+script_dir = os.path.realpath(os.path.join(__file__, '..',))
+file_path = os.path.join(script_dir, "<file_name>")
+```
+
+### Parameters for ParallelRunConfig
+
+`ParallelRunConfig` is the major configuration for a `ParallelRunStep` instance within the Azure Machine Learning pipeline. You use it to wrap your script and configure necessary parameters, including all of the following entries:
+- `entry_script`: A user script as a local file path that will be run in parallel on multiple nodes. If `source_directory` is present, use a relative path. Otherwise, use any path that's accessible on the machine.
+- `mini_batch_size`: The size of the mini-batch passed to a single `run()` call. (optional; the default value is `10` files for `FileDataset` and `1MB` for `TabularDataset`.)
+ - For `FileDataset`, it's the number of files with a minimum value of `1`. You can combine multiple files into one mini-batch.
+    - For `TabularDataset`, it's the size of data. Example values are `1024`, `1024KB`, `10MB`, and `1GB`. The recommended value is `1MB`. A mini-batch from a `TabularDataset` will never cross file boundaries. For example, suppose you have .csv files of various sizes, where the smallest is 100 KB and the largest is 10 MB. If you set `mini_batch_size = 1MB`, then files smaller than 1 MB will be treated as one mini-batch, and files larger than 1 MB will be split into multiple mini-batches.
+- `error_threshold`: The number of record failures for `TabularDataset` and file failures for `FileDataset` that should be ignored during processing. If the error count for the entire input goes above this value, the job will be aborted. The error threshold is for the entire input, not for the individual mini-batches sent to the `run()` method. The range is `[-1, int.max]`. A value of `-1` indicates that all failures during processing are ignored.
+- `output_action`: One of the following values indicates how the output will be organized:
+ - `summary_only`: The user script will store the output. `ParallelRunStep` will use the output only for the error threshold calculation.
+  - `append_row`: For all inputs, a single file will be created in the output folder, and all outputs will be appended to it, separated by line.
+- `append_row_file_name`: The name of the output file for the `append_row` output_action (optional; the default value is `parallel_run_step.txt`).
+- `source_directory`: Paths to folders that contain all files to execute on the compute target (optional).
+- `compute_target`: Only `AmlCompute` is supported.
+- `node_count`: The number of compute nodes to be used for running the user script.
+- `process_count_per_node`: The number of processes per node. As a best practice, set this to the number of GPUs or CPUs one node has (optional; the default value is `1`).
+- `environment`: The Python environment definition. You can configure it to use an existing Python environment or to set up a temporary environment. The definition is also responsible for setting the required application dependencies (optional).
+- `logging_level`: Log verbosity. Values in increasing verbosity are: `WARNING`, `INFO`, and `DEBUG`. (optional; the default value is `INFO`)
+- `run_invocation_timeout`: The `run()` method invocation timeout in seconds. (optional; default value is `60`)
+- `run_max_try`: Maximum try count of `run()` for a mini-batch. A `run()` fails if an exception is thrown, or if nothing is returned when `run_invocation_timeout` is reached (optional; the default value is `3`).
+
+You can specify `mini_batch_size`, `node_count`, `process_count_per_node`, `logging_level`, `run_invocation_timeout`, and `run_max_try` as `PipelineParameter`, so that when you resubmit a pipeline run, you can fine-tune the parameter values. In this example, you use `PipelineParameter` for `mini_batch_size` and `process_count_per_node`, and you change these values when you resubmit a run later.
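A minimal `ParallelRunConfig` sketch built from these parameters might look like the following; the environment, compute target, script, and file names are placeholders rather than values taken from this article.

```python
from azureml.core import Environment
from azureml.pipeline.core import PipelineParameter
from azureml.pipeline.steps import ParallelRunConfig

# Placeholders: ws and compute_target are assumed to exist in your workspace.
batch_env = Environment.get(workspace=ws, name="my-batch-env")

parallel_run_config = ParallelRunConfig(
    source_directory="scripts",
    entry_script="digit_identification.py",
    mini_batch_size=PipelineParameter(name="batch_size_param", default_value="5"),
    error_threshold=10,
    output_action="append_row",
    append_row_file_name="mnist_outputs.txt",
    environment=batch_env,
    compute_target=compute_target,
    process_count_per_node=PipelineParameter(name="process_count_param", default_value=2),
    node_count=2,
    run_invocation_timeout=600)
```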
+
+### Parameters for creating the ParallelRunStep
+
+Create the ParallelRunStep by using the script, environment configuration, and parameters. Specify the compute target that you already attached to your workspace as the target of execution for your inference script. Use `ParallelRunStep` to create the batch inference pipeline step, which takes all the following parameters:
+- `name`: The name of the step, with the following naming restrictions: unique, 3-32 characters, and regex ^[a-z]([-a-z0-9]*[a-z0-9])?$.
+- `parallel_run_config`: A `ParallelRunConfig` object, as defined earlier.
+- `inputs`: One or more single-typed Azure Machine Learning datasets to be partitioned for parallel processing.
+- `side_inputs`: One or more reference data or datasets used as side inputs, without needing to be partitioned.
+- `output`: An `OutputFileDatasetConfig` object that corresponds to the output directory.
+- `arguments`: A list of arguments passed to the user script. Use unknown_args to retrieve them in your entry script (optional).
+- `allow_reuse`: Whether the step should reuse previous results when run with the same settings/inputs. If this parameter is `False`, a new run will always be generated for this step during pipeline execution. (optional; the default value is `True`.)
+
+```python
+from azureml.pipeline.steps import ParallelRunStep
+
+parallelrun_step = ParallelRunStep(
+ name="predict-digits-mnist",
+ parallel_run_config=parallel_run_config,
+ inputs=[input_mnist_ds_consumption],
+ output=output_dir,
+ allow_reuse=True
+)
+```
## Debugging techniques
@@ -135,7 +241,7 @@ When you submit a pipeline run and stay in the authoring page, you can find the
1. In the right pane of the module, go to the **Outputs + logs** tab. 1. Expand the right pane, and select the **70_driver_log.txt** to view the file in browser. You can also download logs locally.
- ![Expanded output pane in the designer](./media/how-to-debug-pipelines/designer-logs.png)
+   ![Expanded output pane in the designer](./media/how-to-debug-pipelines/designer-logs.png)
### Get logs from pipeline runs
@@ -161,8 +267,6 @@ In some cases, you may need to interactively debug the Python code used in your
## Next steps
-* [Debug and troubleshoot ParallelRunStep](how-to-debug-parallel-run-step.md)
- * For a complete tutorial using `ParallelRunStep`, see [Tutorial: Build an Azure Machine Learning pipeline for batch scoring](tutorial-pipeline-batch-scoring-classification.md). * For a complete example showing automated machine learning in ML pipelines, see [Use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-aml.md
@@ -486,7 +486,7 @@ You can deploy the explainer along with the original model and use it at inferen
# WARNING: to install this, g++ needs to be available on the Docker image and is not by default (look at the next cell)
- azureml_pip_packages = ['azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry', 'azureml-interpret']
+ azureml_pip_packages = ['azureml-defaults', 'azureml-core', 'azureml-telemetry', 'azureml-interpret']
# specify CondaDependencies obj
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-move-data-in-out-of-pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-move-data-in-out-of-pipelines.md
@@ -7,7 +7,7 @@ ms.service: machine-learning
ms.subservice: core ms.author: laobri author: lobrien
-ms.date: 08/20/2020
+ms.date: 01/11/2021
ms.topic: conceptual ms.custom: how-to, contperf-fy20q4, devx-track-python, data4ml # As a data scientist using Python, I want to get data into my pipeline and flowing between steps
@@ -24,13 +24,9 @@ This article will show you how to:
- Use `Dataset` objects for pre-existing data - Access data within your steps - Split `Dataset` data into subsets, such as training and validation subsets-- Create `PipelineData` objects to transfer data to the next pipeline step-- Use `PipelineData` objects as input to pipeline steps-- Create new `Dataset` objects from `PipelineData` you wish to persist-
-> [!TIP]
-> An improved experience for passing temporary data between pipeline steps and persisting your data after pipeline runs is available in the public preview classes, [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py) and [`OutputTabularDatasetConfig`](/python/api/azureml-core/azureml.data.output_dataset_config.outputtabulardatasetconfig?preserve-view=true&view=azure-ml-py). These classes are [experimental](/python/api/overview/azure/ml/?preserve-view=true&view=azure-ml-py#&preserve-view=truestable-vs-experimental) preview features, and may change at any time.
-
+- Create `OutputFileDatasetConfig` objects to transfer data to the next pipeline step
+- Use `OutputFileDatasetConfig` objects as input to pipeline steps
+- Create new `Dataset` objects from `OutputFileDatasetConfig` you wish to persist
## Prerequisites
@@ -153,71 +149,71 @@ ds = Dataset.get_by_name(workspace=ws, name='mnist_opendataset')
> [!NOTE] > The preceding snippets show the form of the calls and are not part of a Microsoft sample. You must replace the various arguments with values from your own project.
-## Use `PipelineData` for intermediate data
+## Use `OutputFileDatasetConfig` for intermediate data
-While `Dataset` objects represent persistent data, [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata?preserve-view=true&view=azure-ml-py) objects are used for temporary data that is output from pipeline steps. Because the lifespan of a `PipelineData` object is longer than a single pipeline step, you define them in the pipeline definition script. When you create a `PipelineData` object, you must provide a name and a datastore at which the data will reside. Pass your `PipelineData` object(s) to your `PythonScriptStep` using _both_ the `arguments` and the `outputs` arguments:
+While `Dataset` objects represent only persistent data, [`OutputFileDatasetConfig`](https://docs.microsoft.com/python/api/azureml-core/azureml.data.outputfiledatasetconfig?view=azure-ml-py&preserve-view=true) objects can be used for temporary data output from pipeline steps **and** for persistent output data. `OutputFileDatasetConfig` supports writing data to blob storage, file share, Azure Data Lake Storage Gen1, or Azure Data Lake Storage Gen2. It supports both mount mode and upload mode. In mount mode, files written to the mounted directory are permanently stored when the file is closed. In upload mode, files written to the output directory are uploaded at the end of the job. If the job fails or is canceled, the output directory is not uploaded.
-```python
+By default, an `OutputFileDatasetConfig` object writes to the default datastore of the workspace. Pass your `OutputFileDatasetConfig` objects to your `PythonScriptStep` with the `arguments` parameter.
-default_datastore = workspace.get_default_datastore()
-dataprep_output = PipelineData("clean_data", datastore=default_datastore)
+```python
+from azureml.data import OutputFileDatasetConfig
+dataprep_output = OutputFileDatasetConfig()
+input_dataset = Dataset.get_by_name(workspace, 'raw_data')
dataprep_step = PythonScriptStep( name="prep_data", script_name="dataprep.py", compute_target=cluster,
- arguments=["--output-path", dataprep_output]
- inputs=[Dataset.get_by_name(workspace, 'raw_data')],
- outputs=[dataprep_output]
-)
-
+ arguments=[input_dataset.as_named_input('raw_data').as_mount(), dataprep_output]
+ )
```
-You may choose to create your `PipelineData` object using an access mode that provides an immediate upload. In that case, when you create your `PipelineData`, set the `upload_mode` to `"upload"` and use the `output_path_on_compute` argument to specify the path to which you'll be writing the data:
+You may choose to upload the contents of your `OutputFileDatasetConfig` object at the end of a run. In that case, use the `as_upload()` function along with your `OutputFileDatasetConfig` object, and specify whether to overwrite existing files in the destination.
```python
-PipelineData("clean_data", datastore=def_blob_store, output_mode="upload", output_path_on_compute="clean_data_output/")
+#get blob datastore already registered with the workspace
+blob_store= ws.datastores['my_blob_store']
+OutputFileDatasetConfig(name="clean_data", destination=blob_store).as_upload(overwrite=False)
``` > [!NOTE]
-> The preceding snippets show the form of the calls and are not part of a Microsoft sample. You must replace the various arguments with values from your own project.
-
-> [!TIP]
-> An improved experience for passing intermediate data between pipeline steps is available with the public preview class, [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py). For a code example using `OutputFileDatasetConfig`, see how to [build a two step ML pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.ipynb).
+> Concurrent writes to an `OutputFileDatasetConfig` will fail. Do not attempt to use a single `OutputFileDatasetConfig` concurrently. Do not share a single `OutputFileDatasetConfig` in a multiprocessing situation, such as when using distributed training.
+### Use `OutputFileDatasetConfig` as outputs of a training step
-### Use `PipelineData` as outputs of a training step
-Within your pipeline's `PythonScriptStep`, you can retrieve the available output paths using the program's arguments. If this step is the first and will initialize the output data, you must create the directory at the specified path. You can then write whatever files you wish to be contained in the `PipelineData`.
+Within your pipeline's `PythonScriptStep`, you can retrieve the available output paths using the program's arguments. If this step is the first and will initialize the output data, you must create the directory at the specified path. You can then write whatever files you wish to be contained in the `OutputFileDatasetConfig`.
```python parser = argparse.ArgumentParser() parser.add_argument('--output_path', dest='output_path', required=True) args = parser.parse_args()+ # Make directory for file os.makedirs(os.path.dirname(args.output_path), exist_ok=True) with open(args.output_path, 'w') as f: f.write("Step 1's output") ```
-If you created your `PipelineData` with the `is_directory` argument set to `True`, it would be enough to just perform the `os.makedirs()` call and then you would be free to write whatever files you wished to the path. For more details, see the [PipelineData](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata?preserve-view=true&view=azure-ml-py) reference documentation.
+### Read `OutputFileDatasetConfig` as inputs to non-initial steps
+After the initial pipeline step writes some data to the `OutputFileDatasetConfig` path and it becomes an output of that initial step, it can be used as an input to a later step.
-### Read `PipelineData` as inputs to non-initial steps
+In the following code,
-After the initial pipeline step writes some data to the `PipelineData` path and it becomes an output of that initial step, it can be used as an input to a later step:
+* `step1_output_data` indicates that the output of the `PythonScriptStep` `step1` is written to the ADLS Gen 2 datastore `my_adlsgen2` in upload access mode. Learn more about how to [set up role permissions](how-to-access-data.md#azure-data-lake-storage-generation-2) in order to write data back to ADLS Gen 2 datastores.
+
+* After `step1` completes and the output is written to the destination indicated by `step1_output_data`, `step2` is ready to use `step1_output_data` as an input.
```python
-step1_output_data = PipelineData("processed_data", datastore=def_blob_store, output_mode="upload")
# get adls gen 2 datastore already registered with the workspace datastore = workspace.datastores['my_adlsgen2']
+step1_output_data = OutputFileDatasetConfig(name="processed_data", destination=datastore).as_upload()
step1 = PythonScriptStep( name="generate_data", script_name="step1.py", runconfig = aml_run_config,
- arguments = ["--output_path", step1_output_data],
- inputs=[],
- outputs=[step1_output_data]
+ arguments = ["--output_path", step1_output_data]
) step2 = PythonScriptStep(
@@ -225,41 +221,22 @@ step2 = PythonScriptStep(
script_name="step2.py", compute_target=compute, runconfig = aml_run_config,
- arguments = ["--pd", step1_output_data],
- inputs=[step1_output_data]
+    arguments = ["--pd", step1_output_data.as_input()]
+ )+ pipeline = Pipeline(workspace=ws, steps=[step1, step2]) ```
-The value of a `PipelineData` input is the path to the previous output.
+## Register `OutputFileDatasetConfig` objects for reuse
-> [!NOTE]
-> The preceding snippets show the form of the calls and are not part of a Microsoft sample. You must replace the various arguments with values from your own project.
-
-> [!TIP]
-> An improved experience for passing intermediate data between pipeline steps is available with the public preview class, [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py). For a code example using `OutputFileDatasetConfig`, see how to [build a two step ML pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.ipynb).
-
-If, as shown previously, the first step wrote a single file, consuming it might look like:
+If you'd like to make your `OutputFileDatasetConfig` available for longer than the duration of your experiment, register it to your workspace to share and reuse across experiments.
```python
-parser = argparse.ArgumentParser()
-parser.add_argument('--pd', dest='pd', required=True)
-args = parser.parse_args()
-with open(args.pd) as f:
- print(f.read())
+step1_output_ds = step1_output_data.register_on_complete(name='processed_data',
+                                                           description='files from step1')
```
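In a later experiment, the registered output can then be retrieved by name; a minimal sketch, assuming `ws` is the same workspace:

```python
# Minimal sketch: reuse the dataset registered above in another experiment.
from azureml.core import Dataset

reused_ds = Dataset.get_by_name(workspace=ws, name='processed_data')
```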
-## Convert `PipelineData` objects to `Dataset`s
-
-If you'd like to make your `PipelineData` available for longer than the duration of a run, use its `as_dataset()` function to convert it to a `Dataset`. You may then register the `Dataset`, making it a first-class citizen in your workspace. Since your `PipelineData` object will have a different path every time the pipeline runs, it's highly recommended that you set `create_new_version` to `True` when registering a `Dataset` created from a `PipelineData` object.
-
-```python
-step1_output_ds = step1_output_data.as_dataset()
-step1_output_ds.register(name="processed_data", create_new_version=True)
-
-```
-> [!TIP]
-> An improved experience for persisting your intermediate data outside of your pipeline runs is available with the public preview class, [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py). For a code example using `OutputFileDatasetConfig`, see how to [build a two step ML pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.ipynb).
## Next steps
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-training-targets.md
@@ -172,6 +172,19 @@ See these notebooks for examples of configuring runs for various training scenar
## Troubleshooting
+* **Run fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`.
+
+ Consider upgrading to the latest version of azureml-core: `pip install -U azureml-core`.
+
+  If you are running into this issue for local runs, check the version of PyJWT installed in the environment where you start runs. The supported versions of PyJWT are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You can check the version of PyJWT, uninstall it, and install the right version as follows:
+ 1. Start a command shell, activate conda environment where azureml-core is installed.
+ 2. Enter `pip freeze` and look for `PyJWT`, if found, the version listed should be < 2.0.0
+ 3. If the listed version is not a supported version, `pip uninstall PyJWT` in the command shell and enter y for confirmation.
+ 4. Install using `pip install 'PyJWT<2.0.0'`
+
+ If you are submitting a user-created environment with your run, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
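  For instance, a minimal sketch of pinning PyJWT in a user-created environment (the environment name is a placeholder):

```python
# Minimal sketch: pin PyJWT < 2.0.0 alongside an older azureml-core
# in a user-created environment (placeholder environment name).
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

env = Environment(name="my-training-env")
env.python.conda_dependencies = CondaDependencies.create(
    pip_packages=["azureml-core<1.18.0", "PyJWT<2.0.0"])
```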
++ * **ModuleErrors (No module named)**: If you are running into ModuleErrors while submitting experiments in Azure ML, the training script is expecting a package to be installed but it isn't added. Once you provide the package name, Azure ML installs the package in the environment used for your training run. If you are using Estimators to submit experiments, you can specify a package name via `pip_packages` or `conda_packages` parameter in the estimator based on from which source you want to install the package. You can also specify a yml file with all your dependencies using `conda_dependencies_file`or list all your pip requirements in a txt file using `pip_requirements_file` parameter. If you have your own Azure ML Environment object that you want to override the default image used by the estimator, you can specify that environment via the `environment` parameter of the estimator constructor.
@@ -203,18 +216,6 @@ method, or from the Experiment tab view in Azure Machine Learning studio client
Internally, Azure ML concatenates the blocks with the same metric name into a contiguous list.
-* **Run fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`.
-
- Consider upgrading to the latest version of azureml-core: `pip install -U azureml-core`.
-
- If you are running into this issue for local runs, check the version of PyJWT installed in your environment where you are starting runs. The supported versions of PyJWT are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You may check the version of PyJWT, uninstall and install the right version as follows:
- 1. Start a command shell, activate conda environment where azureml-core is installed.
- 2. Enter `pip freeze` and look for `PyJWT`, if found, the version listed should be < 2.0.0
- 3. If the listed version is not a supported version, `pip uninstall PyJWT` in the command shell and enter y for confirmation.
- 4. Install using `pip install 'PyJWT<2.0.0'`
-
- If you are submitting a user-created environment with your run, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
- * **Compute target takes a long time to start**: The Docker images for compute targets are loaded from Azure Container Registry (ACR). By default, Azure Machine Learning creates an ACR that uses the *basic* service tier. Changing the ACR for your workspace to standard or premium tier may reduce the time it takes to build and load images. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md). ## Next steps
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-tune-hyperparameters.md
@@ -1,7 +1,7 @@
---
-title: Tune hyperparameters for your model
+title: Hyperparameter tuning a model
titleSuffix: Azure Machine Learning
-description: Efficiently tune hyperparameters for deep learning and machine learning models using Azure Machine Learning.
+description: Automate hyperparameter tuning for deep learning and machine learning models using Azure Machine Learning.
ms.author: swatig author: swatig007 ms.reviewer: sgilley
@@ -14,7 +14,7 @@ ms.custom: how-to, devx-track-python, contperf-fy21q1
---
-# Tune hyperparameters for your model with Azure Machine Learning
+# Hyperparameter tuning a model with Azure Machine Learning
Automate efficient hyperparameter tuning by using Azure Machine Learning [HyperDrive package](/python/api/azureml-train-core/azureml.train.hyperdrive?preserve-view=true&view=azure-ml-py). Learn how to complete the steps required to tune hyperparameters with the [Azure Machine Learning SDK](/python/api/overview/azure/ml/?preserve-view=true&view=azure-ml-py):
@@ -27,11 +27,11 @@ Automate efficient hyperparameter tuning by using Azure Machine Learning [HyperD
1. Visualize the training runs 1. Select the best configuration for your model
-## What are hyperparameters?
+## What is hyperparameter tuning?
**Hyperparameters** are adjustable parameters that let you control the model training process. For example, with neural networks, you decide the number of hidden layers and the number of nodes in each layer. Model performance depends heavily on hyperparameters.
- **Hyperparameter tuning** is the process of finding the configuration of hyperparameters that results in the best performance. The process is typically computationally expensive and manual.
+ **Hyperparameter tuning**, also called **hyperparameter optimization**, is the process of finding the configuration of hyperparameters that results in the best performance. The process is typically computationally expensive and manual.
Azure Machine Learning lets you automate hyperparameter tuning and run experiments in parallel to efficiently optimize hyperparameters.
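As a minimal sketch, a search space for HyperDrive to sample might be defined as follows; the hyperparameter names are illustrative, not taken from this article.

```python
from azureml.train.hyperdrive import RandomParameterSampling, choice, uniform

# Illustrative hyperparameter names; replace with ones your training script accepts.
param_sampling = RandomParameterSampling({
    "learning_rate": uniform(0.01, 0.1),   # continuous hyperparameter
    "batch_size": choice(16, 32, 64, 128)  # discrete hyperparameter
})
```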
@@ -43,7 +43,7 @@ Tune hyperparameters by exploring the range of values defined for each hyperpara
Hyperparameters can be discrete or continuous, and has a distribution of values described by a [parameter expression](/python/api/azureml-train-core/azureml.train.hyperdrive.parameter_expressions?preserve-view=true&view=azure-ml-py).
-### Discrete hyperparameters
+### Discrete hyperparameters
Discrete hyperparameters are specified as a `choice` among discrete values. `choice` can be:
@@ -293,7 +293,7 @@ max_concurrent_runs=4
This code configures the hyperparameter tuning experiment to use a maximum of 20 total runs, running four configurations at a time.
-## Configure experiment
+## Configure hyperparameter tuning experiment
To [configure your hyperparameter tuning](/python/api/azureml-train-core/azureml.train.hyperdrive.hyperdriverunconfig?preserve-view=true&view=azure-ml-py) experiment, provide the following: * The defined hyperparameter search space
@@ -320,7 +320,7 @@ hd_config = HyperDriveConfig(run_config=src,
max_concurrent_runs=4) ```
-## Submit experiment
+## Submit hyperparameter tuning experiment
After you define your hyperparameter tuning configuration, [submit the experiment](/python/api/azureml-core/azureml.core.experiment%28class%29?preserve-view=true&view=azure-ml-py#&preserve-view=truesubmit-config--tags-none----kwargs-):
@@ -330,7 +330,7 @@ experiment = Experiment(workspace, experiment_name)
hyperdrive_run = experiment.submit(hd_config) ```
-## Warm start your hyperparameter tuning experiment (optional)
+## Warm start hyperparameter tuning (optional)
Finding the best hyperparameter values for your model can be an iterative process. You can reuse knowledge from the five previous runs to accelerate hyperparameter tuning.
@@ -377,7 +377,7 @@ hd_config = HyperDriveConfig(run_config=src,
max_concurrent_runs=4) ```
-## Visualize experiment
+## Visualize hyperparameter tuning runs
Use the [Notebook widget](/python/api/azureml-widgets/azureml.widgets.rundetails?preserve-view=true&view=azure-ml-py) to visualize the progress of your training runs. The following snippet visualizes all your hyperparameter tuning runs in one place in a Jupyter notebook:
@@ -388,15 +388,15 @@ RunDetails(hyperdrive_run).show()
This code displays a table with details about the training runs for each of the hyperparameter configurations.
-![hyperparameter tuning table](./media/how-to-tune-hyperparameters/HyperparameterTuningTable.png)
+![hyperparameter tuning table](./media/how-to-tune-hyperparameters/hyperparameter-tuning-table.png)
You can also visualize the performance of each of the runs as training progresses.
-![hyperparameter tuning plot](./media/how-to-tune-hyperparameters/HyperparameterTuningPlot.png)
+![hyperparameter tuning plot](./media/how-to-tune-hyperparameters/hyperparameter-tuning-plot.png)
You can visually identify the correlation between performance and values of individual hyperparameters by using a Parallel Coordinates Plot.
-[![hyperparameter tuning parallel coordinates](./media/how-to-tune-hyperparameters/HyperparameterTuningParallelCoordinates.png)](media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates-expanded.png)
+[![hyperparameter tuning parallel coordinates](./media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates.png)](media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates-expanded.png)
You can also visualize all of your hyperparameter tuning runs in the Azure web portal. For more information on how to view an experiment in the portal, see [how to track experiments](how-to-monitor-view-training-logs.md#view-the-experiment-in-the-web-portal).
@@ -417,6 +417,7 @@ print('\n batch size:',parameter_values[7])
``` ## Sample notebook+ Refer to train-hyperparameter-* notebooks in this folder: * [how-to-use-azureml/ml-frameworks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-automlstep-in-pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automlstep-in-pipelines.md
@@ -8,7 +8,7 @@ ms.subservice: core
ms.author: laobri author: lobrien manager: cgronlun
-ms.date: 08/26/2020
+ms.date: 12/04/2020
ms.topic: conceptual ms.custom: how-to, devx-track-python, automl
@@ -33,11 +33,7 @@ Automated ML in a pipeline is represented by an `AutoMLStep` object. The `AutoML
There are several subclasses of `PipelineStep`. In addition to the `AutoMLStep`, this article will show a `PythonScriptStep` for data preparation and another for registering the model.
-The preferred way to initially move data _into_ an ML pipeline is with `Dataset` objects. To move data _between_ steps, the preferred way is with `PipelineData` objects. To be used with `AutoMLStep`, the `PipelineData` object must be transformed into a `PipelineOutputTabularDataset` object. For more information, see [Input and output data from ML pipelines](how-to-move-data-in-out-of-pipelines.md).
--
-> [!TIP]
-> An improved experience for passing temporary data between pipeline steps is available in the public preview classes, [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py) and [`OutputTabularDatasetConfig`](/python/api/azureml-core/azureml.data.output_dataset_config.outputtabulardatasetconfig?preserve-view=true&view=azure-ml-py). These classes are [experimental](/python/api/overview/azure/ml/?preserve-view=true&view=azure-ml-py#&preserve-view=truestable-vs-experimental) preview features, and may change at any time.
+The preferred way to initially move data _into_ an ML pipeline is with `Dataset` objects. To move data _between_ steps and possibly save data output from runs, the preferred way is with [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py) and [`OutputTabularDatasetConfig`](/python/api/azureml-core/azureml.data.output_dataset_config.outputtabulardatasetconfig?preserve-view=true&view=azure-ml-py) objects. To be used with `AutoMLStep`, the `OutputFileDatasetConfig` object must be transformed into an `OutputTabularDatasetConfig` object. For more information, see [Input and output data from ML pipelines](how-to-move-data-in-out-of-pipelines.md).
The `AutoMLStep` is configured via an `AutoMLConfig` object. `AutoMLConfig` is a flexible class, as discussed in [Configure automated ML experiments in Python](./how-to-configure-auto-train.md#configure-your-experiment-settings).
@@ -145,8 +141,7 @@ The baseline Titanic dataset consists of mixed numerical and text data, with som
- Transform categorical data to integers - Drop columns that we don't intend to use - Split the data into training and testing sets-- Write the transformed data to either
- - `PipelineData` output paths
+- Write the transformed data to the `OutputFileDatasetConfig` output paths
```python %%writefile dataprep.py
@@ -216,7 +211,7 @@ The above code snippet is a complete, but minimal, example of data preparation f
The various `prepare_` functions in the above snippet modify the relevant column in the input dataset. These functions work on the data once it has been changed into a Pandas `DataFrame` object. In each case, missing data is either filled with representative random data or categorical data indicating "Unknown." Text-based categorical data is mapped to integers. No-longer-needed columns are overwritten or dropped.
-After the code defines the data preparation functions, the code parses the input argument, which is the path to which we want to write our data. (These values will be determined by `PipelineData` objects that will be discussed in the next step.) The code retrieves the registered `'titanic_cs'` `Dataset`, converts it to a Pandas `DataFrame`, and calls the various data preparation functions.
+After the code defines the data preparation functions, the code parses the input argument, which is the path to which we want to write our data. (These values will be determined by `OutputFileDatasetConfig` objects that will be discussed in the next step.) The code retrieves the registered `'titanic_cs'` `Dataset`, converts it to a Pandas `DataFrame`, and calls the various data preparation functions.
Since the `output_path` is fully qualified, the function `os.makedirs()` is used to prepare the directory structure. At this point, you could use `DataFrame.to_csv()` to write the output data, but Parquet files are more efficient. This efficiency would probably be irrelevant with such a small dataset, but using the **PyArrow** package's `from_pandas()` and `write_table()` functions are only a few more keystrokes than `to_csv()`.
@@ -224,30 +219,27 @@ Parquet files are natively supported by the automated ML step discussed below, s
### Write the data preparation pipeline step (`PythonScriptStep`)
-The data preparation code described above must be associated with a `PythonScripStep` object to be used with a pipeline. The path to which the Parquet data-preparation output is written is generated by a `PipelineData` object. The resources prepared earlier, such as the `ComputeTarget`, the `RunConfig`, and the `'titanic_ds' Dataset` are used to complete the specification.
+The data preparation code described above must be associated with a `PythonScriptStep` object to be used with a pipeline. The path to which the Parquet data-preparation output is written is generated by an `OutputFileDatasetConfig` object. The resources prepared earlier, such as the `ComputeTarget`, the `RunConfig`, and the `'titanic_ds' Dataset` are used to complete the specification.
```python
-from azureml.pipeline.core import PipelineData
-
+from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.steps import PythonScriptStep
-prepped_data_path = PipelineData("titanic_train", datastore).as_dataset()
+
+prepped_data_path = OutputFileDatasetConfig(name="titanic_train", destination=(datastore, 'outputdataset'))
dataprep_step = PythonScriptStep( name="dataprep", script_name="dataprep.py", compute_target=compute_target, runconfig=aml_run_config,
- arguments=["--output_path", prepped_data_path],
+ arguments=[titanic_ds.as_named_input('titanic_ds').as_mount(), prepped_data_path],
inputs=[titanic_ds.as_named_input("titanic_ds")], outputs=[prepped_data_path], allow_reuse=True ) ```
-The `prepped_data_path` object is of type `PipelineOutputFileDataset`. Notice that it's specified in both the `arguments` and `outputs` arguments. If you review the previous step, you'll see that within the data preparation code, the value of the argument `'--output_path'` is the file path to which the Parquet file was written.
-
-> [!TIP]
-> An improved experience for passing intermediate data between pipeline steps is available with the public preview class, [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py). For a code example using the `OutputFileDatasetConfig` class, see how to [build a two step ML pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.ipynb).
+The `prepped_data_path` object is of type `OutputFileDatasetConfig` which points to a directory. Notice that it's specified in the `arguments` parameter. If you review the previous step, you'll see that within the data preparation code, the value of the argument `'--output_path'` is the file path to which the Parquet file was written.
## Train with AutoMLStep
@@ -255,19 +247,14 @@ Configuring an automated ML pipeline step is done with the `AutoMLConfig` class.
### Send data to `AutoMLStep`
-In an ML pipeline, the input data must be a `Dataset` object. The highest-performing way is to provide the input data in the form of `PipelineOutputTabularDataset` objects. You create an object of that type with the `parse_parquet_files()` or `parse_delimited_files()` on a `PipelineOutputFileDataset`, such as the `prepped_data_path` object.
+In an ML pipeline, the input data must be a `Dataset` object. The highest-performing way is to provide the input data in the form of `OutputTabularDatasetConfig` objects. You create an object of that type with `read_delimited_files()` on an `OutputFileDatasetConfig`, such as the `prepped_data_path` object.
```python
-# type(prepped_data_path) == PipelineOutputFileDataset
-# type(prepped_data) == PipelineOutputTabularDataset
-prepped_data = prepped_data_path.parse_parquet_files(file_extension=None)
+# type(prepped_data_path) == OutputFileDatasetConfig
+# type(prepped_data) == OutputTabularDatasetConfig
+prepped_data = prepped_data_path.read_delimited_files()
```
-The snippet above creates a high-performing `PipelineOutputTabularDataset` from the `PipelineOutputFileDataset` output of the data preparation step.
-
-> [!TIP]
-> The public preview class, [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py), contains the [read_delimited_files()](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py#&preserve-view=trueread-delimited-files-include-path-false--separator------header--promoteheadersbehavior-all-files-have-same-headers--3---partition-format-none--path-glob-none--set-column-types-none-) method that converts an `OutputFileDatasetConfig` into an [`OutputTabularDatasetConfig`](/python/api/azureml-core/azureml.data.output_dataset_config.outputtabulardatasetconfig?preserve-view=true&view=azure-ml-py) for consumption in AutoML runs.
- Another option is to use `Dataset` objects registered in the workspace: ```python
@@ -278,10 +265,10 @@ Comparing the two techniques:
| Technique | Benefits and drawbacks | |-|-|
-|`PipelineOutputTabularDataset`| Higher performance |
+|`OutputTabularDatasetConfig`| Higher performance |
|| Natural route from `PipelineData` | || Data isn't persisted after pipeline run |
-|| [Notebook showing `PipelineOutputTabularDataset` technique](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb) |
+|| [Notebook showing `OutputTabularDatasetConfig` technique](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb) |
| Registered `Dataset` | Lower performance | | | Can be generated in many ways | | | Data persists and is visible throughout workspace |
@@ -290,7 +277,7 @@ Comparing the two techniques:
### Specify automated ML outputs
-The outputs of the `AutoMLStep` are the final metric scores of the higher-performing model and that model itself. To use these outputs in further pipeline steps, prepare `PipelineData` objects to receive them.
+The outputs of the `AutoMLStep` are the final metric scores of the higher-performing model and that model itself. To use these outputs in further pipeline steps, prepare `OutputFileDatasetConfig` objects to receive them.
```python
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-version-track-datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-version-track-datasets.md
@@ -163,9 +163,7 @@ The following are scenarios where your data is tracked as an **input dataset**.
The following are scenarios where your data is tracked as an **output dataset**. * Pass an `OutputFileDatasetConfig` object through either the `outputs` or `arguments` parameter when submitting an experiment run. `OutputFileDatasetConfig` objects can also be used to persist data between pipeline steps. See [Move data between ML pipeline steps.](how-to-move-data-in-out-of-pipelines.md)
- > [!TIP]
- > [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig?preserve-view=true&view=azure-ml-py) is a public preview class containing [experimental](/python/api/overview/azure/ml/?preserve-view=true&view=azure-ml-py#&preserve-view=truestable-vs-experimental) preview features that may change at any time.
-
+
* Register a dataset in your script. For this scenario, the name assigned to the dataset when you registered it to the workspace is the name displayed. In the following example, `training_ds` is the name that would be displayed. ```Python
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-pipeline-batch-scoring-classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-pipeline-batch-scoring-classification.md
@@ -15,8 +15,6 @@ ms.custom: contperf-fy20q4, devx-track-python
# Tutorial: Build an Azure Machine Learning pipeline for batch scoring -- In this advanced tutorial, you learn how to build an [Azure Machine Learning pipeline](concept-ml-pipelines.md) to run a batch scoring job. Machine learning pipelines optimize your workflow with speed, portability, and reuse, so you can focus on machine learning instead of infrastructure and automation. After you build and publish a pipeline, you configure a REST endpoint that you can use to trigger the pipeline from any HTTP library on any platform. The example uses a pretrained [Inception-V3](https://arxiv.org/abs/1512.00567) convolutional neural network model implemented in Tensorflow to classify unlabeled images.
@@ -74,26 +72,24 @@ def_data_store = ws.get_default_datastore()
## Create dataset objects
-When building pipelines, `Dataset` objects are used for reading data from workspace datastores, and `PipelineData` objects are used for transferring intermediate data between pipeline steps.
+When building pipelines, `Dataset` objects are used for reading data from workspace datastores, and `OutputFileDatasetConfig` objects are used for transferring intermediate data between pipeline steps.
> [!Important] > The batch scoring example in this tutorial uses only one pipeline step. In use cases that have multiple steps, the typical flow will include these steps: >
-> 1. Use `Dataset` objects as *inputs* to fetch raw data, perform some transformation, and then *output* a `PipelineData` object.
+> 1. Use `Dataset` objects as *inputs* to fetch raw data, perform some transformation, and then write the *output* to an `OutputFileDatasetConfig` object.
>
-> 2. Use the `PipelineData` *output object* in the preceding step as an *input object*. Repeat it for subsequent steps.
+> 2. Use the `OutputFileDatasetConfig` *output object* in the preceding step as an *input object*. Repeat it for subsequent steps.
-In this scenario, you create `Dataset` objects that correspond to the datastore directories for both the input images and the classification labels (y-test values). You also create a `PipelineData` object for the batch scoring output data.
+In this scenario, you create `Dataset` objects that correspond to the datastore directories for both the input images and the classification labels (y-test values). You also create an `OutputFileDatasetConfig` object for the batch scoring output data.
```python from azureml.core.dataset import Dataset
-from azureml.pipeline.core import PipelineData
+from azureml.data import OutputFileDatasetConfig
input_images = Dataset.File.from_files((batchscore_blob, "batchscoring/images/")) label_ds = Dataset.File.from_files((batchscore_blob, "batchscoring/labels/"))
-output_dir = PipelineData(name="scores",
- datastore=def_data_store,
- output_path_on_compute="batchscoring/results")
+output_dir = OutputFileDatasetConfig(name="scores")
``` Register the datasets to the workspace if you want to reuse it later. This step is optional.
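As a brief sketch of that optional registration (assuming `ws` is the `Workspace` object created earlier in the tutorial):

```python
# Optionally register the file datasets so they can be retrieved by name later.
input_images = input_images.register(workspace=ws, name="input_images")
label_ds = label_ds.register(workspace=ws, name="label_ds")
```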
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/release-notes.md
@@ -18,6 +18,23 @@ This article provides you with information about:
<hr width=100%>
+## January 12, 2021
+
+The release tag for the January 2021 refresh of the module is:
+
+```
+mcr.microsoft.com/media/live-video-analytics:2.0.1
+```
+
+> [!NOTE]
+> In the quickstarts and tutorials, the deployment manifests use a tag of 2 (live-video-analytics:2). So simply redeploying such manifests should update the module on your edge devices.
+### Bug fixes
+
+* The fields `ActivationSignalOffset`, `MinimumActivationTime` and `MaximumActivationTime` in Signal Gate processors were incorrectly set as required properties. They are now **optional** properties.
+* Fixed a usage bug that caused the Live Video Analytics on IoT Edge module to crash when deployed in certain regions.
+
+<hr width=100%>
+ ## December 14, 2020 This release is the public preview refresh release of Live Video Analytics on IoT Edge. The release tag is
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/upgrading-lva-module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/upgrading-lva-module.md
@@ -17,7 +17,7 @@ This article covers the differences and the different things to consider when up
> [!div class="mx-tdCol4BreakAll"] > |Title|Live Video Analytics 1.0|Live Video Analytics 2.0|Description| > |-------------|----------|---------|---------|
-> |Container Image|mcr.microsoft.com/media/live-video-analytics:1.0.0|mcr.microsoft.com/media/live-video-analytics:2.0.0|Microsoft Published docker images for Live Video Analytics on Azure IoT Edge|
+> |Container Image|mcr.microsoft.com/media/live-video-analytics:1|mcr.microsoft.com/media/live-video-analytics:2|Microsoft Published docker images for Live Video Analytics on Azure IoT Edge|
> |**MediaGraph nodes** | | | | > |Sources|:::image type="icon" source="./././media/upgrading-lva/check.png"::: RTSP Source </br>:::image type="icon" source="./././media/upgrading-lva/check.png"::: IoT Hub Message Source |:::image type="icon" source="./././media/upgrading-lva/check.png"::: RTSP Source </br>:::image type="icon" source="./././media/upgrading-lva/check.png"::: IoT Hub Message Source | MediaGraph nodes that act as sources for media ingestion and messages.| > |Processors|:::image type="icon" source="./././media/upgrading-lva/check.png"::: Motion detection processor </br>:::image type="icon" source="./././media/upgrading-lva/check.png"::: Frame rate filter processor </br>:::image type="icon" source="./././media/upgrading-lva/check.png"::: Http extension processor </br>:::image type="icon" source="./././media/upgrading-lva/check.png"::: Grpc extension processor </br>:::image type="icon" source="./././media/upgrading-lva/check.png"::: Signal gate processor |:::image type="icon" source="./././media/upgrading-lva/check.png"::: Motion detection processor </br>:::image type="icon" source="./././media/upgrading-lva/remove.png"::: **Frame rate filter processor**</br>:::image type="icon" source="./././media/upgrading-lva/check.png"::: Http extension processor </br>:::image type="icon" source="./././media/upgrading-lva/check.png"::: Grpc extension processor </br>:::image type="icon" source="./././media/upgrading-lva/check.png"::: Signal gate processor | MediaGraph nodes that enable you to format the media before sending to AI inference servers.|
media-services https://docs.microsoft.com/en-us/azure/media-services/video-indexer/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/release-notes.md
@@ -40,7 +40,7 @@ Video Indexer supports detection, grouping, and recognition of characters in ani
### Planned Video Indexer website authentication changes
-Starting March 1st 2021, you no longer will be able to sign up and sign in to the [Video Indexer](https://www.videoindexer.ai/) website using Facebook or LinkedIn.
+Starting March 1st, 2021, you will no longer be able to sign up and sign in to the [Video Indexer website](https://www.videoindexer.ai/) and [developer portal](video-indexer-use-apis.md) using Facebook or LinkedIn.
You will be able to sign up and sign in using one of these providers: Azure AD, Microsoft, and Google.
mysql https://docs.microsoft.com/en-us/azure/mysql/connect-php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/connect-php.md
@@ -65,11 +65,10 @@ $db_name = 'your_database';
//Initializes MySQLi $conn = mysqli_init();
-// If using Azure Virtual machines or Azure Web App, 'mysqli-ssl_set()' is not required as the certificate is already installed on the machines.
mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/DigiCertGlobalRootG2.crt.pem", NULL, NULL); // Establish the connection
-mysqli_real_connect($conn, 'mydemoserver.mysql.database.azure.com', 'myadmin@mydemoserver', 'yourpassword', 'quickstartdb', 3306, MYSQLI_CLIENT_SSL);
+mysqli_real_connect($conn, 'mydemoserver.mysql.database.azure.com', 'myadmin@mydemoserver', 'yourpassword', 'quickstartdb', 3306, NULL, MYSQLI_CLIENT_SSL);
//If connection failed, show the error if (mysqli_connect_errno($conn))
@@ -172,4 +171,4 @@ az group delete \
> [!div class="nextstepaction"] > [Manage Azure Database for MySQL server using CLI](./how-to-manage-single-server-cli.md)
-[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
\ No newline at end of file
+[Cannot find what you are looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
mysql https://docs.microsoft.com/en-us/azure/mysql/quickstart-mysql-github-actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/quickstart-mysql-github-actions.md
@@ -12,7 +12,7 @@ ms.custom: github-actions-azure
# Quickstart: Use GitHub Actions to connect to Azure MySQL
-**APPLIES TO**: :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Single Server :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Flexible Server
+**APPLIES TO**: :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for MySQL - Single Server :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for MySQL - Flexible Server
Get started with [GitHub Actions](https://docs.github.com/en/free-pro-team@latest/actions) by using a workflow to deploy database updates to [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/).
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-data-encryption-postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-data-encryption-postgresql.md
@@ -10,7 +10,7 @@ ms.date: 01/13/2020
# Azure Database for PostgreSQL Single server data encryption with a customer-managed key
-Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you are responsible for, and in a full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
+Azure PostgreSQL leverages [Azure Storage encryption](../storage/common/storage-service-encryption.md) to encrypt data at rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it is very similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control over access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you are responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../key-vault/general/secure-your-key-vault.md) instance. The Key Encryption Key (KEK) and Data Encryption Key (DEK) are described in more detail later in this article.
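To make the KEK/DEK relationship concrete, the following is a conceptual sketch only (not how the service itself is implemented) of wrapping and unwrapping a data key with a Key Vault key, using the `azure-keyvault-keys` and `azure-identity` Python packages; the vault URL and key name are placeholders:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import CryptographyClient, KeyWrapAlgorithm

credential = DefaultAzureCredential()
key_client = KeyClient(vault_url="https://<your-vault>.vault.azure.net", credential=credential)

kek = key_client.get_key("<kek-name>")         # the customer-managed key (requires the 'get' permission)
crypto = CryptographyClient(kek, credential)

dek = os.urandom(32)                           # stand-in for a data encryption key
wrapped = crypto.wrap_key(KeyWrapAlgorithm.rsa_oaep_256, dek)                        # 'wrapKey' permission
unwrapped = crypto.unwrap_key(KeyWrapAlgorithm.rsa_oaep_256, wrapped.encrypted_key)  # 'unwrapKey' permission
assert unwrapped.key == dek
```

The get, wrapKey, and unwrapKey operations in this sketch correspond to the Key Vault permissions the server's managed identity needs, as listed in the requirements below.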
@@ -55,7 +55,9 @@ When the server is configured to use the customer-managed key stored in the key
The following are requirements for configuring Key Vault: * Key Vault and Azure Database for PostgreSQL Single server must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving the Key Vault resource afterwards requires you to reconfigure the data encryption.
-* Enable the soft-delete feature on the key vault, to protect from data loss if an accidental key (or Key Vault) deletion happens. Soft-deleted resources are retained for 90 days, unless the user recovers or purges them in the meantime. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal).
+* The key vault must be set to 90 days for 'Days to retain deleted vaults'. If the existing key vault has been configured with a lower number, you will need to create a new key vault, because this setting cannot be modified after creation.
+* Enable the soft-delete feature on the key vault, to protect from data loss if an accidental key (or Key Vault) deletion happens. Soft-deleted resources are retained for 90 days, unless the user recovers or purges them in the meantime. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal).
+* Enable purge protection to enforce a mandatory retention period for deleted vaults and vault objects.
* Grant the Azure Database for PostgreSQL Single server access to the key vault with the get, wrapKey, and unwrapKey permissions by using its unique managed identity. In the Azure portal, the unique 'Service' identity is automatically created when data encryption is enabled on the PostgreSQL Single server. See [Data encryption for Azure Database for PostgreSQL Single server by using the Azure portal](howto-data-encryption-portal.md) for detailed, step-by-step instructions when you're using the Azure portal. The following are requirements for configuring the customer-managed key:
private-link https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
@@ -5,7 +5,7 @@ services: private-link
author: mblanco77 ms.service: private-link ms.topic: conceptual
-ms.date: 06/18/2020
+ms.date: 01/12/2021
ms.author: allensu --- # Azure Private Endpoint DNS configuration
@@ -62,7 +62,7 @@ For Azure services, use the recommended zone names as described in the following
| Azure Backup (Microsoft.RecoveryServices/vaults) / vault | privatelink.{region}.backup.windowsazure.com | {region}.backup.windowsazure.com | | Azure Event Hubs (Microsoft.EventHub/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net | | Azure Service Bus (Microsoft.ServiceBus/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
-| Azure IoT Hub (Microsoft.Devices/IotHubs) / iotHub | privatelink.azure-devices.net | azure-devices.net |
+| Azure IoT Hub (Microsoft.Devices/IotHubs) / iotHub | privatelink.azure-devices.net<br/>privatelink.servicebus.windows.net<sup>1</sup> | azure-devices.net<br/>servicebus.windows.net |
| Azure Relay (Microsoft.Relay/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net | | Azure Event Grid (Microsoft.EventGrid/topics) / topic | privatelink.eventgrid.azure.net | eventgrid.azure.net | | Azure Event Grid (Microsoft.EventGrid/domains) / domain | privatelink.eventgrid.azure.net | eventgrid.azure.net |
@@ -77,6 +77,7 @@ For Azure services, use the recommended zone names as described in the following
| Azure Data Factory (Microsoft.DataFactory/factories ) / portal | privatelink.azure.com | azure.com | | Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net |
+<sup>1</sup>For use with IoT Hub's built-in Event Hub-compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint).
## DNS configuration scenarios
@@ -194,4 +195,4 @@ The following diagram illustrates the DNS resolution sequence from an 
:::image type="content" source="media/private-endpoint-dns/hybrid-scenario.png" alt-text="Hybrid scenario"::: ## Next steps-- [Learn about private endpoints](private-endpoint-overview.md)\ No newline at end of file
+- [Learn about private endpoints](private-endpoint-overview.md)
private-link https://docs.microsoft.com/en-us/azure/private-link/private-link-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-link-overview.md
@@ -19,6 +19,8 @@ Traffic between your virtual network and the service travels the Microsoft backb
> [!IMPORTANT] > Azure Private Link is now generally available. Both Private Endpoint and Private Link service (service behind standard load balancer) are generally available. Different Azure PaaS will onboard to Azure Private Link at different schedules. Check [availability](#availability) section in this article for accurate status of Azure PaaS on Private Link. For known limitations, see [Private Endpoint](private-endpoint-overview.md#limitations) and [Private Link Service](private-link-service-overview.md#limitations).
+:::image type="content" source="./media/private-link-overview/private-link-center.png" alt-text="Azure Private Link center in Azure portal" border="false":::
+ ## Key benefits Azure Private Link provides the following benefits: - **Privately access services on the Azure platform**: Connect your virtual network to services in Azure without a public IP address at the source or destination. Service providers can render their services in their own virtual network and consumers can access those services in their local virtual network. The Private Link platform will handle the connectivity between the consumer and services over the Azure backbone network.
purview https://docs.microsoft.com/en-us/azure/purview/create-catalog-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-catalog-portal.md
@@ -24,7 +24,7 @@ In this quickstart, you create an Azure Purview account.
* Your account must have permission to create resources in the subscription
-* If you have **Azure Policy** blocking all applications from creating **Storage account** and **EventHub namespace**, you need to make policy exception using tag, which will can be entered during the process of creating a Purview account. The main reason is that for each Purview Account created, it needs to create a managed Resource Group and within this resource group, a Storage account and an
+* If you have **Azure Policy** blocking all applications from creating a **Storage account** and **EventHub namespace**, you need to make a policy exception using a tag, which can be entered during the process of creating a Purview account. The main reason is that for each Purview account created, it needs to create a managed Resource Group, and within this resource group, a Storage account and an
EventHub namespace. > [!important]
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/check-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/check-access.md
@@ -35,7 +35,7 @@ Follow these steps to open the set of Azure resources that you want to check acc
The following shows an example resource group.
- ![Resource group overview](./media/check-access/rg-overview.png)
+ ![Resource group overview](./media/shared/rg-overview.png)
## Step 2: Check access for a user
@@ -45,7 +45,7 @@ Follow these steps to check the access for a single user, group, service princip
The following shows an example of the Access control (IAM) page for a resource group.
- ![Resource group access control - Check access tab](./media/check-access/rg-access-control.png)
+ ![Resource group access control - Check access tab](./media/shared/rg-access-control.png)
1. On the **Check access** tab, in the **Find** list, select the user, group, service principal, or managed identity you want to check access for.
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-external-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-external-users.md
@@ -25,9 +25,7 @@ ms.custom: it-pro
## Prerequisites
-To add or remove role assignments, you must have:
--- `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner)
+[!INCLUDE [Azure role assignment prerequisites](../../includes/role-based-access-control/prerequisites-role-assignments.md)]
## When would you invite guest users?
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal-managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-portal-managed-identity.md new file mode 100644
@@ -0,0 +1,91 @@
+---
+title: Add a role assignment for a managed identity (Preview) - Azure RBAC
+description: Learn how to add a role assignment by starting with the managed identity and then select the scope and role using the Azure portal and Azure role-based access control (Azure RBAC).
+services: active-directory
+author: rolyon
+manager: mtillman
+ms.service: role-based-access-control
+ms.topic: how-to
+ms.workload: identity
+ms.date: 01/11/2021
+ms.author: rolyon
+---
+
+# Add a role assignment for a managed identity (Preview)
+
+You can add role assignments for a managed identity by using the **Access control (IAM)** page as described in [Add or remove Azure role assignments using the Azure portal](role-assignments-portal.md). When you use the Access control (IAM) page, you start with the scope and then select the managed identity and role. This article describes an alternate way to add role assignments for a managed identity. Using these steps, you start with the managed identity and then select the scope and role.
+
+> [!IMPORTANT]
+> Adding a role assignment for a managed identity using these alternate steps is currently in preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+[!INCLUDE [Azure role assignment prerequisites](../../includes/role-based-access-control/prerequisites-role-assignments.md)]
+
+## System-assigned managed identity
+
+Follow these steps to assign a role to a system-assigned managed identity by starting with the managed identity.
+
+1. In the Azure portal, open a system-assigned managed identity.
+
+1. In the left menu, click **Identity**.
+
+ ![System-assigned managed identity](./media/shared/identity-system-assigned.png)
+
+1. Under **Permissions**, click **Azure role assignments**.
+
+ If roles are already assigned to the selected system-assigned managed identity, you see the list of role assignments. This list includes all role assignments you have permission to read.
+
+ ![Role assignments for a system-assigned managed identity](./media/shared/role-assignments-system-assigned.png)
+
+1. To change the subscription, click the **Subscription** list.
+
+1. Click **Add role assignment (Preview)**.
+
+1. Use the drop-down lists to select the set of resources that the role assignment applies to, such as **Subscription**, **Resource group**, or resource.
+
+ If you don't have role assignment write permissions for the selected scope, an inline message will be displayed.
+
+1. In the **Role** drop-down list, select a role such as **Virtual Machine Contributor**.
+
+ ![Add role assignment pane for system-assigned managed identity](./media/role-assignments-portal-managed-identity/add-role-assignment-with-scope.png)
+
+1. Click **Save** to assign the role.
+
+ After a few moments, the managed identity is assigned the role at the selected scope.
+
+## User-assigned managed identity
+
+Follow these steps to assign a role to a user-assigned managed identity by starting with the managed identity.
+
+1. In the Azure portal, open a user-assigned managed identity.
+
+1. In the left menu, click **Azure role assignments**.
+
+ If roles are already assigned to the selected user-assigned managed identity, you see the list of role assignments. This list includes all role assignments you have permission to read.
+
+ ![Role assignments for a user-assigned managed identity](./media/shared/role-assignments-user-assigned.png)
+
+1. To change the subscription, click the **Subscription** list.
+
+1. Click **Add role assignment (Preview)**.
+
+1. Use the drop-down lists to select the set of resources that the role assignment applies to, such as **Subscription**, **Resource group**, or resource.
+
+ If you don't have role assignment write permissions for the selected scope, an inline message will be displayed.
+
+1. In the **Role** drop-down list, select a role such as **Virtual Machine Contributor**.
+
+ ![Add role assignment pane for a user-assigned managed identity](./media/role-assignments-portal-managed-identity/add-role-assignment-with-scope.png)
+
+1. Click **Save** to assign the role.
+
+ After a few moments, the managed identity is assigned the role at the selected scope.
+
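For automation scenarios, the same assignment can be made programmatically. Below is a minimal, non-authoritative sketch using the `azure-mgmt-authorization` and `azure-identity` Python packages (exact model fields vary by package version); the subscription ID, resource group, role definition GUID, and the managed identity's principal ID are placeholders you would supply:

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope can be a subscription, resource group, or resource ID.
scope = f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # each role assignment needs a unique GUID name
    RoleAssignmentCreateParameters(
        role_definition_id=f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/roleDefinitions/<role-definition-guid>",
        principal_id="<managed-identity-principal-id>",  # the object ID of the managed identity
    ),
)
```

The same call pattern applies when assigning a role to any other security principal; only the principal ID changes.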
+## Next steps
+
+- [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
+- [Add or remove Azure role assignments using the Azure portal](role-assignments-portal.md)
+- [List Azure role assignments using the Azure portal](role-assignments-list-portal.md)
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal-subscription-admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-portal-subscription-admin.md new file mode 100644
@@ -0,0 +1,89 @@
+---
+title: Assign a user as an administrator of an Azure subscription - Azure RBAC
+description: Learn how to make a user an administrator of an Azure subscription using the Azure portal and Azure role-based access control (Azure RBAC).
+services: active-directory
+author: rolyon
+manager: mtillman
+ms.service: role-based-access-control
+ms.topic: how-to
+ms.workload: identity
+ms.date: 01/11/2021
+ms.author: rolyon
+---
+
+# Assign a user as an administrator of an Azure subscription
+
+To make a user an administrator of an Azure subscription, assign them the [Owner](built-in-roles.md#owner) role at the subscription scope. The Owner role gives the user full access to all resources in the subscription, including the permission to grant access to others. These steps are the same as any other role assignment.
+
+## Prerequisites
+
+[!INCLUDE [Azure role assignment prerequisites](../../includes/role-based-access-control/prerequisites-role-assignments.md)]
+
+## Step 1: Open the subscription
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the Search box at the top, search for subscriptions.
+
+   ![Azure portal search for subscriptions](./media/shared/sub-portal-search.png)
+
+1. Click the subscription you want to use.
+
+ The following shows an example subscription.
+
+   ![Subscription overview](./media/shared/sub-overview.png)
+
+## Step 2: Open the Add role assignment pane
+
+**Access control (IAM)** is the page that you typically use to assign roles to grant access to Azure resources. It's also known as identity and access management (IAM) and appears in several locations in the Azure portal.
+
+1. Click **Access control (IAM)**.
+
+ The following shows an example of the Access control (IAM) page for a subscription.
+
+   ![Access control (IAM) page for a subscription](./media/shared/sub-access-control.png)
+
+1. Click the **Role assignments** tab to view the role assignments at this scope.
+
+1. Click **Add** > **Add role assignment**.
+ If you don't have permissions to assign roles, the Add role assignment option will be disabled.
+
+ ![Add role assignment menu](./media/shared/add-role-assignment-menu.png)
+
+ The Add role assignment pane opens.
+
+ ![Add role assignment pane](./media/shared/add-role-assignment.png)
+
+## Step 3: Select the Owner role
+
+The [Owner](built-in-roles.md#owner) role grants full access to manage all resources, including the ability to assign roles in Azure RBAC. You should have a maximum of three subscription owners to reduce the potential for breach by a compromised owner.
+
+- In the **Role** list, select the **Owner** role.
+
+ ![Select Owner role in Add role assignment pane](./media/role-assignments-portal-subscription-admin/add-role-assignment-role-owner.png)
+
+## Step 4: Select who needs access
+
+1. In the **Assign access to** list, select **User, group, or service principal**.
+
+1. In the **Select** section, search for the user by entering a string or scrolling through the list.
+
+ ![Select user in Add role assignment](./media/role-assignments-portal-subscription-admin/add-role-assignment-user-admin.png)
+
+1. Once you have found the user, click to select it.
+
+## Step 5: Assign role
+
+1. To assign the role, click **Save**.
+
+ After a few moments, the user is assigned the role at the selected scope.
+
+1. On the **Role assignments** tab, verify that you see the role assignment in the list.
+
+ ![Add role assignment saved](./media/role-assignments-portal-subscription-admin/sub-role-assignments-owner.png)
+
+## Next steps
+
+- [Add or remove Azure role assignments using the Azure portal](role-assignments-portal.md)
+- [List Azure role assignments using the Azure portal](role-assignments-list-portal.md)
+- [Organize your resources with Azure management groups](../governance/management-groups/overview.md)
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-portal.md
@@ -7,9 +7,8 @@ manager: mtillman
ms.service: role-based-access-control ms.topic: how-to ms.workload: identity
-ms.date: 09/30/2020
+ms.date: 01/11/2021
ms.author: rolyon
-ms.reviewer: bagovind
--- # Add or remove Azure role assignments using the Azure portal
@@ -20,160 +19,92 @@ If you need to assign administrator roles in Azure Active Directory, see [View a
## Prerequisites
-To add or remove role assignments, you must have:
--- `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner)-
-## Access control (IAM)
-
-**Access control (IAM)** is the page that you typically use to assign roles to grant access to Azure resources. It's also known as identity and access management and appears in several locations in the Azure portal. The following shows an example of the Access control (IAM) page for a subscription.
-
-![Access control (IAM) page for a subscription](./media/role-assignments-portal/access-control-subscription.png)
-
-To be the most effective with the Access control (IAM) page, it helps to follow these steps to assign a role.
-
-1. Determine who needs access. You can assign a role to a user, group, service principal, or managed identity.
-
-1. Find the appropriate role. Permissions are grouped together into roles. You can select from a list of several [Azure built-in roles](built-in-roles.md) or you can use your own custom roles.
-
-1. Identify the needed scope. Azure provides four levels of scope: [management group](../governance/management-groups/overview.md), subscription, [resource group](../azure-resource-manager/management/overview.md#resource-groups), and resource. For more information about scope, see [Understand scope](scope-overview.md).
-
-1. Perform the steps in one of the following sections to assign a role.
+[!INCLUDE [Azure role assignment prerequisites](../../includes/role-based-access-control/prerequisites-role-assignments.md)]
## Add a role assignment
-In Azure RBAC, to grant access to an Azure resource, you add a role assignment. Follow these steps to assign a role.
+In Azure RBAC, to grant access to an Azure resource, you add a role assignment. Follow these steps to assign a role. For a high-level overview of steps, see [Steps to add a role assignment](role-assignments-steps.md).
-1. In the Azure portal, click **All services** and then select the scope that you want to grant access to. For example, you can select **Management groups**, **Subscriptions**, **Resource groups**, or a resource.
+### Step 1: Identify the needed scope
-1. Click the specific resource for that scope.
+[!INCLUDE [Scope for Azure RBAC introduction](../../includes/role-based-access-control/scope-intro.md)]
-1. Click **Access control (IAM)**.
+[!INCLUDE [Scope for Azure RBAC least privilege](../../includes/role-based-access-control/scope-least.md)] For more information about scope, see [Understand scope](scope-overview.md).
-1. Click the **Role assignments** tab to view the role assignments at this scope.
+![Scope levels for Azure RBAC](../../includes/role-based-access-control/media/scope-levels.png)
- ![Access control (IAM) and Role assignments tab](./media/role-assignments-portal/role-assignments.png)
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Click **Add** > **Add role assignment**.
+1. In the Search box at the top, search for the scope you want to grant access to. For example, search for **Management groups**, **Subscriptions**, **Resource groups**, or a specific resource.
- If you don't have permissions to assign roles, the Add role assignment option will be disabled.
-
- ![Add role assignment menu](./media/shared/add-role-assignment-menu.png)
-
- The Add role assignment pane opens.
-
- ![Add role assignment pane](./media/role-assignments-portal/add-role-assignment.png)
+ ![Azure portal search for resource group](./media/shared/rg-portal-search.png)
-1. In the **Role** drop-down list, select a role such as **Virtual Machine Contributor**.
-
-1. In the **Select** list, select a user, group, service principal, or managed identity. If you don't see the security principal in the list, you can type in the **Select** box to search the directory for display names, email addresses, and object identifiers.
-
-1. Click **Save** to assign the role.
-
- After a few moments, the security principal is assigned the role at the selected scope.
-
- ![Add role assignment saved](./media/role-assignments-portal/add-role-assignment-save.png)
+1. Click the specific resource for that scope.
-## Assign a user as an administrator of a subscription
+ The following shows an example resource group.
-To make a user an administrator of an Azure subscription, assign them the [Owner](built-in-roles.md#owner) role at the subscription scope. The Owner role gives the user full access to all resources in the subscription, including the permission to grant access to others. These steps are the same as any other role assignment.
+ ![Resource group overview](./media/shared/rg-overview.png)
-1. In the Azure portal, click **All services** and then **Subscriptions**.
+### Step 2: Open the Add role assignment pane
-1. Click the subscription where you want to grant access.
+**Access control (IAM)** is the page that you typically use to assign roles to grant access to Azure resources. It's also known as identity and access management (IAM) and appears in several locations in the Azure portal.
1. Click **Access control (IAM)**.
-1. Click the **Role assignments** tab to view the role assignments for this subscription.
+ The following shows an example of the Access control (IAM) page for a resource group.
- ![Access control (IAM) and Role assignments tab](./media/role-assignments-portal/role-assignments.png)
+ ![Access control (IAM) page for a resource group](./media/shared/rg-access-control.png)
-1. Click **Add** > **Add role assignment**.
+1. Click the **Role assignments** tab to view the role assignments at this scope.
+1. Click **Add** > **Add role assignment**.
If you don't have permissions to assign roles, the Add role assignment option will be disabled.
- ![Add role assignment menu for a subscription](./media/shared/add-role-assignment-menu.png)
+ ![Add role assignment menu](./media/shared/add-role-assignment-menu.png)
The Add role assignment pane opens.
- ![Add role assignment pane for a subscription](./media/role-assignments-portal/add-role-assignment.png)
-
-1. In the **Role** drop-down list, select the **Owner** role.
-
-1. In the **Select** list, select a user. If you don't see the user in the list, you can type in the **Select** box to search the directory for display names and email addresses.
-
-1. Click **Save** to assign the role.
-
- After a few moments, the user is assigned the Owner role at the subscription scope.
-
-## Add a role assignment for a managed identity (Preview)
-
-You can add role assignments for a managed identity by using the **Access control (IAM)** page as described earlier in this article. When you use the Access control (IAM) page, you start with the scope and then select the managed identity and role. This section describes an alternate way to add role assignments for a managed identity. Using these steps, you start with the managed identity and then select the scope and role.
-
-> [!IMPORTANT]
-> Adding a role assignment for a managed identity using these alternate steps is currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-### System-assigned managed identity
+ ![Add role assignment pane](./media/shared/add-role-assignment.png)
-Follow these steps to assign a role to a system-assigned managed identity by starting with the managed identity.
+### Step 3: Select the appropriate role
-1. In the Azure portal, open a system-assigned managed identity.
+1. In the **Role** list, search or scroll to find the role that you want to assign.
-1. In the left menu, click **Identity**.
+ To help you determine the appropriate role, you can hover over the info icon to display a description for the role. For additional information, you can view the [Azure built-in roles](built-in-roles.md) article.
- ![System-assigned managed identity](./media/shared/identity-system-assigned.png)
+ ![Select role in Add role assignment](./media/role-assignments-portal/add-role-assignment-role.png)
-1. Under **Permissions**, click **Azure role assignments**.
+1. Click to select the role.
- If roles are already assigned to the selected system-assigned managed identity, you see the list of role assignments. This list includes all role assignments you have permission to read.
+### Step 4: Select who needs access
- ![Role assignments for a system-assigned managed identity](./media/shared/role-assignments-system-assigned.png)
+1. In the **Assign access to** list, select the type of security principal to assign access to.
-1. To change the subscription, click the **Subscription** list.
+ | Type | Description |
+ | --- | --- |
+ | **User, group, or service principal** | If you want to assign the role to a user, group, or service principal (application), select this type. |
+ | **User assigned managed identity** | If you want to assign the role to a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), select this type. |
+ | *System assigned managed identity* | If you want to assign the role to a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), select the Azure service instance where the managed identity is located. |
-1. Click **Add role assignment (Preview)**.
+ ![Select security principal type in Add role assignment](./media/role-assignments-portal/add-role-assignment-type.png)
-1. Use the drop-down lists to select the set of resources that the role assignment applies to such as **Subscription**, **Resource group**, or resource.
+1. If you selected a user-assigned managed identity or a system-assigned managed identity, select the **Subscription** where the managed identity is located.
- If you don't have role assignment write permissions for the selected scope, an inline message will be displayed.
+1. In the **Select** section, search for the security principal by entering a string or scrolling through the list.
-1. In the **Role** drop-down list, select a role such as **Virtual Machine Contributor**.
+ ![Select user in Add role assignment](./media/role-assignments-portal/add-role-assignment-user.png)
- ![Add role assignment pane for system-assigned managed identity](./media/role-assignments-portal/add-role-assignment-with-scope.png)
+1. Once you have found the security principal, click to select it.
-1. Click **Save** to assign the role.
+### Step 5: Assign role
- After a few moments, the managed identity is assigned the role at the selected scope.
+1. To assign the role, click **Save**.
-### User-assigned managed identity
-
-Follow these steps to assign a role to a user-assigned managed identity by starting with the managed identity.
-
-1. In the Azure portal, open a user-assigned managed identity.
-
-1. In the left menu, click **Azure role assignments**.
-
- If roles are already assigned to the selected user-assigned managed identity, you see the list of role assignments. This list includes all role assignments you have permission to read.
-
- ![Role assignments for a user-assigned managed identity](./media/shared/role-assignments-user-assigned.png)
-
-1. To change the subscription, click the **Subscription** list.
-
-1. Click **Add role assignment (Preview)**.
-
-1. Use the drop-down lists to select the set of resources that the role assignment applies to such as **Subscription**, **Resource group**, or resource.
-
- If you don't have role assignment write permissions for the selected scope, an inline message will be displayed.
-
-1. In the **Role** drop-down list, select a role such as **Virtual Machine Contributor**.
-
- ![Add role assignment pane for a user-assigned managed identity](./media/role-assignments-portal/add-role-assignment-with-scope.png)
+ After a few moments, the security principal is assigned the role at the selected scope.
-1. Click **Save** to assign the role.
+1. On the **Role assignments** tab, verify that you see the role assignment in the list.
- After a few moments, the managed identity is assigned the role at the selected scope.
+ ![Add role assignment saved](./media/role-assignments-portal/rg-role-assignments.png)
## Remove a role assignment
@@ -181,11 +112,11 @@ In Azure RBAC, to remove access from an Azure resource, you remove a role assign
1. Open **Access control (IAM)** at a scope, such as management group, subscription, resource group, or resource, where you want to remove access.
-1. Click the **Role assignments** tab to view all the role assignments for this subscription.
+1. Click the **Role assignments** tab to view all the role assignments at this scope.
1. In the list of role assignments, add a checkmark next to the security principal with the role assignment you want to remove.
- ![Role assignment selected to be removed](./media/role-assignments-portal/remove-role-assignment-select.png)
+ ![Role assignment selected to be removed](./media/role-assignments-portal/rg-role-assignments-select.png)
1. Click **Remove**.
@@ -199,7 +130,6 @@ In Azure RBAC, to remove access from an Azure resource, you remove a role assign
## Next steps -- [List Azure role assignments using the Azure portal](role-assignments-list-portal.md)-- [Tutorial: Grant a user access to Azure resources using the Azure portal](quickstart-assign-role-user-portal.md)
+- [Assign a user as an administrator of an Azure subscription](role-assignments-portal-subscription-admin.md)
+- [Add a role assignment for a managed identity](role-assignments-portal-managed-identity.md)
- [Troubleshoot Azure RBAC](troubleshooting.md)-- [Organize your resources with Azure management groups](../governance/management-groups/overview.md)
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-steps.md
@@ -26,7 +26,7 @@ You first need to determine who needs access. You can assign a role to a user, g
- Service principal - A security identity used by applications or services to access specific Azure resources. You can think of it as a *user identity* (username and password or certificate) for an application. - Managed identity - An identity in Azure Active Directory that is automatically managed by Azure. You typically use [managed identities](../active-directory/managed-identities-azure-resources/overview.md) when developing cloud applications to manage the credentials for authenticating to Azure services.
-## Step 2: Find the appropriate role
+## Step 2: Select the appropriate role
Permissions are grouped together into a *role definition*. It's typically just called a *role*. You can select from a list of several built-in roles. If the built-in roles don't meet the specific needs of your organization, you can create your own custom roles.
search https://docs.microsoft.com/en-us/azure/search/cognitive-search-quickstart-blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-quickstart-blob.md
@@ -8,17 +8,17 @@ author: HeidiSteen
ms.author: heidist ms.service: cognitive-search ms.topic: quickstart
-ms.date: 09/25/2020
+ms.date: 01/12/2021
--- # Quickstart: Create an Azure Cognitive Search cognitive skillset in the Azure portal
-A skillset is an AI-based feature that extracts information and structure from large undifferentiated text or image files, and makes the content both indexable and searchable in Azure Cognitive Search.
+A skillset is an AI-based feature that uses deep learning models to extract information and structure from large undifferentiated text or image files, making the content both indexable and searchable in Azure Cognitive Search.
In this quickstart, you'll combine services and data in the Azure cloud to create the skillset. Once everything is in place, you'll run the **Import data** wizard in the Azure portal to pull it all together. The end result is a searchable index populated with data created by AI processing that you can query in the portal ([Search explorer](search-explorer.md)). ## Prerequisites
-Before you begin, you must have the following:
+Before you begin, create the following services:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
search https://docs.microsoft.com/en-us/azure/search/includes/search-get-started-rest-postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/includes/search-get-started-rest-postman.md deleted file mode 100644
@@ -1,273 +0,0 @@
-title: include file
-description: include file
-manager: nitinme
-ms.author: heidist
-author: HeidiSteen
-ms.service: cognitive-search
-ms.topic: include
-ms.custom: include file
-ms.date: 11/17/2020
-
-The article uses the Postman desktop application. You can [download and import a Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/Quickstart) if you prefer to use predefined requests.
-
-## Prerequisites
-
-The following services and tools are required for this quickstart.
-
-+ [Postman desktop app](https://www.getpostman.com/) is used for sending requests to Azure Cognitive Search.
-
-+ [Create an Azure Cognitive Search service](../search-create-service-portal.md) or [find an existing service](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
-
-## Copy a key and URL
-
-REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
-
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
-
-1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
-
-![Get an HTTP endpoint and access key](../media/search-get-started-rest/get-url-key.png "Get an HTTP endpoint and access key")
-
-All requests require an api-key on every request sent to your service. Having a valid key establishes trust, on a per request basis, between the application sending the request and the service that handles it.
-
-## Connect to Azure Cognitive Search
-
-In this section, use your web tool of choice to set up connections to Azure Cognitive Search. Each tool persists request header information for the session, which means you only have to enter the api-key and Content-Type once.
-
-For either tool, you need to choose a command (GET, POST, PUT, and so forth), provide a URL endpoint, and for some tasks, provide JSON in the body of the request. Replace the search service name (YOUR-SEARCH-SERVICE-NAME) with a valid value. Add `$select=name` to return just the name of each index.
-
-> `https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes?api-version=2020-06-30&$select=name`
-
-Notice the HTTPS prefix, the name of the service, the name of an object (in this case, the indexes collection), and the [api-version](../search-api-versions.md). The api-version is a required, lowercase string specified as `?api-version=2020-06-30` for the current version. API versions are updated regularly. Including the api-version on each request gives you full control over which one is used.
-
-Request header composition includes two elements: `Content-Type` and the `api-key` used to authenticate to Azure Cognitive Search. Replace the admin API key (YOUR-AZURE-SEARCH-ADMIN-API-KEY) with a valid value.
-
-```http
-api-key: <YOUR-AZURE-SEARCH-ADMIN-API-KEY>
-Content-Type: application/json
-```
-
-In Postman, formulate a request that looks like the following screenshot. Choose **GET** as the command, provide the URL, and click **Send**. This command connects to Azure Cognitive Search, reads the indexes collection, and returns HTTP status code 200 on a successful connection. If your service has indexes already, the response will also include index definitions.
-
-![Postman request URL and header](../media/search-get-started-rest/postman-url.png "Postman request URL and header")
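If you prefer a script to a GUI client, the same connection test can be sketched with the Python `requests` library, using the placeholders described above:

```python
import requests

endpoint = "https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net"
headers = {
    "Content-Type": "application/json",
    "api-key": "<YOUR-AZURE-SEARCH-ADMIN-API-KEY>",
}

response = requests.get(
    f"{endpoint}/indexes",
    params={"api-version": "2020-06-30", "$select": "name"},
    headers=headers,
)
print(response.status_code)  # 200 on a successful connection
print(response.json())       # names of any existing indexes
```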
-
-## 1 - Create an index
-
-In Azure Cognitive Search, you usually create the index before loading it with data. The [Create Index REST API](/rest/api/searchservice/create-index) is used for this task.
-
-The URL is extended to include the `hotels` index name.
-
-To do this in Postman:
-
-1. Change the command to **PUT**.
-
-2. Copy in this URL `https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes/hotels-quickstart?api-version=2020-06-30`.
-
-3. Provide the index definition (copy-ready code is provided below) in the body of the request.
-
-4. Click **Send**.
-
-![Index JSON document in request body](../media/search-get-started-rest/postman-request.png "Index JSON document in request body")
-
-### Index definition
-
-The fields collection defines document structure. Each document must have these fields, and each field must have a data type. String fields are used in full text search. If you need numeric data to be searchable, you will need to cast numeric data as strings.
-
-Attributes on the field determine allowed action. The REST APIs allow many actions by default. For example, all strings are searchable, retrievable, filterable, and facetable by default. Often, you only have to set attributes when you need to turn off a behavior.
-
-```json
-{
- "name": "hotels-quickstart",
- "fields": [
- {"name": "HotelId", "type": "Edm.String", "key": true, "filterable": true},
- {"name": "HotelName", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": true, "facetable": false},
- {"name": "Description", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false, "analyzer": "en.lucene"},
- {"name": "Category", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true},
- {"name": "Tags", "type": "Collection(Edm.String)", "searchable": true, "filterable": true, "sortable": false, "facetable": true},
- {"name": "ParkingIncluded", "type": "Edm.Boolean", "filterable": true, "sortable": true, "facetable": true},
- {"name": "LastRenovationDate", "type": "Edm.DateTimeOffset", "filterable": true, "sortable": true, "facetable": true},
- {"name": "Rating", "type": "Edm.Double", "filterable": true, "sortable": true, "facetable": true},
- {"name": "Address", "type": "Edm.ComplexType",
- "fields": [
- {"name": "StreetAddress", "type": "Edm.String", "filterable": false, "sortable": false, "facetable": false, "searchable": true},
- {"name": "City", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true},
- {"name": "StateProvince", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true},
- {"name": "PostalCode", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true},
- {"name": "Country", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true}
- ]
- }
- ]
-}
-```
-
-When you submit this request, you should get an HTTP 201 response, indicating the index was created successfully. You can verify this action in the portal, but note that the portal page has refresh intervals so it could take a minute or two to catch up.
-
-> [!TIP]
-> If you get HTTP 504, verify the URL specifies HTTPS. If you see HTTP 400 or 404, check the request body to verify there were no copy-paste errors. An HTTP 403 typically indicates a problem with the api-key (either an invalid key or a syntax problem with how the api-key is specified).
-
-## 2 - Load documents
-
-Creating the index and populating the index are separate steps. In Azure Cognitive Search, the index contains all searchable data. In this scenario, the data is provided as JSON documents. The [Add, Update, or Delete Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) is used for this task.
-
-The URL is extended to include the `docs` collection and `index` operation.
-
-To do this in Postman:
-
-1. Change the command to **POST**.
-
-2. Copy in this URL `https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes/hotels-quickstart/docs/index?api-version=2020-06-30`.
-
-3. Provide the JSON documents (copy-ready code is below) in the body of the request.
-
-4. Click **Send**.
-
-![JSON documents in request body](../media/search-get-started-rest/postman-docs.png "JSON documents in request body")
-
-### JSON documents to load into the index
-
-The Request Body contains four documents to be added to the hotels index.
-
-```json
-{
- "value": [
- {
- "@search.action": "upload",
- "HotelId": "1",
- "HotelName": "Secret Point Motel",
- "Description": "The hotel is ideally located on the main commercial artery of the city in the heart of New York. A few minutes away is Time's Square and the historic centre of the city, as well as other places of interest that make New York one of America's most attractive and cosmopolitan cities.",
- "Category": "Boutique",
- "Tags": [ "pool", "air conditioning", "concierge" ],
- "ParkingIncluded": false,
- "LastRenovationDate": "1970-01-18T00:00:00Z",
- "Rating": 3.60,
- "Address":
- {
- "StreetAddress": "677 5th Ave",
- "City": "New York",
- "StateProvince": "NY",
- "PostalCode": "10022",
- "Country": "USA"
- }
- },
- {
- "@search.action": "upload",
- "HotelId": "2",
- "HotelName": "Twin Dome Motel",
- "Description": "The hotel is situated in a nineteenth century plaza, which has been expanded and renovated to the highest architectural standards to create a modern, functional and first-class hotel in which art and unique historical elements coexist with the most modern comforts.",
- "Category": "Boutique",
- "Tags": [ "pool", "free wifi", "concierge" ],
- "ParkingIncluded": false,
- "LastRenovationDate": "1979-02-18T00:00:00Z",
- "Rating": 3.60,
- "Address":
- {
- "StreetAddress": "140 University Town Center Dr",
- "City": "Sarasota",
- "StateProvince": "FL",
- "PostalCode": "34243",
- "Country": "USA"
- }
- },
- {
- "@search.action": "upload",
- "HotelId": "3",
- "HotelName": "Triple Landscape Hotel",
-      "Description": "The Hotel stands out for its gastronomic excellence under the management of William Dough, who advises on and oversees all of the Hotel's restaurant services.",
- "Category": "Resort and Spa",
- "Tags": [ "air conditioning", "bar", "continental breakfast" ],
- "ParkingIncluded": true,
- "LastRenovationDate": "2015-09-20T00:00:00Z",
- "Rating": 4.80,
- "Address":
- {
- "StreetAddress": "3393 Peachtree Rd",
- "City": "Atlanta",
- "StateProvince": "GA",
- "PostalCode": "30326",
- "Country": "USA"
- }
- },
- {
- "@search.action": "upload",
- "HotelId": "4",
- "HotelName": "Sublime Cliff Hotel",
- "Description": "Sublime Cliff Hotel is located in the heart of the historic center of Sublime in an extremely vibrant and lively area within short walking distance to the sites and landmarks of the city and is surrounded by the extraordinary beauty of churches, buildings, shops and monuments. Sublime Cliff is part of a lovingly restored 1800 palace.",
- "Category": "Boutique",
- "Tags": [ "concierge", "view", "24-hour front desk service" ],
- "ParkingIncluded": true,
- "LastRenovationDate": "1960-02-06T00:00:00Z",
- "Rating": 4.60,
- "Address":
- {
- "StreetAddress": "7400 San Pedro Ave",
- "City": "San Antonio",
- "StateProvince": "TX",
- "PostalCode": "78216",
- "Country": "USA"
- }
- }
- ]
-}
-```
-
-In a few seconds, you should see an HTTP 201 response in the session list. This indicates the documents were created successfully.
-
-If you get a 207, at least one document failed to upload. If you get a 404, check the request URL for errors: verify you changed the endpoint to include `/docs/index`.
-
-> [!Tip]
-> For selected data sources, you can choose the alternative *indexer* approach which simplifies and reduces the amount of code required for indexing. For more information, see [Indexer operations](/rest/api/searchservice/indexer-operations).
-
-## 3 - Search an index
-
-Now that an index and document set are loaded, you can issue queries against them using [Search Documents REST API](/rest/api/searchservice/search-documents).
-
-The URL is extended to include a query expression, specified using the search operator.
-
-To do this in Postman:
-
-1. Change the command to **GET**.
-
-2. Copy in this URL `https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes/hotels-quickstart/docs?search=*&$count=true&api-version=2020-06-30`.
-
-3. Click **Send**.
-
-This is an empty query (`search=*`) that returns a count of the documents in the search results. The request and response should look similar to the following Postman screenshot after you click **Send**. The status code should be 200.
-
- ![GET with search string on the URL](../media/search-get-started-rest/postman-query.png "GET with search string on the URL")
-
-Try a few other query examples to get a feel for the syntax. You can do a string search, verbatim $filter queries, limit the results set, scope the search to specific fields, and more.
-
-Swap out the current URL with the ones below, clicking **Send** each time to view the results.
-
-```
-# Query example 1 - Search on restaurant and wifi
-# Return only the HotelName, Description, and Tags fields
-https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/hotels-quickstart/docs?search=restaurant wifi&$count=true&$select=HotelName,Description,Tags&api-version=2020-06-30
-
-# Query example 2 - Apply a filter to the index to find hotels rated 4 or higher
-# Returns the HotelName and Rating. Two documents match
-https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/hotels-quickstart/docs?search=*&$filter=Rating gt 4&$select=HotelName,Rating&api-version=2020-06-30
-
-# Query example 3 - Take the top two results, and show only HotelName and Category in the results
-https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/hotels-quickstart/docs?search=boutique&$top=2&$select=HotelName,Category&api-version=2020-06-30
-
-# Query example 4 - Sort by a specific field (Address/City) in ascending order
-https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/hotels-quickstart/docs?search=pool&$orderby=Address/City asc&$select=HotelName, Address/City, Tags, Rating&api-version=2020-06-30
-```
-
-## Get index properties
-
-You can also use [Get Statistics](/rest/api/searchservice/get-index-statistics) to query for document counts and index size:
-
-```
-https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes/hotels-quickstart/stats?api-version=2020-06-30
-```
-
-Adding `/stats` to your URL returns index information. In Postman, your request should look similar to the following, and the response includes a document count and space used in bytes.
-
- ![Get index information](../media/search-get-started-rest/postman-system-query.png "Get index information")
-
-Notice that the api-version syntax differs. For this request, use `?` to append the api-version. The `?` separates the URL path from the query string, while `&` separates each `name=value` pair in the query string. For this query, api-version is the first and only item in the query string.
\ No newline at end of file
search https://docs.microsoft.com/en-us/azure/search/search-get-started-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-get-started-rest.md
@@ -19,17 +19,267 @@ This article explains how to formulate REST API requests interactively using the
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-::: zone pivot="url-test-tool-rest-postman"
+The article uses the Postman desktop application. You can [download and import a Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/Quickstart) if you prefer to use predefined requests.
-[!INCLUDE [Send requests using Postman](includes/search-get-started-rest-postman.md)]
+## Prerequisites
-::: zone-end
+The following services and tools are required for this quickstart.
-::: zone pivot="url-test-tool-rest-vscode-ext"
++ [Postman desktop app](https://www.getpostman.com/) is used for sending requests to Azure Cognitive Search.
-[!INCLUDE [Send requests using Visual Studio Code](includes/search-get-started-rest-vscode-ext.md)]
++ [Create an Azure Cognitive Search service](search-create-service-portal.md) or [find an existing service](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
-::: zone-end
+## Copy a key and URL
+
+REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
+
+1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+
+1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
+
+![Get an HTTP endpoint and access key](media/search-get-started-rest/get-url-key.png "Get an HTTP endpoint and access key")
+
+Every request sent to your service requires an api-key. Having a valid key establishes trust, on a per-request basis, between the application sending the request and the service that handles it.
+
+## Connect to Azure Cognitive Search
+
+In this section, use Postman to set up the connection to Azure Cognitive Search. Postman persists request header information for the session, which means you only have to enter the api-key and Content-Type once.
+
+In Postman, you choose a command (GET, POST, PUT, and so forth), provide a URL endpoint, and for some tasks, provide JSON in the body of the request. Replace the search service name (YOUR-SEARCH-SERVICE-NAME) with a valid value. Add `$select=name` to return just the name of each index.
+
+> `https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes?api-version=2020-06-30&$select=name`
+
+Notice the HTTPS prefix, the name of the service, the name of an object (in this case, the indexes collection), and the [api-version](search-api-versions.md). The api-version is a required, lowercase string specified as `?api-version=2020-06-30` for the current version. API versions are updated regularly. Including the api-version on each request gives you full control over which one is used.
+
+Request header composition includes two elements: `Content-Type` and the `api-key` used to authenticate to Azure Cognitive Search. Replace the admin API key (YOUR-AZURE-SEARCH-ADMIN-API-KEY) with a valid value.
+
+```http
+api-key: <YOUR-AZURE-SEARCH-ADMIN-API-KEY>
+Content-Type: application/json
+```
+
+In Postman, formulate a request that looks like the following screenshot. Choose **GET** as the command, provide the URL, and click **Send**. This command connects to Azure Cognitive Search, reads the indexes collection, and returns HTTP status code 200 on a successful connection. If your service has indexes already, the response will also include index definitions.
+
+![Postman request URL and header](media/search-get-started-rest/postman-url.png "Postman request URL and header")
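+
+If you'd rather script this first connection check instead of using Postman, the following is a minimal sketch in Python using the `requests` library (not part of this quickstart; the library and the placeholder service name and key are assumptions you replace with your own values):
+
+```python
+# Minimal sketch: list index names, equivalent to the GET request shown above.
+import requests
+
+service_name = "<YOUR-SEARCH-SERVICE-NAME>"    # replace with your service name
+api_key = "<YOUR-AZURE-SEARCH-ADMIN-API-KEY>"  # replace with your admin key
+
+url = f"https://{service_name}.search.windows.net/indexes"
+headers = {"Content-Type": "application/json", "api-key": api_key}
+params = {"api-version": "2020-06-30", "$select": "name"}
+
+response = requests.get(url, headers=headers, params=params)
+print(response.status_code)  # 200 on a successful connection
+print(response.json())       # index names, if any indexes exist
+```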
+
+## 1 - Create an index
+
+In Azure Cognitive Search, you usually create the index before loading it with data. The [Create Index REST API](/rest/api/searchservice/create-index) is used for this task.
+
+The URL is extended to include the `hotels-quickstart` index name.
+
+To do this in Postman:
+
+1. Change the command to **PUT**.
+
+2. Copy in this URL `https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes/hotels-quickstart?api-version=2020-06-30`.
+
+3. Provide the index definition (copy-ready code is provided below) in the body of the request.
+
+4. Click **Send**.
+
+![Index JSON document in request body](media/search-get-started-rest/postman-request.png "Index JSON document in request body")
+
+### Index definition
+
+The fields collection defines document structure. Each document must have these fields, and each field must have a data type. String fields are used in full text search. If you need numeric data to be searchable, you'll need to cast it as a string.
+
+Attributes on the field determine the allowed actions. The REST APIs allow many actions by default. For example, all strings are searchable, retrievable, filterable, and facetable by default. Often, you only have to set attributes when you need to turn off a behavior.
+
+```json
+{
+ "name": "hotels-quickstart",
+ "fields": [
+ {"name": "HotelId", "type": "Edm.String", "key": true, "filterable": true},
+ {"name": "HotelName", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": true, "facetable": false},
+ {"name": "Description", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false, "analyzer": "en.lucene"},
+ {"name": "Category", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true},
+ {"name": "Tags", "type": "Collection(Edm.String)", "searchable": true, "filterable": true, "sortable": false, "facetable": true},
+ {"name": "ParkingIncluded", "type": "Edm.Boolean", "filterable": true, "sortable": true, "facetable": true},
+ {"name": "LastRenovationDate", "type": "Edm.DateTimeOffset", "filterable": true, "sortable": true, "facetable": true},
+ {"name": "Rating", "type": "Edm.Double", "filterable": true, "sortable": true, "facetable": true},
+ {"name": "Address", "type": "Edm.ComplexType",
+ "fields": [
+ {"name": "StreetAddress", "type": "Edm.String", "filterable": false, "sortable": false, "facetable": false, "searchable": true},
+ {"name": "City", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true},
+ {"name": "StateProvince", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true},
+ {"name": "PostalCode", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true},
+ {"name": "Country", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "facetable": true}
+ ]
+ }
+ ]
+}
+```
+
+When you submit this request, you should get an HTTP 201 response, indicating the index was created successfully. You can verify this action in the portal, but note that the portal page has refresh intervals so it could take a minute or two to catch up.
+
+> [!TIP]
+> If you get HTTP 504, verify the URL specifies HTTPS. If you see HTTP 400 or 404, check the request body to verify there were no copy-paste errors. An HTTP 403 typically indicates a problem with the api-key (either an invalid key or a syntax problem with how the api-key is specified).
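+
+If you prefer to issue the same PUT request from a script, here's a minimal Python sketch using the `requests` library (an assumption, not part of this quickstart). It loads the index definition above from a hypothetical local file named `hotels-quickstart-index.json`:
+
+```python
+# Minimal sketch: create the hotels-quickstart index with a PUT request.
+import json
+import requests
+
+service_name = "<YOUR-SEARCH-SERVICE-NAME>"
+api_key = "<YOUR-AZURE-SEARCH-ADMIN-API-KEY>"
+
+# hotels-quickstart-index.json is a hypothetical file containing the JSON above.
+with open("hotels-quickstart-index.json") as f:
+    index_definition = json.load(f)
+
+url = f"https://{service_name}.search.windows.net/indexes/hotels-quickstart"
+headers = {"Content-Type": "application/json", "api-key": api_key}
+
+response = requests.put(url, headers=headers, params={"api-version": "2020-06-30"}, json=index_definition)
+print(response.status_code)  # expect 201 if the index was created
+```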
+
+## 2 - Load documents
+
+Creating the index and populating the index are separate steps. In Azure Cognitive Search, the index contains all searchable data. In this scenario, the data is provided as JSON documents. The [Add, Update, or Delete Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) is used for this task.
+
+The URL is extended to include the `docs` collection and `index` operation.
+
+To do this in Postman:
+
+1. Change the command to **POST**.
+
+2. Copy in this URL `https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes/hotels-quickstart/docs/index?api-version=2020-06-30`.
+
+3. Provide the JSON documents (copy-ready code is below) in the body of the request.
+
+4. Click **Send**.
+
+![JSON documents in request body](media/search-get-started-rest/postman-docs.png "JSON documents in request body")
+
+### JSON documents to load into the index
+
+The Request Body contains four documents to be added to the hotels index.
+
+```json
+{
+ "value": [
+ {
+ "@search.action": "upload",
+ "HotelId": "1",
+ "HotelName": "Secret Point Motel",
+ "Description": "The hotel is ideally located on the main commercial artery of the city in the heart of New York. A few minutes away is Time's Square and the historic centre of the city, as well as other places of interest that make New York one of America's most attractive and cosmopolitan cities.",
+ "Category": "Boutique",
+ "Tags": [ "pool", "air conditioning", "concierge" ],
+ "ParkingIncluded": false,
+ "LastRenovationDate": "1970-01-18T00:00:00Z",
+ "Rating": 3.60,
+ "Address":
+ {
+ "StreetAddress": "677 5th Ave",
+ "City": "New York",
+ "StateProvince": "NY",
+ "PostalCode": "10022",
+ "Country": "USA"
+ }
+ },
+ {
+ "@search.action": "upload",
+ "HotelId": "2",
+ "HotelName": "Twin Dome Motel",
+ "Description": "The hotel is situated in a nineteenth century plaza, which has been expanded and renovated to the highest architectural standards to create a modern, functional and first-class hotel in which art and unique historical elements coexist with the most modern comforts.",
+ "Category": "Boutique",
+ "Tags": [ "pool", "free wifi", "concierge" ],
+ "ParkingIncluded": false,
+ "LastRenovationDate": "1979-02-18T00:00:00Z",
+ "Rating": 3.60,
+ "Address":
+ {
+ "StreetAddress": "140 University Town Center Dr",
+ "City": "Sarasota",
+ "StateProvince": "FL",
+ "PostalCode": "34243",
+ "Country": "USA"
+ }
+ },
+ {
+ "@search.action": "upload",
+ "HotelId": "3",
+ "HotelName": "Triple Landscape Hotel",
+      "Description": "The Hotel stands out for its gastronomic excellence under the management of William Dough, who advises on and oversees all of the Hotel's restaurant services.",
+ "Category": "Resort and Spa",
+ "Tags": [ "air conditioning", "bar", "continental breakfast" ],
+ "ParkingIncluded": true,
+ "LastRenovationDate": "2015-09-20T00:00:00Z",
+ "Rating": 4.80,
+ "Address":
+ {
+ "StreetAddress": "3393 Peachtree Rd",
+ "City": "Atlanta",
+ "StateProvince": "GA",
+ "PostalCode": "30326",
+ "Country": "USA"
+ }
+ },
+ {
+ "@search.action": "upload",
+ "HotelId": "4",
+ "HotelName": "Sublime Cliff Hotel",
+ "Description": "Sublime Cliff Hotel is located in the heart of the historic center of Sublime in an extremely vibrant and lively area within short walking distance to the sites and landmarks of the city and is surrounded by the extraordinary beauty of churches, buildings, shops and monuments. Sublime Cliff is part of a lovingly restored 1800 palace.",
+ "Category": "Boutique",
+ "Tags": [ "concierge", "view", "24-hour front desk service" ],
+ "ParkingIncluded": true,
+ "LastRenovationDate": "1960-02-06T00:00:00Z",
+ "Rating": 4.60,
+ "Address":
+ {
+ "StreetAddress": "7400 San Pedro Ave",
+ "City": "San Antonio",
+ "StateProvince": "TX",
+ "PostalCode": "78216",
+ "Country": "USA"
+ }
+ }
+ ]
+}
+```
+
+In a few seconds, you should see an HTTP 201 response in the session list. This indicates the documents were created successfully.
+
+If you get a 207, at least one document failed to upload. If you get a 404, check the request URL for errors: verify you changed the endpoint to include `/docs/index`.
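+
+As an optional alternative to Postman, a minimal Python sketch of the same upload (again assuming the `requests` library, placeholder values, and a hypothetical `hotels-quickstart-docs.json` file holding the `"value"` payload above) might look like this:
+
+```python
+# Minimal sketch: upload the four documents with a POST to /docs/index.
+import json
+import requests
+
+service_name = "<YOUR-SEARCH-SERVICE-NAME>"
+api_key = "<YOUR-AZURE-SEARCH-ADMIN-API-KEY>"
+
+# hotels-quickstart-docs.json is a hypothetical file containing the payload above.
+with open("hotels-quickstart-docs.json") as f:
+    documents_payload = json.load(f)
+
+url = f"https://{service_name}.search.windows.net/indexes/hotels-quickstart/docs/index"
+headers = {"Content-Type": "application/json", "api-key": api_key}
+
+response = requests.post(url, headers=headers, params={"api-version": "2020-06-30"}, json=documents_payload)
+print(response.status_code)  # a 207 would mean at least one document failed
+print(response.json())       # the response body reports per-document status
+```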
+
+> [!Tip]
+> For selected data sources, you can choose the alternative *indexer* approach which simplifies and reduces the amount of code required for indexing. For more information, see [Indexer operations](/rest/api/searchservice/indexer-operations).
+
+## 3 - Search an index
+
+Now that an index and document set are loaded, you can issue queries against them using [Search Documents REST API](/rest/api/searchservice/search-documents).
+
+The URL is extended to include a query expression, specified using the search operator.
+
+To do this in Postman:
+
+1. Change the command to **GET**.
+
+2. Copy in this URL `https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes/hotels-quickstart/docs?search=*&$count=true&api-version=2020-06-30`.
+
+3. Click **Send**.
+
+This is an empty query (`search=*`) that returns a count of the documents in the search results. The request and response should look similar to the following Postman screenshot after you click **Send**. The status code should be 200.
+
+ ![GET with search string on the URL](media/search-get-started-rest/postman-query.png "GET with search string on the URL")
+
+Try a few other query examples to get a feel for the syntax. You can do a string search, verbatim $filter queries, limit the results set, scope the search to specific fields, and more.
+
+Swap out the current URL with the ones below, clicking **Send** each time to view the results.
+
+```
+# Query example 1 - Search on restaurant and wifi
+# Return only the HotelName, Description, and Tags fields
+https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/hotels-quickstart/docs?search=restaurant wifi&$count=true&$select=HotelName,Description,Tags&api-version=2020-06-30
+
+# Query example 2 - Apply a filter to the index to find hotels rated 4 or higher
+# Returns the HotelName and Rating. Two documents match
+https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/hotels-quickstart/docs?search=*&$filter=Rating gt 4&$select=HotelName,Rating&api-version=2020-06-30
+
+# Query example 3 - Take the top two results, and show only HotelName and Category in the results
+https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/hotels-quickstart/docs?search=boutique&$top=2&$select=HotelName,Category&api-version=2020-06-30
+
+# Query example 4 - Sort by a specific field (Address/City) in ascending order
+https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/hotels-quickstart/docs?search=pool&$orderby=Address/City asc&$select=HotelName, Address/City, Tags, Rating&api-version=2020-06-30
+```
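+
+To run one of these queries outside of Postman, here's a minimal Python sketch (assuming the `requests` library and the same placeholders) that issues query example 2 and prints the matching hotels:
+
+```python
+# Minimal sketch: filter for hotels rated higher than 4 (query example 2).
+import requests
+
+service_name = "<YOUR-SEARCH-SERVICE-NAME>"
+api_key = "<YOUR-AZURE-SEARCH-ADMIN-API-KEY>"
+
+url = f"https://{service_name}.search.windows.net/indexes/hotels-quickstart/docs"
+headers = {"Content-Type": "application/json", "api-key": api_key}
+params = {
+    "api-version": "2020-06-30",
+    "search": "*",
+    "$filter": "Rating gt 4",
+    "$select": "HotelName,Rating",
+}
+
+response = requests.get(url, headers=headers, params=params)
+for doc in response.json()["value"]:  # "value" holds the matching documents
+    print(doc["HotelName"], doc["Rating"])
+```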
+
+## Get index properties
+
+You can also use [Get Statistics](/rest/api/searchservice/get-index-statistics) to query for document counts and index size:
+
+```http
+https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net/indexes/hotels-quickstart/stats?api-version=2020-06-30
+```
+
+Adding `/stats` to your URL returns index information. In Postman, your request should look similar to the following, and the response includes a document count and space used in bytes.
+
+ ![Get index information](media/search-get-started-rest/postman-system-query.png "Get index information")
+
+Notice that the api-version syntax differs. For this request, use `?` to append the api-version. The `?` separates the URL path from the query string, while `&` separates each `name=value` pair in the query string. For this query, api-version is the first and only item in the query string.
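+
+The same statistics request in a minimal Python sketch (assuming the `requests` library and placeholders as before):
+
+```python
+# Minimal sketch: get document count and storage size for the index.
+import requests
+
+service_name = "<YOUR-SEARCH-SERVICE-NAME>"
+api_key = "<YOUR-AZURE-SEARCH-ADMIN-API-KEY>"
+
+url = f"https://{service_name}.search.windows.net/indexes/hotels-quickstart/stats"
+headers = {"Content-Type": "application/json", "api-key": api_key}
+
+# api-version is the first and only item in the query string.
+response = requests.get(url, headers=headers, params={"api-version": "2020-06-30"})
+print(response.json())  # includes the document count and storage used in bytes
+```
+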
## Clean up resources
search https://docs.microsoft.com/en-us/azure/search/search-get-started-vs-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-get-started-vs-code.md new file mode 100644
@@ -0,0 +1,402 @@
+---
+title: 'Quickstart: Get started with Visual Studio Code'
+titleSuffix: Azure Cognitive Search
+description: Learn how to install and use the Visual Studio Code extension for Azure Cognitive Search.
+
+author: dereklegenzoff
+manager: luisca
+ms.author: delegenz
+ms.service: cognitive-search
+ms.topic: quickstart
+ms.date: 01/12/2021
+---
+
+# Get started with Visual Studio Code and Azure Cognitive Search
+
+This article explains how to formulate REST API requests interactively using the [Azure Cognitive Search REST APIs](/rest/api/searchservice) and an API client for sending and receiving requests. With an API client and these instructions, you can send requests and view responses before writing any code.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+The article uses a Visual Studio Code extension (preview) for the Azure Cognitive Search REST APIs.
+
+> [!IMPORTANT]
+> This feature is currently in public preview. Preview functionality is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+The following services and tools are required for this quickstart.
+
++ [Visual Studio Code](https://code.visualstudio.com/download)
+
++ [Azure Cognitive Search for Visual Studio Code (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
+
++ [Create an Azure Cognitive Search service](search-create-service-portal.md) or [find an existing service](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
+
+## Copy a key and URL
+
+REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
+
+1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+
+1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
+
+![Get an HTTP endpoint and access key](media/search-get-started-rest/get-url-key.png "Get an HTTP endpoint and access key")
+
+Every request sent to your service requires an api-key. Having a valid key establishes trust, on a per-request basis, between the application sending the request and the service that handles it.
+
+## Install the extension
+
+Start by opening [VS Code](https://code.visualstudio.com). Select the **Extensions** tab on the activity bar, then search for *Azure Cognitive Search*. Find the extension in the search results, and select **Install**.
+
+![VS Code extension pane](media/search-get-started-rest/download-extension.png "Downloading the VS Code extension")
+
+Alternatively, you can install the [Azure Cognitive Search extension](https://aka.ms/vscode-search) from the VS Code marketplace in a web browser.
+
+You should see a new Azure tab appear on the activity bar if you didn't already have it.
+
+![VS Code Azure pane](media/search-get-started-rest/azure-pane.png "Azure pane in VS Code")
+
+## Connect to your subscription
+
+Select **Sign in to Azure...** and log into your Azure Account.
+
+You should see your subscriptions appear. Select the subscription to see a list of the search services in the subscription.
+
+![VS Code Azure subscriptions](media/search-get-started-rest/subscriptions.png "Subscriptions in VS Code")
+
+To limit the subscriptions displayed, open the command palette (Ctrl+Shift+P or Cmd+Shift+P) and search for *Azure* or *Select Subscriptions*. There are also commands available for signing in and out of your Azure account.
+
+When you expand the search service, you will see tree items for each of the Cognitive Search resources: indexes, data sources, indexers, skillsets, and synonym maps.
+
+![VS Code Azure search tree](media/search-get-started-rest/search-tree.png "VS Code Azure search tree")
+
+These tree items can be expanded to show any resources you have in your search service.
+
+## 1 - Create an index
+
+To get started with Azure Cognitive Search, you first need to create a search index. This is done using the [Create Index REST API](/rest/api/searchservice/create-index).
+
+With the VS Code extension, you only need to worry about the body of the request. For this quickstart, we provide a sample index definition and corresponding documents.
+
+### Index definition
+
+The index definition below is a sample schema for fictitious hotels.
+
+The `fields` collection defines the structure of documents in the search index. Each field has a data type and a number of additional attributes that determine how the field can be used.
+
+```json
+{
+ "name": "hotels-quickstart",
+ "fields": [
+ {
+ "name": "HotelId",
+ "type": "Edm.String",
+ "key": true,
+ "filterable": true
+ },
+ {
+ "name": "HotelName",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "sortable": true,
+ "facetable": false
+ },
+ {
+ "name": "Description",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "sortable": false,
+ "facetable": false,
+ "analyzer": "en.lucene"
+ },
+ {
+ "name": "Description_fr",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "sortable": false,
+ "facetable": false,
+ "analyzer": "fr.lucene"
+ },
+ {
+ "name": "Category",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": true,
+ "sortable": true,
+ "facetable": true
+ },
+ {
+ "name": "Tags",
+ "type": "Collection(Edm.String)",
+ "searchable": true,
+ "filterable": true,
+ "sortable": false,
+ "facetable": true
+ },
+ {
+ "name": "ParkingIncluded",
+ "type": "Edm.Boolean",
+ "filterable": true,
+ "sortable": true,
+ "facetable": true
+ },
+ {
+ "name": "LastRenovationDate",
+ "type": "Edm.DateTimeOffset",
+ "filterable": true,
+ "sortable": true,
+ "facetable": true
+ },
+ {
+ "name": "Rating",
+ "type": "Edm.Double",
+ "filterable": true,
+ "sortable": true,
+ "facetable": true
+ },
+ {
+ "name": "Address",
+ "type": "Edm.ComplexType",
+ "fields": [
+ {
+ "name": "StreetAddress",
+ "type": "Edm.String",
+ "filterable": false,
+ "sortable": false,
+ "facetable": false,
+ "searchable": true
+ },
+ {
+ "name": "City",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": true,
+ "sortable": true,
+ "facetable": true
+ },
+ {
+ "name": "StateProvince",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": true,
+ "sortable": true,
+ "facetable": true
+ },
+ {
+ "name": "PostalCode",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": true,
+ "sortable": true,
+ "facetable": true
+ },
+ {
+ "name": "Country",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": true,
+ "sortable": true,
+ "facetable": true
+ }
+ ]
+ }
+ ],
+ "suggesters": [
+ {
+ "name": "sg",
+ "searchMode": "analyzingInfixMatching",
+ "sourceFields": [
+ "HotelName"
+ ]
+ }
+ ]
+}
+```
+
+To create a new index, right-click on **Indexes** and then select **Create new index**. An editor with a name similar to `indexes-new-28c972f661.azsindex` will pop up.
+
+Paste the index definition from above into the window. Save the file and select **Upload** when prompted if you want to update the index. This will create the index and it will be available in the tree view.
+
+![Gif of creating an index](media/search-get-started-rest/create-index.gif)
+
+If there is a problem with your index definition, you should see an error message pop up explaining the error.
+
+![Create index error message](media/search-get-started-rest/create-index-error.png)
+
+If this happens, fix the issue and resave the file.
+
+## 2 - Load documents
+
+Creating the index and populating the index are separate steps. In Azure Cognitive Search, the index contains all searchable data. In this scenario, the data is provided as JSON documents. The [Add, Update, or Delete Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) is used for this task.
+
+To add new documents in VS Code:
+
+1. Expand the `hotels-quickstart` index you created. Right-click on **Documents** and select **Create new document**.
+
+ ![Create a document](media/search-get-started-rest/create-document.png)
+
+2. This will open up a JSON editor that has inferred the schema of your index.
+
+ ![Create a document json](media/search-get-started-rest/create-document-2.png)
+
+3. Paste in the JSON below and then save the file. A prompt will open up asking you to confirm your changes. Select **Upload** to save the changes.
+
+ ```json
+ {
+ "HotelId": "1",
+ "HotelName": "Secret Point Motel",
+ "Description": "The hotel is ideally located on the main commercial artery of the city in the heart of New York. A few minutes away is Time's Square and the historic centre of the city, as well as other places of interest that make New York one of America's most attractive and cosmopolitan cities.",
+ "Category": "Boutique",
+ "Tags": [ "pool", "air conditioning", "concierge" ],
+ "ParkingIncluded": false,
+ "LastRenovationDate": "1970-01-18T00:00:00Z",
+ "Rating": 3.60,
+ "Address": {
+ "StreetAddress": "677 5th Ave",
+ "City": "New York",
+ "StateProvince": "NY",
+ "PostalCode": "10022",
+ "Country": "USA"
+ }
+ }
+ ```
+
+4. Repeat this process for the three remaining documents:
+
+ Document 2:
+ ```json
+ {
+ "HotelId": "2",
+ "HotelName": "Twin Dome Motel",
+ "Description": "The hotel is situated in a nineteenth century plaza, which has been expanded and renovated to the highest architectural standards to create a modern, functional and first-class hotel in which art and unique historical elements coexist with the most modern comforts.",
+ "Category": "Boutique",
+ "Tags": [ "pool", "free wifi", "concierge" ],
+ "ParkingIncluded": false,
+ "LastRenovationDate": "1979-02-18T00:00:00Z",
+ "Rating": 3.60,
+ "Address": {
+ "StreetAddress": "140 University Town Center Dr",
+ "City": "Sarasota",
+ "StateProvince": "FL",
+ "PostalCode": "34243",
+ "Country": "USA"
+ }
+ }
+ ```
+
+ Document 3:
+ ```json
+ {
+ "HotelId": "3",
+ "HotelName": "Triple Landscape Hotel",
+    "Description": "The Hotel stands out for its gastronomic excellence under the management of William Dough, who advises on and oversees all of the Hotel's restaurant services.",
+ "Category": "Resort and Spa",
+ "Tags": [ "air conditioning", "bar", "continental breakfast" ],
+ "ParkingIncluded": true,
+ "LastRenovationDate": "2015-09-20T00:00:00Z",
+ "Rating": 4.80,
+ "Address": {
+ "StreetAddress": "3393 Peachtree Rd",
+ "City": "Atlanta",
+ "StateProvince": "GA",
+ "PostalCode": "30326",
+ "Country": "USA"
+ }
+ }
+ ```
+
+ Document 4:
+ ```json
+ {
+ "HotelId": "4",
+ "HotelName": "Sublime Cliff Hotel",
+ "Description": "Sublime Cliff Hotel is located in the heart of the historic center of Sublime in an extremely vibrant and lively area within short walking distance to the sites and landmarks of the city and is surrounded by the extraordinary beauty of churches, buildings, shops and monuments. Sublime Cliff is part of a lovingly restored 1800 palace.",
+ "Category": "Boutique",
+ "Tags": [ "concierge", "view", "24-hour front desk service" ],
+ "ParkingIncluded": true,
+ "LastRenovationDate": "1960-02-06T00:00:00Z",
+ "Rating": 4.60,
+ "Address": {
+ "StreetAddress": "7400 San Pedro Ave",
+ "City": "San Antonio",
+ "StateProvince": "TX",
+ "PostalCode": "78216",
+ "Country": "USA"
+ }
+ }
+ ```
+
+At this point, you should see all four documents available in the documents section.
+
+![status after uploading all documents](media/search-get-started-rest/create-document-finish.png)
+
+## 3 - Search an index
+
+Now that the index and document set are loaded, you can issue queries against them using [Search Documents REST API](/rest/api/searchservice/search-documents).
+
+To do this in VS Code:
+
+1. Right-click the index you want to search and select **Search index**. This will open an editor with a name similar to `sandbox-b946dcda48.azs`.
+
+ ![search view of extension](media/search-get-started-rest/search-vscode.png)
+
+2. A simple query is autopopulated. Press **Ctrl+Alt+R** or **Cmd+Alt+R** to submit the query. You'll see the results pop up in a window to the left.
+
+ ![search results in extension](media/search-get-started-rest/search-results.png)
+
+### Example queries
+
+Try a few other query examples to get a feel for the syntax. There are four additional queries below for you to try. You can add multiple queries to the same editor. When you press **Ctrl+Alt+R** or **Cmd+Alt+R**, the line your cursor is on determines which query is submitted.
+
+![queries and results side-by-side](media/search-get-started-rest/all-searches.png)
+
+In the first query, we search for `boutique` and `select` only certain fields. It's a best practice to `select` only the fields you need, because pulling back unnecessary data can add latency to your queries. The query also sets `$count=true` to return the total number of results along with the search results.
+
+```
+// Query example 1 - Search `boutique` with select and return count
+search=boutique&$count=true&$select=HotelId,HotelName,Rating,Category
+```
+
+In the next query, we specify the search term `wifi` and also include a filter to only return results where the state is equal to `'FL'`. Results are also ordered by the Hotel's `Rating`.
+
+```
+// Query example 2 - Search with filter, orderBy, select, and count
+search=wifi&$filter=Address/StateProvince eq 'FL'&$select=HotelId,HotelName,Rating&$orderby=Rating desc
+```
+
+Next, the search is limited to a single searchable field using the `searchFields` parameter. This is a great option to make your query more efficient if you know you're only interested in matches in certain fields.
+
+```
+// Query example 3 - Limit searchFields
+search=sublime cliff&$select=HotelId,HotelName,Rating&searchFields=HotelName
+```
+
+Another common option to include in a query is `facets`. Facets allow you to build out filters on your UI to make it easy for users to know what values they can filter down to.
+
+```
+// Query example 4 - Facet on Category to return facet counts along with the results
+search=*&$select=HotelId,HotelName,Rating&searchFields=HotelName&facet=Category
+```
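+If you're curious what the facet counts look like in the raw REST response, here's a minimal Python sketch as an aside (it assumes the `requests` library, your service URL, and an admin key; the `@search.facets` field name reflects the Search Documents response shape, so verify against the REST reference):
+
+```python
+# Minimal sketch: send the facet query from example 4 and print Category counts.
+import requests
+
+service_name = "<YOUR-SEARCH-SERVICE-NAME>"
+api_key = "<YOUR-AZURE-SEARCH-ADMIN-API-KEY>"
+
+url = f"https://{service_name}.search.windows.net/indexes/hotels-quickstart/docs"
+headers = {"Content-Type": "application/json", "api-key": api_key}
+params = {
+    "api-version": "2020-06-30",
+    "search": "*",
+    "$select": "HotelId,HotelName,Rating",
+    "searchFields": "HotelName",
+    "facet": "Category",
+}
+
+response = requests.get(url, headers=headers, params=params)
+facets = response.json().get("@search.facets", {})  # facet buckets, keyed by field
+for bucket in facets.get("Category", []):
+    print(bucket["value"], bucket["count"])
+```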
+
+## Open index in the portal
+
+If you'd like to view your search service in the portal, right-click the name of the search service and select **Open in Portal**. This will take you to the search service in the Azure portal.
+
+## Clean up resources
+
+When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
+
+You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
+
+If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+
+## Next steps
+
+Now that you know how to perform core tasks, you can move forward with additional REST API calls for more advanced features, such as indexers or [setting up an enrichment pipeline](cognitive-search-tutorial-blob.md) that adds content transformations to indexing. For your next step, we recommend the following link:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md)
\ No newline at end of file
search https://docs.microsoft.com/en-us/azure/search/search-howto-index-cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-cosmosdb.md
@@ -134,7 +134,7 @@ Earlier in this article it is mentioned that [Azure Cosmos DB indexing](../cosmo
### 1 - Assemble inputs for the request
-For each request, you must provide the service name and admin key for Azure Cognitive Search (in the POST header), and the storage account name and key for blob storage. You can use [Postman or Visual Studio Code](search-get-started-rest.md) to send HTTP requests to Azure Cognitive Search.
+For each request, you must provide the service name and admin key for Azure Cognitive Search (in the POST header), and the storage account name and key for blob storage. You can use [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md) to send HTTP requests to Azure Cognitive Search.
Copy the following four values into Notepad so that you can paste them into a request:
search https://docs.microsoft.com/en-us/azure/search/search-howto-index-json-blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-json-blobs.md
@@ -104,7 +104,7 @@ You can use the REST API to index JSON blobs, following a three-part workflow co
You can review [REST example code](#rest-example) at the end of this section that shows how to create all three objects. This section also contains details about [JSON parsing modes](#parsing-modes), [single blobs](#parsing-single-blobs), [JSON arrays](#parsing-arrays), and [nested arrays](#nested-json-arrays).
-For code-based JSON indexing, use [Postman or Visual Studio Code](search-get-started-rest.md) and the REST API to create these objects:
+For code-based JSON indexing, use [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md) and the REST API to create these objects:
+ [index](/rest/api/searchservice/create-index) + [data source](/rest/api/searchservice/create-data-source)
search https://docs.microsoft.com/en-us/azure/search/search-howto-reindex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-reindex.md
@@ -87,7 +87,7 @@ When you load the index, each field's inverted index is populated with all of th
You can begin querying an index as soon as the first document is loaded. If you know a document's ID, the [Lookup Document REST API](/rest/api/searchservice/lookup-document) returns the specific document. For broader testing, you should wait until the index is fully loaded, and then use queries to verify the context you expect to see.
-You can use [Search Explorer](search-explorer.md) or a Web testing tool like [Postman or Visual Studio Code](search-get-started-rest.md) to check for updated content.
+You can use [Search Explorer](search-explorer.md) or a Web testing tool like [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md) to check for updated content.
If you added or renamed a field, use [$select](search-query-odata-select.md) to return that field: `search=*&$select=document-id,my-new-field,some-old-field&$count=true`
search https://docs.microsoft.com/en-us/azure/search/search-query-create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-query-create.md
@@ -22,7 +22,7 @@ You'll need a tool or API to create a query. Any of the following suggestions ar
| Methodology | Description | |-------------|-------------| | Portal| [Search explorer (portal)](search-explorer.md) is a query interface in the Azure portal that runs queries against indexes on the underlying search service. The portal makes REST API calls behind the scenes to the [Search Documents](/rest/api/searchservice/search-documents) operation, but cannot invoke Autocomplete, Suggestions, or Document Lookup.<br/><br/> You can select any index and REST API version, including preview. A query string can use simple or full syntax, with support for all query parameters (filter, select, searchFields, and so on). In the portal, when you open an index, you can work with Search Explorer alongside the index JSON definition in side-by-side tabs for easy access to field attributes. Check what fields are searchable, sortable, filterable, and facetable while testing queries. <br/>Recommended for early investigation, testing, and validation. [Learn more.](search-explorer.md) |
-| Web testing tools| [Postman or Visual Studio Code](search-get-started-rest.md) are strong choices for formulating a [Search Documents](/rest/api/searchservice/search-documents) request, and any other request, in REST. The REST APIs support every possible programmatic operation in Azure Cognitive Search, and when you use a tool like Postman or Visual Studio Code, you can issue requests interactively to understand how the feature works before investing in code. A web testing tool is a good choice if you don't have contributor or administrative rights in the Azure portal. As long as you have a search URL and a query API key, you can use the tools to run queries against an existing index. |
+| Web testing tools| [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md) are strong choices for formulating a [Search Documents](/rest/api/searchservice/search-documents) request, and any other request, in REST. The REST APIs support every possible programmatic operation in Azure Cognitive Search, and when you use a tool like Postman or Visual Studio Code, you can issue requests interactively to understand how the feature works before investing in code. A web testing tool is a good choice if you don't have contributor or administrative rights in the Azure portal. As long as you have a search URL and a query API key, you can use the tools to run queries against an existing index. |
| Azure SDK | When you are ready to write code, you can use the Azure.Search.Document client libraries in the Azure SDKs for .NET, Python, JavaScript, or Java. Each SDK is on its own release schedule, but you can create and query indexes in all of them. <br/><br/>[SearchClient (.NET)](/dotnet/api/azure.search.documents.searchclient) can be used to query a search index in C#. [Learn more.](search-howto-dotnet-sdk.md)<br/><br/>[SearchClient (Python)](/dotnet/api/azure.search.documents.searchclient) can be used to query a search index in Python. [Learn more.](search-get-started-python.md)<br/><br/>[SearchClient (JavaScript)](/dotnet/api/azure.search.documents.searchclient) can be used to query a search index in JavaScript. [Learn more.](search-get-started-javascript.md) | ## Set up a search client
search https://docs.microsoft.com/en-us/azure/search/search-security-manage-encryption-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-manage-encryption-keys.md
@@ -48,7 +48,7 @@ The following tools and services are used in this scenario.
You should have a search application that can create the encrypted object. Into this code, you'll reference a key vault key and Active Directory registration information. This code could be a working app, or prototype code such as the [C# code sample DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK). > [!TIP]
-> You can use [Postman or Visual Studio Code](search-get-started-rest.md), or [Azure PowerShell](./search-get-started-powershell.md), to call REST APIs that create indexes and synonym maps that include an encryption key parameter. There is no portal support for adding a key to indexes or synonym maps at this time.
+> You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or [Azure PowerShell](./search-get-started-powershell.md), to call REST APIs that create indexes and synonym maps that include an encryption key parameter. There is no portal support for adding a key to indexes or synonym maps at this time.
## 1 - Enable key recovery
search https://docs.microsoft.com/en-us/azure/search/search-what-is-an-index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-what-is-an-index.md
@@ -65,7 +65,7 @@ Arriving at a final index design is an iterative process. It's common to start w
During development, plan on frequent rebuilds. Because physical structures are created in the service, [dropping and recreating indexes](search-howto-reindex.md) is necessary for most modifications to an existing field definition. You might consider working with a subset of your data to make rebuilds go faster. > [!Tip]
-> Code, rather than a portal approach, is recommended for working on index design and data import simultaneously. As an alternative, tools like [Postman and Visual Studio Code](search-get-started-rest.md) are helpful for proof-of-concept testing when development projects are still in early phases. You can make incremental changes to an index definition in a request body, and then send the request to your service to recreate an index using an updated schema.
+> Code, rather than a portal approach, is recommended for working on index design and data import simultaneously. As an alternative, tools like [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md) are helpful for proof-of-concept testing when development projects are still in early phases. You can make incremental changes to an index definition in a request body, and then send the request to your service to recreate an index using an updated schema.
## Index schema
security-center https://docs.microsoft.com/en-us/azure/security-center/alerts-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/alerts-reference.md
@@ -222,7 +222,7 @@ At the bottom of this page, there's a table describing the Azure Security Center
| **Fileless Attack Behavior Detected**<br>(AppServices_FilelessAttackBehaviorDetection) | The memory of the process specified below contains behaviors commonly used by fileless attacks.<br>Specific behaviors include: {list of observed behaviors} <br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium | | **Fileless Attack Technique Detected**<br>(AppServices_FilelessAttackTechniqueDetection) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software.<br>Specific behaviors include: {list of observed behaviors} <br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High | | **Fileless Attack Toolkit Detected**<br>(AppServices_FilelessAttackToolkitDetection) | The memory of the process specified below contains a fileless attack toolkit: {ToolKitName}. Fileless attack toolkits typically do not have a presence on the filesystem, making detection by traditional anti-virus software difficult.<br>Specific behaviors include: {list of observed behaviors} <br>(Applies to: App Service on Windows and App Service on Linux) | DefenseEvasion, Execution | High |
-| **NMap scanning detected**<br>(AppServices_Nmap) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with NMAP. Attackers often use this tool for probing the web application to find vulnerabilities. <br>(Applies to: App Service on Windows) | PreAttack | Medium |
+| **NMap scanning detected**<br>(AppServices_Nmap) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with NMAP. Attackers often use this tool for probing the web application to find vulnerabilities. <br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
| **Phishing content hosted on Azure Webapps**<br>(AppServices_PhishingContent) | URL used for phishing attack found on the Azure AppServices website. This URL was part of a phishing attack sent to Microsoft 365 customers. The content typically lures visitors into entering their corporate credentials or financial information into a legitimate looking website. <br>(Applies to: App Service on Windows and App Service on Linux) | Collection | High | | **PHP file in upload folder**<br>(AppServices_PhpInUploadFolder) | Azure App Service activity log indicates an access to a suspicious PHP page located in the upload folder.<br>This type of folder does not usually contain PHP files. The existence of this type of file might indicate an exploitation taking advantage of arbitrary file upload vulnerabilities. <br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium | | **Possible Cryptocoinminer download detected**<br>(AppServices_CryptoCoinMinerDownload) | Analysis of host data has detected the download of a file normally associated with digital currency mining <br>(Applies to: App Service on Linux) | DefenseEvasion, CommandAndControl, Exploitation | Medium |
@@ -243,7 +243,7 @@ At the bottom of this page, there's a table describing the Azure Security Center
| **Vulnerability scanner detected**<br>(AppServices_DrupalScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting a content management system (CMS). <br>(Applies to: App Service on Windows) | PreAttack | Medium | | **Vulnerability scanner detected**<br>(AppServices_JoomlaScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting Joomla applications. <br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium | | **Vulnerability scanner detected**<br>(AppServices_WpScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting WordPress applications. <br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
-| **Web fingerprinting detected**<br>(AppServices_WebFingerprinting) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with a tool called Blind Elephant. The tool fingerprint web servers and tries to detect the installed applications and version.<br>Attackers often use this tool for probing the web application to find vulnerabilities. <br>(Applies to: App Service on Windows) | PreAttack | Medium |
+| **Web fingerprinting detected**<br>(AppServices_WebFingerprinting) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with a tool called Blind Elephant. The tool fingerprint web servers and tries to detect the installed applications and version.<br>Attackers often use this tool for probing the web application to find vulnerabilities. <br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
| **Website is tagged as malicious in threat intelligence feed**<br>(AppServices_SmartScreen) | Your website as described below is marked as a malicious site by Windows SmartScreen. If you think this is a false positive, contact Windows SmartScreen via report feedback link provided. <br>(Applies to: App Service on Windows and App Service on Linux) | Collection | Medium | | | |
security-center https://docs.microsoft.com/en-us/azure/security-center/container-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/container-security.md
@@ -44,7 +44,7 @@ The following screenshot shows the asset inventory page and the various containe
To monitor images in your Azure Resource Manager-based Azure container registries, enable [Azure Defender for container registries](defender-for-container-registries-introduction.md). Security Center scans any images pulled within the last 30 days, pushed to your registry, or imported. The integrated scanner is provided by the industry-leading vulnerability scanning vendor, Qualys.
-When issues are found – by Qualys or Security Center – you'll get notified in the [Azure Defender dashboard](azure-defender-dashboard.md). For every vulnerability, Security Center provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue. For details of Security Center's recommendations for containers, see the [reference list of recommendations](recommendations-reference.md#recs-containers).
+When issues are found – by Qualys or Security Center – you'll get notified in the [Azure Defender dashboard](azure-defender-dashboard.md). For every vulnerability, Security Center provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue. For details of Security Center's recommendations for containers, see the [reference list of recommendations](recommendations-reference.md#recs-compute).
Security Center filters and classifies findings from the scanner. When an image is healthy, Security Center marks it as such. Security Center generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Security Center reduces the potential for unwanted informational alerts.
@@ -56,7 +56,7 @@ Azure Security Center identifies unmanaged containers hosted on IaaS Linux VMs,
Security Center includes the entire ruleset of the CIS Docker Benchmark and alerts you if your containers don't satisfy any of the controls. When it finds misconfigurations, Security Center generates security recommendations. Use Security Center's **recommendations page** to view recommendations and remediate issues. The CIS benchmark checks don't run on AKS-managed instances or Databricks-managed VMs.
-For details of the relevant Security Center recommendations that might appear for this feature, see the [container section](recommendations-reference.md#recs-containers) of the recommendations reference table.
+For details of the relevant Security Center recommendations that might appear for this feature, see the [compute section](recommendations-reference.md#recs-compute) of the recommendations reference table.
When you're exploring the security issues of a VM, Security Center provides additional information about the containers on the machine. Such information includes the Docker version and the number of images running on the host.
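The CIS Docker Benchmark checks described above are largely daemon- and container-configuration checks. As an illustration (not the exact rule set Security Center evaluates), the following sketch writes a `daemon.json` with a few settings commonly aligned with CIS guidance; treat the chosen keys and values as examples.

```python
# Illustrative sketch: write a Docker daemon.json with settings commonly aligned
# with CIS Docker Benchmark guidance. This is not the exact rule set Security
# Center evaluates; run with sufficient privileges and restart the daemon after.
import json
from pathlib import Path

daemon_config = {
    "icc": False,                 # disable inter-container communication on the default bridge
    "no-new-privileges": True,    # stop containers from acquiring new privileges
    "live-restore": True,         # keep containers running if the daemon becomes unavailable
    "userland-proxy": False,      # use iptables/hairpin NAT instead of the userland proxy
    "log-driver": "json-file",
    "log-opts": {"max-size": "10m", "max-file": "3"},
}

Path("/etc/docker/daemon.json").write_text(json.dumps(daemon_config, indent=2))
```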
@@ -70,7 +70,7 @@ AKS provides security controls and visibility into the security posture of your
* Constantly monitor the configuration of your AKS clusters * Generate security recommendations aligned with industry standards
-For details of the relevant Security Center recommendations that might appear for this feature, see the [container section](recommendations-reference.md#recs-containers) of the recommendations reference table.
+For details of the relevant Security Center recommendations that might appear for this feature, see the [compute section](recommendations-reference.md#recs-compute) of the recommendations reference table.
### Workload protection best-practices using Kubernetes admission control
security-center https://docs.microsoft.com/en-us/azure/security-center/defender-for-container-registries-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-container-registries-introduction.md
@@ -24,7 +24,7 @@ Security Center identifies Azure Resource Manager based ACR registries in your s
**Azure Defender for container registries** includes a vulnerability scanner to scan the images in your Azure Resource Manager-based Azure Container Registry registries and provide deeper visibility into your images' vulnerabilities. The integrated scanner is powered by Qualys, the industry-leading vulnerability scanning vendor.
-When issues are found – by Qualys or Security Center – you'll get notified in the Security Center dashboard. For every vulnerability, Security Center provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue. For details of Security Center's recommendations for containers, see the [reference list of recommendations](recommendations-reference.md#recs-containers).
+When issues are found – by Qualys or Security Center – you'll get notified in the Security Center dashboard. For every vulnerability, Security Center provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue. For details of Security Center's recommendations for containers, see the [reference list of recommendations](recommendations-reference.md#recs-compute).
Security Center filters and classifies findings from the scanner. When an image is healthy, Security Center marks it as such. Security Center generates security recommendations only for images that have issues to be resolved. Security Center provides details of each reported vulnerability and a severity classification. Additionally, it gives guidance for how to remediate the specific vulnerabilities found on each image.
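The per-image findings described here can also be queried at scale with Azure Resource Graph. The sketch below assumes the `securityresources` table and the `microsoft.security/assessments/subassessments` type used for registry scan findings; validate the query and the shape of the returned rows against your environment before relying on it.

```python
# Sketch: query container registry vulnerability findings (sub-assessments) with
# Azure Resource Graph. The KQL table/type names are assumptions to validate
# against the current securityresources schema.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder

query = """
securityresources
| where type == "microsoft.security/assessments/subassessments"
| where id contains "Microsoft.ContainerRegistry/registries"
| project id, properties
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(subscriptions=[SUBSCRIPTION_ID], query=query))
print(result.total_records)   # result.data holds the rows; its exact shape depends on the result format
```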
security-center https://docs.microsoft.com/en-us/azure/security-center/kubernetes-workload-protections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/kubernetes-workload-protections.md
@@ -23,7 +23,7 @@ Security Center offers more container security features if you enable Azure Defe
- Get real-time threat detection alerts for your K8s clusters [Azure Defender for Kubernetes](defender-for-kubernetes-introduction.md) > [!TIP]
-> For a list of *all* security recommendations that might appear for Kubernetes clusters and nodes, see the [container section](recommendations-reference.md#recs-containers) of the recommendations reference table.
+> For a list of *all* security recommendations that might appear for Kubernetes clusters and nodes, see the [compute section](recommendations-reference.md#recs-compute) of the recommendations reference table.
@@ -247,6 +247,6 @@ In this article, you learned how to configure Kubernetes workload protection.
For other related material, see the following pages: -- [Security Center recommendations for containers](recommendations-reference.md#recs-containers)
+- [Security Center recommendations for compute](recommendations-reference.md#recs-compute)
- [Alerts for AKS cluster level](alerts-reference.md#alerts-akscluster) - [Alerts for Container host level](alerts-reference.md#alerts-containerhost)\ No newline at end of file
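The workload protections above depend on the Azure Policy add-on (Gatekeeper) running in the cluster. A quick, hedged check with the Kubernetes Python client is sketched below; the `kube-system` and `gatekeeper-system` namespaces and the pod-name patterns are assumptions about how the add-on is typically deployed.

```python
# Quick check (sketch): look for Azure Policy add-on / Gatekeeper pods in an AKS
# cluster using the Kubernetes Python client. Namespace names and pod-name
# patterns below are assumptions about the typical add-on deployment.
from kubernetes import client, config

config.load_kube_config()   # uses the current kubeconfig context
v1 = client.CoreV1Api()

for namespace in ("kube-system", "gatekeeper-system"):
    pods = v1.list_namespaced_pod(namespace)
    matches = [p.metadata.name for p in pods.items
               if "azure-policy" in p.metadata.name or "gatekeeper" in p.metadata.name]
    print(namespace, "->", matches or "no policy/gatekeeper pods found")
```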
security-center https://docs.microsoft.com/en-us/azure/security-center/recommendations-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/recommendations-reference.md
@@ -1,246 +1,67 @@
--- title: Reference table for all Azure Security Center recommendations description: This article lists Azure Security Center's security recommendations that help you protect your resources.
-services: security-center
-documentationcenter: na
author: memildin
-manager: rkarlin
ms.service: security-center
-ms.devlang: na
-ms.topic: overview
-ms.tgt_pltfrm: na
-ms.workload: na
-ms.date: 12/21/2020
+ms.topic: reference
+ms.date: 01/12/2021
ms.author: memildin-
+ms.custom: generated
---- # Security recommendations - a reference guide
-This article lists the recommendations you might see in Azure Security Center. The recommendations shown in your environment depend on the resources you're protecting and your customized configuration.
-
-Security Center's recommendations are based on best practices. Some are aligned with the **Azure Security Benchmark**, the Microsoft-authored, Azure-specific guidelines for security and compliance best practices based on common compliance frameworks. [Learn more about Azure Security Benchmark](../security/benchmarks/introduction.md).
-
-To learn about how to respond to these recommendations, see [Remediate recommendations in Azure Security Center](security-center-remediate-recommendations.md).
-
-Your Secure Score is based on the number of Security Center recommendations you've completed. To decide which recommendations to resolve first, look at the severity of each one and its potential impact on your Secure Score.
-
->[!TIP]
-> If a recommendation's description says "No related policy", it's usually because that recommendation is dependent on a different recommendation and *its* policy. For example, the recommendation "Endpoint protection health failures should be remediated...", relies on the recommendation that checks whether an endpoint protection solution is even *installed* ("Endpoint protection solution should be installed..."). The underlying recommendation *does* have a policy. Limiting the policies to only the foundational recommendation simplifies policy management.
-
-## <a name="recs-network"></a>Network recommendations
-
-| Recommendation | Description & related policy | Severity | Quick fix enabled?([Learn more](security-center-remediate-recommendations.md#quick-fix-remediation)) | Resource type |
-|----------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|------------------------------------------------------------------------------------------------------|-----------------|
-| **Adaptive network hardening recommendations should be applied on internet facing virtual machines** | Azure Security Center has analyzed the internet traffic communication patterns of the virtual machines listed below, and determined that the existing rules in the NSGs associated to them are overly permissive, resulting in an increased potential attack surface.<br>This typically occurs when this IP address doesn't communicate regularly with this resource. Alternatively, the IP address has been flagged as malicious by Security Center's threat intelligence sources.<br>(Related policy: Adaptive network hardening recommendations should be applied on internet facing virtual machines) | High | N | Virtual machine |
-| **All Internet traffic should be routed via your deployed Azure Firewall** | Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall.<br>(Related policy: [Preview] All Internet traffic should be routed via your deployed Azure Firewall) | High | N | Subnet |
-| **All network ports should be restricted on network security groups associated to your virtual machine** | Harden the network security groups of your Internet-facing VMs by restricting the access of your existing allow rules.<br>This recommendation is triggered when any port is opened to *all* sources (except for ports 22, 3389, 5985, 5986, 80, and 1443).<br>(Related policy: All network ports should be restricted on network security groups associated to your virtual machine) | High | N | Virtual machine |
-| **DDoS Protection Standard should be enabled** | Protect virtual networks containing applications with public IPs by enabling DDoS protection service standard. DDoS protection enables mitigation of network volumetric and protocol attacks.<br>(Related policy: DDoS Protection Standard should be enabled) | High | N | Virtual network |
-| **Function App should only be accessible over HTTPS** | Enable "HTTPS only" access for function apps. Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks.<br>(Related policy: Function App should only be accessible over HTTPS) | Medium | **Y** | Function app |
-| **Internet-facing virtual machines should be protected with Network Security Groups** | Enable Network Security Groups to control network access of your virtual machines.<br>(Related policy: Internet-facing virtual machines should be protected with Network Security Groups) | High/ Medium | N | Virtual machine |
-| **Non-internet-facing virtual machines should be protected with network security groups** | Protect your non-internet-facing virtual machines from potential threats by restricting access to them with network security groups (NSG).<br>NSGs contain access-control lists (ACL) and can be assigned to the VM's NIC or subnet. The ACL rules allow or deny network traffic to the assigned resource.<br>(Related policy: Non-internet-facing virtual machines should be protected with network security groups) | Low | N | Virtual machine |
-| **IP forwarding on your virtual machine should be disabled** | Disable IP forwarding. When IP forwarding is enabled on a virtual machine's NIC, the machine can receive traffic addressed to other destinations. IP forwarding is rarely required (for example, when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team.<br>(Related policy: [Preview]: IP Forwarding on your virtual machine should be disabled) | Medium | N | Virtual machine |
-| **Management ports of virtual machines should be protected with just-in-time network access control** | Apply just-in-time (JIT) virtual machine (VM) access control to permanently lock down access to selected ports, and enable authorized users to open them, via JIT, for a limited amount of time only.<br>(Related policy: Management ports of virtual machines should be protected with just-in-time network access control) | High | N | Virtual machine |
-| **Management ports should be closed on your virtual machines** | Harden the network security group of your virtual machines to restrict access to management ports.<br>(Related policy: Management ports should be closed on your virtual machines) | High | N | Virtual machine |
-| **Virtual networks should be protected by Azure Firewall** | Some of your virtual networks aren't protected with a firewall. Use Azure Firewall to restrict access to your virtual networks and prevent potential threats.<br>[Learn more about Azure Firewall](https://azure.microsoft.com/services/azure-firewall/).<br>(Related policy: Virtual networks should be protected by Azure Firewall) | Low | N | Virtual network |
-| **Secure transfer to storage accounts should be enabled** | Enable secure transfer to storage accounts. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks, such as man-in-the-middle, eavesdropping, and session-hijacking.<br>(Related policy: Secure transfer to storage accounts should be enabled) | High | **Y** | Storage account |
-| **Subnets should be associated with a Network Security Group** | Enable network security groups to control network access of resources deployed in your subnets.<br>(Related policy: Subnets should be associated with a Network Security Group.<br>This policy is disabled by default) | High/ Medium | N | Subnet |
-| **Web Application should only be accessible over HTTPS** | Enable "HTTPS only" access for web applications. Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks.<br>(Related policy: Web Application should only be accessible over HTTPS) | Medium | **Y** | Web application |
-| | | | | |
--
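Several of the network recommendations in this table (closing management ports, protecting VMs with NSGs, just-in-time access) come down to network security group rules. The sketch below adds a deny rule for inbound RDP with the track-2 `azure-mgmt-network` SDK; the operation name and parameter shape should be checked against the SDK version you actually use.

```python
# Sketch: add an NSG rule denying inbound RDP (3389) from the internet, in the
# spirit of "Management ports should be closed on your virtual machines".
# Written against the track-2 azure-mgmt-network SDK; verify operation names
# against your installed SDK version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholders
RESOURCE_GROUP = "<resource-group>"
NSG_NAME = "<nsg-name>"

network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

rule = {
    "protocol": "Tcp",
    "source_address_prefix": "Internet",
    "source_port_range": "*",
    "destination_address_prefix": "*",
    "destination_port_range": "3389",
    "access": "Deny",
    "direction": "Inbound",
    "priority": 100,
}

poller = network_client.security_rules.begin_create_or_update(
    RESOURCE_GROUP, NSG_NAME, "deny-rdp-from-internet", rule)
print(poller.result().provisioning_state)
```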
-## <a name="recs-containers"></a>Container recommendations
-
-| Recommendation | Description & related policy | Severity | Quick fix enabled?([Learn more](security-center-remediate-recommendations.md#quick-fix-remediation)) | Resource type |
-|-------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------|--------------------|
-| **Azure Defender for container registries should be enabled** | To build secure containerized workloads, ensure the images that they're based on are free of known vulnerabilities. Azure Defender for container registries scans your registry for security vulnerabilities on each pushed container image and exposes detailed findings per image.<br>Important: Remediating this recommendation will result in charges for protecting your ACR registries. If you don't have any ACR registries in this subscription, no charges will be incurred. If you create any ACR registries on this subscription in the future, they will automatically be protected and charges will begin at that time.<br>(Related policy: [Advanced threat protection should be enabled on Azure Container Registry registries](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc25d9a16-bc35-4e15-a7e5-9db606bf9ed4)) | High | **Y** | Subscription |
-| **Azure Defender for Kubernetes should be enabled** | Azure Defender for Kubernetes provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.<br>Important: Remediating this recommendation will result in charges for protecting your AKS clusters. If you don't have any AKS clusters in this subscription, no charges will be incurred. If you create any AKS clusters on this subscription in the future, they will automatically be protected and charges will begin at that time.<br>(Related policy: [Advanced threat protection should be enabled on Azure Kubernetes Service clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f523b5cd1-3e23-492f-a539-13118b6d1e3a)) | High | **Y** | Subscription |
-| **Authorized IP ranges should be defined on Kubernetes Services** | Restrict access to the Kubernetes service management API by granting API access only to IP addresses in specific ranges. It is recommended to configure authorized IP ranges so only applications from allowed networks can access the cluster.<br>(Related policy: [Preview]: Authorized IP ranges should be defined on Kubernetes Services) | High | **Y** | Kubernetes Service |
-| **Azure Policy add-on for Kubernetes should be installed and enabled on your clusters** | Azure Policy add-on for Kubernetes extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale automountServiceAccountToken enforcements and safeguards on your clusters in a centralized, consistent manner. <br>Security Center requires the add-on to audit and enforce security capabilities and compliance inside your clusters. [Learn more](../governance/policy/concepts/policy-for-kubernetes.md).<br>(Related policy: [Preview]: Azure Policy add-on for Kubernetes should be installed and enabled on your clusters) | High | **Y** | Kubernetes Service |
-| **Container CPU and memory limits should be enforced** | Enforcing CPU and memory limits prevents resource exhaustion attacks (a form of denial of service attack).<br>We recommend setting limits for containers to ensure the runtime prevents the container from using more than the configured resource limit.<br>(Related policy: [Preview]: Ensure container CPU and memory resource limits do not exceed the specified limits in Kubernetes cluster) | Medium | N | Kubernetes Service |
-| **Container images should be deployed from trusted registries only** | Images running on your Kubernetes cluster should come from known and monitored container image registries.<br>Trusted registries reduce your cluster exposure risk by limiting the potential for the introduction of unknown vulnerabilities, security issues, and malicious images.<br>(Related policy: [Preview]: Ensure only allowed container images in Kubernetes cluster) | High | N | Kubernetes Service |
-| **Container with privilege escalation should be avoided** | Containers shouldn't run with privilege escalation to root in your Kubernetes cluster.<br>The AllowPrivilegeEscalation attribute controls whether a process can gain more privileges than its parent process. <br>(Related policy: [Preview]: Kubernetes clusters should not allow container privilege escalation) | Medium | N | Kubernetes Service |
-| **Containers sharing sensitive host namespaces should be avoided** | To protect against privilege escalation outside the container, avoid pod access to sensitive host namespaces (host process ID and host IPC) in a Kubernetes cluster. <br>(Related policy: [Preview]: Kubernetes cluster containers should not share host process ID or host IPC namespace) | Medium | No | Kubernetes cluster |
-| **Containers should listen on allowed ports only** | To reduce the attack surface of your Kubernetes cluster, restrict access to the cluster by limiting containers' access to the configured ports. <br>(Related policy: [Preview]: Ensure containers listen only on allowed ports in Kubernetes cluster) | Medium | N | Kubernetes Service |
-| **Immutable (read-only) root filesystem should be enforced for containers** | Containers should run with a read only root file system in your Kubernetes cluster.<br>Immutable filesystem protects containers from changes at run-time with malicious binaries being added to PATH.<br>(Related policy: [Preview]: Kubernetes cluster containers should run with a read only root file system) | Medium | N | Kubernetes Service |
-| **Least privileged Linux capabilities should be enforced for containers** | To reduce attack surface of your container, restrict Linux capabilities and grant specific privileges to containers without granting all the privileges of the root user.<br>We recommend dropping all capabilities, then adding those that are required.<br>(Related policy: [Preview]: Kubernetes cluster containers should only use allowed capabilities) | Medium | N | Kubernetes Service |
-| **Overriding or disabling of containers AppArmor profile should be restricted** | Containers running on your Kubernetes cluster should be limited to allowed AppArmor profiles only.<br>AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. <br>(Related policy: [Preview]: Kubernetes cluster containers should only use allowed AppArmor profiles) | High | N | Kubernetes Service |
-| **Privileged containers should be avoided** | To prevent unrestricted host access, avoid privileged containers whenever possible.<br>Privileged containers have all of the root capabilities of a host machine. They can be used as entry points for attacks, and to spread malicious code or malware to compromised applications, hosts, and networks. <br>(Related policy: [Preview]: Do not allow privileged containers in Kubernetes cluster) | Medium | N | Kubernetes Service |
-| **Role-based access control should be used to restrict access to a Kubernetes Service Cluster** | To provide granular filtering of the actions that users can perform, use Azure role-based access control (Azure RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. For more information see [Azure role-based access control](../aks/concepts-identity.md#azure-role-based-access-control-azure-rbac).<br>(Related policy: [Preview]: Azure role-based access control (Azure RBAC) should be used on Kubernetes Services) | Medium | N | Kubernetes Service |
-| **Running containers as root user should be avoided** | Containers should run as non-root users in your Kubernetes cluster. <br>Running a process as the root user inside a container runs it as root on the host. <br>In case of compromise, an attacker has root in the container, and any misconfigurations become easier to exploit.<br>(Related policy: [Preview]: Kubernetes cluster containers should run as a non-root users) | High | N | Kubernetes Service |
-| **Services should listen on allowed ports only** | To reduce the attack surface of your Kubernetes cluster, restrict access to the cluster by limiting services' access to the configured ports. <br>(Related policy: [Preview]: Ensure services listen only on allowed ports in Kubernetes cluster) | Medium | N | Kubernetes Service |
-| **The Kubernetes Service should be upgraded to the latest Kubernetes version** | Upgrade Azure Kubernetes Service clusters to the latest Kubernetes version in order to benefit from up-to-date vulnerability patches. For details regarding specific Kubernetes vulnerabilities see [Kubernetes CVEs](https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=kubernetes).<br>(Related policy: [Preview]: Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version) | High | N | Kubernetes Service |
-| **Usage of host networking and ports should be restricted** | Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster.<br>Pods created with the hostNetwork attribute enabled will share the node's network space. To prevent a compromised container from sniffing network traffic, we recommend not putting your pods on the host network. If you need to expose a container port on the node's network, and using a Kubernetes Service node port does not meet your needs, another possibility is to specify a hostPort for the container in the pod spec.<br>(Related policy: [Preview]: Kubernetes cluster pods should only use approved host network and port range) | Medium | N | Kubernetes Service |
-| **Usage of pod HostPath volume mounts should be restricted to a known list** | To reduce the attack surface of your Kubernetes cluster, limit pod HostPath volume mounts in your Kubernetes cluster to the configured allowed host paths.<br>In case of compromise, the containers' access to the node should remain restricted.<br>(Related policy: [Preview]: Kubernetes cluster pod hostPath volumes should only use allowed host paths) | Medium | N | Kubernetes Service |
-| **Vulnerabilities in Azure Container Registry images should be remediated (powered by Qualys)** | Container image vulnerability assessment scans your registry for security vulnerabilities on each pushed container image and exposes detailed findings per image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks.<br>(No related policy) | High | N | Container Registry |
-| | | | | |
--
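Many of the container recommendations in this table (no privilege escalation, non-root user, read-only root filesystem, dropped capabilities, CPU and memory limits, no host networking) map to pod-level `securityContext` and `resources` settings. The sketch below builds such a pod with the Kubernetes Python client; the image name and the limit values are illustrative placeholders, not prescribed values.

```python
# Minimal sketch: a pod definition that addresses several of the container
# recommendations above. Image name and limit values are illustrative.
from kubernetes import client

container = client.V1Container(
    name="hardened-app",
    image="myregistry.azurecr.io/app:1.0",   # placeholder image from a trusted registry
    security_context=client.V1SecurityContext(
        allow_privilege_escalation=False,     # "Container with privilege escalation should be avoided"
        run_as_non_root=True,                 # "Running containers as root user should be avoided"
        read_only_root_filesystem=True,       # "Immutable (read-only) root filesystem should be enforced"
        capabilities=client.V1Capabilities(drop=["ALL"]),   # least-privileged Linux capabilities
    ),
    resources=client.V1ResourceRequirements(  # "Container CPU and memory limits should be enforced"
        limits={"cpu": "250m", "memory": "256Mi"},
        requests={"cpu": "100m", "memory": "128Mi"},
    ),
    ports=[client.V1ContainerPort(container_port=8080)],    # listen on an allowed port only
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hardened-app"),
    spec=client.V1PodSpec(containers=[container], host_network=False),
)
# Create it with client.CoreV1Api().create_namespaced_pod("default", pod)
# after loading a kubeconfig with kubernetes.config.load_kube_config().
```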
-## <a name="recs-appservice"></a>App Service recommendations
-
-| Recommendation | Description & related policy | Severity | Quick fix enabled?([Learn more](security-center-remediate-recommendations.md#quick-fix-remediation)) | Resource type |
-|--------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------|---------------|
-| **API App should only be accessible over HTTPS** | Limit access of API Apps over HTTPS only.<br>(Related policy: API App should only be accessible over HTTPS) | Medium | N | App service |
-| **Azure Defender for App Service should be enabled** | Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks.<br>Azure Defender for App Service can discover attacks on your applications and identify emerging attacks.<br>Important: Remediating this recommendation will result in charges for protecting your App Service plans. If you don't have any App Service plans in this subscription, no charges will be incurred. If you create any App Service plans on this subscription in the future, they will automatically be protected and charges will begin at that time.<br>(Related policy: [Advanced threat protection should be enabled on Azure App Service plans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2913021d-f2fd-4f3d-b958-22354e2bdbcb)) | High | **Y** | Subscription |
-| **CORS should not allow every resource to access your API App** | Allow only required domains to interact with your API application. Cross origin resource sharing (CORS) should not allow all domains to access your API application.<br>(Related policy: CORS should not allow every resource to access your API App) | Low | **Y** | App service |
-| **CORS should not allow every resource to access your Function App** | Allow only required domains to interact with your function application. Cross origin resource sharing (CORS) should not allow all domains to access your function application.<br>(Related policy: CORS should not allow every resource to access your Function App) | Low | **Y** | App service |
-| **CORS should not allow every resource to access your Web Applications** | Allow only required domains to interact with your web application. Cross origin resource sharing (CORS) should not allow all domains to access your web application.<br>(Related policy: CORS should not allow every resource to access your Web Application) | Low | **Y** | App service |
-| **Diagnostic logs in App Services should be enabled** | Enable logs and retain them up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.<br>(Related policy: Diagnostic logs in App Services should be enabled) | Low | N | App service |
-| **Function App should only be accessible over HTTPS** | Enable "HTTPS only" access for function apps. Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks.<br>(Related policy: Function App should only be accessible over HTTPS) | Medium | **Y** | App service |
-| **Remote debugging should be turned off for API App** | Turn off debugging for API App if you no longer need to use it. Remote debugging requires inbound ports to be opened on an API App.<br>(Related policy: Remote debugging should be turned off for API App) | Low | **Y** | App service |
-| **Remote debugging should be turned off for Function App** | Turn off debugging for Function App if you no longer need to use it. Remote debugging requires inbound ports to be opened on a Function App.<br>(Related policy: Remote debugging should be turned off for Function App) | Low | **Y** | App service |
-| **Remote debugging should be turned off for Web Applications** | Turn off debugging for Web Applications if you no longer need to use it. Remote debugging requires inbound ports to be opened on a Web App.<br>(Related policy: Remote debugging should be turned off for Web Application) | Low | **Y** | App service |
-| **Web Application should only be accessible over HTTPS** | Enable "HTTPS only" access for web applications. Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks.<br>(Related policy: Web Application should only be accessible over HTTPS) | Medium | **Y** | App service |
-| **Web apps should request an SSL certificate for all incoming requests** | Client certificates allow for the app to request a certificate for incoming requests.<br>Only clients that have a valid certificate will be able to reach the app.<br>(Related policy: [[Preview]: Web apps should request an SSL certificate for all incoming requests](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5bb220d9-2698-4ee4-8404-b9c30c9df609)) | Medium | No | App service |
-| **TLS should be updated to the latest version for your API app** | Upgrade to the latest TLS version<br>(Related policy: [[Preview]: TLS should be updated to the latest version for your API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f8cb6aa8b-9e41-4f4e-aa25-089a7ac2581e)) | High | No | App service |
-| **Diagnostic logs should be enabled in App Service** | Audit enabling of diagnostic logs on the app.<br>This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised<br>(Related policy: [[Preview]: Diagnostic logs should be enabled in App Service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb607c5de-e7d9-4eee-9e5c-83f1bcee4fa0)) | Medium | No | App service |
-| **Managed identity should be used in your API app** | For enhanced authentication security, use a managed identity.<br>On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.<br>(Related policy: [[Preview]: Managed identity should be used in your API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef)) | Medium | No | App service |
-| **TLS should be updated to the latest version for your web app** | Upgrade to the latest TLS version<br>(Related policy: [[Preview]: TLS should be updated to the latest version for your web app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b)) | High | No | App service |
-| **TLS should be updated to the latest version for your function app** | Upgrade to the latest TLS version<br>(Related policy: [[Preview]: TLS should be updated to the latest version for your function app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff9d614c5-c173-4d56-95a7-b4437057d193)) | High | No | App service |
-| **PHP should be updated to the latest version for your API app** | Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality.<br>Using the latest PHP version for API apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.<br>(Related policy: [[Preview]: PHP should be updated to the latest version for your API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1bc1795e-d44a-4d48-9b3b-6fff0fd5f9ba)) | Medium | No | App service |
-| **PHP should be updated to the latest version for your web app** | Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality.<br>Using the latest PHP version for web apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.<br>(Related policy: [[Preview]: PHP should be updated to the latest version for your web app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7261b898-8a84-4db8-9e04-18527132abb3)) | Medium | No | App service |
-| **Java should be updated to the latest version for your web app** | Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality.<br>Using the latest Java version for web apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.<br>(Related policy: [[Preview]: Java should be updated to the latest version for your web app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f496223c3-ad65-4ecd-878a-bae78737e9ed)) | Medium | No | App service |
-| **Java should be updated to the latest version for your function app** | Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality.<br>Using the latest Java version for function apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.<br>(Related policy: [[Preview]: Java should be updated to the latest version for your function app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc)) | Medium | No | App service |
-| **Java should be updated to the latest version for your API app** | Periodically, newer versions are released for Java either due to security flaws or to include additional functionality.<br>Using the latest Java version for API apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.<br>(Related policy: [[Preview]: Java should be updated to the latest version for your API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f88999f4c-376a-45c8-bcb3-4058f713cf39)) | Medium | No | App service |
-| **Python should be updated to the latest version for your web app** | Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality.<br>Using the latest Python version for web apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.<br>(Related policy: [[Preview]: Python should be updated to the latest version for your web app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7008174a-fd10-4ef0-817e-fc820a951d73)) | Medium | No | App service |
-| **Python should be updated to the latest version for your function app** | Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality.<br>Using the latest Python version for function apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.<br>(Related policy: [[Preview]: Python should be updated to the latest version for your function app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7238174a-fd10-4ef0-817e-fc820a951d73)) | Medium | No | App service |
-| **Python should be updated to the latest version for your API app** | Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality.<br>Using the latest Python version for API apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.<br>(Related policy: [[Preview]: Python should be updated to the latest version for your API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f74c3584d-afae-46f7-a20a-6f8adba71a16)) | Medium | No | App service |
-| **FTPS should be required in your function App** | Enable FTPS enforcement for enhanced security<br>(Related policy: [[Preview]: FTPS should be required in your function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f399b2637-a50f-4f95-96f8-3a145476eb15)) | High | No | App service |
-| **FTPS should be required in your web App** | Enable FTPS enforcement for enhanced security<br>(Related policy: [[Preview]: FTPS should be required in your web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b)) | High | No | App service |
-| **FTPS should be required in your API App** | Enable FTPS enforcement for enhanced security<br>(Related policy: [[Preview]: FTPS should be required in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9a1b8c48-453a-4044-86c3-d8bfd823e4f5)) | High | No | App service |
-| | | | | |
--
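Several of the App Service recommendations in this table (latest TLS version, FTPS required, remote debugging turned off) are plain site-configuration settings. The sketch below patches them through Azure Resource Manager; the api-version and the use of PATCH on `config/web` are assumptions to confirm against the Microsoft.Web reference before use.

```python
# Sketch: tighten a few App Service settings flagged by the recommendations
# above (minimum TLS version, FTPS-only, remote debugging off) by patching the
# site's web config through ARM. The api-version and HTTP verb are assumptions;
# confirm them against the Microsoft.Web REST reference.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"   # placeholders
RESOURCE_GROUP = "<resource-group>"
APP_NAME = "<app-name>"
API_VERSION = "2020-06-01"              # assumed Microsoft.Web api-version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{APP_NAME}"
       f"/config/web?api-version={API_VERSION}")

body = {"properties": {
    "minTlsVersion": "1.2",
    "ftpsState": "FtpsOnly",
    "remoteDebuggingEnabled": False,
}}

response = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print("Site config updated:", response.status_code)
```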
-## <a name="recs-computeapp"></a>Compute and app recommendations
-
-| Recommendation | Description & related policy | Severity | Quick fix enabled?([Learn more](security-center-remediate-recommendations.md#quick-fix-remediation)) | Resource type |
-|-----------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------|----------------------------------------|
-| **Adaptive Application Controls should be enabled on virtual machines** | Enable application control to control which applications can run on your VMs located in Azure. This will help harden your VMs against malware. Security Center uses machine learning to analyze the applications running on each VM and helps you apply allow rules using this intelligence. This capability simplifies the process of configuring and maintaining application allow rules.<br>(Related policy: Adaptive Application Controls should be enabled on virtual machines) | High | N | Machine |
-| **All authorization rules except RootManageSharedAccessKey should be removed from Event Hub namespace** | Event Hub clients should not use a namespace level access policy that provides access to all queues and topics in a namespace. To align with the least privilege security model, you should create access policies at the entity level for queues and topics to provide access to only the specific entity.<br>(Related policy: All authorization rules except RootManageSharedAccessKey should be removed from Event Hub namespace) | Low | N | Compute resources (event hub) |
-| **All authorization rules except RootManageSharedAccessKey should be removed from Service Bus namespace** | Service Bus clients should not use a namespace level access policy that provides access to all queues and topics in a namespace. To align with the least privilege security model, you should create access policies at the entity level for queues and topics to provide access to only the specific entity.<br>(Related policy: All authorization rules except RootManageSharedAccessKey should be removed from Service Bus namespace) | Low | N | Compute resources (service bus) |
-| **Authorization rules on the Event Hub entity should be defined** | Audit authorization rules on the Event Hub entity to grant least-privileged access.<br>(Related policy: Authorization rules on the Event Hub entity should be defined) | Low | N | Compute resources (event hub) |
-| **Automation account variables should be encrypted** | Enable encryption of Automation account variable assets when storing sensitive data.<br>(Related policy: Encryption should be enabled on Automation account variables) | High | N | Compute resources (automation account) |
-| **Azure Defender for servers should be enabled** | Azure Defender for servers provides real-time threat protection for your server workloads and generates hardening recommendations as well as alerts about suspicious activities.<br>You can use this information to quickly remediate security issues and improve the security of your virtual machines.<br>Important: Remediating this recommendation will result in charges for protecting your virtual machines. If you don't have any virtual machines in this subscription, no charges will be incurred. If you create any virtual machines on this subscription in the future, they will automatically be protected and charges will begin at that time.<br>(Related policy: [Advanced threat protection should be enabled on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4da35fc9-c9e7-4960-aec9-797fe7d9051d)) | High | **Y** | Subscription |
-| **Diagnostic logs in Azure Stream Analytics should be enabled** | Enable logs and retain them up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.<br>(Related policy: Diagnostic logs in Azure Stream Analytics should be enabled) | Low | **Y** | Compute resources (stream analytics) |
-| **Diagnostic logs in Batch accounts should be enabled** | Enable logs and retain them up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.<br>(Related policy: Diagnostic logs in Batch accounts should be enabled) | Low | **Y** | Compute resources (batch) |
-| **Diagnostic logs in Event Hub should be enabled** | Enable logs and retain them up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.<br>(Related policy: Diagnostic logs in Event Hub should be enabled) | Low | **Y** | Compute resources (event hub) |
-| **Diagnostic logs in Logic Apps should be enabled** | Enable logs and retain them up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.<br>(Related policy: Diagnostic logs in Logic Apps should be enabled) | Low | **Y** | Compute resources (logic apps) |
-| **Diagnostic logs in Search services should be enabled** | Enable logs and retain them up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.<br>(Related policy: Diagnostic logs in Search services should be enabled) | Low | **Y** | Compute resources (search) |
-| **Diagnostic logs in Service Bus should be enabled** | Enable logs and retain them up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.<br>(Related policy: Diagnostic logs in Service Bus should be enabled) | Low | **Y** | Compute resources (service bus) |
-| **Disk encryption should be applied on virtual machines** | Encrypt your virtual machine disks using Azure Disk Encryption both for Windows and Linux virtual machines. Azure Disk Encryption (ADE) leverages the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide OS and data disk encryption to help protect and safeguard your data and help meet your organizational security and compliance commitments in customer Azure key vault. When your compliance and security requirement requires you to encrypt the data end to end using your encryption keys, including encryption of the ephemeral (locally attached temporary) disk, use Azure disk encryption. Alternatively, by default, Managed Disks are encrypted at rest by default using Azure Storage Service Encryption where the encryption keys are Microsoft-managed keys in Azure. If this meets your compliance and security requirements, you can leverage the default Managed disk encryption to meet your requirements.<br>(Related policy: Disk encryption should be applied on virtual machines) | High | N | Machine |
-| **Enable the built-in vulnerability assessment solution on virtual machines** | Install the Qualys agent (included with Azure Defender) to enable a best of breed vulnerability assessment solution on your virtual machines.<br>(Related policy: [Vulnerability assessment should be enabled on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9)) | Medium | **Y** | Machine |
-| **Endpoint protection health issues should be resolved on your machines** | For full Security Center protection, resolve monitoring agent issues on your machines by following the instructions in the Troubleshooting guide.<br>(This recommendation is dependent upon the recommendation "Install endpoint protection solution on your machines" and its policy) | Medium | N | Machine |
-| **Guest configuration extension should be installed on Windows virtual machines (Preview)** | Install the guest configuration agent to enable auditing settings inside a machine such as: the configuration of the operating system, application configuration or presence, environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'.<br>(Related policy: Audit prerequisites to enable Guest Configuration policies on Windows VMs) | High | **Y** | Machine |
-| **Install endpoint protection solution on virtual machines** | Install an endpoint protection solution on your virtual machines, to protect them from threats and vulnerabilities.<br>(No related policy) | Medium | N | Machine |
-| **Install endpoint protection solution on your machines** | Install an endpoint protection solution on your Windows and Linux machines, to protect them from threats and vulnerabilities.<br>(Related policy: Monitor missing Endpoint Protection in Azure Security Center) | Medium | N | Machine |
-| **Log Analytics agent health issues should be resolved on your machines** | Security Center uses the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA). To make sure your virtual machines are successfully monitored, you need to make sure the agent is both installed on the virtual machines and properly collects security events to the configured workspace. In some cases, the agent may fail to properly report security events, due to multiple reasons. In these cases, coverage may be partial - security events won't be properly processed, and in turn threat detection for the affected VMs may fail to function. View remediation steps for more information on how to resolve each issue.<br>(No related policy - dependent upon "Log Analytics agent health issues should be resolved on your machines") | Medium | N | Machine |
-| **Log Analytics agent should be installed on your Linux-based Azure Arc machines (Preview)** | Security Center uses the [Log Analytics agent](../azure-monitor/platform/log-analytics-agent.md) (also known as MMA) to collect security events from your Azure Arc machines.<br>(Related policy: [Preview]: Log Analytics agent should be installed on your Linux Azure Arc machines) | High | **Y** | Azure Arc Machine |
-| **Log Analytics agent should be installed on your virtual machines** | Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. Data is collected using the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis.<br>This agent is also required if your VMs are used by an Azure managed service such as Azure Kubernetes Service or Azure Service Fabric.<br>We recommend configuring auto-provisioning to automatically deploy the agent.<br>If you choose not to use auto-provisioning, manually deploy the agent to your VMs using the instructions in the remediation steps.<br>(No related policy) | High | **Y** | Machine |
-| **Log Analytics agent should be installed on your Windows-based Azure Arc machines (Preview)** | Security Center uses the [Log Analytics agent](../azure-monitor/platform/log-analytics-agent.md) (also known as MMA) to collect security events from your Azure Arc machines.<br>(Related policy: [Preview]: Log Analytics agent should be installed on your Windows Azure Arc machines) | High | **Y** | Azure Arc Machine |
-| **Network traffic data collection agent should be installed on Linux virtual machines (Preview)** | Security Center uses the Microsoft Monitoring Dependency Agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendation