Updates from: 12/29/2020 04:04:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/whats-new-archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
@@ -1199,7 +1199,7 @@ For more information about how to better secure your organization by using autom
In January 2020, we added these 33 new apps with Federation support to the app gallery:
-[JOSA](../saas-apps/josa-tutorial.md), [Fastly Edge Cloud](../saas-apps/fastly-edge-cloud-tutorial.md), [Terraform Enterprise](../saas-apps/terraform-enterprise-tutorial.md), [Spintr SSO](../saas-apps/spintr-sso-tutorial.md), [Abibot Netlogistik](https://azuremarketplace.microsoft.com/marketplace/apps/aad.abibotnetlogistik), [SkyKick](https://login.skykick.com/login?state=g6Fo2SBTd3M5Q0xBT0JMd3luS2JUTGlYN3pYTE1remJQZnR1c6N0aWTZIDhCSkwzYVQxX2ZMZjNUaWxNUHhCSXg2OHJzbllTcmYto2NpZNkgM0h6czk3ZlF6aFNJV1VNVWQzMmpHeFFDbDRIMkx5VEc&client=3Hzs97fQzhSIWUMUd32jGxQCl4H2LyTG&protocol=oauth2&audience=https://papi.skykick.com&response_type=code&redirect_uri=https://portal.skykick.com/callback&scope=openid%20profile%20offline_access), [Upshotly](../saas-apps/upshotly-tutorial.md), [LeaveBot](https://leavebot.io/#home), [DataCamp](../saas-apps/datacamp-tutorial.md), [TripActions](../saas-apps/tripactions-tutorial.md), [SmartWork](https://www.intumit.com/english/SmartWork.html), [Dotcom-Monitor](../saas-apps/dotcom-monitor-tutorial.md), [SSOGEN - Azure AD SSO Gateway for Oracle E-Business Suite - EBS, PeopleSoft, and JDE](../saas-apps/ssogen-tutorial.md), [Hosted MyCirqa SSO](../saas-apps/hosted-mycirqa-sso-tutorial.md), [Yuhu Property Management Platform](../saas-apps/yuhu-property-management-platform-tutorial.md), [LumApps](https://sites.lumapps.com/login), [Upwork Enterprise](../saas-apps/upwork-enterprise-tutorial.md), [Talentsoft](../saas-apps/talentsoft-tutorial.md), [SmartDB for Microsoft Teams](http://teams.smartdb.jp/login/), [PressPage](../saas-apps/presspage-tutorial.md), [ContractSafe Saml2 SSO](../saas-apps/contractsafe-saml2-sso-tutorial.md), [Maxient Conduct Manager Software](../saas-apps/maxient-conduct-manager-software-tutorial.md), [Helpshift](../saas-apps/helpshift-tutorial.md), [PortalTalk 365](https://www.portaltalk.com/), [CoreView](https://portal.coreview.com/), [Squelch Cloud Office365 Connector](https://laxmi.squelch.io/login), [PingFlow Authentication](https://app-staging.pingview.io/), [ PrinterLogic SaaS](../saas-apps/printerlogic-saas-tutorial.md), [Taskize Connect](../saas-apps/taskize-connect-tutorial.md), [Sandwai](https://app.sandwai.com/), [EZRentOut](../saas-apps/ezrentout-tutorial.md), [AssetSonar](../saas-apps/assetsonar-tutorial.md), [Akari Virtual Assistant](https://akari.io/akari-virtual-assistant/)
+[JOSA](../saas-apps/josa-tutorial.md), [Fastly Edge Cloud](../saas-apps/fastly-edge-cloud-tutorial.md), [Terraform Enterprise](../saas-apps/terraform-enterprise-tutorial.md), [Spintr SSO](../saas-apps/spintr-sso-tutorial.md), [Abibot Netlogistik](https://azuremarketplace.microsoft.com/marketplace/apps/aad.abibotnetlogistik), [SkyKick](https://login.skykick.com/login?state=g6Fo2SBTd3M5Q0xBT0JMd3luS2JUTGlYN3pYTE1remJQZnR1c6N0aWTZIDhCSkwzYVQxX2ZMZjNUaWxNUHhCSXg2OHJzbllTcmYto2NpZNkgM0h6czk3ZlF6aFNJV1VNVWQzMmpHeFFDbDRIMkx5VEc&client=3Hzs97fQzhSIWUMUd32jGxQCl4H2LyTG&protocol=oauth2&audience=https://papi.skykick.com&response_type=code&redirect_uri=https://portal.skykick.com/callback&scope=openid%20profile%20offline_access), [Upshotly](../saas-apps/upshotly-tutorial.md), [LeaveBot](https://appsource.microsoft.com/en-us/product/office/WA200001175), [DataCamp](../saas-apps/datacamp-tutorial.md), [TripActions](../saas-apps/tripactions-tutorial.md), [SmartWork](https://www.intumit.com/english/SmartWork.html), [Dotcom-Monitor](../saas-apps/dotcom-monitor-tutorial.md), [SSOGEN - Azure AD SSO Gateway for Oracle E-Business Suite - EBS, PeopleSoft, and JDE](../saas-apps/ssogen-tutorial.md), [Hosted MyCirqa SSO](../saas-apps/hosted-mycirqa-sso-tutorial.md), [Yuhu Property Management Platform](../saas-apps/yuhu-property-management-platform-tutorial.md), [LumApps](https://sites.lumapps.com/login), [Upwork Enterprise](../saas-apps/upwork-enterprise-tutorial.md), [Talentsoft](../saas-apps/talentsoft-tutorial.md), [SmartDB for Microsoft Teams](http://teams.smartdb.jp/login/), [PressPage](../saas-apps/presspage-tutorial.md), [ContractSafe Saml2 SSO](../saas-apps/contractsafe-saml2-sso-tutorial.md), [Maxient Conduct Manager Software](../saas-apps/maxient-conduct-manager-software-tutorial.md), [Helpshift](../saas-apps/helpshift-tutorial.md), [PortalTalk 365](https://www.portaltalk.com/), [CoreView](https://portal.coreview.com/), [Squelch Cloud Office365 Connector](https://laxmi.squelch.io/login), [PingFlow Authentication](https://app-staging.pingview.io/), [ PrinterLogic SaaS](../saas-apps/printerlogic-saas-tutorial.md), [Taskize Connect](../saas-apps/taskize-connect-tutorial.md), [Sandwai](https://app.sandwai.com/), [EZRentOut](../saas-apps/ezrentout-tutorial.md), [AssetSonar](../saas-apps/assetsonar-tutorial.md), [Akari Virtual Assistant](https://akari.io/akari-virtual-assistant/)
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
@@ -2303,7 +2303,7 @@ For more information, see the [Risk detection API reference documentation](/grap
In June 2019, we added these 22 new apps with Federation support to the app gallery:
-[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/docs.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), [Proptimise OS](https://proptimise.co.uk/software/), [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
+[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/docs.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), [Proptimise OS](https://proptimise.co.uk/), [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
@@ -96,7 +96,7 @@ You can now automate creating, updating, and deleting user accounts for these ne
- [Bizagi Studio for Digital Process Automation](../saas-apps/bizagi-studio-for-digital-process-automation-provisioning-tutorial.md)
- [CybSafe](../saas-apps/cybsafe-provisioning-tutorial.md)
- [GroupTalk](../saas-apps/grouptalk-provisioning-tutorial.md)
-- [PaperCut Cloud Print Management](/saas-apps/papercut-cloud-print-management-provisioning-tutorial.md)
+- [PaperCut Cloud Print Management](/azure/active-directory/saas-apps/papercut-cloud-print-management-provisioning-tutorial)
- [Parsable](../saas-apps/parsable-provisioning-tutorial.md)
- [Shopify Plus](../saas-apps/shopify-plus-provisioning-tutorial.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/delete-application-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/delete-application-portal.md
@@ -8,7 +8,7 @@ ms.service: active-directory
ms.subservice: app-mgmt ms.topic: quickstart ms.workload: identity
-ms.date: 07/01/2020
+ms.date: 12/28/2020
ms.author: kenwith ---
@@ -48,6 +48,6 @@ When you're done with this quickstart series, consider deleting the app to clean u
## Next steps
-You have completed the quickstart series! As a next step, read about best practices in app management.
+You have completed the quickstart series! Next, learn about single sign-on (SSO) in [What is SSO?](what-is-single-sign-on.md), or read about best practices in app management.
> [!div class="nextstepaction"]
-> [Application management best practices](application-management-fundamentals.md)
\ No newline at end of file
+> [Application management best practices](application-management-fundamentals.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-provisioning-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
@@ -14,7 +14,7 @@ ms.topic: conceptual
ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor
-ms.date: 10/07/2020
+ms.date: 12/28/2020
ms.author: markvi ms.reviewer: arvinh
@@ -40,6 +40,7 @@ This topic gives you an overview of the provisioning report.
### Who can access the data?

* Application owners can view logs for applications they own
* Users in the Security Administrator, Security Reader, Report Reader, Application Administrator, and Cloud Application Administrator roles
+* Users in a custom role with the [provisioningLogs permission](https://docs.microsoft.com/azure/active-directory/roles/custom-enterprise-app-permissions#full-list-of-permissions)
* Global Administrators
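For the custom-role option above, here is a hedged sketch of a role definition carrying the provisioningLogs permission, created through the Microsoft Graph role management API (beta at the time of writing); the display name and description are placeholders, not a prescribed role:

```rest
POST https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions
Content-type: application/json

{
  "displayName": "Provisioning Logs Reader",
  "description": "Can read the provisioning logs.",
  "isEnabled": true,
  "rolePermissions": [
    {
      "allowedResourceActions": [
        "microsoft.directory/provisioningLogs/allProperties/read"
      ]
    }
  ]
}
```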
@@ -52,7 +53,7 @@ Your tenant must have an Azure AD Premium license associated with it to see the
The provisioning logs provide answers to the following questions:

* What groups were successfully created in ServiceNow?
-* What roles were imported from Amazon Web Services?
+* What users were successfully removed from Adobe?
* What users were unsuccessfully created in DropBox?

You can access the provisioning logs by selecting **Provisioning Logs** in the **Monitoring** section of the **Azure Active Directory** blade in the [Azure portal](https://portal.azure.com). It can take up to two hours for some provisioning records to show up in the portal.
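The same data can also be retrieved programmatically through the Microsoft Graph provisioning logs API (beta at the time of writing; see the Graph API reference linked at the end of this article). A minimal sketch, with the bearer token as a placeholder:

```rest
GET https://graph.microsoft.com/beta/auditLogs/provisioning
Authorization: Bearer <access-token>
```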
@@ -215,7 +216,9 @@ The **summary** tab provides an overview of what happened and identifiers for th
- You may see skipped events for users that are not in scope. This is expected, especially when the sync scope is set to all users and groups. Our service will evaluate all the objects in the tenant, even the ones that are out of scope.

-- The provisioning logs are currently unavailable in the government cloud. If you're unable to access the provisioning logs, please use the audit logs as a temporary workaround.
+- The provisioning logs are currently unavailable in the government cloud. If you're unable to access the provisioning logs, please use the audit logs as a temporary workaround.
+
+- The provisioning logs do not show role imports (this applies to AWS, Salesforce, and Zendesk). The logs for role imports can be found in the audit logs.
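As a workaround sketch for the role-import gap noted above, the audit logs can be queried through Microsoft Graph; the `loggedByService` filter value shown here is illustrative and may differ in your tenant:

```rest
GET https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$filter=loggedByService eq 'Account Provisioning'
Authorization: Bearer <access-token>
```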
## Error Codes
@@ -247,4 +250,4 @@ Use the table below to better understand how to resolve errors you may find in t
* [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) * [Problem configuring user provisioning to an Azure AD Gallery application](../app-provisioning/application-provisioning-config-problem.md)
-* [Provisioning logs graph API](/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta)
\ No newline at end of file
+* [Provisioning logs graph API](/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/concept-understand-roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/concept-understand-roles.md
@@ -40,7 +40,7 @@ When we say separate role-based access control system, it means there is a diffe
## Why some Azure AD roles are for other services
-Microsoft 365 has a number of role-based access control systems that developed independently over time, each with its own service portal. To make it convenient for you to manage identity across Microsoft 365 from the Azure AD portal, we have added some service-specific built-in roles, each of which grants administrative access to a Microsoft 365 service. An example of this addition is the Exchange Administrator role in Azure AD. This role is equivalent to the [Organization Management role group](/exchange/organization-management-exchange-2013-help) in the Exchange role-based access control system, and can manage all aspects of Exchange. Similarly, we added the Intune Administrator role, Teams Administrator, SharePoint Administrator, and so on. Server-specific roles is one category of Azure AD built-in roles in the following section.
+Microsoft 365 has a number of role-based access control systems that developed independently over time, each with its own service portal. To make it convenient for you to manage identity across Microsoft 365 from the Azure AD portal, we have added some service-specific built-in roles, each of which grants administrative access to a Microsoft 365 service. An example of this addition is the Exchange Administrator role in Azure AD. This role is equivalent to the [Organization Management role group](/exchange/organization-management-exchange-2013-help) in the Exchange role-based access control system, and can manage all aspects of Exchange. Similarly, we added the Intune Administrator role, Teams Administrator, SharePoint Administrator, and so on. Service-specific roles are one category of the Azure AD built-in roles described in the following section.
## Categories of Azure AD roles
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/google-apps-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/google-apps-tutorial.md
@@ -77,7 +77,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Google Cloud (G Suite) Connector supports **SP** initiated SSO
-* Google Cloud (G Suite) Connector supports [**Automated** user provisioning](./google-apps-provisioning-tutorial.md)
+* Google Cloud (G Suite) Connector supports [**Automated** user provisioning](g-suite-provisioning-tutorial.md)
* Once you configure Google Cloud (G Suite) Connector, you can enforce Session Control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)

## Adding Google Cloud (G Suite) Connector from the gallery
@@ -237,7 +237,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
The objective of this section is to [create a user in Google Cloud (G Suite) Connector](https://support.google.com/a/answer/33310?hl=en) called B.Simon. After the user has manually been created in Google Cloud (G Suite) Connector, the user will now be able to sign in using their Microsoft 365 login credentials.
-Google Cloud (G Suite) Connector also supports automatic user provisioning. To configure automatic user provisioning, you must first [configure Google Cloud (G Suite) Connector for automatic user provisioning](./google-apps-provisioning-tutorial.md).
+Google Cloud (G Suite) Connector also supports automatic user provisioning. To configure automatic user provisioning, you must first [configure Google Cloud (G Suite) Connector for automatic user provisioning](g-suite-provisioning-tutorial.md).
> [!NOTE] > Make sure that your user already exists in Google Cloud (G Suite) Connector if provisioning in Azure AD has not been turned on before testing Single Sign-on.
@@ -259,7 +259,7 @@ When you click the Google Cloud (G Suite) Connector tile in the Access Panel, yo
- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
-- [Configure User Provisioning](./google-apps-provisioning-tutorial.md)
+- [Configure User Provisioning](g-suite-provisioning-tutorial.md)
- [Try Google Cloud (G Suite) Connector with Azure AD](https://aad.portal.azure.com/)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/tutorial-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tutorial-list.md
@@ -40,7 +40,7 @@ To find more tutorials, use the table of contents on the left.
| :--- | :--- | :--- |
| ![logo-Amazon Web Services (AWS) Console](./media/tutorial-list/active-directory-saas-amazon-web-service-tutorial.png)| [Amazon Web Services (AWS) Console](amazon-web-service-tutorial.md)| [Amazon Web Services (AWS) Console - Role Provisioning](amazon-web-service-tutorial.md#configure-azure-ad-sso) |
| ![logo-Alibaba Cloud Service (Role based SSO)](./media/tutorial-list/active-directory-saas-alibaba-tutorial.png)| [Alibaba Cloud Service (Role based SSO)](alibaba-cloud-service-role-based-sso-tutorial.md)| |
-| ![logo-Google Cloud Platform](./media/tutorial-list/active-directory-saas-google-apps-tutorial.png)| [Google Cloud Platform](google-apps-tutorial.md)| [Google Cloud Platform - User Provisioning](google-apps-provisioning-tutorial.md) |
+| ![logo-Google Cloud Platform](./media/tutorial-list/active-directory-saas-google-apps-tutorial.png)| [Google Cloud Platform](google-apps-tutorial.md)| [Google Cloud Platform - User Provisioning](g-suite-provisioning-tutorial.md) |
| ![logo-Salesforce](./media/tutorial-list/active-directory-saas-salesforce-tutorial.png)| [Salesforce](salesforce-tutorial.md)| [Salesforce - User Provisioning](salesforce-provisioning-tutorial.md) |
| ![logo-SAP Cloud Identity Platform](./media/tutorial-list/active-directory-saas-sapboc-tutorial.png)| [SAP Cloud Identity Platform](saphana-tutorial.md)|[SAP Cloud Identity Platform - Provisioning](./sap-cloud-platform-identity-authentication-provisioning-tutorial.md) |
advisor https://docs.microsoft.com/en-us/azure/advisor/advisor-performance-recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/advisor-performance-recommendations.md
@@ -175,7 +175,7 @@ Learn more about [Immersive reader SDK](../cognitive-services/immersive-reader/i
Advisor detects that you have a host pool that has depth first set as the load balancing algorithm, and that host pool's max session limit is greater than or equal to 99999. Depth first load balancing uses the max session limit to determine the maximum number of users that can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions will be directed to the same session host, and this will cause performance and reliability issues. Therefore, when setting a host pool to have depth first load balancing, you must set an appropriate max session limit according to the configuration of your deployment and capacity of your VMs.
-To learn more about load balancing in Windows Virtual Desktop, see [Configure the Windows Virtual Desktop load-balancing method](/virtual-desktop/troubleshoot-set-up-overview.md).
+To learn more about load balancing in Windows Virtual Desktop, see [Configure the Windows Virtual Desktop load-balancing method](/azure/virtual-desktop/troubleshoot-set-up-overview).
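As a sketch of the remediation, both the load-balancing algorithm and the max session limit can be set on the host pool through its ARM API; the subscription, resource group, host pool name, session limit, and api-version below are placeholders, not prescribed values:

```rest
PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DesktopVirtualization/hostPools/<host-pool-name>?api-version=2019-12-10-preview
Content-type: application/json

{
  "properties": {
    "loadBalancerType": "DepthFirst",
    "maxSessionLimit": 16
  }
}
```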
## How to access performance recommendations in Advisor
app-service https://docs.microsoft.com/en-us/azure/app-service/configure-language-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
@@ -21,7 +21,7 @@ This guide provides key concepts and instructions for Java developers using App
## Deploying your app
-You can use [Azure Web App Plugin for Maven](/java/api/overview/azure/maven/azure-webapp-maven-plugin/readme) to deploy your .war or .jar files. Deployment with popular IDEs is also supported with the [Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/) or [Azure Toolkit for Eclipse](/azure/developer/java/toolkit-for-eclipse).
+You can use [Azure Web App Plugin for Maven](https://github.com/microsoft/azure-maven-plugins/blob/develop/azure-webapp-maven-plugin/README.md) to deploy your .war or .jar files. Deployment with popular IDEs is also supported with the [Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/) or [Azure Toolkit for Eclipse](/azure/developer/java/toolkit-for-eclipse).
Otherwise, your deployment method will depend on your archive type:
app-service https://docs.microsoft.com/en-us/azure/app-service/quickstart-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-java.md
@@ -214,7 +214,7 @@ Property | Required | Description | Version
`<subscriptionId>` | false | Specify the subscription id. | 0.1.0+
`<resourceGroup>` | true | Azure Resource Group for your Web App. | 0.1.0+
`<appName>` | true | The name of your Web App. | 0.1.0+
-`<region>` | true | Specifies the region where your Web App will be hosted; the default value is **westeurope**. All valid regions at [Supported Regions](/java/api/overview/azure/maven/azure-webapp-maven-plugin/readme) section. | 0.1.0+
+`<region>` | true | Specifies the region where your Web App will be hosted; the default value is **westeurope**. All valid regions are listed in the [Supported Regions](https://github.com/microsoft/azure-maven-plugins/blob/develop/azure-webapp-maven-plugin/README.md) section. | 0.1.0+
`<pricingTier>` | false | The pricing tier for your Web App. The default value is **P1V2** for production workloads, while **B2** is the recommended minimum for Java dev/test. [Learn more](https://azure.microsoft.com/pricing/details/app-service/linux/)| 0.1.0+
`<runtime>` | true | The runtime environment configuration; see the details [here](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Web-App:-Configuration-Details). | 0.1.0+
`<deployment>` | true | The deployment configuration; see the details [here](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Web-App:-Configuration-Details). | 0.1.0+
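A minimal sketch of how these properties sit in the plugin configuration of a `pom.xml`; the plugin version and all values shown are illustrative placeholders, not prescribed settings:

```xml
<plugin>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-webapp-maven-plugin</artifactId>
  <version>1.12.0</version>
  <configuration>
    <!-- Target resource group and app name (placeholders) -->
    <resourceGroup>my-resource-group</resourceGroup>
    <appName>my-web-app</appName>
    <region>westeurope</region>
    <pricingTier>P1V2</pricingTier>
    <!-- Runtime values follow the plugin's configuration details wiki -->
    <runtime>
      <os>Linux</os>
      <javaVersion>Java 8</javaVersion>
      <webContainer>Tomcat 8.5</webContainer>
    </runtime>
    <!-- Deploy the .war produced by the build -->
    <deployment>
      <resources>
        <resource>
          <directory>${project.basedir}/target</directory>
          <includes>
            <include>*.war</include>
          </includes>
        </resource>
      </resources>
    </deployment>
  </configuration>
</plugin>
```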
azure-government https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-impact-level-5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
@@ -360,7 +360,7 @@ The Container Instances Dedicated SKU provides an [isolated and dedicated comput
Azure Kubernetes Service (AKS) supports Impact Level 5 workloads in Azure Government with these configurations:

-- Configure encryption at rest of content in AKS by [using customer-managed keys in Azure Key Vault](https://ddocs.microsoft.com/azure/aks/azure-disk-customer-managed-keys).
+- Configure encryption at rest of content in AKS by [using customer-managed keys in Azure Key Vault](/azure/aks/azure-disk-customer-managed-keys).
- For workloads that require isolation from other customer workloads, you can use [isolated virtual machines](../aks/concepts-security.md#compute-isolation) as the agent nodes in an AKS cluster.

| **Service** | **US Gov VA** | **US Gov TX** | **US Gov AZ** | **US DoD East** | **US DoD Central** |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/live-stream https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/live-stream.md
@@ -32,6 +32,7 @@ Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
* [ASP.NET Core](./asp-net-core.md) - Live Metrics is enabled by default.
* [.NET/.NET Core Console/Worker](./worker-service.md) - Live Metrics is enabled by default.
* [.NET Applications - Enable using code](#enable-livemetrics-using-code-for-any-net-application).
+ * [Java](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent) - Live Metrics is enabled by default.
* [Node.js](./nodejs.md#live-metrics)

2. In the [Azure portal](https://portal.azure.com), open the Application Insights resource for your app, and then open Live Stream.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-connections.md
@@ -24,6 +24,4 @@ The following ITSM products/services are supported. Select the product to view d
## Next steps
-* [ITSM Connector Overview](itsmc-overview.md)
-* [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts)
* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-definition.md
@@ -28,10 +28,9 @@ Before you can create a connection, you need to add ITSMC.
![Screenshot that shows the Create button in Azure Marketplace.](media/itsmc-overview/add-itsmc-solution.png)
-3. In the **OMS Workspace** section, select the Azure Log Analytics workspace where you want to install ITSMC.
+3. In the **LA Workspace** section, select the Azure Log Analytics workspace where you want to install ITSMC.
>[!NOTE] >
- > * As part of the ongoing transition from Microsoft Operations Management Suite (OMS) to Azure Monitor, OMS Workspaces are now referred to as *Log Analytics workspaces*.
> * ITSMC can be installed only in Log Analytics workspaces in the following regions: East US, West US 2, South Central US, West Central US, US Gov Arizona, US Gov Virginia, Canada Central, West Europe, UK South, Southeast Asia, Japan East, Central India, and Australia Southeast.

4. In the **Log Analytics workspace** section, select the resource group where you want to create the ITSMC resource:
@@ -69,7 +68,12 @@ After you've prepped your ITSM tools, complete these steps to create a connectio
This page displays the list of connections.

1. Select **Add Connection**.
-4. Specify the connection settings as described in [Configuring the ITSMC connection with your ITSM products/services](./itsmc-connections.md).
+1. Specify the connection settings as described for your ITSM product or service:
+
+- [ServiceNow](./itsmc-connections-servicenow.md)
+- [System Center Service Manager](./itsmc-connections-scsm.md)
+- [Cherwell](./itsmc-connections-cherwell.md)
+- [Provance](./itsmc-connections-provance.md)
> [!NOTE] >
@@ -79,13 +83,7 @@ After you've prepped your ITSM tools, complete these steps to create a connectio
## Use ITSMC
- You can use ITSMC to create work items from Azure alerts, Log Analytics alerts, and Log Analytics log records.
-
-## Template definitions
-
- There are work item types that can use templates that are defined by the ITSM tool.
- By using templates, you can define fields that will be automatically populated according to fixed values that are defined as part of the action group. You define templates in the ITSM tool.
- You can define in which template you would like to use as a part of the definition of the action group.
+ You can use ITSMC to create work items in the ITSM tool from Azure Monitor alerts.
## Create ITSM work items from Azure alerts
@@ -96,7 +94,13 @@ Action groups provide a modular and reusable way to trigger actions for your Azu
> [!NOTE] > After you create the ITSM connection, you need to wait for 30 minutes for the sync process to finish.
-Use the following procedure to create work items:
+### Template definitions
+
+ There are work item types that can use templates that are defined by the ITSM tool.
+ By using templates, you can define fields that will be automatically populated according to fixed values that are defined as part of the action group. You define templates in the ITSM tool.
+ You can define which template you would like to use as a part of the definition of the action group.
+
+Use the following procedure to create action groups:
1. In the Azure portal, select **Alerts**.
2. In the menu at the top of the screen, select **Manage actions**:
@@ -117,7 +121,7 @@ Use the following procedure to create work items:
8. If you want to fill out-of-the-box fields with fixed values, select **Use Custom Template**. Otherwise, choose an existing [template](#template-definitions) in the **Template** list and enter the fixed values in the template fields.
-9. If you select **Create individual work items for each Configuration Item**, every configuration item will have its own work item. There will be one work item per configuration item. It will be updated according to the alerts that will be created.
+9. If you select **Create individual work items for each Configuration Item**, every configuration item will have its own work item; that is, there will be one work item per configuration item.
* If you select **Incident** or **Alert** in the work item dropdown: if you clear the **Create individual work items for each Configuration Item** check box, every alert will create a new work item. There can be more than one alert per configuration item.
@@ -256,6 +260,4 @@ ServiceDeskWorkItemType_s="ChangeRequest"
## Next steps
-* [ITSM Connector Overview](./itsmc-overview.md)
-* [Add ITSM products/services to IT Service Management Connector](./itsmc-connections.md)
* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-overview.md
@@ -32,20 +32,18 @@ ITSMC supports connections with the following ITSM tools:
With ITSMC, you can:

-- Create work items in your ITSM tool, based on your Azure alerts (metric alerts, activity log alerts, and Log Analytics alerts).
+- Create work items in your ITSM tool, based on your Azure alerts (Metric Alerts, Activity Log Alerts, and Log Analytics alerts).
- Optionally, you can sync your incident and change request data from your ITSM tool to an Azure Log Analytics workspace.

For information about legal terms and the privacy policy, see [Microsoft Privacy Statement](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9).

You can start using ITSMC by completing the following steps:
-1. [Connect ITSM products/services with IT Service Management Connector.](./itsmc-connections.md)
-1. [Add ITSMC.](./itsmc-definition.md#add-it-service-management-connector)
-1. [Create an ITSM connection.](./itsmc-definition.md#create-an-itsm-connection)
-1. [Use the connection.](./itsmc-definition.md#use-itsmc)
+1. [Set up your ITSM environment to accept alerts from Azure.](./itsmc-connections.md)
+1. [Configure the Azure ITSM solution.](./itsmc-definition.md#add-it-service-management-connector)
+1. [Configure the Azure ITSM connector for your ITSM environment.](./itsmc-definition.md#create-an-itsm-connection)
+1. [Configure an action group to use the ITSM connector.](./itsmc-definition.md#use-itsmc)
## Next steps
-* [Add ITSM products/services to IT Service Management Connector](./itsmc-connections.md)
-* [Add ITSM Connector](./itsmc-definition.md)
* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-resync-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-resync-servicenow.md
@@ -36,7 +36,7 @@ If you're using Service Map, you can view the service desk items created in ITSM
## Troubleshoot ITSM connections

-- If a connection fails from the connected source's UI and you get an **Error in saving connection** message, take the following steps:
+- If the connector fails to connect to the ITSM system and you get an **Error in saving connection** message, take the following steps:
- For ServiceNow, Cherwell, and Provance connections:
  - Ensure that you correctly entered the user name, password, client ID, and client secret for each of the connections.
  - Ensure that you have sufficient privileges in the corresponding ITSM product to make the connection.
@@ -54,7 +54,7 @@ If you're using Service Map, you can view the service desk items created in ITSM
- If you get an **Object reference not set to instance of an object** error when you run the [script](itsmc-service-manager-script.md), ensure that you entered valid values in the **User Configuration** section.
- If you fail to create the service bus relay namespace, ensure that the required resource provider is registered in the subscription. If it's not registered, manually create the service bus relay namespace from the Azure portal. You can also create it when you [create the hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection) in the Azure portal.
-### How to manually fix ServiceNow sync problems
+### How to manually fix sync problems
Azure Monitor can connect to third-party IT Service Management (ITSM) providers. ServiceNow is one of those providers.
@@ -89,10 +89,4 @@ Use the following synchronization process to reactivate the connection and refre
![New connection](media/itsmc-resync-servicenow/save-8bit.png)
-f. Review the notifications to see if the process finished with success
-
-## Next Steps
-
-* [ITSM Definition](./itsmc-definition.md)
-* [ITSM Connector Overview](./itsmc-overview.md)
-* [Add ITSM products/services to IT Service Management Connector](./itsmc-connections.md)
+f. Review the notifications to see if the process started.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/logs-data-export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/logs-data-export.md
@@ -37,7 +37,7 @@ Log Analytics workspace data export continuously exports data from a Log Analyti
- Your Log Analytics workspace can be in any region except for the following:
  - Switzerland North
  - Switzerland West
- - Azure government regions
+ - Azure Government regions
- The destination storage account or event hub must be in the same region as the Log Analytics workspace.
- Names of tables to be exported can be no longer than 60 characters for a storage account and no more than 47 characters for an event hub. Tables with longer names will not be exported.
@@ -212,6 +212,186 @@ Following is a sample body for the REST request for an event hub where event hub
}
}
```
+
+# [Template](#tab/json)
+
+Use the following template to create a data export rule to a storage account.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaceName": {
+ "defaultValue": "workspace-name",
+ "type": "String"
+ },
+ "workspaceLocation": {
+ "defaultValue": "workspace-region",
+ "type": "string"
+ },
+ "storageAccountRuleName": {
+ "defaultValue": "storage-account-rule-name",
+ "type": "string"
+ },
+ "storageAccountResourceId": {
+ "defaultValue": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "microsoft.operationalinsights/workspaces",
+ "apiVersion": "2020-08-01",
+ "name": "[parameters('workspaceName')]",
+ "location": "[parameters('workspaceLocation')]",
+ "resources": [
+ {
+ "type": "microsoft.operationalinsights/workspaces/dataexports",
+ "apiVersion": "2020-08-01",
+ "name": "[concat(parameters('workspaceName'), '/' , parameters('storageAccountRuleName'))]",
+ "dependsOn": [
+ "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspaceName'))]"
+ ],
+ "properties": {
+ "destination": {
+ "resourceId": "[parameters('storageAccountResourceId')]"
+ },
+ "tableNames": [
+ "Heartbeat",
+ "InsightsMetrics",
+ "VMConnection",
+ "Usage"
+ ],
+ "enable": true
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+
+Use the following template to create a data export rule to an event hub. A separate event hub is created for each table.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaceName": {
+ "defaultValue": "workspace-name",
+ "type": "String"
+ },
+ "workspaceLocation": {
+ "defaultValue": "workspace-region",
+ "type": "string"
+ },
+ "eventhubRuleName": {
+ "defaultValue": "event-hub-rule-name",
+ "type": "string"
+ },
+ "namespacesResourceId": {
+ "defaultValue": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/microsoft.eventhub/namespaces/namespaces-name",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "microsoft.operationalinsights/workspaces",
+ "apiVersion": "2020-08-01",
+ "name": "[parameters('workspaceName')]",
+ "location": "[parameters('workspaceLocation')]",
+ "resources": [
+ {
+ "type": "microsoft.operationalinsights/workspaces/dataexports",
+ "apiVersion": "2020-08-01",
+ "name": "[concat(parameters('workspaceName'), '/', parameters('eventhubRuleName'))]",
+ "dependsOn": [
+ "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspaceName'))]"
+ ],
+ "properties": {
+ "destination": {
+ "resourceId": "[parameters('namespacesResourceId')]"
+ },
+ "tableNames": [
+ "Usage",
+ "Heartbeat"
+ ],
+ "enable": true
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+
+Use the following template to create a data export rule to a specific event hub. All tables are exported to the provided event hub name.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaceName": {
+ "defaultValue": "workspace-name",
+ "type": "String"
+ },
+ "workspaceLocation": {
+ "defaultValue": "workspace-region",
+ "type": "string"
+ },
+ "eventhubRuleName": {
+ "defaultValue": "event-hub-rule-name",
+ "type": "string"
+ },
+ "namespacesResourceId": {
+ "defaultValue": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/microsoft.eventhub/namespaces/namespaces-name",
+ "type": "String"
+ },
+ "eventhubName": {
+ "defaultValue": "event-hub-name",
+ "type": "string"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "microsoft.operationalinsights/workspaces",
+ "apiVersion": "2020-08-01",
+ "name": "[parameters('workspaceName')]",
+ "location": "[parameters('workspaceLocation')]",
+ "resources": [
+ {
+ "type": "microsoft.operationalinsights/workspaces/dataexports",
+ "apiVersion": "2020-08-01",
+ "name": "[concat(parameters('workspaceName'), '/', parameters('eventhubRuleName'))]",
+ "dependsOn": [
+ "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspaceName'))]"
+ ],
+ "properties": {
+ "destination": {
+ "resourceId": "[parameters('namespacesResourceId')]",
+ "metaData": {
+ "eventHubName": "[parameters('eventhubName')]"
+ }
+ },
+ "tableNames": [
+ "Usage",
+ "Heartbeat"
+ ],
+ "enable": true
+ }
+ }
+ ]
+ }
+ ]
+}
+```
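Any of the three templates above can be deployed as a standard template deployment; a minimal sketch, assuming the template was saved locally as `data-export-rule.json` (the file and resource group names are placeholders):

```azurecli
az deployment group create --resource-group <resource-group-name> --template-file data-export-rule.json
```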
+
---
## View data export rule configuration
@@ -239,6 +419,11 @@ Use the following request to view the configuration of a data export rule using
```rest
GET https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.operationalInsights/workspaces/<workspace-name>/dataexports/<data-export-name>?api-version=2020-08-01
```
+
+# [Template](#tab/json)
+
+N/A
+
---
## Disable an export rule
@@ -261,7 +446,7 @@ az monitor log-analytics workspace data-export update --resource-group resourceG
# [REST](#tab/rest)
-Use the following request to disable a data export rule using the REST API. The request should use bearer token authorization.
+Export rules can be disabled to stop the export when you don't need to retain data for a certain period, such as during testing. Use the following request to disable a data export rule using the REST API. The request should use bearer token authorization.
```rest
PUT https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.operationalInsights/workspaces/<workspace-name>/dataexports/<data-export-name>?api-version=2020-08-01
@@ -281,6 +466,11 @@ Content-type: application/json
} } ```+
+# [Template](#tab/json)
+
+Export rules can be disabled to stop the export when you don't need to retain data for a certain period, such as during testing. Set ```"enable": false``` in the template to disable a data export.
+
---
## Delete an export rule
@@ -308,6 +498,11 @@ Use the following request to delete a data export rule using the REST API. The r
```rest
DELETE https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.operationalInsights/workspaces/<workspace-name>/dataexports/<data-export-name>?api-version=2020-08-01
```
+
+# [Template](#tab/json)
+
+N/A
+
---
## View all data export rules in a workspace
@@ -335,6 +530,11 @@ Use the following request to view all data export rules in a workspace using the
```rest
GET https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.operationalInsights/workspaces/<workspace-name>/dataexports?api-version=2020-08-01
```
+
+# [Template](#tab/json)
+
+N/A
+
---
## Unsupported tables
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azacsnap-tips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-tips.md
@@ -136,7 +136,7 @@ A storage volume snapshot can be restored to a new volume (`-c restore --restore
A snapshot can be copied back to the SAP HANA data area, but SAP HANA must not be running when a copy is made (`cp /hana/data/H80/mnt00001/.snapshot/hana_hourly.2020-06-17T113043.1586971Z/*`).
-For Azure Large Instance, you could contact the Microsoft operations team by opening a service request to restore a desired snapshot from the existing available snapshots. You can open a service request from Azure portal: <https://portal.azure.com.>
+For Azure Large Instance, you could contact the Microsoft operations team by opening a service request to restore a desired snapshot from the existing available snapshots. You can open a service request from Azure portal: <https://portal.azure.com>
If you decide to perform the disaster recovery failover, the `azacsnap -c restore --restore revertvolume` command at the DR site will automatically make available the most recent (`/hana/data` and `/hana/logbackups`) volume snapshots to allow for an SAP HANA recovery. Use this command with caution as it breaks replication between production and DR sites.
@@ -253,7 +253,7 @@ A 'boot' snapshot can be recovered as follows:
1. The customer will need to shut down the server.
1. After the server is shut down, the customer will need to open a service request that contains the Machine ID and Snapshot to restore.
- > Customers can open a service request from the Azure portal: <https://portal.azure.com.>
+ > Customers can open a service request from the Azure portal: <https://portal.azure.com>
1. Microsoft will restore the Operating System LUN using the specified Machine ID and Snapshot, and then boot the server.
1. The customer will then need to confirm that the server is booted and healthy.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-support-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
@@ -1907,9 +1907,9 @@ Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription |
> | ------------- | ----------- | ---------- |
-> | workspaces | Yes | Yes |
-> | workspaces / bigdatapools | Yes | Yes |
-> | workspaces / sqlpools | Yes | Yes |
+> | workspaces | No | No |
+> | workspaces / bigdatapools | No | No |
+> | workspaces / sqlpools | No | No |
## Microsoft.TimeSeriesInsights
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-script-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-script-template.md
@@ -5,7 +5,7 @@ services: azure-resource-manager
author: mumian ms.service: azure-resource-manager ms.topic: conceptual
-ms.date: 12/22/2020
+ms.date: 12/28/2020
ms.author: jgao ---
@@ -34,7 +34,7 @@ The deployment script resource is only available in the regions where Azure Cont
> [!IMPORTANT] > The deploymentScripts resource API version 2020-10-01 supports [OnBehalfofTokens(OBO)](../../active-directory/develop/v2-oauth2-on-behalf-of-flow.md). By using OBO, the deployment script service uses the deployment principal's token to create the underlying resources for running deployment scripts, which include Azure Container instance, Azure storage account, and role assignments for the managed identity. In older API version, the managed identity is used to create these resources.
-> Retry logic for Azure sign in is now built in to the wrapper script. If you grant permissions in the same template where you run deployment scripts. The deployment script service retries sign in for 10 minutes with 10-second interval until the managed identity role assignment is replicated.
+> Retry logic for Azure sign-in is now built in to the wrapper script. If you grant permissions in the same template where you run deployment scripts, the deployment script service retries sign-in for 10 minutes at 10-second intervals until the managed identity role assignment is replicated.
## Configure the minimum permissions
@@ -66,13 +66,13 @@ To configure the least-privilege permissions, you need:
} ```
- If the Azure Storage and the Azure Container Instance resource providers haven't been registered, you also need to add **Microsoft.Storage/register/action** and **Microsoft.ContainerInstance/register/action**.
+ If the Azure Storage and the Azure Container Instance resource providers haven't been registered, you also need to add `Microsoft.Storage/register/action` and `Microsoft.ContainerInstance/register/action`.
- If a managed identity is used, the deployment principal needs the **Managed Identity Operator** role (a built-in role) assigned to the managed identity resource.

## Sample templates
-The following json is an example. The latest template schema can be found [here](/azure/templates/microsoft.resources/deploymentscripts).
+The following JSON is an example. For more information, see the latest [template schema](/azure/templates/microsoft.resources/deploymentscripts).
```json
{
@@ -88,7 +88,7 @@ The following json is an example. The latest template schema can be found [here
} }, "properties": {
- "forceUpdateTag": 1,
+ "forceUpdateTag": "1",
"containerSettings": { "containerGroupName": "mycustomaci" },
@@ -124,42 +124,45 @@ The following json is an example. The latest template schema can be found [here
```

> [!NOTE]
-> The example is for demonstration purpose. **scriptContent** and **primaryScriptUri** can't coexist in a template.
+> The example is for demonstration purposes. The properties `scriptContent` and `primaryScriptUri` can't coexist in a template.
Property value details:

-- **Identity**: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script. For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. Currently, only user-assigned managed identity is supported.
-- **kind**: Specify the type of script. Currently, Azure PowerShell and Azure CLI scripts are supported. The values are **AzurePowerShell** and **AzureCLI**.
-- **forceUpdateTag**: Changing this value between template deployments forces the deployment script to re-execute. If you use the newGuid() or the utcNow() function, both functions can only be used in the default value for a parameter. To learn more, see [Run script more than once](#run-script-more-than-once).
-- **containerSettings**: Specify the settings to customize Azure Container Instance. **containerGroupName** is for specifying the container group name. If not specified, the group name is automatically generated.
-- **storageAccountSettings**: Specify the settings to use an existing storage account. If not specified, a storage account is automatically created. See [Use an existing storage account](#use-existing-storage-account).
-- **azPowerShellVersion**/**azCliVersion**: Specify the module version to be used. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list).
+- `identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script. For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. Currently, only user-assigned managed identity is supported.
+- `kind`: Specify the type of script. Currently, Azure PowerShell and Azure CLI scripts are supported. The values are **AzurePowerShell** and **AzureCLI**.
+- `forceUpdateTag`: Changing this value between template deployments forces the deployment script to re-execute. If you use the `newGuid()` or the `utcNow()` functions, both functions can only be used in the default value for a parameter. To learn more, see [Run script more than once](#run-script-more-than-once).
+- `containerSettings`: Specify the settings to customize Azure Container Instance. `containerGroupName` is for specifying the container group name. If not specified, the group name is automatically generated.
+- `storageAccountSettings`: Specify the settings to use an existing storage account. If not specified, a storage account is automatically created. See [Use an existing storage account](#use-existing-storage-account).
+- `azPowerShellVersion`/`azCliVersion`: Specify the module version to be used. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list).
>[!IMPORTANT]
- > Deployment script uses the available CLI images from Microsoft Container Registry(MCR) . It takes about one month to certify a CLI image for deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli?view=azure-cli-latest&preserve-view=true). If an un-supported version is used, the error message list the supported versions.
+ > Deployment script uses the available CLI images from Microsoft Container Registry (MCR). It takes about one month to certify a CLI image for deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli?view=azure-cli-latest&preserve-view=true). If an unsupported version is used, the error message lists the supported versions.
-- **arguments**: Specify the parameter values. The values are separated by spaces.
+- `arguments`: Specify the parameter values. The values are separated by spaces.
Deployment Scripts splits the arguments into an array of strings by invoking the [CommandLineToArgvW](/windows/win32/api/shellapi/nf-shellapi-commandlinetoargvw) system call. This step is necessary because the arguments are passed as a [command property](/rest/api/container-instances/containergroups/createorupdate#containerexec) to Azure Container Instance, and the command property is an array of strings.
- If the arguments contain escaped characters, use [JsonEscaper](https://www.jsonescaper.com/) to double escaped the characters. Paste your original escaped string into the tool, and then select **Escape**. The tool outputs a double escaped string. For example, in the previous sample template, The argument is **-name \\"John Dole\\"**. The escaped string is **-name \\\\\\"John dole\\\\\\"**.
+ If the arguments contain escaped characters, use [JsonEscaper](https://www.jsonescaper.com/) to double-escape the characters. Paste your original escaped string into the tool, and then select **Escape**. The tool outputs a double-escaped string. For example, in the previous sample template, the argument is `-name \"John Dole\"`. The escaped string is `-name \\\"John Dole\\\"`.
- To pass an ARM template parameter of type object as an argument, convert the object to a string by using the [string()](./template-functions-string.md#string) function, and then use the [replace()](./template-functions-string.md#replace) function to replace any **\\"** into **\\\\\\"**. For example:
+ To pass an ARM template parameter of type object as an argument, convert the object to a string by using the [string()](./template-functions-string.md#string) function, and then use the [replace()](./template-functions-string.md#replace) function to replace any `\"` with `\\\"`. For example:
```json
replace(string(parameters('tables')), '\"', '\\\"')
```
- To see a sample template, select [here](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-jsonEscape.json).
+ For more information, see the [sample template](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-jsonEscape.json).
-- **environmentVariables**: Specify the environment variables to pass over to the script. For more information, see [Develop deployment scripts](#develop-deployment-scripts).
-- **scriptContent**: Specify the script content. To run an external script, use `primaryScriptUri` instead. For examples, see [Use inline script](#use-inline-scripts) and [Use external script](#use-external-scripts).
-- **primaryScriptUri**: Specify a publicly accessible Url to the primary deployment script with supported file extensions.
-- **supportingScriptUris**: Specify an array of publicly accessible Urls to supporting files that are called in either `ScriptContent` or `PrimaryScriptUri`.
-- **timeout**: Specify the maximum allowed script execution time specified in the [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). Default value is **P1D**.
-- **cleanupPreference**. Specify the preference of cleaning up deployment resources when the script execution gets in a terminal state. Default setting is **Always**, which means deleting the resources despite the terminal state (Succeeded, Failed, Canceled). To learn more, see [Clean up deployment script resources](#clean-up-deployment-script-resources).
-- **retentionInterval**: Specify the interval for which the service retains the deployment script resources after the deployment script execution reaches a terminal state. The deployment script resources will be deleted when this duration expires. Duration is based on the [ISO 8601 pattern](https://en.wikipedia.org/wiki/ISO_8601). The retention interval is between 1 and 26 hours (PT26H). This property is used when cleanupPreference is set to *OnExpiration*. The *OnExpiration* property isn't enabled currently. To learn more, see [Clean up deployment script resources](#clean-up-deployment-script-resources).
+- `environmentVariables`: Specify the environment variables to pass over to the script. For more information, see [Develop deployment scripts](#develop-deployment-scripts).
+- `scriptContent`: Specify the script content. To run an external script, use `primaryScriptUri` instead. For examples, see [Use inline script](#use-inline-scripts) and [Use external script](#use-external-scripts).
+ > [!NOTE]
+ > The Azure portal can't parse a deployment script with multiple lines. To deploy a template with a deployment script from the Azure portal, you can either chain the PowerShell commands into one line by using semicolons (see the sketch after this list), or use the `primaryScriptUri` property with an external script file.
+
+- `primaryScriptUri`: Specify a publicly accessible URL to the primary deployment script with supported file extensions.
+- `supportingScriptUris`: Specify an array of publicly accessible URLs to supporting files that are called in either `scriptContent` or `primaryScriptUri`.
+- `timeout`: Specify the maximum allowed script execution time, in the [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). Default value is **P1D**.
+- `cleanupPreference`: Specify the preference for cleaning up deployment resources when the script execution reaches a terminal state. The default setting is **Always**, which means the resources are deleted regardless of the terminal state (Succeeded, Failed, Canceled). To learn more, see [Clean up deployment script resources](#clean-up-deployment-script-resources).
+- `retentionInterval`: Specify the interval for which the service retains the deployment script resources after the deployment script execution reaches a terminal state. The deployment script resources are deleted when this duration expires. Duration is based on the [ISO 8601 pattern](https://en.wikipedia.org/wiki/ISO_8601). The retention interval is between 1 and 26 hours (PT26H). This property is used when `cleanupPreference` is set to **OnExpiration**. The **OnExpiration** property isn't currently enabled. To learn more, see [Clean up deployment script resources](#clean-up-deployment-script-resources).
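Putting several of these properties together, here's a trimmed, illustrative sketch of a `deploymentScripts` resource (names and values are examples, and depending on your scenario an `identity` may also be required). The inline script chains PowerShell commands with semicolons onto one line, as the earlier note describes:

```json
{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2020-10-01",
  "name": "runInlineScript",
  "location": "[resourceGroup().location]",
  "kind": "AzurePowerShell",
  "properties": {
    "azPowerShellVersion": "3.0",
    "scriptContent": "$output = 'Hello world'; Write-Output $output",
    "timeout": "PT30M",
    "cleanupPreference": "OnSuccess",
    "retentionInterval": "P1D"
  }
}
```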
### Additional samples
@@ -174,9 +177,9 @@ The following template has one resource defined with the `Microsoft.Resources/de
:::code language="json" source="~/resourcemanager-templates/deployment-script/deploymentscript-helloworld.json" range="1-44" highlight="24-30":::

> [!NOTE]
-> Because the inline deployment scripts are enclosed in double quotes, the strings inside the deployment scripts need to be escaped by using a **&#92;** or enclosed in single quotes. You can also consider using string substitution as it is shown in the previous JSON sample.
+> Because the inline deployment scripts are enclosed in double quotes, the strings inside the deployment scripts need to be escaped by using a backslash (**&#92;**) or enclosed in single quotes. You can also consider using string substitution, as shown in the previous JSON sample.
-The script takes one parameter, and output the parameter value. **DeploymentScriptOutputs** is used for storing outputs. In the outputs section, the **value** line shows how to access the stored values. `Write-Output` is used for debugging purpose. To learn how to access the output file, see [Monitor and troubleshoot deployment scripts](#monitor-and-troubleshoot-deployment-scripts). For the property descriptions, see [Sample templates](#sample-templates).
+The script takes one parameter and outputs the parameter value. `DeploymentScriptOutputs` is used for storing outputs. In the outputs section, the `value` line shows how to access the stored values. `Write-Output` is used for debugging purposes. To learn how to access the output file, see [Monitor and troubleshoot deployment scripts](#monitor-and-troubleshoot-deployment-scripts). For the property descriptions, see [Sample templates](#sample-templates).
To run the script, select **Try it** to open the Cloud Shell, and then paste the following code into the shell pane.
@@ -197,17 +200,17 @@ The output looks like:
## Use external scripts
-In addition to inline scripts, you can also use external script files. Only primary PowerShell scripts with the **ps1** file extension are supported. For CLI scripts, the primary scripts can have any extensions (or without an extension), as long as the scripts are valid bash scripts. To use external script files, replace `scriptContent` with `primaryScriptUri`. For example:
+In addition to inline scripts, you can also use external script files. Only primary PowerShell scripts with the _ps1_ file extension are supported. For CLI scripts, the primary scripts can have any extension (or no extension), as long as the scripts are valid bash scripts. To use external script files, replace `scriptContent` with `primaryScriptUri`. For example:
```json
-"primaryScriptURI": "https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-helloworld.ps1",
+"primaryScriptUri": "https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-helloworld.ps1",
```
-To see an example, select [here](https://github.com/Azure/azure-docs-json-samples/blob/master/deployment-script/deploymentscript-helloworld-primaryscripturi.json).
+For more information, see the [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/deployment-script/deploymentscript-helloworld-primaryscripturi.json).
-The external script files must be accessible. To secure your script files that are stored in Azure storage accounts, see [Deploy private ARM template with SAS token](./secure-template-with-sas-token.md).
+The external script files must be accessible. To secure your script files that are stored in Azure storage accounts, see [Deploy private ARM template with SAS token](./secure-template-with-sas-token.md).
-You're responsible for ensuring the integrity of the scripts that are referenced by deployment script, either **PrimaryScriptUri** or **SupportingScriptUris**. Reference only scripts that you trust.
+You're responsible for ensuring the integrity of the scripts that are referenced by deployment script, either `primaryScriptUri` or `supportingScriptUris`. Reference only scripts that you trust.
## Use supporting scripts
@@ -231,11 +234,11 @@ The supporting files are copied to `azscripts/azscriptinput` at the runtime. Use
## Work with outputs from PowerShell script
-The following template shows how to pass values between two deploymentScripts resources:
+The following template shows how to pass values between two `deploymentScripts` resources:
:::code language="json" source="~/resourcemanager-templates/deployment-script/deploymentscript-basic.json" range="1-68" highlight="30-31,50":::
-In the first resource, you define a variable called **$DeploymentScriptOutputs**, and use it to store the output values. To access the output value from another resource within the template, use:
+In the first resource, you define a variable called `$DeploymentScriptOutputs`, and use it to store the output values. To access the output value from another resource within the template, use:
```json
reference('<ResourceName>').output.text
```
@@ -243,9 +246,9 @@ reference('<ResourceName>').output.text
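For example, a minimal sketch of how the first script might populate that variable inside `scriptContent` (the value and the key name are illustrative):

```json
"scriptContent": "$output = 'Hello'; $DeploymentScriptOutputs = @{}; $DeploymentScriptOutputs['text'] = $output"
```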
## Work with outputs from CLI script
-Different from the PowerShell deployment script, CLI/bash support doesn't expose a common variable to store script outputs, instead, there's an environment variable called **AZ_SCRIPTS_OUTPUT_PATH** that stores the location where the script outputs file resides. If a deployment script is run from a Resource Manager template, this environment variable is set automatically for you by the Bash shell.
+Unlike the PowerShell deployment script, CLI/bash support doesn't expose a common variable to store script outputs. Instead, there's an environment variable called `AZ_SCRIPTS_OUTPUT_PATH` that stores the location where the script outputs file resides. If a deployment script is run from a Resource Manager template, this environment variable is set automatically for you by the Bash shell.
-Deployment script outputs must be saved in the AZ_SCRIPTS_OUTPUT_PATH location, and the outputs must be a valid JSON string object. The contents of the file must be saved as a key-value pair. For example, an array of strings is stored as { "MyResult": [ "foo", "bar"] }. Storing just the array results, for example [ "foo", "bar" ], is invalid.
+Deployment script outputs must be saved in the `AZ_SCRIPTS_OUTPUT_PATH` location, and the outputs must be a valid JSON string object. The contents of the file must be saved as a key-value pair. For example, an array of strings is stored as `{ "MyResult": [ "foo", "bar"] }`. Storing just the array results, for example `[ "foo", "bar" ]`, is invalid.
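For example, a minimal, illustrative `scriptContent` line for a CLI deployment script that writes a valid key-value JSON object to that path:

```json
"scriptContent": "echo '{\"MyResult\":[\"foo\",\"bar\"]}' > $AZ_SCRIPTS_OUTPUT_PATH"
```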
:::code language="json" source="~/resourcemanager-templates/deployment-script/deploymentscript-basic-cli.json" range="1-44" highlight="32":::
@@ -268,7 +271,7 @@ A storage account and a container instance are needed for script execution and t
| Standard_RAGZRS | StorageV2 |
| Standard_ZRS | StorageV2 |
- These combinations support file share. For more information, see [Create an Azure file share](../../storage/files/storage-how-to-create-file-share.md) and [Types of storage accounts](../../storage/common/storage-account-overview.md).
+ These combinations support file share. For more information, see [Create an Azure file share](../../storage/files/storage-how-to-create-file-share.md) and [Types of storage accounts](../../storage/common/storage-account-overview.md).
- Storage account firewall rules aren't supported yet. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md).
- Deployment principal must have permissions to manage the storage account, which includes reading, creating, and deleting file shares.
@@ -299,9 +302,9 @@ When an existing storage account is used, the script service creates a file shar
### Handle non-terminating errors
-You can control how PowerShell responds to non-terminating errors by using the **$ErrorActionPreference** variable in your deployment script. If the variable isn't set in your deployment script, the script service uses the default value **Continue**.
+You can control how PowerShell responds to non-terminating errors by using the `$ErrorActionPreference` variable in your deployment script. If the variable isn't set in your deployment script, the script service uses the default value **Continue**.
-The script service sets the resource provisioning state to **Failed** when the script encounters an error despite the setting of $ErrorActionPreference.
+The script service sets the resource provisioning state to **Failed** when the script encounters an error, regardless of the setting of `$ErrorActionPreference`.
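For example, a one-line sketch of an inline script that opts into terminating behavior for non-terminating errors (the message is illustrative):

```json
"scriptContent": "$ErrorActionPreference = 'Stop'; Write-Output 'Non-terminating errors now stop the script'"
```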
### Pass secured strings to deployment script
@@ -317,17 +320,17 @@ The script service creates a [storage account](../../storage/common/storage-acco
The user script, the execution results, and the stdout file are stored in the file shares of the storage account. There's a folder called `azscripts`. In the folder, there are two more folders for the input and the output files: `azscriptinput` and `azscriptoutput`.
-The output folder contains a **executionresult.json** and the script output file. You can see the script execution error message in **executionresult.json**. The output file is created only when the script is executed successfully. The input folder contains a system PowerShell script file and the user deployment script files. You can replace the user deployment script file with a revised one, and rerun the deployment script from the Azure container instance.
+The output folder contains an _executionresult.json_ file and the script output file. You can see the script execution error message in _executionresult.json_. The output file is created only when the script is executed successfully. The input folder contains a system PowerShell script file and the user deployment script files. You can replace the user deployment script file with a revised one, and rerun the deployment script from the Azure container instance.
### Use the Azure portal
-After you deploy a deployment script resource, the resource is listed under the resource group in the Azure portal. The following screenshot shows the Overview page of a deployment script resource:
+After you deploy a deployment script resource, the resource is listed under the resource group in the Azure portal. The following screenshot shows the **Overview** page of a deployment script resource:
![Resource Manager template deployment script portal overview](./media/deployment-script-template/resource-manager-deployment-script-portal.png)

The overview page displays some important information about the resource, such as **Provisioning state**, **Storage account**, **Container instance**, and **Logs**.
-From the left menu, you can view the deployment script content, the arguments passed to the script, and the output. You can also export a template for the deployment script including the deployment script.
+From the left menu, you can view the deployment script content, the arguments passed to the script, and the output. You can also export a template for the deployment script, including the deployment script content.
### Use PowerShell
@@ -338,7 +341,7 @@ Using Azure PowerShell, you can manage deployment scripts at subscription or res
- [Remove-AzDeploymentScript](/powershell/module/az.resources/remove-azdeploymentscript): Removes a deployment script and its associated resources.
- [Save-AzDeploymentScriptLog](/powershell/module/az.resources/save-azdeploymentscriptlog): Saves the log of a deployment script execution to disk.
-The Get-AzDeploymentScript output is similar to:
+The `Get-AzDeploymentScript` output is similar to:
```output
Name : runPowerShellInlineWithOutput
@@ -523,29 +526,29 @@ A storage account and a container instance are needed for script execution and t
The life cycle of these resources is controlled by the following properties in the template:

-- **cleanupPreference**: Clean up preference when the script execution gets in a terminal state. The supported values are:
+- `cleanupPreference`: Specify the cleanup preference when the script execution reaches a terminal state. The supported values are:
- - **Always**: Delete the automatically created resources once script execution gets in a terminal state. If an existing storage account is used, the script service deletes the file share created in the storage account. Because the deploymentScripts resource may still be present after the resources are cleaned up, the script service persists the script execution results, for example, stdout, outputs, return value, etc. before the resources are deleted.
+  - **Always**: Delete the automatically created resources once script execution reaches a terminal state. If an existing storage account is used, the script service deletes the file share created in the storage account. Because the `deploymentScripts` resource may still be present after the resources are cleaned up, the script service persists the script execution results, for example, stdout, outputs, and return value, before the resources are deleted.
- **OnSuccess**: Delete the automatically created resources only when the script execution is successful. If an existing storage account is used, the script service removes the file share only when the script execution is successful. You can still access the resources to find the debug information.
- - **OnExpiration**: Delete the automatically created resources only when the **retentionInterval** setting is expired. If an existing storage account is used, the script service removes the file share, but retains the storage account.
+  - **OnExpiration**: Delete the automatically created resources only when the `retentionInterval` setting expires. If an existing storage account is used, the script service removes the file share, but retains the storage account.
-- **retentionInterval**: Specify the time interval that a script resource will be retained and after which will be expired and deleted.
+- `retentionInterval`: Specify the time interval that a script resource is retained, after which it expires and is deleted.
> [!NOTE]
> We don't recommend using the storage account and the container instance generated by the script service for other purposes. The two resources might be removed depending on the script life cycle.
-The container instance and storage account are deleted according to the **cleanupPreference**. However, if the script fails and **cleanupPreference** isn't set to **Always**, the deployment process automatically keeps the container running for one hour. You can use this hour to troubleshoot the script. If you want to keep the container running after successful deployments, add a sleep step to your script. For example, add [Start-Sleep](https://docs.microsoft.com/powershell/module/microsoft.powershell.utility/start-sleep) to the end of your script. If you don't add the sleep step, the container is set to a terminal state and can't be accessed even if it hasn't been deleted yet.
+The container instance and storage account are deleted according to the `cleanupPreference`. However, if the script fails and `cleanupPreference` isn't set to **Always**, the deployment process automatically keeps the container running for one hour. You can use this hour to troubleshoot the script. If you want to keep the container running after successful deployments, add a sleep step to your script. For example, add [Start-Sleep](https://docs.microsoft.com/powershell/module/microsoft.powershell.utility/start-sleep) to the end of your script. If you don't add the sleep step, the container is set to a terminal state and can't be accessed even if it hasn't been deleted yet.
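For example, a hedged sketch of an inline script that keeps the container reachable for five minutes after the work completes:

```json
"scriptContent": "Write-Output 'Deployment steps done'; Start-Sleep -Seconds 300"
```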
## Run script more than once
-Deployment script execution is an idempotent operation. If none of the deploymentScripts resource properties (including the inline script) are changed, the script doesn't execute when you redeploy the template. The deployment script service compares the resource names in the template with the existing resources in the same resource group. There are two options if you want to execute the same deployment script multiple times:
+Deployment script execution is an idempotent operation. If none of the `deploymentScripts` resource properties (including the inline script) are changed, the script doesn't execute when you redeploy the template. The deployment script service compares the resource names in the template with the existing resources in the same resource group. There are two options if you want to execute the same deployment script multiple times:
-- Change the name of your deploymentScripts resource. For example, use the [utcNow](./template-functions-date.md#utcnow) template function as the resource name or as a part of the resource name. Changing the resource name creates a new deploymentScripts resource. It's good for keeping a history of script execution.
+- Change the name of your `deploymentScripts` resource. For example, use the [utcNow](./template-functions-date.md#utcnow) template function as the resource name or as a part of the resource name. Changing the resource name creates a new `deploymentScripts` resource. It's good for keeping a history of script execution.
> [!NOTE]
- > The utcNow function can only be used in the default value for a parameter.
+ > The `utcNow` function can only be used in the default value for a parameter.
-- Specify a different value in the `forceUpdateTag` template property. For example, use utcNow as the value.
+- Specify a different value in the `forceUpdateTag` template property. For example, use `utcNow` as the value (see the sketch after this list).
> [!NOTE]
> Write deployment scripts that are idempotent, so that accidentally running them again doesn't cause system changes. For example, if the deployment script is used to create an Azure resource, verify that the resource doesn't exist before creating it, so the script succeeds without creating the resource again.
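A minimal sketch combining both options, where `utcValue` is a hypothetical parameter that defaults to `utcNow()`:

```json
"parameters": {
  "utcValue": {
    "type": "string",
    "defaultValue": "[utcNow()]"
  }
}
```

The `deploymentScripts` resource can then use `"name": "[concat('runScript-', parameters('utcValue'))]"` at the resource level, or set `"forceUpdateTag": "[parameters('utcValue')]"` in its `properties`.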
@@ -593,4 +596,3 @@ In this article, you learned how to use deployment scripts. To walk through a de
> [!div class="nextstepaction"]
> [Learn module: Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-expressions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-expressions.md
@@ -5,7 +5,7 @@ ms.topic: conceptual
ms.date: 03/17/2020
---
-# Syntax and expressions in Azure Resource Manager templates
+# Syntax and expressions in ARM templates
The basic syntax of the Azure Resource Manager template (ARM template) is JavaScript Object Notation (JSON). However, you can use expressions to extend the JSON values available within the template. Expressions start and end with brackets: `[` and `]`, respectively. The value of the expression is evaluated when the template is deployed. An expression can return a string, integer, boolean, array, or object.
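For example, a minimal sketch of an expression supplying a JSON value (the parameter name `namePrefix` is illustrative):

```json
"name": "[concat(parameters('namePrefix'), 'storage')]"
```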
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/authenticate-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/authenticate-application.md new file mode 100644
@@ -0,0 +1,120 @@
+---
+title: Authenticate an application to access Azure SignalR Service
+description: This article provides information about authenticating an application with Azure Active Directory to access Azure SignalR Service
+author: terencefan
+
+ms.author: tefa
+ms.service: signalr
+ms.topic: conceptual
+ms.date: 08/03/2020
+---
+
+# Authenticate an application with Azure Active Directory to access Azure SignalR Service
+Microsoft Azure provides integrated access control management for resources and applications based on Azure Active Directory (Azure AD). A key advantage of using Azure AD with Azure SignalR Service is that you don't need to store your credentials in the code anymore. Instead, you can request an OAuth 2.0 access token from the Microsoft identity platform. The resource name to request a token is `https://signalr.azure.com/`. Azure AD authenticates the security principal (such as an application or service principal) running the application. If the authentication succeeds, Azure AD returns an access token to the application, and the application can then use the access token to authorize requests to Azure SignalR Service resources.
+
+When a role is assigned to an Azure AD security principal, Azure grants access to those resources for that security principal. Access can be scoped to the level of subscription, the resource group, or the Azure SignalR resource. An Azure AD security principal may be a user, a group, an application service principal, or a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!NOTE]
+> A role definition is a collection of permissions. Role-based access control (RBAC) controls how these permissions are enforced through role assignment. A role assignment consists of three elements: security principal, role definition, and scope. For more information, see [Understanding the different roles](../role-based-access-control/overview.md).
+
+## Register your application with an Azure AD tenant
+The first step in using Azure AD to authorize Azure SignalR Service resources is registering your application with an Azure AD tenant from the [Azure portal](https://portal.azure.com/).
+When you register your application, you supply information about the application to Azure AD. Azure AD then provides a client ID (also called an application ID) that you can use to associate your application with Azure AD at runtime.
+To learn more about the client ID, see [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md).
+
+The following images show steps for registering a web application:
+
+![Register an application](./media/authenticate/app-registrations-register.png)
+
+> [!Note]
+> If you register your application as a native application, you can specify any valid URI for the Redirect URI. For native applications, this value does not have to be a real URL. For web applications, the redirect URI must be a valid URI, because it specifies the URL to which tokens are provided.
+
+After you've registered your application, you'll see the **Application (client) ID** under **Settings**:
+
+![Application ID of the registered application](./media/authenticate/application-id.png)
+
+For more information about registering an application with Azure AD, see [Integrating applications with Azure Active Directory](../active-directory/develop/quickstart-register-app.md).
+
+### Create a client secret
+The application needs a client secret to prove its identity when requesting a token. To add the client secret, follow these steps.
+
+1. Navigate to your app registration in the Azure portal.
+1. Select the **Certificates & secrets** setting.
+1. Under **Client secrets**, select **New client secret** to create a new secret.
+1. Provide a description for the secret, and choose the desired expiration interval.
+1. Immediately copy the value of the new secret to a secure location. The full value is displayed to you only once.
+
+![Create a Client secret](./media/authenticate/client-secret.png)
+
+### Upload a certificate
+
+You can also upload a certificate instead of creating a client secret.
+
+![Upload a certificate](./media/authenticate/certification.png)
+
+## Add RBAC roles using the Azure portal
+To learn more about managing access to Azure resources using RBAC and the Azure portal, see [this article](../role-based-access-control/role-assignments-portal.md).
+
+After you've determined the appropriate scope for a role assignment, navigate to that resource in the Azure portal. Display the access control (IAM) settings for the resource, and follow these instructions to manage role assignments:
+
+1. In the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
+1. Select **Access Control (IAM)** to display access control settings for the Azure SignalR resource.
+1. Select the **Role assignments** tab to see the list of role assignments. Select the **Add** button on the toolbar and then select **Add role assignment**.
+
+ ![Add button on the toolbar](./media/authenticate/role-assignments-add-button.png)
+
+1. On the **Add role assignment** page, do the following steps:
+ 1. Select the **Azure SignalR role** that you want to assign.
+ 1. Search to locate the **security principal** (user, group, service principal) to which you want to assign the role.
+ 1. Select **Save** to save the role assignment.
+
+ ![Assign role to an application](./media/authenticate/assign-role-to-application.png)
+
+ 1. The identity to which you assigned the role appears listed under that role. For example, the following image shows that the applications `signalr-dev` and `signalr-service` are in the SignalR App Server role.
+
+ ![Role Assignment List](./media/authenticate/role-assignment-list.png)
+
+You can follow similar steps to assign a role scoped to a resource group or a subscription. Once you define the role and its scope, you can test this behavior with samples [in this GitHub location](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac).
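+As an alternative to the portal steps, a role assignment can also be declared in an ARM template. The following is a hedged sketch; the `principalId` and `roleDefinitionId` parameters are placeholders you'd supply with your own values:
+
+```json
+{
+  "type": "Microsoft.Authorization/roleAssignments",
+  "apiVersion": "2020-04-01-preview",
+  "name": "[guid(resourceGroup().id, parameters('principalId'), parameters('roleDefinitionId'))]",
+  "properties": {
+    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', parameters('roleDefinitionId'))]",
+    "principalId": "[parameters('principalId')]"
+  }
+}
+```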
+
+## Sample code for configuring your app server
+
+Add the following options when calling `AddAzureSignalR`:
+
+```C#
+services.AddSignalR().AddAzureSignalR(option =>
+{
+ option.ConnectionString = "Endpoint=https://<name>.signalr.net;AuthType=aad;clientId=<clientId>;clientSecret=<clientSecret>;tenantId=<tenantId>";
+});
+```
+
+Or simply configure the `ConnectionString` in your `appsettings.json` file.
+
+```json
+{
+"Azure": {
+ "SignalR": {
+ "Enabled": true,
+ "ConnectionString": "Endpoint=https://<name>.signalr.net;AuthType=aad;clientId=<clientId>;clientSecret=<clientSecret>;tenantId=<tenantId>"
+ }
+ },
+}
+```
+
+When using a certificate, change `clientSecret` to `clientCert`, as follows:
+
+```C#
+ option.ConnectionString = "Endpoint=https://<name>.signalr.net;AuthType=aad;clientId=<clientId>;clientCert=<clientCertFilepath>;tenantId=<tenantId>";
+```
+
+## Next steps
+- To learn more about RBAC, see [What is Azure role-based access control (Azure RBAC)?](../role-based-access-control/overview.md)
+- To learn how to assign and manage Azure role assignments with Azure PowerShell, Azure CLI, or the REST API, see these articles:
+ - [Manage role-based access control (RBAC) with Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
+ - [Manage role-based access control (RBAC) with Azure CLI](../role-based-access-control/role-assignments-cli.md)
+ - [Manage role-based access control (RBAC) with the REST API](../role-based-access-control/role-assignments-rest.md)
+ - [Manage role-based access control (RBAC) with Azure Resource Manager Templates](../role-based-access-control/role-assignments-template.md)
+
+See the following related articles:
+- [Authenticate a managed identity with Azure Active Directory to access Azure SignalR Service](authenticate-managed-identity.md)
+- [Authorize access to Azure SignalR Service using Azure Active Directory](authorize-access-azure-active-directory.md)
\ No newline at end of file
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/authenticate-managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/authenticate-managed-identity.md new file mode 100644
@@ -0,0 +1,91 @@
+---
+title: Authenticate a managed identity with Azure Active Directory
+description: This article provides information about authenticating a managed identity with Azure Active Directory to access Azure SignalR Service
+author: terencefan
+
+ms.author: tefa
+ms.date: 08/03/2020
+ms.service: signalr
+ms.topic: conceptual
+---
+
+# Authenticate a managed identity with Azure Active Directory to access Azure SignalR Resources
+Azure SignalR Service supports Azure Active Directory (Azure AD) authentication with [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). Managed identities for Azure resources can authorize access to Azure SignalR Service resources using Azure AD credentials from applications running in Azure Virtual Machines (VMs), Function apps, Virtual Machine Scale Sets, and other services. By using managed identities for Azure resources together with Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud.
+
+This article shows how to authorize access to an Azure SignalR Service by using a managed identity from an Azure VM.
+
+## Enable managed identities on a VM
+Before you can use managed identities for Azure resources to authorize access to Azure SignalR Service resources from your VM, you must first enable managed identities for Azure resources on the VM. To learn how to enable managed identities for Azure resources, see one of these articles:
+
+- [Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
+- [Azure PowerShell](../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)
+- [Azure CLI](../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)
+- [Azure Resource Manager template](../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)
+- [Azure Resource Manager client libraries](../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)
+
+## Grant permissions to a managed identity in Azure AD
+To authorize a request to Azure SignalR Service from a managed identity in your application, first configure role-based access control (RBAC) settings for that managed identity. Azure SignalR Service defines RBAC roles that encompass permissions for acquiring `AccessKey` or `ClientToken`. When the RBAC role is assigned to a managed identity, the managed identity is granted access to Azure SignalR Service at the appropriate scope.
+
+For more information about assigning RBAC roles, see [Authenticate with Azure Active Directory for access to Azure SignalR Service resources](authorize-access-azure-active-directory.md).
+
+## Connect to Azure SignalR Service with managed identities
+To connect to Azure SignalR Service with managed identities, you need to assign the identity the role and the appropriate scope. The procedure in this section uses a simple application that runs under a managed identity and accesses Azure SignalR Service resources.
+
+Here we're using a sample Azure virtual machine resource.
+
+1. Go to **Settings** and select **Identity**.
+1. Set **Status** to **On**.
+1. Select **Save** to save the setting.
+
+ ![Managed identity for a virtual machine](./media/authenticate/identity-virtual-machine.png)
+
+Once you've enabled this setting, a new service identity is created in your Azure Active Directory (Azure AD) and configured for the virtual machine.
+
+Now, assign this service identity to a role in the required scope in your Azure SignalR Service resources.
+
+## Assign RBAC roles using the Azure portal
+To learn more about managing access to Azure resources using RBAC and the Azure portal, see [this article](../role-based-access-control/role-assignments-portal.md).
+
+After you've determined the appropriate scope for a role assignment, navigate to that resource in the Azure portal. Display the access control (IAM) settings for the resource, and follow these instructions to manage role assignments:
+
+1. In the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
+1. Select **Access Control (IAM)** to display access control settings for the Azure SignalR resource.
+1. Select the **Role assignments** tab to see the list of role assignments. Select the **Add** button on the toolbar and then select **Add role assignment**.
+
+ ![Add button on the toolbar](./media/authenticate/role-assignments-add-button.png)
+
+1. On the **Add role assignment** page, do the following steps:
+ 1. Select the **Azure SignalR role** that you want to assign.
+ 1. Search to locate the **security principal** (user, group, service principal) to which you want to assign the role.
+ 1. Select **Save** to save the role assignment.
+
+ ![Assign role to an application](./media/authenticate/assign-role-to-application.png)
+
+ 1. The identity to which you assigned the role appears listed under that role. For example, the following image shows that the applications `signalr-dev` and `signalr-service` are in the SignalR App Server role.
+
+ ![Role Assignment List](./media/authenticate/role-assignment-list.png)
+
+You can follow similar steps to assign a role scoped to a resource group or a subscription. Once you define the role and its scope, you can test this behavior with samples [in this GitHub location](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac).
+
+## Sample code for configuring your app server
+
+Add the following options when calling `AddAzureSignalR`:
+
+```C#
+services.AddSignalR().AddAzureSignalR(option =>
+{
+ option.ConnectionString = "Endpoint=https://<name>.signalr.net;AuthType=aad;";
+});
+```
+
+## Next steps
+- To learn more about RBAC, see [What is Azure role-based access control (Azure RBAC)?](../role-based-access-control/overview.md)
+- To learn how to assign and manage Azure role assignments with Azure PowerShell, Azure CLI, or the REST API, see these articles:
+ - [Manage role-based access control (RBAC) with Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
+ - [Manage role-based access control (RBAC) with Azure CLI](../role-based-access-control/role-assignments-cli.md)
+ - [Manage role-based access control (RBAC) with the REST API](../role-based-access-control/role-assignments-rest.md)
+ - [Manage role-based access control (RBAC) with Azure Resource Manager Templates](../role-based-access-control/role-assignments-template.md)
+
+See the following related articles:
+- [Authenticate an application with Azure Active Directory to access Azure SignalR Service](authenticate-application.md)
+- [Authorize access to Azure SignalR Service using Azure Active Directory](authorize-access-azure-active-directory.md)
\ No newline at end of file
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/authorize-access-azure-active-directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/authorize-access-azure-active-directory.md new file mode 100644
@@ -0,0 +1,54 @@
+---
+title: Authorize access with Azure Active Directory
+description: This article provides information on authorizing access to Azure SignalR Service resources using Azure Active Directory.
+author: terencefan
+
+ms.author: tefa
+ms.date: 08/03/2020
+ms.service: signalr
+ms.topic: conceptual
+---
+
+# Authorize access to Azure SignalR Service resources using Azure Active Directory
+Azure SignalR Service supports using Azure Active Directory (Azure AD) to authorize requests to Azure SignalR Service resources. With Azure AD, you can use role-based access control (RBAC) to grant permissions to a security principal, which may be a user, or an application service principal. To learn more about roles and role assignments, see [Understanding the different roles](../role-based-access-control/overview.md).
+
+## Overview
+When a security principal (an application) attempts to access an Azure SignalR Service resource, the request must be authorized. With Azure AD, access to a resource is a two-step process.
+
+ 1. First, the security principal's identity is authenticated, and an OAuth 2.0 token is returned. The resource name to request a token is `https://signalr.azure.com/`.
+ 2. Next, the token is passed as part of a request to the Azure SignalR Service to authorize access to the specified resource.
+
+The authentication step requires that an application request contains an OAuth 2.0 access token at runtime. If your hub server is running within an Azure entity such as an Azure VM, a virtual machine scale set, or an Azure Function app, it can use a managed identity to access the resources. To learn how to authenticate requests made by a managed identity to Azure SignalR Service, see [Authenticate access to Azure SignalR Service resources with Azure Active Directory and managed identities for Azure Resources](authenticate-managed-identity.md).
+
+The authorization step requires that one or more RBAC roles be assigned to the security principal. Azure SignalR Service provides RBAC roles that encompass sets of permissions for Azure SignalR resources. The roles that are assigned to a security principal determine the permissions that the principal will have. For more information about RBAC roles, see [Azure built-in roles for Azure SignalR Service](#azure-built-in-roles-for-azure-signalr-service).
+
+A SignalR hub server that isn't running within an Azure entity can also authorize with Azure AD. To learn how to request an access token and use it to authorize requests for Azure SignalR Service resources, see [Authenticate access to Azure SignalR Service with Azure AD from an application](authenticate-application.md).
+
+## Azure Built-in roles for Azure SignalR Service
+
+- [SignalR App Server]
+- [SignalR Service Reader]
+- [SignalR Service Owner]
+
+## Assign RBAC roles for access rights
+Azure Active Directory (Azure AD) authorizes access rights to secured resources through [role-based access control (RBAC)](../role-based-access-control/overview.md). Azure SignalR Service defines a set of Azure built-in roles that encompass common sets of permissions used to access Azure SignalR Service, and you can also define custom roles for accessing the resource.
+
+When an RBAC role is assigned to an Azure AD security principal, Azure grants access to those resources for that security principal. Access can be scoped to the level of subscription, the resource group, or any Azure SignalR Service resource. An Azure AD security principal may be a user, or an application, or a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+## Built-in roles for Azure SignalR Service
+Azure provides the following Azure built-in roles for authorizing access to Azure SignalR Service resource using Azure AD and OAuth:
+
+### SignalR App Server
+
+Use this role to grant access to get a temporary access key for signing client tokens.
+
+### SignalR Serverless Contributor
+
+Use this role to grant access to get a client token from Azure SignalR Service directly.
+
+## Next steps
+
+See the following related articles:
+
+- [Authenticate an application with Azure AD to access Azure SignalR Service](authenticate-application.md)
+- [Authenticate a managed identity with Azure AD to access Azure SignalR Service](authenticate-managed-identity.md)
\ No newline at end of file
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/server-graceful-shutdown https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/server-graceful-shutdown.md new file mode 100644
@@ -0,0 +1,113 @@
+---
+title: Stop your app server gracefully
+description: This article provides information about gracefully shutting down a SignalR app server
+author: terencefan
+
+ms.author: tefa
+ms.date: 11/12/2020
+ms.service: signalr
+ms.topic: conceptual
+---
+
+# Server graceful shutdown
+Microsoft Azure SignalR Service provides two modes for gracefully shutting down a server.
+
+The key advantage of using this feature is to prevent your customers from experiencing unexpected connection drops.
+
+Instead, you can either wait for your client connections to close themselves according to your business logic, or even migrate the client connections to another server without losing data.
+
+## How it works
+
+In general, there will be four stages in a graceful shutdown process:
+
+1. **Set the server offline**
+
+ It means no more client connections will be routed to this server.
+
+2. **Trigger `OnShutdown` hooks**
+
+   You can register shutdown hooks for each hub you own in your server.
+   They are called in the registered order right after we get a **FINACK** response from the Azure SignalR Service, which means this server has been set offline in the Azure SignalR Service.
+
+   You can broadcast messages or do some cleanup jobs in this stage. Once all shutdown hooks have been executed, we proceed to the next stage.
+
+3. **Wait until all client connections finish**. Depending on the mode you choose, the behavior is:
+
+   **Mode set to WaitForClientsClose**
+
+   Azure SignalR Service will keep existing client connections alive.
+
+   You may have to design a way, like broadcasting a closing message to all clients, and then let your clients decide when to close or reconnect.
+
+   See [ChatSample](https://github.com/Azure/azure-signalr/tree/dev/samples/ChatSample/ChatSample) for sample usage, which broadcasts an 'exit' message in the shutdown hook to trigger client close.
+
+ **Mode set to MigrateClients**
+
+ Azure SignalR Service will try to reroute the client connection on this server to another valid server.
+
+   In this scenario, `OnConnectedAsync` and `OnDisconnectedAsync` will be triggered on the new server and the old server respectively, with an `IConnectionMigrationFeature` set in the `HttpContext`, which can be used to identify whether the client connection was being migrated in or migrated out. It can be especially useful for stateful scenarios.
+
+ The client connection will be immediately migrated after the current message has been delivered, which means the next message will be routed to the new server.
+
+4. **Stop server connections**
+
+   After all client connections have been closed or migrated, or the timeout (30 seconds by default) has been exceeded,
+
+   the SignalR Server SDK proceeds to this stage of the shutdown process and closes all server connections.
+
+   Client connections will still be dropped if they fail to be closed or migrated, for example, when there's no suitable target server or the current client-to-server message hasn't finished.
+
+## Sample code
+
+Add the following options when calling `AddAzureSignalR`:
+
+```csharp
+services.AddSignalR().AddAzureSignalR(option =>
+{
+ option.GracefulShutdown.Mode = GracefulShutdownMode.WaitForClientsClose;
+ // option.GracefulShutdown.Mode = GracefulShutdownMode.MigrateClients;
+ option.GracefulShutdown.Timeout = TimeSpan.FromSeconds(30);
+
+ option.GracefulShutdown.Add<Chat>(async (c) =>
+ {
+ await c.Clients.All.SendAsync("exit");
+ });
+});
+```
+
+### Configure `OnConnected` and `OnDisconnected` while setting graceful shutdown mode to `MigrateClients`
+
+We have introduced an `IConnectionMigrationFeature` to indicate whether a connection was migrated in or out.
+
+```csharp
+public class Chat : Hub {
+
+ public override async Task OnConnectedAsync()
+ {
+ Console.WriteLine($"{Context.ConnectionId} connected.");
+
+ var feature = Context.GetHttpContext().Features.Get<IConnectionMigrationFeature>();
+ if (feature != null)
+ {
+ Console.WriteLine($"[{feature.MigrateTo}] {Context.ConnectionId} is migrated from {feature.MigrateFrom}.");
+ // Your business logic.
+ }
+
+ await base.OnConnectedAsync();
+ }
+
+ public override async Task OnDisconnectedAsync(Exception e)
+ {
+ Console.WriteLine($"{Context.ConnectionId} disconnected.");
+
+ var feature = Context.GetHttpContext().Features.Get<IConnectionMigrationFeature>();
+ if (feature != null)
+ {
+ Console.WriteLine($"[{feature.MigrateFrom}] {Context.ConnectionId} will be migrated to {feature.MigrateTo}.");
+ // Your business logic.
+ }
+
+ await base.OnDisconnectedAsync(e);
+ }
+}
+```
\ No newline at end of file
azure-sql-edge https://docs.microsoft.com/en-us/azure/azure-sql-edge/tutorial-deploy-azure-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-deploy-azure-resources.md
@@ -26,7 +26,7 @@ In this three-part tutorial, you'll create a machine learning model to predict i
5. Install the latest version of [Azure CLI](https://github.com/Azure/azure-powershell/releases/tag/v3.5.0-February2020). The following scripts require that AZ PowerShell be the latest version (3.5.0, Feb 2020).
6. Set up the environment to debug, run, and test the IoT Edge solution by installing [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/).
7. Install Docker.
-8. Download the [DACPAC](https://github.com/microsoft/sql-server-samples/tree/master/samples/demos/azure-sql-edge-demos/iron-ore-silica-impurities/DACPAC) file that will be utilized in the tutorial.
+8. Download the DACPAC file that will be utilized in the tutorial.
## Deploy Azure resources using PowerShell Script
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
@@ -64,6 +64,7 @@ Limitations:
- With a SQL Managed Instance, you can back up an instance database to a backup with up to 32 stripes, which is enough for databases up to 4 TB if backup compression is used.
- You can't execute `BACKUP DATABASE ... WITH COPY_ONLY` on a database that's encrypted with service-managed Transparent Data Encryption (TDE). Service-managed TDE forces backups to be encrypted with an internal TDE key. The key can't be exported, so you can't restore the backup. Use automatic backups and point-in-time restore, or use [customer-managed (BYOK) TDE](../database/transparent-data-encryption-tde-overview.md#customer-managed-transparent-data-encryption---bring-your-own-key) instead. You also can disable encryption on the database.
+- Native backups taken on a Managed Instance can't be restored to a SQL Server. This is because Managed Instance has a higher internal database version than any version of SQL Server.
- The maximum backup stripe size by using the `BACKUP` command in SQL Managed Instance is 195 GB, which is the maximum blob size. Increase the number of stripes in the backup command to reduce individual stripe size and stay within this limit.

> [!TIP]
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-automation.md
@@ -255,6 +255,8 @@ Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGro
> If you're using the Azure Government cloud, then use the value `ff281ffe-705c-4f53-9f37-a40e6f2c68f3` for the parameter **ServicePrincipalName** in [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) cmdlet. >
+If you want to selectively back up a few disks and exclude others, as mentioned in [these scenarios](selective-disk-backup-restore.md#scenarios), you can configure protection and back up only the relevant disks, as documented [here](selective-disk-backup-restore.md#enable-backup-with-powershell).
+
## Monitoring a backup job

You can monitor long-running operations, such as backup jobs, without using the Azure portal. To get the status of an in-progress job, use the [Get-AzRecoveryservicesBackupJob](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupjob) cmdlet. This cmdlet gets the backup jobs for a specific vault, and that vault is specified in the vault context. The following example gets the status of an in-progress job as an array, and stores the status in the $joblist variable.
@@ -335,6 +337,10 @@ $bkpPol.AzureBackupRGNameSuffix="ForVMs"
Set-AzureRmRecoveryServicesBackupProtectionPolicy -policy $bkpPol
```
+### Exclude disks for a protected VM
+
+Azure VM backup provides a capability to selectively exclude or include disks, which is helpful in [these scenarios](selective-disk-backup-restore.md#scenarios). If the virtual machine is already protected by Azure VM backup and all disks are backed up, you can modify the protection to selectively include or exclude disks, as mentioned [here](selective-disk-backup-restore.md#modify-protection-for-already-backed-up-vms-with-powershell).
+
### Trigger a backup

Use [Backup-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/backup-azrecoveryservicesbackupitem) to trigger a backup job. If it's the initial backup, it is a full backup. Subsequent backups take an incremental copy. The following example takes a VM backup to be retained for 60 days.
@@ -508,6 +514,13 @@ $restorejob = Get-AzRecoveryServicesBackupJob -Job $restorejob -VaultId $targetV
$details = Get-AzRecoveryServicesBackupJobDetails -Job $restorejob -VaultId $targetVault.ID
```
+#### Restore selective disks
+
+You can selectively restore a few disks instead of the entire backed-up set. Provide the required disk LUNs as a parameter to restore only those disks instead of the entire set, as documented [here](selective-disk-backup-restore.md#restore-selective-disks-with-powershell).
+
+> [!IMPORTANT]
+> You must selectively back up disks to be able to selectively restore disks. More details are provided [here](selective-disk-backup-restore.md#selective-disk-restore).
+
Once you restore the disks, go to the next section to create the VM.

## Replace disks in Azure VM
bastion https://docs.microsoft.com/en-us/azure/bastion/bastion-nsg https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-nsg.md
@@ -46,7 +46,7 @@ Azure Bastion is deployed specifically to ***AzureBastionSubnet***.
* **Egress Traffic to target VMs:** Azure Bastion will reach the target VMs over private IP. The NSGs need to allow egress traffic to other target VM subnets for port 3389 and 22.
* **Egress Traffic to Azure Bastion data plane:** For data plane communication between the underlying components of Azure Bastion, enable ports 8080, 5701 outbound from the **VirtualNetwork** service tag to the **VirtualNetwork** service tag. This enables the components of Azure Bastion to talk to each other.
* **Egress Traffic to other public endpoints in Azure:** Azure Bastion needs to be able to connect to various public endpoints within Azure (for example, for storing diagnostics logs and metering logs). For this reason, Azure Bastion needs outbound to 443 to **AzureCloud** service tag.
- * **Egress Traffic to Internet:** Azure Bastion needs to be able to communicate with the Internet for session and certificate validation. For this reason, we recommend enabling port 80 outbound to the **Internet**.
+ * **Egress Traffic to Internet:** Azure Bastion needs to be able to communicate with the Internet for session and certificate validation. For this reason, we recommend enabling port 80 outbound to the **Internet.**
:::image type="content" source="./media/bastion-nsg/outbound.png" alt-text="Screenshot shows outbound security rules for Azure Bastion connectivity.":::
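To make the rule shape concrete, here's a hedged sketch of the outbound rule to the **AzureCloud** service tag as a security rule in an ARM template (the rule name and priority are illustrative):

```json
{
  "name": "AllowAzureCloudOutbound",
  "properties": {
    "protocol": "Tcp",
    "sourcePortRange": "*",
    "destinationPortRange": "443",
    "sourceAddressPrefix": "*",
    "destinationAddressPrefix": "AzureCloud",
    "access": "Allow",
    "priority": 110,
    "direction": "Outbound"
  }
}
```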
batch https://docs.microsoft.com/en-us/azure/batch/batch-pool-delete-complete-event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-delete-complete-event.md
@@ -2,7 +2,7 @@
title: Azure Batch pool delete complete event
description: Reference for Batch pool delete complete event. This event is emitted when a pool delete operation has completed.
ms.topic: reference
-ms.date: 04/20/2017
+ms.date: 12/28/2020
---

# Pool delete complete event
@@ -13,9 +13,9 @@ ms.date: 04/20/2017
```
{
- "id": "myPool1",
- "startTime": "2016-09-09T22:13:48.579Z",
- "endTime": "2016-09-09T22:14:08.836Z"
+ "id": "myPool1",
+ "startTime": "2016-09-09T22:13:48.579Z",
+ "endTime": "2016-09-09T22:14:08.836Z"
}
```
@@ -26,4 +26,5 @@ ms.date: 04/20/2017
|`endTime`|DateTime|The time the pool delete completed.|

## Remarks
+
For more information about states and error codes for the pool delete operation, see [Delete a pool from an account](/rest/api/batchservice/delete-a-pool-from-an-account).
batch https://docs.microsoft.com/en-us/azure/batch/batch-pool-delete-start-event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-delete-start-event.md
@@ -2,7 +2,7 @@
title: Azure Batch pool delete start event
description: Reference for Batch pool delete start event. This event is emitted when a pool delete operation has started.
ms.topic: reference
-ms.date: 04/20/2017
+ms.date: 12/28/2020
--- # Pool delete start event
@@ -13,7 +13,7 @@ ms.date: 04/20/2017
```
{
- "id": "myPool1"
+ "id": "myPool1"
}
```
batch https://docs.microsoft.com/en-us/azure/batch/batch-pool-resize-complete-event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-resize-complete-event.md
@@ -2,7 +2,7 @@
title: Azure Batch pool resize complete event
description: Reference for Batch pool resize complete event. See an example of a pool that increased in size and completed successfully.
ms.topic: reference
-ms.date: 04/20/2017
+ms.date: 12/28/2020
---

# Pool resize complete event
@@ -13,18 +13,18 @@ ms.date: 04/20/2017
```
{
- "id": "myPool",
- "nodeDeallocationOption": "invalid",
- "currentDedicatedNodes": 10,
- "targetDedicatedNodes": 10,
- "currentLowPriorityNodes": 5,
- "targetLowPriorityNodes": 5,
- "enableAutoScale": false,
- "isAutoPool": false,
- "startTime": "2016-09-09T22:13:06.573Z",
- "endTime": "2016-09-09T22:14:01.727Z",
- "resultCode": "Success",
- "resultMessage": "The operation succeeded"
+ "id": "myPool",
+ "nodeDeallocationOption": "invalid",
+ "currentDedicatedNodes": 10,
+ "targetDedicatedNodes": 10,
+ "currentLowPriorityNodes": 5,
+ "targetLowPriorityNodes": 5,
+ "enableAutoScale": false,
+ "isAutoPool": false,
+ "startTime": "2016-09-09T22:13:06.573Z",
+ "endTime": "2016-09-09T22:14:01.727Z",
+ "resultCode": "Success",
+ "resultMessage": "The operation succeeded"
} ```
batch https://docs.microsoft.com/en-us/azure/batch/batch-pool-resize-start-event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-resize-start-event.md
@@ -2,7 +2,7 @@
title: Azure Batch pool resize start event
description: Reference for Batch pool resize start event. Example shows the body of a pool resize start event for a pool resizing from 0 to 2 nodes with a manual resize.
ms.topic: reference
-ms.date: 04/20/2017
+ms.date: 12/28/2020
---

# Pool resize start event
@@ -13,14 +13,14 @@ ms.date: 04/20/2017
```
{
- "id": "myPool1",
- "nodeDeallocationOption": "Invalid",
- "currentDedicatedNodes": 0,
- "targetDedicatedNodes": 2,
- "currentLowPriorityNodes": 0,
- "targetLowPriorityNodes": 2,
- "enableAutoScale": false,
- "isAutoPool": false
+ "id": "myPool1",
+ "nodeDeallocationOption": "Invalid",
+ "currentDedicatedNodes": 0,
+ "targetDedicatedNodes": 2,
+ "currentLowPriorityNodes": 0,
+ "targetLowPriorityNodes": 2,
+ "enableAutoScale": false,
+ "isAutoPool": false
} ```
batch https://docs.microsoft.com/en-us/azure/batch/batch-task-dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-task-dependencies.md
@@ -2,31 +2,29 @@
title: Create task dependencies to run tasks description: Create tasks that depend on the completion of other tasks for processing MapReduce style and similar big data workloads in Azure Batch. ms.topic: how-to
-ms.date: 05/22/2017
+ms.date: 12/28/2020
ms.custom: "H1Hack27Feb2017, devx-track-csharp" --- # Create task dependencies to run tasks that depend on other tasks
-You can define task dependencies to run a task or set of tasks only after a parent task has completed. Some scenarios where task dependencies are useful include:
+With Batch task dependencies, you create tasks that are scheduled for execution on compute nodes after the completion of one or more parent tasks. For example, you can create a job that renders each frame of a 3D movie with separate, parallel tasks. The final task--the "merge task"--merges the rendered frames into the complete movie only after all frames have been successfully rendered.
+
+Some scenarios where task dependencies are useful include:
- MapReduce-style workloads in the cloud. - Jobs whose data processing tasks can be expressed as a directed acyclic graph (DAG). - Pre-rendering and post-rendering processes, where each task must complete before the next task can begin. - Any other job in which downstream tasks depend on the output of upstream tasks.
-With Batch task dependencies, you can create tasks that are scheduled for execution on compute nodes after the completion of one or more parent tasks. For example, you can create a job that renders each frame of a 3D movie with separate, parallel tasks. The final task--the "merge task"--merges the rendered frames into the complete movie only after all frames have been successfully rendered.
-
-By default, dependent tasks are scheduled for execution only after the parent task has completed successfully. You can specify a dependency action to override the default behavior and run tasks when the parent task fails. See the [Dependency actions](#dependency-actions) section for details.
-
-You can create tasks that depend on other tasks in a one-to-one or one-to-many relationship. You can also create a range dependency where a task depends on the completion of a group of tasks within a specified range of task IDs. You can combine these three basic scenarios to create many-to-many relationships.
+By default, dependent tasks are scheduled for execution only after the parent task has completed successfully. You can optionally specify a [dependency action](#dependency-actions) to override the default behavior and run tasks when the parent task fails.
## Task dependencies with Batch .NET
-In this article, we discuss how to configure task dependencies by using the [Batch .NET][net_msdn] library. We first show you how to [enable task dependency](#enable-task-dependencies) on your jobs, and then demonstrate how to [configure a task with dependencies](#create-dependent-tasks). We also describe how to specify a dependency action to run dependent tasks if the parent fails. Finally, we discuss the [dependency scenarios](#dependency-scenarios) that Batch supports.
+In this article, we discuss how to configure task dependencies by using the [Batch .NET](/dotnet/api/microsoft.azure.batch) library. We first show you how to [enable task dependency](#enable-task-dependencies) on your jobs, and then demonstrate how to [configure a task with dependencies](#create-dependent-tasks). We also describe how to specify a dependency action to run dependent tasks if the parent fails. Finally, we discuss the [dependency scenarios](#dependency-scenarios) that Batch supports.
## Enable task dependencies
-To use task dependencies in your Batch application, you must first configure the job to use task dependencies. In Batch .NET, enable it on your [CloudJob][net_cloudjob] by setting its [UsesTaskDependencies][net_usestaskdependencies] property to `true`:
+To use task dependencies in your Batch application, you must first configure the job to use task dependencies. In Batch .NET, enable it on your [CloudJob](/dotnet/api/microsoft.azure.batch.cloudjob) by setting its [UsesTaskDependencies](/dotnet/api/microsoft.azure.batch.cloudjob.usestaskdependencies) property to `true`:
```csharp CloudJob unboundJob = batchClient.JobOperations.CreateJob( "job001",
@@ -36,11 +34,11 @@ CloudJob unboundJob = batchClient.JobOperations.CreateJob( "job001",
unboundJob.UsesTaskDependencies = true; ```
-In the preceding code snippet, "batchClient" is an instance of the [BatchClient][net_batchclient] class.
+In the preceding code snippet, "batchClient" is an instance of the [BatchClient](/dotnet/api/microsoft.azure.batch.batchclient) class.
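For readers following along, here is a minimal, hedged sketch of how such a `BatchClient` instance is typically opened from shared-key credentials (the account URL, name, and key are placeholders you must replace):

```csharp
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

// Placeholder account values -- substitute your Batch account's URL, name, and key.
var credentials = new BatchSharedKeyCredentials(
    "https://<account>.<region>.batch.azure.com",
    "<account-name>",
    "<account-key>");

// BatchClient is IDisposable; the snippets in this article run inside a scope like this.
using (BatchClient batchClient = BatchClient.Open(credentials))
{
    // ... create jobs and dependent tasks here ...
}
```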
## Create dependent tasks
-To create a task that depends on the completion of one or more parent tasks, you can specify that the task "depends on" the other tasks. In Batch .NET, configure the [CloudTask][net_cloudtask].[DependsOn][net_dependson] property with an instance of the [TaskDependencies][net_taskdependencies] class:
+To create a task that depends on the completion of one or more parent tasks, you can specify that the task "depends on" the other tasks. In Batch .NET, configure the [CloudTask.DependsOn](/dotnet/api/microsoft.azure.batch.cloudtask.dependson) property with an instance of the [TaskDependencies](/dotnet/api/microsoft.azure.batch.taskdependencies) class:
```csharp // Task 'Flowers' depends on completion of both 'Rain' and 'Sun'
@@ -54,26 +52,26 @@ new CloudTask("Flowers", "cmd.exe /c echo Flowers")
This code snippet creates a dependent task with task ID "Flowers". The "Flowers" task depends on tasks "Rain" and "Sun". Task "Flowers" will be scheduled to run on a compute node only after tasks "Rain" and "Sun" have completed successfully. > [!NOTE]
-> By default, a task is considered to be completed successfully when it is in the **completed** state and its **exit code** is `0`. In Batch .NET, this means a [CloudTask][net_cloudtask].[State][net_taskstate] property value of `Completed` and the CloudTask's [TaskExecutionInformation][net_taskexecutioninformation].[ExitCode][net_exitcode] property value is `0`. For how to change this, see the [Dependency actions](#dependency-actions) section.
+> By default, a task is considered to be completed successfully when it is in the completed state and its exit code is `0`. In Batch .NET, this means the [CloudTask.State](/dotnet/api/microsoft.azure.batch.cloudtask.state) property value is `Completed` and the CloudTask's [TaskExecutionInformation.ExitCode](/dotnet/api/microsoft.azure.batch.taskexecutioninformation.exitcode) property value is `0`. To learn how to change this, see the [Dependency actions](#dependency-actions) section.
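As a small illustration of that default success check, a helper one might write against a bound task retrieved from the service (illustrative code, not part of the article's sample):

```csharp
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Common;

// A task satisfies a dependency, by default, only when it reaches the
// Completed state with an exit code of 0.
static bool CompletedSuccessfully(CloudTask task) =>
    task.State == TaskState.Completed &&
    task.ExecutionInformation?.ExitCode == 0;
```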
## Dependency scenarios
-There are three basic task dependency scenarios that you can use in Azure Batch: one-to-one, one-to-many, and task ID range dependency. These can be combined to provide a fourth scenario, many-to-many.
+There are three basic task dependency scenarios that you can use in Azure Batch: one-to-one, one-to-many, and task ID range dependency. These three scenarios can be combined to provide a fourth scenario: many-to-many.
| Scenario&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Example | Illustration | |:---:| --- | --- |
-| [One-to-one](#one-to-one) |*taskB* depends on *taskA* <p/> *taskB* will not be scheduled for execution until *taskA* has completed successfully |![Diagram: one-to-one task dependency][1] |
-| [One-to-many](#one-to-many) |*taskC* depends on both *taskA* and *taskB* <p/> *taskC* will not be scheduled for execution until both *taskA* and *taskB* have completed successfully |![Diagram: one-to-many task dependency][2] |
-| [Task ID range](#task-id-range) |*taskD* depends on a range of tasks <p/> *taskD* will not be scheduled for execution until the tasks with IDs *1* through *10* have completed successfully |![Diagram: Task id range dependency][3] |
+| [One-to-one](#one-to-one) |*taskB* depends on *taskA* <p/> *taskB* will not be scheduled for execution until *taskA* has completed successfully |:::image type="content" source="media/batch-task-dependency/01_one_to_one.png" alt-text="Diagram showing the one-to-one task dependency scenario."::: |
+| [One-to-many](#one-to-many) |*taskC* depends on both *taskA* and *taskB* <p/> *taskC* will not be scheduled for execution until both *taskA* and *taskB* have completed successfully |:::image type="content" source="media/batch-task-dependency/02_one_to_many.png" alt-text="Diagram showing the one-to-many task dependency scenario."::: |
+| [Task ID range](#task-id-range) |*taskD* depends on a range of tasks <p/> *taskD* will not be scheduled for execution until the tasks with IDs *1* through *10* have completed successfully |:::image type="content" source="media/batch-task-dependency/03_task_id_range.png" alt-text="Diagram showing the task ID range task dependency scenario."::: |
> [!TIP] > You can create **many-to-many** relationships, such as where tasks C, D, E, and F each depend on tasks A and B. This is useful, for example, in parallelized preprocessing scenarios where your downstream tasks depend on the output of multiple upstream tasks. >
-> In the examples in this section, a dependent task runs only after the parent tasks complete successfully. This behavior is the default behavior for a dependent task. You can run a dependent task after a parent task fails by specifying a dependency action to override the default behavior. See the [Dependency actions](#dependency-actions) section for details.
+> In the examples in this section, a dependent task runs only after the parent tasks complete successfully. This behavior is the default behavior for a dependent task. You can run a dependent task after a parent task fails by specifying a [dependency action](#dependency-actions) to override the default behavior.
### One-to-one
-In a one-to-one relationship, a task depends on the successful completion of one parent task. To create the dependency, provide a single task ID to the [TaskDependencies][net_taskdependencies].[OnId][net_onid] static method when you populate the [DependsOn][net_dependson] property of [CloudTask][net_cloudtask].
+In a one-to-one relationship, a task depends on the successful completion of one parent task. To create the dependency, provide a single task ID to the [TaskDependencies.OnId](/dotnet/api/microsoft.azure.batch.taskdependencies.onid) static method when you populate the [CloudTask.DependsOn](/dotnet/api/microsoft.azure.batch.cloudtask.dependson) property.
```csharp // Task 'taskA' doesn't depend on any other tasks
@@ -88,7 +86,7 @@ new CloudTask("taskB", "cmd.exe /c echo taskB")
### One-to-many
-In a one-to-many relationship, a task depends on the completion of multiple parent tasks. To create the dependency, provide a collection of task IDs to the [TaskDependencies][net_taskdependencies].[OnIds][net_onids] static method when you populate the [DependsOn][net_dependson] property of [CloudTask][net_cloudtask].
+In a one-to-many relationship, a task depends on the completion of multiple parent tasks. To create the dependency, provide a collection of task IDs to the [TaskDependencies.OnIds](/dotnet/api/microsoft.azure.batch.taskdependencies.onids) static method when you populate the [CloudTask.DependsOn](/dotnet/api/microsoft.azure.batch.cloudtask.dependson) property.
```csharp // 'Rain' and 'Sun' don't depend on any other tasks
@@ -101,19 +99,19 @@ new CloudTask("Flowers", "cmd.exe /c echo Flowers")
{ DependsOn = TaskDependencies.OnIds("Rain", "Sun") },
-```
+```
### Task ID range In a dependency on a range of parent tasks, a task depends on the completion of tasks whose IDs lie within a range.
-To create the dependency, provide the first and last task IDs in the range to the [TaskDependencies][net_taskdependencies].[OnIdRange][net_onidrange] static method when you populate the [DependsOn][net_dependson] property of [CloudTask][net_cloudtask].
+To create the dependency, provide the first and last task IDs in the range to the [TaskDependencies.OnIdRange](/dotnet/api/microsoft.azure.batch.taskdependencies.onidrange) static method when you populate the [CloudTask.DependsOn](/dotnet/api/microsoft.azure.batch.cloudtask.dependson) property.
> [!IMPORTANT]
-> When you use task ID ranges for your dependencies, only tasks with IDs representing integer values will be selected by the range. So the range `1..10` will select tasks `3` and `7`, but not `5flamingoes`.
+> When you use task ID ranges for your dependencies, only tasks with IDs representing integer values will be selected by the range. For example, the range `1..10` will select tasks `3` and `7`, but not `5flamingoes`.
> > Leading zeroes are not significant when evaluating range dependencies, so tasks with string identifiers `4`, `04` and `004` will all be *within* the range and they will all be treated as task `4`, so the first one to complete will satisfy the dependency. >
-> Every task in the range must satisfy the dependency, either by completing successfully or by completing with a failure that's mapped to a dependency action set to **Satisfy**. See the [Dependency actions](#dependency-actions) section for details.
+> Every task in the range must satisfy the dependency, either by completing successfully or by completing with a failure that is mapped to a [dependency action](#dependency-actions) set to **Satisfy**.
```csharp // Tasks 1, 2, and 3 don't depend on any other tasks. Because
@@ -135,11 +133,11 @@ new CloudTask("4", "cmd.exe /c echo 4")
## Dependency actions
-By default, a dependent task or set of tasks runs only after a parent task has completed successfully. In some scenarios, you may want to run dependent tasks even if the parent task fails. You can override the default behavior by specifying a dependency action. A dependency action specifies whether a dependent task is eligible to run, based on the success or failure of the parent task.
+By default, a dependent task or set of tasks runs only after a parent task has completed successfully. In some scenarios, you may want to run dependent tasks even if the parent task fails. You can override the default behavior by specifying a dependency action.
-For example, suppose that a dependent task is awaiting data from the completion of the upstream task. If the upstream task fails, the dependent task may still be able to run using older data. In this case, a dependency action can specify that the dependent task is eligible to run despite the failure of the parent task.
+A dependency action specifies whether a dependent task is eligible to run, based on the success or failure of the parent task. For example, suppose that a dependent task is awaiting data from the completion of the upstream task. If the upstream task fails, the dependent task may still be able to run using older data. In this case, a dependency action can specify that the dependent task is eligible to run despite the failure of the parent task.
-A dependency action is based on an exit condition for the parent task. You can specify a dependency action for any of the following exit conditions; for .NET, see the [ExitConditions][net_exitconditions] class for details:
+A dependency action is based on an exit condition for the parent task. You can specify a dependency action for any of the following exit conditions:
- When a pre-processing error occurs. - When a file upload error occurs. If the task exits with an exit code that was specified via **ExitCodes** or **ExitCodeRanges**, and then encounters a file upload error, the action specified by the exit code takes precedence.
@@ -147,10 +145,12 @@ A dependency action is based on an exit condition for the parent task. You can s
- When the task exits with an exit code that falls within a range specified by the **ExitCodeRanges** property. - The default case, if the task exits with an exit code not defined by **ExitCodes** or **ExitCodeRanges**, or if the task exits with a pre-processing error and the **PreProcessingError** property is not set, or if the task fails with a file upload error and the **FileUploadError** property is not set.
-To specify a dependency action in .NET, set the [ExitOptions][net_exitoptions].[DependencyAction][net_dependencyaction] property for the exit condition. The **DependencyAction** property takes one of two values:
+For .NET, see the [ExitConditions](/dotnet/api/microsoft.azure.batch.exitconditions) class for more details on these conditions.
+
+To specify a dependency action in .NET, set the [ExitOptions.DependencyAction](/dotnet/api/microsoft.azure.batch.exitoptions.dependencyaction) property for the exit condition to one of the following:
-- Setting the **DependencyAction** property to **Satisfy** indicates that dependent tasks are eligible to run if the parent task exits with a specified error.-- Setting the **DependencyAction** property to **Block** indicates that dependent tasks are not eligible to run.
+- **Satisfy**: Indicates that dependent tasks are eligible to run if the parent task exits with a specified error.
+- **Block**: Indicates that dependent tasks are not eligible to run.
The default setting for the **DependencyAction** property is **Satisfy** for exit code 0, and **Block** for all other exit conditions.
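A sketch of what that configuration can look like in Batch .NET (the task ID and the treatment of exit code 1 are illustrative assumptions, not the article's exact sample):

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Common;

// Parent task "A": dependent tasks may run if it exits with code 0 (default)
// or with code 1; any other exit condition blocks them.
CloudTask taskA = new CloudTask("A", "cmd.exe /c echo A")
{
    ExitConditions = new ExitConditions
    {
        ExitCodes = new List<ExitCodeMapping>
        {
            new ExitCodeMapping(1,
                new ExitOptions { DependencyAction = DependencyAction.Satisfy })
        },
        Default = new ExitOptions { DependencyAction = DependencyAction.Block }
    }
};
```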
@@ -191,37 +191,13 @@ new CloudTask("B", "cmd.exe /c echo B")
## Code sample
-The [TaskDependencies][github_taskdependencies] sample project is one of the [Azure Batch code samples][github_samples] on GitHub. This Visual Studio solution demonstrates:
+The [TaskDependencies](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/TaskDependencies) sample project on GitHub demonstrates:
-- How to enable task dependency on a job-- How to create tasks that depend on other tasks
+- How to enable task dependency on a job.
+- How to create tasks that depend on other tasks.
- How to execute those tasks on a pool of compute nodes. ## Next steps -- The [application packages](batch-application-packages.md) feature of Batch provides an easy way to both deploy and version the applications that your tasks execute on compute nodes.-- See [Installing applications and staging data on Batch compute nodes][forum_post] in the Azure Batch forum for an overview of methods for preparing your nodes to run tasks. Written by one of the Azure Batch team members, this post is a good primer on the different ways to copy applications, task input data, and other files to your compute nodes.-
-[forum_post]: https://social.msdn.microsoft.com/Forums/en-US/87b19671-1bdf-427a-972c-2af7e5ba82d9/installing-applications-and-staging-data-on-batch-compute-nodes?forum=azurebatch
-[github_taskdependencies]: https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/TaskDependencies
-[github_samples]: https://github.com/Azure/azure-batch-samples
-[net_batchclient]: /dotnet/api/microsoft.azure.batch.batchclient
-[net_cloudjob]: /dotnet/api/microsoft.azure.batch.cloudjob
-[net_cloudtask]: /dotnet/api/microsoft.azure.batch.cloudtask
-[net_dependson]: /dotnet/api/microsoft.azure.batch.cloudtask
-[net_exitcode]: /dotnet/api/microsoft.azure.batch.taskexecutioninformation
-[net_exitconditions]: /dotnet/api/microsoft.azure.batch.exitconditions
-[net_exitoptions]: /dotnet/api/microsoft.azure.batch.exitoptions
-[net_dependencyaction]: /dotnet/api/microsoft.azure.batch.exitoptions
-[net_msdn]: /dotnet/api/microsoft.azure.batch
-[net_onid]: /dotnet/api/microsoft.azure.batch.taskdependencies
-[net_onids]: /dotnet/api/microsoft.azure.batch.taskdependencies
-[net_onidrange]: /dotnet/api/microsoft.azure.batch.taskdependencies
-[net_taskexecutioninformation]: /dotnet/api/microsoft.azure.batch.taskexecutioninformation
-[net_taskstate]: /dotnet/api/microsoft.azure.batch.common.taskstate
-[net_usestaskdependencies]: /dotnet/api/microsoft.azure.batch.cloudjob
-[net_taskdependencies]: /dotnet/api/microsoft.azure.batch.taskdependencies
-
-[1]: ./media/batch-task-dependency/01_one_to_one.png "Diagram: one-to-one dependency"
-[2]: ./media/batch-task-dependency/02_one_to_many.png "Diagram: one-to-many dependency"
-[3]: ./media/batch-task-dependency/03_task_id_range.png "Diagram: task id range dependency"
+- Learn about the [application packages](batch-application-packages.md) feature of Batch, which provides an easy way to deploy and version the applications that your tasks execute on compute nodes.
+- Learn about [error checking for jobs and tasks](batch-job-task-error-checking.md).
batch https://docs.microsoft.com/en-us/azure/batch/security-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-best-practices.md
@@ -144,7 +144,7 @@ Batch nodes can [securely access credentials and secrets](credential-access-key-
### Compliance
-To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains a [large portfolio of compliance offerings](/overview/trusted-cloud/compliance/).
+To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains a [large portfolio of compliance offerings](https://azure.microsoft.com/overview/trusted-cloud/compliance).
These offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. Review the [comprehensive overview of compliance offerings](https://aka.ms/AzureCompliance) to determine which ones may be relevant to your Batch solutions.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
@@ -20,7 +20,7 @@ The Speech service SDK **Compressed Audio Input Stream** API provides a way to s
Platform | Languages | Supported GStreamer version | :--- | ---: | :---:
-Windows (excluding UWP) | C++, C#, Java, Python | [1.15.1](https://gstreamer.freedesktop.org/data/pkg/windows/1.15.1/)
+Windows (excluding UWP) | C++, C#, Java, Python | [1.15.1](https://gstreamer.freedesktop.org/releases/gstreamer/1.15.1.html)
Linux | C++, C#, Java, Python | [supported Linux distributions and target architectures](~/articles/cognitive-services/speech-service/speech-sdk.md) Android | Java | [1.14.4](https://gstreamer.freedesktop.org/data/pkg/android/1.14.4/)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/csharp/prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/csharp/prerequisites.md
@@ -6,5 +6,5 @@ ms.date: 03/09/2020
ms.author: trbye ---
-Handling compressed audio is implemented using [GStreamer](https://gstreamer.freedesktop.org). For licensing reasons GStreamer binaries are not compiled and linked with the Speech SDK. Developers need to install several dependencies and plugins, see [Installing on Windows](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c). Gstreamer binaries need to be in the system path, so that the speech SDK can load gstreamer binaries during runtime. If speech SDK is able to find libgstreamer-1.0-0.dll during runtime it means the gstreamer binaries are in the system path.
+Handling compressed audio is implemented using [GStreamer](https://gstreamer.freedesktop.org). For licensing reasons, GStreamer binaries are not compiled and linked with the Speech SDK. Developers need to install several dependencies and plugins; see [Installing on Windows](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c) or [Installing on Linux](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c). GStreamer binaries need to be in the system path so that the Speech SDK can load the binaries during runtime. If the Speech SDK is able to find `libgstreamer-1.0-0.dll` during runtime, it means the GStreamer binaries are in the system path.
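As an illustrative convenience (not part of the Speech SDK), a quick C# check that the GStreamer core library is discoverable on the Windows PATH:

```csharp
using System;
using System.IO;
using System.Linq;

// The library name the Speech SDK looks for at runtime on Windows.
const string gstreamerDll = "libgstreamer-1.0-0.dll";

// Scan every directory on PATH for the DLL.
bool found = (Environment.GetEnvironmentVariable("PATH") ?? string.Empty)
    .Split(Path.PathSeparator, StringSplitOptions.RemoveEmptyEntries)
    .Any(dir => File.Exists(Path.Combine(dir.Trim(), gstreamerDll)));

Console.WriteLine(found
    ? $"{gstreamerDll} found on PATH; the Speech SDK should be able to load it."
    : $"{gstreamerDll} not found on PATH; install GStreamer and add its bin directory to PATH.");
```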
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/python/prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/python/prerequisites.md
@@ -6,5 +6,5 @@ ms.date: 03/09/2020
ms.author: amishu ---
-Handling compressed audio is implemented using [`GStreamer`](https://gstreamer.freedesktop.org). For licensing reasons `GStreamer` binaries are not compiled and linked with the Speech SDK. Developers need to install several dependencies and plugins, see [Installing on Windows](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c). `GStreamer` binaries need to be in the system path, so that the speech SDK can load the binaries during runtime. If the Speech SDK is able to find `libgstreamer-1.0-0.dll` during runtime, it means the binaries are in the system path.
+Handling compressed audio is implemented using [GStreamer](https://gstreamer.freedesktop.org). For licensing reasons, GStreamer binaries are not compiled and linked with the Speech SDK. Developers need to install several dependencies and plugins; see [Installing on Windows](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c) or [Installing on Linux](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c). GStreamer binaries need to be in the system path so that the Speech SDK can load the binaries during runtime. If the Speech SDK is able to find `libgstreamer-1.0-0.dll` during runtime, it means the GStreamer binaries are in the system path.
container-registry https://docs.microsoft.com/en-us/azure/container-registry/zone-redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/zone-redundancy.md
@@ -52,7 +52,7 @@ To create a zone-redundant replication:
### Create a resource group
-If needed, run the [az group create](/cli/az/group#az_group_create) command to create a resource group for the registry in a region that [supports availability zones](../availability-zones/az-region.md) for Azure Container Registry, such as *eastus*.
+If needed, run the [az group create](/cli/azure/group) command to create a resource group for the registry in a region that [supports availability zones](../availability-zones/az-region.md) for Azure Container Registry, such as *eastus*.
```azurecli az group create --name <resource-group-name> --location <location>
@@ -158,7 +158,7 @@ Copy the following contents to a new file and save it using a filename such as `
} ```
-Run the following [az deployment group create](/cli/az/deployment#az_group_deployment_create) command to create the registry using the preceding template file. Where indicated, provide:
+Run the following [az deployment group create](/cli/azure/deployment?view=azure-cli-latest) command to create the registry using the preceding template file. Where indicated, provide:
* a unique registry name, or deploy the template without parameters and it will create a unique name for you * a location for the replica that supports availability zones, such as *westus2*
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cassandra-kafka-connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-kafka-connect.md
@@ -17,7 +17,7 @@ Existing Cassandra applications can easily work with the [Azure Cosmos DB Cassan
Data in Apache Kafka (topics) is only useful when consumed by other applications or ingested into other systems. It's possible to build a solution using the [Kafka Producer/Consumer](https://kafka.apache.org/documentation/#api) APIs [using a language and client SDK of your choice](https://cwiki.apache.org/confluence/display/KAFKA/Clients). Kafka Connect provides an alternative solution. It's a platform to stream data between Apache Kafka and other systems in a scalable and reliable manner. Since Kafka Connect supports off-the-shelf connectors, including Cassandra, you don't need to write custom code to integrate Kafka with Azure Cosmos DB Cassandra API.
-In this article, we will be using the open-source [DataStax Apache Kafka connector](https://docs.datastax.com/kafka/doc/kafka/kafkaIntro.html), that works on top of Kafka Connect framework to ingest records from a Kafka topic into rows of one or more Cassandra tables. The example provides a reusable setup using Docker Compose. This is quite convenient since it enables you to bootstrap all the required components locally with a single command. These components include Kafka, Zookeeper, Kafka Connect worker, and the sample data generator application.
+In this article, we will be using the open-source [DataStax Apache Kafka connector](https://docs.datastax.com/en/kafka/doc/kafka/kafkaIntro.html), which works on top of the Kafka Connect framework to ingest records from a Kafka topic into rows of one or more Cassandra tables. The example provides a reusable setup using Docker Compose. This is quite convenient since it enables you to bootstrap all the required components locally with a single command. These components include Kafka, Zookeeper, Kafka Connect worker, and the sample data generator application.
Here is a breakdown of the components and their service definitions - you can refer to the complete `docker-compose` file [in the GitHub repo](https://github.com/Azure-Samples/cosmosdb-cassandra-kafka/blob/main/docker-compose.yaml).
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cassandra-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-troubleshoot.md
@@ -24,7 +24,7 @@ This article describes common errors and solutions for applications consuming Az
| OverloadedException (Java) | The total number of request units consumed is more than the request-units provisioned on the keyspace or table. So the requests are throttled. | Consider scaling the throughput assigned to a keyspace or table from the Azure portal (see [here](manage-scale-cassandra.md) for scaling operations in Cassandra API) or you can implement a retry policy. For Java, see retry samples for [v3.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample) and [v4.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample-v4). See also [Azure Cosmos Cassandra Extensions for Java](https://github.com/Azure/azure-cosmos-cassandra-extensions) | | OverloadedException (Java) even with sufficient throughput | The system appears to be throttling requests despite sufficient throughput being provisioned for request volume and/or consumed request unit cost | Cassandra API implements a system throughput budget for schema-level operations (CREATE TABLE, ALTER TABLE, DROP TABLE). This budget should be enough for schema operations in a production system. However, if you have a high number of schema-level operations, it is possible you are exceeding this limit. As this budget is not user controlled, you will need to consider lowering the number of schema operations being run. If taking this action does not resolve the issue, or it is not feasible for your workload, [create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).| | ClosedConnectionException (Java) | After a period of idle time following successful connections, application is unable to connect| This error could be due to idle timeout of Azure LoadBalancers, which is 4 minutes. Set keep alive setting in driver (see below) and increase keep-alive settings in operating system, or [adjust idle timeout in Azure Load Balancer](../load-balancer/load-balancer-tcp-idle-timeout.md?tabs=tcp-reset-idle-portal). |
-| Other intermittent connectivity errors (Java) | Connection drops or times out unexpectedly | The Apache Cassandra drivers for Java provide two native reconnection policies: `ExponentialReconnectionPolicy` and `ConstantReconnectionPolicy`. The default is `ExponentialReconnectionPolicy`. However, for Azure Cosmos DB Cassandra API, we recommend `ConstantReconnectionPolicy` with a delay of 2 seconds. See the [driver documentation](https://docs.datastax.com/developer/java-driver/4.9/manual/core/reconnection/) for Java v4.x driver, and [here](https://docs.datastax.com/developer/java-driver/3.7/manual/reconnection/) for Java 3.x guidance (see also the examples below).|
+| Other intermittent connectivity errors (Java) | Connection drops or times out unexpectedly | The Apache Cassandra drivers for Java provide two native reconnection policies: `ExponentialReconnectionPolicy` and `ConstantReconnectionPolicy`. The default is `ExponentialReconnectionPolicy`. However, for Azure Cosmos DB Cassandra API, we recommend `ConstantReconnectionPolicy` with a delay of 2 seconds. See the [driver documentation](https://docs.datastax.com/en/developer/java-driver/4.9/manual/core/reconnection/) for Java v4.x driver, and [here](https://docs.datastax.com/en/developer/java-driver/3.7/manual/reconnection/) for Java 3.x guidance (see also the examples below).|
If your error is not listed above, and you are experiencing an error when executing a [supported operation in Cassandra API](cassandra-support.md), where the error is *not present when using native Apache Cassandra*, [create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/introduction.md
@@ -51,7 +51,7 @@ After data is present in a centralized data store in the cloud, process or trans
If you prefer to code transformations by hand, ADF supports external activities for executing your transformations on compute services such as HDInsight Hadoop, Spark, Data Lake Analytics, and Machine Learning. ### CI/CD and publish
-Data Factory offers full support for CI/CD of your data pipelines using Azure DevOps and GitHub. This allows you to incrementally develop and deliver your ETL processes before publishing the finished product. After the raw data has been refined into a business-ready consumable form, load the data into Azure Data Warehouse, Azure SQL Database, Azure CosmosDB, or whichever analytics engine your business users can point to from their business intelligence tools.
+Data Factory offers full support for CI/CD of your data pipelines using Azure DevOps and GitHub. This allows you to incrementally develop and deliver your ETL processes before publishing the finished product. After the raw data has been refined into a business-ready consumable form, load the data into Azure Synapse Analytics, Azure SQL Database, Azure Cosmos DB, or whichever analytics engine your business users can point to from their business intelligence tools.
### Monitor After you have successfully built and deployed your data integration pipeline, providing business value from refined data, monitor the scheduled activities and pipelines for success and failure rates. Azure Data Factory has built-in support for pipeline monitoring via Azure Monitor, API, PowerShell, Azure Monitor logs, and health panels on the Azure portal.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/security-baseline.md
@@ -325,7 +325,7 @@ Note that additional permissions might be required to get visibility into worklo
## Logging and Threat Detection
-*For more information, see the [Azure Security Benchmark: Logging and Threat Detection](/azure/security/benchmarks/security-controls-v2-logging-threat-protection).*
+*For more information, see the [Azure Security Benchmark: Logging and Threat Detection](/azure/security/benchmarks/security-controls-v2-logging-threat-detection).*
### LT-1: Enable threat detection for Azure resources
@@ -498,7 +498,7 @@ Use workflow automation features in Azure Security Center and Azure Sentinel to
## Posture and Vulnerability Management
-*For more information, see the [Azure Security Benchmark: Posture and Vulnerability Management](/azure/security/benchmarks/security-controls-v2-vulnerability-management).*
+*For more information, see the [Azure Security Benchmark: Posture and Vulnerability Management](/azure/security/benchmarks/security-controls-v2-posture-vulnerability-management).*
### PV-3: Establish secure configurations for compute resources
@@ -668,9 +668,9 @@ For more information, see the following references:
- [Cloud Adoption Framework - Azure data security and encryption best practices](https://docs.microsoft.com/azure/security/fundamentals/data-encryption-best-practices?toc=/azure/cloud-adoption-framework/toc.json&amp;bc=/azure/cloud-adoption-framework/_bread/toc.json) -- [Azure Security Benchmark - Asset management](/azure/security/benchmarks/security-benchmark-v2-asset-management)
+- [Azure Security Benchmark - Asset management](/azure/security/benchmarks/security-controls-v2-asset-management)
-- [Azure Security Benchmark - Data Protection](/azure/security/benchmarks/security-benchmark-v2-data-protection)
+- [Azure Security Benchmark - Data Protection](/azure/security/benchmarks/security-controls-v2-data-protection)
**Azure Security Center monitoring**: Not applicable
@@ -698,7 +698,7 @@ Ensure that the segmentation strategy is implemented consistently across control
**Guidance**: Continuously measure and mitigate risks to your individual assets and the environment they are hosted in. Prioritize high value assets and highly-exposed attack surfaces, such as published applications, network ingress and egress points, user and administrator endpoints, etc. -- [Azure Security Benchmark - Posture and vulnerability management](/azure/security/benchmarks/security-benchmark-v2-posture-vulnerability-management)
+- [Azure Security Benchmark - Posture and vulnerability management](/azure/security/benchmarks/security-controls-v2-posture-vulnerability-management)
**Azure Security Center monitoring**: Not applicable
@@ -739,7 +739,7 @@ This strategy should include documented guidance, policy, and standards for the
For more information, see the following references: - [Azure Security Best Practice 11 - Architecture. Single unified security strategy](/azure/cloud-adoption-framework/security/security-top-10#11-architecture-establish-a-single-unified-security-strategy) -- [Azure Security Benchmark - Network Security](/azure/security/benchmarks/security-benchmark-v2-network-security)
+- [Azure Security Benchmark - Network Security](/azure/security/benchmarks/security-controls-v2-network-security)
- [Azure network security overview](../security/fundamentals/network-overview.md)
@@ -767,9 +767,9 @@ This strategy should include documented guidance, policy, and standards for the
For more information, see the following references: -- [Azure Security Benchmark - Identity management](/azure/security/benchmarks/security-benchmark-v2-identity-management)
+- [Azure Security Benchmark - Identity management](/azure/security/benchmarks/security-controls-v2-identity-management)
-- [Azure Security Benchmark - Privileged access](/azure/security/benchmarks/security-benchmark-v2-privileged-access)
+- [Azure Security Benchmark - Privileged access](/azure/security/benchmarks/security-controls-v2-privileged-access)
- [Azure Security Best Practice 11 - Architecture. Single unified security strategy](/azure/cloud-adoption-framework/security/security-top-10#11-architecture-establish-a-single-unified-security-strategy)
@@ -801,9 +801,9 @@ This strategy should include documented guidance, policy, and standards for the
For more information, see the following references: -- [Azure Security Benchmark - Logging and threat detection](/azure/security/benchmarks/security-benchmark-v2-logging-threat-detection)
+- [Azure Security Benchmark - Logging and threat detection](/azure/security/benchmarks/security-controls-v2-logging-threat-detection)
-- [Azure Security Benchmark - Incident response](/azure/security/benchmarks/security-benchmark-v2-incident-response)
+- [Azure Security Benchmark - Incident response](/azure/security/benchmarks/security-controls-v2-incident-response)
- [Azure Security Best Practice 4 - Process. Update Incident Response Processes for Cloud](/azure/cloud-adoption-framework/security/security-top-10#4-process-update-incident-response-ir-processes-for-cloud)
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-federation-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-federation-overview.md
@@ -285,11 +285,11 @@ jobs](../stream-analytics/stream-analytics-quick-create-portal.md) that
integrate [inputs](../stream-analytics/stream-analytics-add-inputs.md) and [outputs](../stream-analytics/stream-analytics-define-outputs.md) and integrate the data from the inputs through
-[queries](https://docs.microsoft.com/stream-analytics-query/stream-analytics-query-language-reference.md)
+[queries](/stream-analytics-query/stream-analytics-query-language-reference)
that yield a result that is then made available on the outputs. Queries are based on the [SQL query
-language](https://docs.microsoft.com/stream-analytics-query/stream-analytics-query-language-reference.md)
+language](/stream-analytics-query/stream-analytics-query-language-reference)
and can be used to easily filter, sort, aggregate, and join streaming data over a period of time. You can also extend this SQL language with [JavaScript](../stream-analytics/stream-analytics-javascript-user-defined-functions.md)
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-federation-patterns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-federation-patterns.md
@@ -233,7 +233,7 @@ replication tasks.
The last scenario requires excluding already replicated events from being replicated again. The technique is demonstrated and explained in the
-[EventHubToEventHubMerge](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/main/code/EventHubToEventHubMerge)
+[EventHubToEventHubMerge](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/code/EventHubMerge)
sample. ## Editor
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations-providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
@@ -6,7 +6,7 @@ author: duongau
ms.service: expressroute ms.topic: conceptual
-ms.date: 12/10/2020
+ms.date: 12/28/2020
ms.author: duau --- # ExpressRoute partners and peering locations
@@ -224,7 +224,7 @@ If you are remote and don't have fiber connectivity or you want to explore other
| **New York** |Equinix, Megaport | Altice Business, Crown Castle, Spectrum Enterprise, Webair | | **Paris** | Equinix | Proximus | | **Quebec City** | Megaport | Fibrenoire |
-| **Sao Paula** | Equinix | Venha Pra Nuvem |
+| **Sao Paulo** | Equinix | Venha Pra Nuvem |
| **Seattle** |Equinix | Alaska Communications | | **Silicon Valley** |Coresite, Equinix | Cox Business, Spectrum Enterprise, Windstream, X2nsat Inc. | | **Singapore** |Equinix |1CLOUDSTAR, BICS, CMC Telecom, Epsilon Telecommunications Limited, LGA Telecom, United Information Highway (UIH) |
governance https://docs.microsoft.com/en-us/azure/governance/policy/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/overview.md
@@ -57,7 +57,7 @@ For detailed information about when and how policy evaluation happens, see
### Control the response to an evaluation Business rules for handling non-compliant resources vary widely between organizations. Examples of
-how an organization wants the platform to respond to a non-complaint resource include:
+how an organization wants the platform to respond to a non-compliant resource include:
- Deny the resource change - Log the change to the resource
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/partner-ecosystem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/partner-ecosystem.md
@@ -18,7 +18,7 @@ When creating an end-to-end solution built around Azure API for FHIR, you may re
| Partner | Capabilities | Supported Countries/Regions | Contact | |------------------|--------------------------------------------|-----------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
-| Medal | De-identification, Legacy-FHIR conversion | USA | [Contact](http://www.medal.com/) |
+| Medal | De-identification, Legacy-FHIR conversion | USA | [Contact](https://asab.squarespace.com/asab-medal/) |
| Rhapsody | Legacy-FHIR conversion | USA, Australia, New Zealand | [Contact](https://rhapsody.health/contact-us) | | iNTERFACEWARE | Legacy-FHIR conversion | USA, Canada | [Contact](https://www.interfaceware.com/contact) | | Darena Solutions | Application Development, System Integrator | USA | [Contact](https://www.darenasolutions.com/contact) |
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/tutorial-web-app-public-app-reg https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/tutorial-web-app-public-app-reg.md
@@ -17,7 +17,7 @@ In the previous tutorial, you deployed and set up your Azure API for FHIR. Now t
1. Navigate to Azure Active Directory 1. Select **App Registration** --> **New Registration** 1. Name your application
-1. Select **Public client/native (mobile & desktop)** and set the redirect URI to https://www.getpostman.com/oauth2/callback.
+1. Select **Public client/native (mobile & desktop)** and set the redirect URI to `https://www.getpostman.com/oauth2/callback`.
:::image type="content" source="media/tutorial-web-app/register-public-app.png" alt-text="Screenshot of the Register an application pane, and an example application name and redirect URL.":::
hpc-cache https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/access-policies.md
@@ -4,7 +4,7 @@ description: How to create and apply custom access policies to limit client acce
author: ekpgh ms.service: hpc-cache ms.topic: how-to
-ms.date: 12/22/2020
+ms.date: 12/28/2020
ms.author: v-erkel ---
@@ -86,6 +86,17 @@ If you turn on root squash, you must also set the anonymous ID user value to one
* **65535** (no access) * **0** (unprivileged root)
+## Update access policies
+
+You can edit or delete access policies from the table in the **Client access policies** page.
+
+Click the policy name to open it for editing.
+
+To delete a policy, mark the checkbox next to its name in the list, then click the **Delete** button at the top of the list. You can't delete the policy named "default".
+
+> [!NOTE]
+> You can't delete an access policy that is in use. Remove the policy from any namespace paths that include it before trying to delete it.
+ ## Next steps * Apply access policies in the namespace paths for your storage targets. Read [Set up the aggregated namespace](add-namespace-paths.md) to learn how.
hpc-cache https://docs.microsoft.com/en-us/azure/hpc-cache/directory-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/directory-services.md
@@ -39,7 +39,7 @@ Under **Active directory details**, supply these values:
* **AD DNS domain name** - Provide the fully qualified domain name of the AD server that the cache will join to get the credentials.
-* **Cache server name (computer account)** - Set the name that will be assigned to this HPC cache when it joins the AD domain. Specify a name that is easy to recognize as this cache. The name can be up to 15 characters long and can include capital or lowercase letters, numbers, hyphens (-), and underscores (_).
+* **Cache server name (computer account)** - Set the name that will be assigned to this HPC cache when it joins the AD domain. Specify a name that is easy to recognize as this cache. The name can be up to 15 characters long and can include capital or lowercase letters, numbers, and hyphens (-).
In the **Credentials** section, provide an AD administrator username and password that the Azure HPC Cache can use to access the AD server. This information is encrypted when stored, and can't be queried.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/concepts-app-templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-app-templates.md
@@ -3,12 +3,11 @@ title: What are application templates in Azure IoT Central | Microsoft Docs
description: Azure IoT Central application templates allow you to jump into IoT solution development. author: philmea ms.author: philmea
-ms.date: 10/25/2019
+ms.date: 12/19/2020
ms.topic: conceptual ms.service: iot-central services: iot-central ---- # What are application templates? Application templates in Azure IoT Central are a tool to help solution builders kickstart their IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing your application for resale to your customers.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/concepts-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-architecture.md
@@ -3,7 +3,7 @@ title: Architectural concepts in Azure IoT Central | Microsoft Docs
description: This article introduces key concepts relating the architecture of Azure IoT Central author: dominicbetts ms.author: dobett
-ms.date: 11/27/2019
+ms.date: 12/19/2020
ms.topic: conceptual ms.service: iot-central services: iot-central
@@ -12,8 +12,6 @@ manager: philmea
# Azure IoT Central architecture -- This article provides an overview of the Microsoft Azure IoT Central architecture. ![Top-level architecture](media/concepts-architecture/architecture.png)
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/concepts-device-templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-device-templates.md
@@ -3,7 +3,7 @@ title: What are device templates in Azure IoT Central | Microsoft Docs
description: Azure IoT Central device templates let you specify the behavior of the devices connected to your application. A device template specifies the telemetry, properties, and commands the device must implement. A device template also defines the UI for the device in IoT Central such as the forms and dashboards an operator uses. author: dominicbetts ms.author: dobett
-ms.date: 11/05/2020
+ms.date: 12/19/2020
ms.topic: conceptual ms.service: iot-central services: iot-central
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/concepts-iot-edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-iot-edge.md
@@ -3,7 +3,7 @@ title: Azure IoT Edge and Azure IoT Central | Microsoft Docs
description: Understand how to use Azure IoT Edge with an IoT Central application. author: dominicbetts ms.author: dobett
-ms.date: 12/12/2019
+ms.date: 12/19/2020
ms.topic: conceptual ms.service: iot-central services: iot-central
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/concepts-telemetry-properties-commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-telemetry-properties-commands.md
@@ -3,7 +3,7 @@ title: Telemetry, property, and command payloads in Azure IoT Central | Microsof
description: Azure IoT Central device templates let you specify the telemetry, properties, and commands that a device must implement. Understand the format of the data a device can exchange with IoT Central. author: dominicbetts ms.author: dobett
-ms.date: 11/05/2020
+ms.date: 12/19/2020
ms.topic: conceptual ms.service: iot-central services: iot-central
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-add-tiles-to-your-dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-add-tiles-to-your-dashboard.md
@@ -3,7 +3,7 @@ title: Configure your Azure IoT Central dashboard | Microsoft Docs
description: As a builder, learn how to configure the default Azure IoT Central application dashboard with tiles. author: TheJasonAndrew ms.author: v-anjaso
-ms.date: 11/06/2020
+ms.date: 12/19/2020
ms.topic: how-to ms.service: iot-central ---
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-administer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-administer.md
@@ -3,7 +3,7 @@ title: Change Azure IoT Central application settings | Microsoft Docs
description: As an administrator, learn how to manage your Azure IoT Central application by changing the application name and URL, uploading an image, and deleting an application author: viv-liu ms.author: viviali
-ms.date: 11/27/2019
+ms.date: 12/19/2020
ms.topic: how-to ms.service: iot-central services: iot-central
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-configure-file-uploads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-file-uploads.md
@@ -4,11 +4,10 @@ description: How to configure file uploads from your devices to the cloud. After
services: iot-central author: dominicbetts ms.author: dobett
-ms.date: 08/06/2020
+ms.date: 12/23/2020
ms.topic: how-to ms.service: iot-central ---- # Upload files from your devices to the cloud *This topic applies to administrators and device developers.*
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-configure-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-rules.md
@@ -3,7 +3,7 @@ title: Configure rules and actions in Azure IoT Central | Microsoft Docs
description: This how-to article shows you, as a builder, how to configure telemetry-based rules and actions in your Azure IoT Central application. author: vavilla ms.author: vavilla
-ms.date: 11/27/2019
+ms.date: 12/23/2020
ms.topic: how-to ms.service: iot-central services: iot-central
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-create-and-manage-applications-csp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-and-manage-applications-csp.md
@@ -6,7 +6,7 @@ services: iot-central
ms.service: iot-central author: dominicbetts ms.author: dobett
-ms.date: 08/23/2019
+ms.date: 12/11/2020
ms.topic: how-to manager: philmea ---
@@ -15,6 +15,8 @@ manager: philmea
The Microsoft Cloud Solution Provider (CSP) program is a Microsoft Reseller program. Its intent is to provide our channel partners with a one-stop program to resell all Microsoft Commercial Online Services. Learn more about the [Cloud Solution Provider program](https://partner.microsoft.com/cloud-solution-provider).
+[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+ As a CSP, you can create and manage Microsoft Azure IoT Central applications on behalf of your customers through the [Microsoft Partner Center](https://partnercenter.microsoft.com/partner/home). When Azure IoT Central applications are created on behalf of customers by CSPs, just like with other CSP-managed Azure services, CSPs manage billing for customers. A charge for Azure IoT Central will appear in your total bill in the Microsoft Partner Center. To get started, sign in to your account on the Microsoft Partner Portal and select a customer for whom you want to create an Azure IoT Central application. Navigate to Service Management for the customer from the left navigation pane.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-faq.md
@@ -3,7 +3,7 @@ title: Azure IoT Central frequently asked questions | Microsoft Docs
description: Azure IoT Central frequently asked questions (FAQ) and answers author: dominicbetts ms.author: dobett
-ms.date: 09/23/2020
+ms.date: 12/20/2020
ms.topic: how-to ms.service: iot-central services: iot-central
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-manage-iot-central-from-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-cli.md
@@ -21,11 +21,13 @@ Instead of creating and managing IoT Central applications on the [Azure IoT Cent
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
- - If you need to run your CLI commands in a different Azure subscription, see [Change the active subscription](/cli/azure/manage-azure-subscriptions-azure-cli?view=azure-cli-latest#change-the-active-subscription).
+ - If you need to run your CLI commands in a different Azure subscription, see [Change the active subscription](/cli/azure/manage-azure-subscriptions-azure-cli?view=azure-cli-latest#change-the-active-subscription&preserve-view=true).
## Create an application
-Use the [az iot central app create](/cli/azure/iot/central/app?view=azure-cli-latest#az-iot-central-app-create) command to create an IoT Central application in your Azure subscription. For example:
+[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+
+Use the [az iot central app create](/cli/azure/iot/central/app?view=azure-cli-latest#az-iot-central-app-create&preserve-view=true) command to create an IoT Central application in your Azure subscription. For example:
```azurecli-interactive # Create a resource group for the IoT Central application
@@ -58,11 +60,11 @@ These commands first create a resource group in the east US region for the appli
## View your applications
-Use the [az iot central app list](/cli/azure/iot/central/app?view=azure-cli-latest#az-iot-central-app-list) command to list your IoT Central applications and view metadata.
+Use the [az iot central app list](/cli/azure/iot/central/app?view=azure-cli-latest#az-iot-central-app-list&preserve-view=true) command to list your IoT Central applications and view metadata.
## Modify an application
-Use the [az iot central app update](/cli/azure/iot/central/app?view=azure-cli-latest#az-iot-central-app-update) command to update the metadata of an IoT Central application. For example, to change the display name of your application:
+Use the [az iot central app update](/cli/azure/iot/central/app?view=azure-cli-latest#az-iot-central-app-update&preserve-view=true) command to update the metadata of an IoT Central application. For example, to change the display name of your application:
```azurecli-interactive az iot central app update --name myiotcentralapp \
@@ -72,7 +74,7 @@ az iot central app update --name myiotcentralapp \
## Remove an application
-Use the [az iot central app delete](/cli/azure/iot/central/app?view=azure-cli-latest#az-iot-central-app-delete) command to delete an IoT Central application. For example:
+Use the [az iot central app delete](/cli/azure/iot/central/app?view=azure-cli-latest#az-iot-central-app-delete&preserve-view=true) command to delete an IoT Central application. For example:
```azurecli-interactive az iot central app delete --name myiotcentralapp \
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-manage-iot-central-from-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-portal.md
@@ -18,6 +18,8 @@ Instead of creating and managing IoT Central applications on the [Azure IoT Cent
## Create IoT Central applications
+[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+ To create an application, navigate to the [Azure portal](https://ms.portal.azure.com) and select **Create a resource**. In the **Search the Marketplace** bar, type *IoT Central*:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-manage-iot-central-from-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-powershell.md
@@ -23,10 +23,12 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
+[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+ If you prefer to run Azure PowerShell on your local machine, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps). When you run Azure PowerShell locally, use the **Connect-AzAccount** cmdlet to sign in to Azure before you try the cmdlets in this article. > [!TIP]
-> If you need to run your PowerShell commands in a different Azure subscription, see [Change the active subscription](/powershell/azure/manage-subscriptions-azureps?view=azps-3.4.0#change-the-active-subscription).
+> If you need to run your PowerShell commands in a different Azure subscription, see [Change the active subscription](/powershell/azure/manage-subscriptions-azureps?view=azps-3.4.0#change-the-active-subscription&preserve-view=true).
## Install the IoT Central module
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-manage-iot-central-programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-programmatically.md
@@ -5,10 +5,9 @@ services: iot-central
ms.service: iot-central author: dominicbetts ms.author: dobett
-ms.date: 05/19/2020
+ms.date: 12/23/2020
ms.topic: how-to ---- # Manage IoT Central programmatically [!INCLUDE [iot-central-selector-manage](../../../includes/iot-central-selector-manage.md)]
@@ -32,6 +31,8 @@ The following table lists the SDK repositories and package installation commands
The [Azure IoT Central ARM SDK samples](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/) repository has code samples for multiple programming languages that show you how to create, update, list, and delete Azure IoT Central applications.
+[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+ ## Next steps Now that you've learned how to manage Azure IoT Central applications programmatically, a suggested next step is to learn more about the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) service.\ No newline at end of file
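Since this entry covers programmatic management, a minimal Python sketch of the ARM SDK flow may help. It's a hedged sketch, not the official sample: it assumes the `azure-identity` and `azure-mgmt-iotcentral` packages, all resource names and the subscription ID are placeholders, and method names vary between SDK versions.

```python
# Hedged sketch: create an IoT Central app via the ARM SDK.
# Assumes azure-identity and azure-mgmt-iotcentral are installed;
# all resource names and the subscription ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.iotcentral import IotCentralClient
from azure.mgmt.iotcentral.models import App, AppSkuInfo

client = IotCentralClient(DefaultAzureCredential(), "<subscription-id>")

app = App(
    location="eastus",
    sku=AppSkuInfo(name="ST2"),
    subdomain="myiotcentralapp",
    display_name="My IoT Central app",
)

# Long-running ARM operation; recent SDK versions expose
# begin_create_or_update (older versions use create_or_update).
poller = client.apps.begin_create_or_update("myResourceGroup", "myiotcentralapp", app)
print(poller.result().state)
```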
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-manage-preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-preferences.md
@@ -3,13 +3,12 @@ title: Manage your personal preferences on IoT Central | Microsoft Docs
description: How to manage your personal application preferences such as changing language and theme in your IoT Central application. author: lmasieri ms.author: lmasieri
-ms.date: 07/10/2019
+ms.date: 12/23/2020
ms.topic: how-to ms.service: iot-central services: iot-central manager: peterpr ---- # Manage your personal application preferences *This article applies to operators, builders, and administrators.*
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/quick-deploy-iot-central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-deploy-iot-central.md
@@ -3,7 +3,7 @@ title: Quickstart - Create an Azure IoT Central application | Microsoft Docs
description: Quickstart - Create a new Azure IoT Central application. Create the application using either the free pricing plan or one of the standard pricing plans. author: viv-liu ms.author: viviali
-ms.date: 11/23/2020
+ms.date: 12/28/2020
ms.topic: quickstart ms.service: iot-central services: iot-central
@@ -14,6 +14,9 @@ manager: corywink
This quickstart shows you how to create an Azure IoT Central application. +
+[!INCLUDE [Warning About Access Required](../../../includes/warning-contribitorrequireaccess.md)]
+ ## Create an application Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account.
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-raspberry-pi-kit-c-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-raspberry-pi-kit-c-get-started.md
@@ -93,7 +93,7 @@ Prepare the microSD card for installation of the Raspbian image.
1. Download Raspbian.
- 1. [Download Raspbian Stretch with Desktop](https://www.raspberrypi.org/downloads/raspbian/) (the .zip file).
+ 1. [Download Raspbian Stretch with Desktop](https://www.raspberrypi.org/software/) (the .zip file).
2. Extract the Raspbian image to a folder on your computer.
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-raspberry-pi-kit-node-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md
@@ -90,7 +90,7 @@ Prepare the microSD card for installation of the Raspbian image.
1. Download Raspbian.
- a. [Raspbian Buster with desktop](https://www.raspberrypi.org/downloads/raspbian/) (the .zip file).
+ a. [Raspbian Buster with desktop](https://www.raspberrypi.org/software/) (the .zip file).
b. Extract the Raspbian image to a folder on your computer.
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/howto-certify-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-certify-device.md
@@ -36,7 +36,7 @@ To meet the certification requirements, your device must:
- Connect to Azure IoT Hub using the [DPS](../iot-dps/about-iot-dps.md). - Implement telemetry, properties, or commands following the IoT Plug and Play convention. - Describe the device interactions with a [DTDL v2](https://aka.ms/dtdl) model.-- Publish the model, and all required interfaces, in the [Azure IoT Public Model Repository](https://devicemodels.azureiotsolutions.com/)
+- Publish the model, and all required interfaces, in the Azure IoT Public Model Repository
- Send the model ID during [DPS registration](./concepts-developer-guide-device.md#dps-payload) in the DPS provisioning payload. - Announce the model ID during the [MQTT connection](./concepts-developer-guide-device.md#model-id-announcement). - All device models must be compatible with [Azure IoT Central](../iot-central/core/overview-iot-central-developer.md).
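As a hedged illustration of the "send the model ID during DPS registration" requirement in the list above, a device using the Python `azure-iot-device` SDK might set the provisioning payload as shown below; the ID scope, registration ID, key, and model ID are all placeholders.

```python
# Hedged sketch: announce an IoT Plug and Play model ID in the DPS payload.
# All identifiers below are placeholders.
from azure.iot.device import ProvisioningDeviceClient

client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="my-pnp-device",
    id_scope="<id-scope>",
    symmetric_key="<device-key>",
)
# The DPS provisioning payload carries the DTDL model ID.
client.provisioning_payload = {"modelId": "dtmi:com:example:Thermostat;1"}
result = client.register()
print(result.status)  # "assigned" on success
```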
@@ -169,7 +169,7 @@ The following steps show you how to use the [Azure Certified Device portal](http
To use the [certification portal](https://aka.ms/acdp), you must use an Azure Active Directory from your work or school tenant.
-To publish the models to the Azure IoT Public Model Repository, your account must be a member of the [Microsoft Partner Network](https://partner.microsoft.com). The system checks that the Microsoft Partner Network ID exists and the account is fully vetted before publishing to the device catalog.
+To publish the models to the [Azure IoT Public Model Repository](https://github.com/Azure/iot-plugandplay-models), your account must be a member of the [Microsoft Partner Network](https://partner.microsoft.com). The system checks that the Microsoft Partner Network ID exists and the account is fully vetted before publishing to the device catalog.
### Company profile
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/secure-your-key-vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/secure-your-key-vault.md
@@ -91,7 +91,7 @@ You grant a user, group, or application access to execute specific operations fo
You can see the full list of vault and secret operations here: [Key Vault Operation Reference](/rest/api/keyvault/#vault-operations) <a id="key-vault-access-policies"></a>
-Key Vault access policies grant permissions separately to keys, secrets, and certificate. Access permissions for keys, secrets, and certificates are at the vault level.
+Key Vault access policies grant permissions separately to keys, secrets, and certificates. Access permissions for keys, secrets, and certificates are at the vault level.
For more information about using key vault access policies, see [Assign a Key Vault access policy](assign-access-policy-portal.md)
@@ -127,7 +127,7 @@ For more information about Key Vault firewall and virtual networks, see [Configu
## Private endpoint connection
-In case of a need of to completely block Key Vault exposure to public, an [Azure Private Endpoint](../../private-link/private-endpoint-overview.md) can be used. An Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
+If you need to completely block public exposure of Key Vault, you can use an [Azure Private Endpoint](../../private-link/private-endpoint-overview.md). An Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
Common scenarios for using Private Link for Azure services:
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/workflow-definition-language-functions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
@@ -1735,7 +1735,7 @@ decodeUriComponent('<value>')
This example replaces the escape characters in this string with decoded versions: ```
-decodeUriComponent('http%3A%2F%2Fcontoso.com')
+decodeUriComponent('https%3A%2F%2Fcontoso.com')
``` And returns this result: `"https://contoso.com"`
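For intuition, the same percent-decoding can be reproduced outside Logic Apps; here's a quick Python cross-check of the example above:

```python
# Percent-decoding equivalent of the decodeUriComponent example above.
from urllib.parse import unquote

print(unquote("https%3A%2F%2Fcontoso.com"))  # -> https://contoso.com
```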
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-forecast https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-auto-train-forecast.md
@@ -318,7 +318,7 @@ When you have your `AutoMLConfig` object ready, you can submit the experiment. A
```python ws = Workspace.from_config()
-experiment = Experiment(ws, "forecasting_example")
+experiment = Experiment(ws, "Tutorial-automl-forecasting")
local_run = experiment.submit(automl_config, show_output=True) best_run, fitted_model = local_run.get_output() ```
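A minimal, self-contained sketch of this submit pattern, assuming an existing `AutoMLConfig` named `automl_config` and a workspace `config.json` on disk:

```python
# Minimal sketch of the experiment-submit pattern shown above.
# Assumes an AutoMLConfig named automl_config and a workspace config.json.
from azureml.core import Workspace
from azureml.core.experiment import Experiment

ws = Workspace.from_config()
experiment = Experiment(ws, "Tutorial-automl-forecasting")
local_run = experiment.submit(automl_config, show_output=True)
best_run, fitted_model = local_run.get_output()  # best child run + trained model
print(best_run.get_metrics())
```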
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-remote https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-auto-train-remote.md
@@ -156,7 +156,7 @@ Now submit the configuration to automatically select the algorithm, hyper parame
```python from azureml.core.experiment import Experiment
-experiment = Experiment(ws, 'automl_remote')
+experiment = Experiment(ws, 'Tutorial-automl-remote')
remote_run = experiment.submit(automl_config, show_output=True) ```
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
@@ -320,7 +320,7 @@ from azureml.core.experiment import Experiment
ws = Workspace.from_config() # Choose a name for the experiment and specify the project folder.
-experiment_name = 'automl-classification'
+experiment_name = 'Tutorial-automl'
project_folder = './sample_projects/automl-classification' experiment = Experiment(ws, experiment_name)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-track-experiments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-track-experiments.md
@@ -87,7 +87,7 @@ The following notebooks demonstrate concepts in this article:
* [how-to-use-azureml/training/train-on-local](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-local) * [how-to-use-azureml/track-and-monitor-experiments/logging-api](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/logging-api)
-[!INCLUDE aml-clone-in-azure-notebook](/includes/aml-clone-for-examples.md)
+[!INCLUDE [aml-clone-in-azure-notebook](https://github.com/MicrosoftDocs/azure-docs-pr/blob/live/includes/aml-clone-for-examples.md)]
## Next steps
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-keras https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-keras.md
@@ -196,7 +196,7 @@ For more information on configuring jobs with ScriptRunConfig, see [Configure an
The [Run object](/python/api/azureml-core/azureml.core.run%28class%29?preserve-view=true&view=azure-ml-py) provides the interface to the run history while the job is running and after it has completed. ```Python
-run = Experiment(workspace=ws, name='keras-mnist').submit(src)
+run = Experiment(workspace=ws, name='Tutorial-Keras-Minst').submit(src)
run.wait_for_completion(show_output=True) ```
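The Run object mentioned above can also be queried after submission. A short hedged sketch, assuming `run` was returned by `Experiment.submit` as in the snippet:

```python
# Hedged sketch: inspecting a run after submission (run comes from
# Experiment.submit as in the snippet above).
run.wait_for_completion(show_output=True)
print(run.get_status())      # e.g. "Completed"
print(run.get_metrics())     # metrics logged by the training script
print(run.get_file_names())  # artifacts uploaded to the run, e.g. outputs/
```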
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-pytorch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-pytorch.md
@@ -203,7 +203,7 @@ For more information on configuring jobs with ScriptRunConfig, see [Configure an
The [Run object](/python/api/azureml-core/azureml.core.run%28class%29?preserve-view=true&view=azure-ml-py) provides the interface to the run history while the job is running and after it has completed. ```Python
-run = Experiment(ws, name='pytorch-birds').submit(src)
+run = Experiment(ws, name='Tutorial-pytorch-birds').submit(src)
run.wait_for_completion(show_output=True) ```
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-scikit-learn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-scikit-learn.md
@@ -129,7 +129,7 @@ src = ScriptRunConfig(source_directory='.',
```python from azureml.core import Experiment
-run = Experiment(ws,'train-iris').submit(src)
+run = Experiment(ws,'Tutorial-TrainIRIS').submit(src)
run.wait_for_completion(show_output=True) ```
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-tensorflow.md
@@ -220,7 +220,7 @@ For more information on configuring jobs with ScriptRunConfig, see [Configure an
The [Run object](/python/api/azureml-core/azureml.core.run%28class%29?preserve-view=true&view=azure-ml-py) provides the interface to the run history while the job is running and after it has completed. ```Python
-run = Experiment(workspace=ws, name='tf-mnist').submit(src)
+run = Experiment(workspace=ws, name='Tutorial-TF-Mnist').submit(src)
run.wait_for_completion(show_output=True) ``` ### What happens during run execution
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-with-custom-image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-custom-image.md
@@ -153,7 +153,7 @@ When you submit a training run by using a `ScriptRunConfig` object, the `submit`
```python from azureml.core import Experiment
-run = Experiment(ws,'fastai-custom-image').submit(src)
+run = Experiment(ws,'Tutorial-fastai').submit(src)
run.wait_for_completion(show_output=True) ```
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-automated-ml-for-ml-models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
@@ -86,7 +86,7 @@ Otherwise, you'll see a list of your recent automated machine learning experimen
Select **Next**. 1. Select your newly created dataset once it appears. You are also able to view a preview of the dataset and sample statistics.
-1. On the **Configure run** form, enter a unique experiment name.
+1. On the **Configure run** form, select **Create new** and enter **Tutorial-automl-deploy** for the experiment name.
1. Select a target column; this is the column that you would like to do predictions on.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-designer-automobile-price-train-score https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-designer-automobile-price-train-score.md
@@ -268,7 +268,7 @@ Now that your pipeline is all set up, you can submit a pipeline run to train your
> [!NOTE] > Experiments group similar pipeline runs together. If you run a pipeline multiple times, you can select the same experiment for successive runs.
- 1. Enter a descriptive name for **New experiment Name**.
+ 1. For **New experiment Name**, enter **Tutorial-CarPrices**.
1. Select **Submit**.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-train-deploy-model-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-train-deploy-model-cli.md
@@ -302,10 +302,10 @@ For more information on run configuration files, see [Use compute targets for mo
To start a training run on the `cpu-cluster` compute target, use the following command: ```azurecli-interactive
-az ml run submit-script -c mnist -e myexperiment --source-directory scripts -t runoutput.json
+az ml run submit-script -c mnist -e tutorial-cli --source-directory scripts -t runoutput.json
```
-This command specifies a name for the experiment (`myexperiment`). The experiment stores information about this run in the workspace.
+This command specifies a name for the experiment (`tutorial-cli`). The experiment stores information about this run in the workspace.
The `-c mnist` parameter specifies the `.azureml/mnist.runconfig` file.
@@ -322,7 +322,7 @@ This text is logged from the training script and displays the accuracy of the mo
If you inspect the training script, you'll notice that it also uses the alpha value when it stores the trained model to `outputs/sklearn_mnist_model.pkl`.
-The model was saved to the `./outputs` directory on the compute target where it was trained. In this case, the Azure Machine Learning Compute instance in the Azure cloud. The training process automatically uploads the contents of the `./outputs` directory from the compute target where training occurs to your Azure Machine Learning workspace. It's stored as part of the experiment (`myexperiment` in this example).
+The model was saved to the `./outputs` directory on the compute target where it was trained. In this case, the Azure Machine Learning Compute instance in the Azure cloud. The training process automatically uploads the contents of the `./outputs` directory from the compute target where training occurs to your Azure Machine Learning workspace. It's stored as part of the experiment (`tutorial-cli` in this example).
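As a hedged companion to the CLI flow, the same experiment can be inspected from the Python SDK; this sketch assumes the same workspace `config.json` used by the CLI commands above.

```python
# Hedged sketch: list the runs stored under the tutorial-cli experiment.
# Assumes the workspace config.json used by the CLI commands above.
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()
exp = Experiment(ws, "tutorial-cli")
for run in exp.get_runs():
    print(run.id, run.get_status())
```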
## Register the model
@@ -340,13 +340,13 @@ The output of this command is similar to the following JSON:
{ "createdTime": "2019-09-19T15:25:32.411572+00:00", "description": "",
- "experimentName": "myexperiment",
+ "experimentName": "tutorial-cli",
"framework": "Custom", "frameworkVersion": null, "id": "mymodel:1", "name": "mymodel", "properties": "",
- "runId": "myexperiment_1568906070_5874522d",
+ "runId": "tutorial-cli_1568906070_5874522d",
"tags": "", "version": 1 }
media-services https://docs.microsoft.com/en-us/azure/media-services/previous/media-services-face-redaction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-face-redaction.md
@@ -381,4 +381,4 @@ namespace FaceRedaction
[Azure Media Services Analytics Overview](./legacy-components.md)
-[Azure Media Analytics demos](https://azuremedialabs.azurewebsites.net/demos/Analytics.html)
+[Azure Media Analytics demos](http://amslabs.azurewebsites.net/demos/Analytics.html)
media-services https://docs.microsoft.com/en-us/azure/media-services/previous/media-services-redactor-walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-redactor-walkthrough.md
@@ -129,6 +129,6 @@ If you are a developer trying to parse the JSON annotation data, look inside Mod
## Related links [Azure Media Services Analytics Overview](./legacy-components.md)
-[Azure Media Analytics demos](https://azuremedialabs.azurewebsites.net/demos/Analytics.html)
+[Azure Media Analytics demos](http://amslabs.azurewebsites.net/demos/Analytics.html)
[Announcing Face Redaction for Azure Media Analytics](https://azure.microsoft.com/blog/azure-media-redactor/)\ No newline at end of file
migrate https://docs.microsoft.com/en-us/azure/migrate/migrate-support-matrix-physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-physical.md
@@ -62,7 +62,7 @@ The following table summarizes port requirements for assessment.
**Device** | **Connection** --- | --- **Appliance** | Inbound connections on TCP port 3389, to allow remote desktop connections to the appliance.<br/><br/> Inbound connections on port 44368, to remotely access the appliance management app using the URL: ``` https://<appliance-ip-or-name>:44368 ```<br/><br/> Outbound connections on port 443 (HTTPS), to send discovery and performance metadata to Azure Migrate.
-**Physical servers** | **Windows:** Inbound connection on WinRM port 5985 (HTTP) to pull configuration and performance metadata from Windows servers. <br/><br/> **Linux:** Inbound connections on port 22 (TCP), to pull configuration and performance metadata from Linux servers. |
+**Physical servers** | **Windows:** Inbound connection on WinRM port 5985 (HTTP) or 5986 (HTTPS) to pull configuration and performance metadata from Windows servers. <br/><br/> **Linux:** Inbound connections on port 22 (TCP), to pull configuration and performance metadata from Linux servers. |
## Agent-based dependency analysis requirements
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-discover-aws https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-aws.md
@@ -38,7 +38,7 @@ Before you start this tutorial, check you have these prerequisites in place.
--- | --- **Appliance** | You need an EC2 VM on which to run the Azure Migrate appliance. The machine should have:<br/><br/> - Windows Server 2016 installed. Running the appliance on a machine with Windows Server 2019 isn't supported.<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy. **Windows instances** | Allow inbound connections on WinRM port 5985 (HTTP), so that the appliance can pull configuration and performance metadata.
-**Linux instances** | Allow inbound connections on port 22 (TCP).
+**Linux instances** | Allow inbound connections on port 22 (TCP).<br/><br/> The instances should use `bash` as the default shell; otherwise, discovery will fail.
## Prepare an Azure user account
@@ -277,4 +277,4 @@ After discovery finishes, you can verify that the servers appear in the portal.
## Next steps - [Assess physical servers](tutorial-migrate-aws-virtual-machines.md) for migration to Azure VMs.-- [Review the data](migrate-appliance.md#collected-data---physical) that the appliance collects during discovery.\ No newline at end of file
+- [Review the data](migrate-appliance.md#collected-data---physical) that the appliance collects during discovery.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-migrate-vmware-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-migrate-vmware-agent.md
@@ -11,7 +11,7 @@ ms.custom: MVC
# Migrate VMware VMs to Azure (agent-based)
-This article shows you how to migrate on-premises VMware VMs to Azure, using the [Azure Migrate:Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool, with agent-based migration. You can also migrate VMware VMs using agent-based migration. [Compare](server-migrate-overview.md#compare-migration-methods) the methods.
+This article shows you how to migrate on-premises VMware VMs to Azure, using the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool, with agent-based migration. You can also migrate VMware VMs using agentless migration. [Compare](server-migrate-overview.md#compare-migration-methods) the methods.
In this tutorial, you learn how to:
mysql https://docs.microsoft.com/en-us/azure/mysql/connect-ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/connect-ruby.md
@@ -67,7 +67,7 @@ Get the connection information needed to connect to the Azure Database for MySQL
## Connect and create a table Use the following code to connect and create a table by using **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
-The code uses a [mysql2::client](https://www.rubydoc.info/gems/mysql2) class to connect to MySQL server. Then it calls method ```query()``` to run the DROP, CREATE TABLE, and INSERT INTO commands. Finally, call the ```close()``` to close the connection before terminating.
+The code uses a mysql2::client class to connect to the MySQL server. Then it calls the ```query()``` method to run the DROP, CREATE TABLE, and INSERT INTO commands. Finally, it calls ```close()``` to close the connection before terminating.
Replace the `host`, `database`, `username`, and `password` strings with your own values. ```ruby
networking https://docs.microsoft.com/en-us/azure/networking/azure-orbital-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/azure-orbital-overview.md
@@ -12,12 +12,12 @@ ms.author: wamota
# What is Azure Orbital? (Preview)
-Azure Orbital is a fully managed cloud-based ground station as a service that lets you communicate with your spacecraft or satellite constellations, downlink and uplink data, process your data in the cloud, chain services with Azure services in unique scenarios, and generate products for your customers. Azure Orbital lets you focus on the mission and product data by off-loading the responsibility of deployment and maintenance of ground station assets. This system is built on top of the Azure global infrastructure and low-latency global fiber network.
+Azure Orbital is a fully managed cloud-based ground station as a service that lets you communicate with your spacecraft or satellite constellations, downlink and uplink data, process your data in the cloud, chain services with Azure services in unique scenarios, and generate products for your customers. Azure Orbital lets you focus on the mission and product data by off-loading the responsibility for deployment and maintenance of ground station assets. This system is built on top of the Azure global infrastructure and low-latency global fiber network.
[:::image type="content" source="./media/azure-orbital-overview/orbital-all-ignite-link.png" alt-text="Azure Orbital Ignite Launch Video":::](https://aka.ms/orbitalatignite) [Watch the Azure Orbital announcement at Ignite on the Azure YouTube Channel](https://aka.ms/orbitalatignite)
-Azure Orbital focuses on building a partner ecosystem to enable customers to use partner ground stations in addition to Orbital ground stations as well as use partner cloud modems in addition to integrated cloud modems. We have partnered Azure Orbital focuses on partnering with industry leaders such as KSAT, in addition to other ground station/teleport providers like ViaSat Real-time Earth (RTE) and US Electrodynamics Inc. to provide broad coverage that is available up-front. This partnership also extends to satcom telecom providers like SES and other ground station/teleport providers, ViaSat Real-time Earth (RTE), and US Electrodynamics Inc. to offer unprecedented connectivity such as global access to your LEO/MEO fleet or direct Azure access for communication constellations or global access to your LEO/MEO fleet. We've taken the steps to virtualize the RF signal and partner with leaders – like Kratos and Amergint – to bring their modems in the Marketplace. Our aim is to empower our customers to achieve more and build systems with our rich, scalable, and highly flexible ground station service platform.
+Azure Orbital focuses on building a partner ecosystem so that customers can use partner ground stations in addition to Orbital ground stations, and partner cloud modems in addition to integrated cloud modems. Azure Orbital partners with industry leaders such as KSAT, and with other ground station/teleport providers like ViaSat Real-time Earth (RTE) and US Electrodynamics Inc., to provide broad coverage that is available up-front. This partnership also extends to satcom telecom providers like SES to offer unprecedented connectivity, such as global access to your LEO/MEO fleet or direct Azure access for communication constellations. We've taken steps to virtualize the RF signal and partner with leaders – like Kratos and Amergint – to bring their modems into the Marketplace. Our aim is to empower our customers to achieve more and build systems with our rich, scalable, and highly flexible ground station service platform.
Azure Orbital enables multiple use cases for our customers, including Earth Observation and Global Communications. It also provides a platform that enables digital transformation of existing ground stations using virtualization. You have direct access to all Azure services, the Azure global infrastructure, the Marketplace, and access to our world-class partner ecosystem through our service.
postgresql https://docs.microsoft.com/en-us/azure/postgresql/connect-ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/connect-ruby.md
@@ -38,7 +38,7 @@ Get the connection information needed to connect to the Azure Database for Postg
## Connect and create a table Use the following code to connect and create a table using **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.
-The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the DROP, CREATE TABLE, and INSERT INTO commands. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See [Ruby Pg reference documentation](https://www.rubydoc.info/gems/pg/PG) for more information on these classes and methods.
+The code uses a ```PG::Connection``` object with the constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls the ```exec()``` method to run the DROP, CREATE TABLE, and INSERT INTO commands. The code checks for errors using the ```PG::Error``` class. Then it calls ```close()``` to close the connection before terminating. See the Ruby Pg reference documentation for more information on these classes and methods.
Replace the `host`, `database`, `user`, and `password` strings with your own values.
private-link https://docs.microsoft.com/en-us/azure/private-link/troubleshoot-private-endpoint-connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/troubleshoot-private-endpoint-connectivity.md
@@ -97,12 +97,26 @@ Review these steps to make sure all the usual configurations are as expected to
![NSG outbound rules](./media/private-endpoint-tsg/nsg-outbound-rules.png)
+1. The source virtual machine should have a route to the private endpoint IP with the next hop type **InterfaceEndpoints** in the NIC's effective routes.
+
+ a. If you can't see the private endpoint route in the source VM, check whether:
+ - The source VM and the private endpoint belong to the same virtual network. If they do, engage support.
+ - The source VM and the private endpoint are in different virtual networks. If so, check for IP connectivity between the networks. If there is IP connectivity and you still can't see the route, engage support.
+ 1. If the connection has validated results, the connectivity problem might be related to other aspects like secrets, tokens, and passwords at the application layer.
- - In this case, review the configuration of the private link resource associated with the private endpoint. For more information, see the [Azure Private Link troubleshooting guide](troubleshoot-private-link-connectivity.md).
+ - In this case, review the configuration of the private link resource associated with the private endpoint. For more information, see the [Azure Private Link troubleshooting guide](troubleshoot-private-link-connectivity.md)
+
+1. It's always good to narrow the problem down before raising a support ticket; the connectivity probe sketched after this list can help.
+ a. If an on-premises source has trouble connecting to a private endpoint in Azure, try to connect:
+ - To another virtual machine from on-premises, to check whether you have IP connectivity to the virtual network from on-premises.
+ - From a virtual machine in the virtual network to the private endpoint.
+ b. If the source is in Azure and the private endpoint is in a different virtual network, try to connect:
+ - To the private endpoint from a different source. Doing this isolates any VM-specific issues.
+ - To any virtual machine in the same virtual network as the private endpoint.
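To support the narrowing-down steps above, a small, hypothetical Python probe can test TCP reachability from any of the suggested sources. The IP address and port are placeholders; use the private endpoint IP from the NIC's effective routes.

```python
# Hypothetical helper: test TCP reachability to a private endpoint.
# The IP address and port below are placeholders.
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_connect("10.0.0.5", 443))  # private endpoint IP from effective routes
```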
1. Contact the [Azure Support](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) team if your problem is still unresolved and a connectivity problem still exists. ## Next steps * [Create a private endpoint on the updated subnet (Azure portal)](./create-private-endpoint-portal.md)
- * [Azure Private Link troubleshooting guide](troubleshoot-private-link-connectivity.md)
\ No newline at end of file
+ * [Azure Private Link troubleshooting guide](troubleshoot-private-link-connectivity.md)
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/move-region-within-resource-group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/move-region-within-resource-group.md
@@ -53,6 +53,9 @@ In this article, learn how to move resources in a specific resource group to a d
Select resources you want to move. You move resources to a target region in the source region subscription. If you want to change the subscription, you can do that after the resources are moved.
+> [!NOTE]
+> Don't select associated disks, or the operation will fail. Associated disks are automatically included in a VM move.
+ 1. In the Azure portal, open the relevant resource group. 2. In the resource group page, select the resources that you want to move. 3. Select **Move** > **Move to another region**.
security-center https://docs.microsoft.com/en-us/azure/security-center/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: na
-ms.date: 12/24/2020
+ms.date: 12/28/2020
ms.author: memildin ---
@@ -39,6 +39,7 @@ Updates in December include:
- [Revitalized Security Center experience in Azure SQL Database & SQL Managed Instance](#revitalized-security-center-experience-in-azure-sql-database--sql-managed-instance) - [Asset inventory tools and filters updated](#asset-inventory-tools-and-filters-updated) - [Recommendation about web apps requesting SSL certificates no longer part of secure score](#recommendation-about-web-apps-requesting-ssl-certificates-no-longer-part-of-secure-score)
+- [Recommendations page has new filters for environment, severity, and available responses](#recommendations-page-has-new-filters-for-environment-severity-and-available-responses)
- [Continuous export gets new data types and improved deployifnotexist policies](#continuous-export-gets-new-data-types-and-improved-deployifnotexist-policies)
@@ -153,6 +154,29 @@ With this change, the recommendation is now a recommended best practice which do
Learn which recommendations are in each security control in [Security controls and their recommendations](secure-score-security-controls.md#security-controls-and-their-recommendations).
+### Recommendations page has new filters for environment, severity, and available responses
+
+Azure Security Center monitors all connected resources and generates security recommendations. Use these recommendations to strengthen your hybrid cloud posture and track compliance with the policies and standards relevant to your organization, industry, and country.
+
+As Security Center continues to expand its coverage and features, the list of security recommendations is growing every month. For example, see [29 preview recommendations added to increase coverage of Azure Security Benchmark](#29-preview-recommendations-added-to-increase-coverage-of-azure-security-benchmark).
+
+With the growing list, there's a need to filter the list to the recommendations of greatest interest. In November, we added filters to the recommendations page (see [Recommendations list now includes filters](#recommendations-list-now-includes-filters)).
+
+The filters added this month provide options to refine the recommendations list according to:
+
+- **Environment** - View recommendations for your AWS, GCP, or Azure resources (or any combination)
+- **Severity** - View recommendations according to the severity classification set by Security Center
+- **Response actions** - View recommendations according to the availability of Security Center response options: Quick fix, Deny, and Enforce
+
+ > [!TIP]
+ > The response actions filter replaces the **Quick fix available (Yes/No)** filter.
+ >
+ > Learn more about each of these response options:
+ > - [Quick fix remediation](security-center-remediate-recommendations.md#quick-fix-remediation)
+ > - [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md)
+
+:::image type="content" source="./media/release-notes/added-recommendations-filters.png" alt-text="Recommendations grouped by security control" lightbox="./media/release-notes/added-recommendations-filters.png":::
+ ### Continuous export gets new data types and improved deployifnotexist policies Azure Security Center's continuous export tools enable you to export Security Center's recommendations and alerts for use with other monitoring tools in your environment.
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-recommendations.md
@@ -11,7 +11,7 @@ ms.devlang: na
ms.topic: conceptual ms.tgt_pltfrm: na ms.workload: na
-ms.date: 09/22/2020
+ms.date: 12/25/2020
ms.author: memildin ---
@@ -37,7 +37,11 @@ Security Center analyzes the security state of your resources to identify potent
1. From Security Center's menu, open the **Recommendations** page to see the recommendations applicable to your environment. Recommendations are grouped into security controls.
- ![Recommendations grouped by security control](./media/security-center-recommendations/view-recommendations.png)
+ :::image type="content" source="./media/security-center-recommendations/view-recommendations.png" alt-text="Recommendations grouped by security control" lightbox="./media/security-center-recommendations/view-recommendations.png":::
+
+1. To find recommendations specific to the resource type, severity, environment, or other criteria that are important to you, use the optional filters above the list of recommendations.
+
+ :::image type="content" source="media/security-center-recommendations/recommendation-list-filters.png" alt-text="Filters for refining the list of Azure Security Center recommendations":::
1. Expand a control and select a specific recommendation to view the recommendation details page.
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-services.md
@@ -103,12 +103,6 @@ For information about when recommendations are generated for each of these prote
## Feature support in government clouds
-We strive for feature parity between our government clouds and our commercial cloud. When there are gaps, it's usually for one of these reasons:
--- **Preview feature** - Features typically don't reach parity before they're offered in general availability.-- **Irrelevant to gov cloud** - Some features, such as adaptive network hardening, aren't relevant to a gov cloud.-- | Service / Feature | US Gov | China Gov | |------|:----:|:----:| |[Just-in-time VM access](security-center-just-in-time.md) (1)|✔|✔|
sentinel https://docs.microsoft.com/en-us/azure/sentinel/tutorial-detect-threats-custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-detect-threats-custom.md
@@ -24,28 +24,37 @@ Once you have [connected your data sources](quickstart-onboard.md) to Azure Se
This tutorial helps you detect threats with Azure Sentinel. > [!div class="checklist"] > * Create analytics rules
+> * Define how events and alerts are processed
+> * Define how alerts and incidents are generated
> * Automate threat responses
-## Create custom analytics rules
+## Create a custom analytics rule with a scheduled query
-You can create custom analytics rules to help you search for the types of threats and anomalies that are suspicious in your environment. The rule makes sure you are notified right away, so that you can triage, investigate, and remediate the threats.
+You can create custom analytics rules to help you discover threats and anomalous behaviors that are present in your environment. The rule makes sure you are notified right away, so that you can triage, investigate, and remediate the threats.
1. In the Azure portal under Azure Sentinel, select **Analytics**. 1. In the top menu bar, select **+Create** and select **Scheduled query rule**. This opens the **Analytics rule wizard**.
- :::image type="content" source="media/tutorial-detect-threats-custom/create-scheduled-query.png" alt-text="Create scheduled query":::
+ :::image type="content" source="media/tutorial-detect-threats-custom/create-scheduled-query-small.png" alt-text="Create scheduled query" lightbox="media/tutorial-detect-threats-custom/create-scheduled-query-full.png":::
1. In the **General** tab, provide a unique **Name** and a **Description**. In the **Tactics** field, you can choose from among categories of attacks by which to classify the rule. Set the alert **Severity** as necessary. When you create the rule, its **Status** is **Enabled** by default, which means it will run immediately after you finish creating it. If you don't want it to run immediately, select **Disabled**, and the rule will be added to your **Active rules** tab and you can enable it from there when you need it.
- ![Start creating a custom analytics rule](media/tutorial-detect-threats-custom/general-tab.png)
+ :::image type="content" source="media/tutorial-detect-threats-custom/general-tab.png" alt-text="Start creating a custom analytics rule":::
-1. In the **Set rule logic** tab, you can either write a query directly in the **Rule query** field, or create the query in Log Analytics, and then copy and paste it there.
-
- ![Create query in Azure Sentinel](media/tutorial-detect-threats-custom/settings-tab.png)
+## Define the rule query logic and configure settings
- - See the **Results preview** area to the right, where Azure Sentinel shows the number of results (log events) the query will generate, changing on-the-fly as you write and configure your query. The graph shows the number of results over the defined time period, which is determined by the settings in the **Query scheduling** section.
- - If you see that your query would trigger too many or too frequent alerts, you can set a baseline in the **Alert threshold** section.
+1. In the **Set rule logic** tab, you can either write a query directly in the **Rule query** field, or create the query in Log Analytics, and then copy and paste it there. Queries are written in Kusto Query Language (KQL). Learn more about KQL [concepts](/azure/data-explorer/kusto/concepts/) and [queries](/azure/data-explorer/kusto/query/), and see this handy [quick reference guide](/azure/data-explorer/kql-quick-reference).
+
+ :::image type="content" source="media/tutorial-detect-threats-custom/set-rule-logic-tab-1.png" alt-text="Configure query rule logic and settings" lightbox="media/tutorial-detect-threats-custom/set-rule-logic-tab-all-1.png":::
+
+ - In the **Results simulation** area to the right, select **Test with current data** and Azure Sentinel will show you a graph of the results (log events) the query would have generated over the last 50 times it would have run, according to the currently defined schedule. If you modify the query, select **Test with current data** again to update the graph. The graph shows the number of results over the defined time period, which is determined by the settings in the **Query scheduling** section.
+
+ Here's what the results simulation might look like for the query in the screenshot above. The left side is the default view, and the right side is what you see when you hover over a point in time on the graph.
+
+ :::image type="content" source="media/tutorial-detect-threats-custom/results-simulation.png" alt-text="Results simulation screenshots":::
+
+ - If you see that your query would trigger too many or too frequent alerts, you can set a baseline in the **Alert threshold** section (see below).
Here's a sample query that would alert you when an anomalous number of resources is created in Azure Activity.
@@ -61,56 +70,66 @@ You can create custom analytics rules to help you search for the types of threat
> > - Using ADX functions to create Azure Data Explorer queries inside the Log Analytics query window **is not supported**.
- 1. Use the **Map entities** section to link parameters from your query results to Azure Sentinel-recognized entities. These entities form the basis for further analysis, including the grouping of alerts into incidents in the **Incident settings** tab.
+1. Use the **Map entities** section to link parameters from your query results to Azure Sentinel-recognized entities. These entities form the basis for further analysis, including the grouping of alerts into incidents in the **Incident settings** tab.
+
+ Learn more about [entities](identify-threats-with-entity-behavior-analytics.md#entities-in-azure-sentinel) in Azure Sentinel.
- 1. In the **Query scheduling** section, set the following parameters:
+1. In the **Query scheduling** section, set the following parameters:
- 1. Set **Run query every** to control how often the query is run - as frequently as every 5 minutes or as infrequently as once a day.
+ :::image type="content" source="media/tutorial-detect-threats-custom/set-rule-logic-tab-2.png" alt-text="Set query schedule and event grouping" lightbox="media/tutorial-detect-threats-custom/set-rule-logic-tab-all-2.png":::
- 1. Set **Lookup data from the last** to determine the time period of the data covered by the query - for example, it can query the past 10 minutes of data, or the past 6 hours of data.
+ 1. Set **Run query every** to control how often the query is run - as frequently as every 5 minutes or as infrequently as once a day.
- > [!NOTE]
- > **Query intervals and lookback period**
- > - These two settings are independent of each other, up to a point. You can run a query at a short interval covering a time period longer than the interval (in effect having overlapping queries), but you cannot run a query at an interval that exceeds the coverage period, otherwise you will have gaps in the overall query coverage.
- >
- > **Ingestion delay**
- > - To account for **latency** that may occur between an event's generation at the source and its ingestion into Azure Sentinel, and to ensure complete coverage without data duplication, Azure Sentinel runs scheduled analytics rules on a **five-minute delay** from their scheduled time.
+ 1. Set **Lookup data from the last** to determine the time period of the data covered by the query - for example, it can query the past 10 minutes of data, or the past 6 hours of data.
   1. Use the **Alert threshold** section to define a baseline. For example, set **Generate alert when number of query results** to **Is greater than** and enter the number 1000 if you want the rule to generate an alert only if the query returns more than 1000 results each time it runs. This is a required field, so if you don't want to set a baseline – that is, if you want your alert to register every event – enter 0 in the number field.
+ > [!NOTE]
+ > **Query intervals and lookback period**
+ > - These two settings are independent of each other, up to a point. You can run a query at a short interval covering a time period longer than the interval (in effect having overlapping queries), but you cannot run a query at an interval that exceeds the coverage period; otherwise, you will have gaps in the overall query coverage (see the sketch following this note).
+ >
+ > - You can set a lookback period of up to 14 days.
+ >
+ > **Ingestion delay**
+ > - To account for **latency** that may occur between an event's generation at the source and its ingestion into Azure Sentinel, and to ensure complete coverage without data duplication, Azure Sentinel runs scheduled analytics rules on a **five-minute delay** from their scheduled time.
+
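To make the interval/lookback rule in the note concrete, here's a tiny hypothetical Python check; it is purely illustrative and not part of Azure Sentinel.

```python
# Hypothetical illustration of the note above: running every `interval`
# minutes while querying the last `lookback` minutes leaves coverage gaps
# whenever the interval exceeds the lookback period.
def has_coverage_gaps(interval_minutes: int, lookback_minutes: int) -> bool:
    return interval_minutes > lookback_minutes

print(has_coverage_gaps(60, 360))    # hourly runs over 6 hours of data: no gaps
print(has_coverage_gaps(1440, 360))  # daily runs over 6 hours of data: gaps
```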
+1. Use the **Alert threshold** section to define a baseline. For example, set **Generate alert when number of query results** to **Is greater than** and enter the number 1000 if you want the rule to generate an alert only if the query returns more than 1000 results each time it runs. This is a required field, so if you don't want to set a baseline – that is, if you want your alert to register every event – enter 0 in the number field.
- 1. Under **Event grouping**, choose one of two ways to handle the grouping of **events** into **alerts**:
+1. Under **Event grouping**, choose one of two ways to handle the grouping of **events** into **alerts**:
- - **Group all events into a single alert** (the default setting). The rule generates a single alert every time it runs, as long as the query returns more results than the specified **alert threshold** above. The alert includes a summary of all the events returned in the results.
+ - **Group all events into a single alert** (the default setting). The rule generates a single alert every time it runs, as long as the query returns more results than the specified **alert threshold** above. The alert includes a summary of all the events returned in the results.
- - **Trigger an alert for each event**. The rule generates a unique alert for each event returned by the query. This is useful if you want events to be displayed individually, or if you want to group them by certain parameters - by user, hostname, or something else. You can define these parameters in the query.
+ - **Trigger an alert for each event**. The rule generates a unique alert for each event returned by the query. This is useful if you want events to be displayed individually, or if you want to group them by certain parameters - by user, hostname, or something else. You can define these parameters in the query.
- Currently the number of alerts a rule can generate is capped at 20. If in a particular rule, **Event grouping** is set to **Trigger an alert for each event**, and the rule's query returns more than 20 events, each of the first 19 events will generate a unique alert, and the twentieth alert will summarize the entire set of returned events. In other words, the twentieth alert is what would have been generated under the **Group all events into a single alert** option.
-
- > [!NOTE]
- > What's the difference between **Events** and **Alerts**?
+   Currently the number of alerts a rule can generate is capped at 20. If, in a particular rule, **Event grouping** is set to **Trigger an alert for each event** and the rule's query returns more than 20 events, each of the first 19 events will generate a unique alert, and the twentieth alert will summarize the entire set of returned events. In other words, the twentieth alert is what would have been generated under the **Group all events into a single alert** option. A sketch of this behavior appears after the notes below.
+
+ > [!NOTE]
+ > What's the difference between **events** and **alerts**?
+ >
+ > - An **event** is a description of a single occurrence. For example, a single entry in a log file could count as an event. In this context an event refers to a single result returned by a query in an analytics rule.
+ >
+ > - An **alert** is a collection of events that, taken together, are significant from a security standpoint. An alert could contain a single event if the event had significant security implications - an administrative login from a foreign country outside of office hours, for example.
>
- > - An **event** is a description of a single occurrence. For example, a single entry in a log file could count as an event. In this context an event refers to a single result returned by a query in an analytics rule.
- >
- > - An **alert** is a collection of events that, taken together, are significant from a security standpoint. An alert could contain a single event if the event had significant security implications - an administrative login from a foreign country outside of office hours, for example.
- >
- > - By the way, what are **incidents**? Azure Sentinel's internal logic creates **incidents** from **alerts** or groups of alerts. The incidents queue is the focal point of analysts' work - triage, investigation and remediation.
- >
- > Azure Sentinel ingests raw events from some data sources, and already-processed alerts from others. It is important to note which one you're dealing with at any time.
+ > - By the way, what are **incidents**? Azure Sentinel's internal logic creates **incidents** from **alerts** or groups of alerts. The incidents queue is the focal point of analysts' work - triage, investigation, and remediation.
+ >
+ > Azure Sentinel ingests raw events from some data sources, and already-processed alerts from others. It is important to note which one you're dealing with at any time.
- > [!IMPORTANT]
- > Event grouping is currently in public preview. This feature is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ > [!IMPORTANT]
+ > Event grouping is currently in public preview. This feature is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
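As promised above, here is a short Python sketch of the event-grouping and capping behavior. It is an illustration under the rules described in this step, not Sentinel's actual implementation; the function and field names are invented.

```python
MAX_ALERTS_PER_RUN = 20  # current cap on alerts a single rule run can generate

def generate_alerts(events: list, per_event: bool, threshold: int) -> list:
    """Group query results (events) into alerts.

    per_event=False mimics 'Group all events into a single alert';
    per_event=True mimics 'Trigger an alert for each event'."""
    if len(events) <= threshold:  # the 'Is greater than' alert threshold
        return []
    if not per_event:
        return [{"summary": f"{len(events)} events", "events": events}]
    if len(events) <= MAX_ALERTS_PER_RUN:
        return [{"events": [event]} for event in events]
    # Over the cap: the first 19 events each get a unique alert, and the
    # twentieth alert summarizes the entire set of returned events.
    alerts = [{"events": [event]} for event in events[:MAX_ALERTS_PER_RUN - 1]]
    alerts.append({"summary": f"{len(events)} events", "events": events})
    return alerts

print(len(generate_alerts([{"id": i} for i in range(25)], per_event=True, threshold=0)))  # 20
```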
- 1. In the **Suppression** section, you can turn the **Stop running query after alert is generated** setting **On** if, once you get an alert, you want to suspend the operation of this rule for a period of time exceeding the query interval. If you turn this on, you must set **Stop running query for** to the amount of time the query should stop running, up to 24 hours.
+1. In the **Suppression** section, you can turn the **Stop running query after alert is generated** setting **On** if, once you get an alert, you want to suspend the operation of this rule for a period of time exceeding the query interval. If you turn this on, you must set **Stop running query for** to the amount of time the query should stop running, up to 24 hours.
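A minimal sketch of the suppression behavior follows, assuming a scheduler that consults the rule before each run; the class and method names are hypothetical.

```python
from datetime import datetime, timedelta
from typing import Optional

class SuppressibleRule:
    """Models the 'Stop running query after alert is generated' switch."""

    def __init__(self, stop_running_for: timedelta):
        if stop_running_for > timedelta(hours=24):
            raise ValueError("Suppression is capped at 24 hours.")
        self.stop_running_for = stop_running_for
        self.suppressed_until: Optional[datetime] = None

    def should_run(self, now: datetime) -> bool:
        return self.suppressed_until is None or now >= self.suppressed_until

    def on_alert(self, now: datetime) -> None:
        # Suspend the rule for a period that may exceed the query interval.
        self.suppressed_until = now + self.stop_running_for
```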
-1. In the **Incident Settings** tab, you can choose whether and how Azure Sentinel turns alerts into actionable incidents. If this tab is left alone, Azure Sentinel will create a single, separate incident from each and every alert. You can choose to have no incidents created, or to group several alerts into a single incident, by changing the settings in this tab.
+## Configure the incident creation settings
- > [!IMPORTANT]
- > The incident settings tab is currently in public preview. This feature is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
- 1. In the **Incident Settings** section, **Create incidents from alerts triggered by this analytics rule** is set by default to **Enabled**, meaning that Azure Sentinel will create a single, separate incident from each and every alert triggered by the rule.
+In the **Incident Settings** tab, you can choose whether and how Azure Sentinel turns alerts into actionable incidents. If this tab is left alone, Azure Sentinel will create a single, separate incident from each and every alert. You can choose to have no incidents created, or to group several alerts into a single incident, by changing the settings in this tab.
+
+> [!IMPORTANT]
+> The incident settings tab is currently in public preview. This feature is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+:::image type="content" source="media/tutorial-detect-threats-custom/incident-settings-tab.png" alt-text="Define the incident creation and alert grouping settings":::
+
+1. In the **Incident Settings** section, **Create incidents from alerts triggered by this analytics rule** is set by default to **Enabled**, meaning that Azure Sentinel will create a single, separate incident from each and every alert triggered by the rule.
-    If you don't want this rule to result in the creation of any incidents (for example, if this rule is just to collect information for subsequent analysis), set this to **Disabled**.
- 1. In the **Alert grouping** section, if you want a single incident to be generated from a group of up to 150 similar or recurring alerts (see note), set **Group related alerts, triggered by this analytics rule, into incidents** to **Enabled**, and set the following parameters.
+1. In the **Alert grouping** section, if you want a single incident to be generated from a group of up to 150 similar or recurring alerts (see note), set **Group related alerts, triggered by this analytics rule, into incidents** to **Enabled**, and set the following parameters.
- **Limit the group to alerts created within the selected time frame**: Determine the time frame within which the similar or recurring alerts will be grouped together. All of the corresponding alerts within this time frame will collectively generate an incident or a set of incidents (depending on the grouping settings below). Alerts outside this time frame will generate a separate incident or set of incidents.
@@ -127,15 +146,22 @@ You can create custom analytics rules to help you search for the types of threat
> [!NOTE]
> Up to 150 alerts can be grouped into a single incident. If more than 150 alerts are generated by a rule that groups them into a single incident, a new incident will be generated with the same incident details as the original, and the excess alerts will be grouped into the new incident.
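The overflow behavior in the note can be sketched as follows. This is a simplification that ignores the time frame and matching criteria above, not Sentinel's actual code.

```python
MAX_ALERTS_PER_INCIDENT = 150

def group_alerts_into_incidents(alerts: list) -> list:
    """Chunk matching alerts into incidents of at most 150 alerts each.
    When an incident fills up, the excess alerts open a new incident
    with the same incident details."""
    return [
        alerts[i:i + MAX_ALERTS_PER_INCIDENT]
        for i in range(0, len(alerts), MAX_ALERTS_PER_INCIDENT)
    ]

# 200 matching alerts yield two incidents: 150 alerts and 50 alerts.
print([len(g) for g in group_alerts_into_incidents(list(range(200)))])
```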
+## Set automated responses and create the rule
+ 1. In the **Automated responses** tab, select any playbooks you want to run automatically when an alert is generated by the custom rule. For more information on creating and automating playbooks, see [Respond to threats](tutorial-respond-threats-playbook.md).
+ :::image type="content" source="media/tutorial-detect-threats-custom/automated-response-tab.png" alt-text="Define the automated response settings":::
+ 1. Select **Review and create** to review all the settings for your new alert rule, and then select **Create** to initialize your alert rule.
+
+ :::image type="content" source="media/tutorial-detect-threats-custom/review-and-create-tab.png" alt-text="Review all settings and create the rule":::
+
+## View the rule and its output
1. After the alert is created, a custom rule is added to the table under **Active rules**. From this list you can enable, disable, or delete each rule.
1. To view the results of the alert rules you create, go to the **Incidents** page, where you can triage, [investigate incidents](tutorial-investigate-cases.md), and remediate the threats.
-
> [!NOTE]
> Alerts generated in Azure Sentinel are available through [Microsoft Graph Security](/graph/security-concept-overview). For more information, see the [Microsoft Graph Security alerts documentation](/graph/api/resources/security-api-overview).
@@ -186,4 +212,4 @@ SOC managers should be sure to check the rule list regularly for the presence of
In this tutorial, you learned how to get started detecting threats using Azure Sentinel.
-To learn how to automate your responses to threats, [Set up automated threat responses in Azure Sentinel](tutorial-respond-threats-playbook.md).
\ No newline at end of file
+To learn how to automate your responses to threats, [Set up automated threat responses in Azure Sentinel](tutorial-respond-threats-playbook.md).
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-federation-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-federation-overview.md
@@ -156,7 +156,7 @@ Azure Functions can run under a [Azure managed identity](../active-directory/man
Azure Functions furthermore allows the replication tasks to directly integrate with Azure virtual networks and [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) for all Azure messaging services, and it's readily integrated with [Azure Monitor](../azure-monitor/overview.md).
-Most importantly, Azure Functions has prebuilt, scalable triggers and output bindings for [Azure Event Hubs](../azure-functions/functions-bindings-service-bus.md), [Azure IoT Hub](../azure-functions/functions-bindings-event-iot.md), [Azure Service Bus](../azure-functions/functions-bindings-service-bus.md), [Azure Event Grid](../azure-functions/functions-bindings-event-grid.md), and [Azure Queue Storage](/azure-functions/functions-bindings-storage-queue.md), custom extensions for [RabbitMQ](https://github.com/azure/azure-functions-rabbitmq-extension), and [Apache Kafka](https://github.com/azure/azure-functions-kafka-extension). Most triggers will dynamically adapt to the throughput needs by scaling the number of concurrently executing instances up and down based on documented metrics.
+Most importantly, Azure Functions has prebuilt, scalable triggers and output bindings for [Azure Event Hubs](../azure-functions/functions-bindings-event-hubs.md), [Azure IoT Hub](../azure-functions/functions-bindings-event-iot.md), [Azure Service Bus](../azure-functions/functions-bindings-service-bus.md), [Azure Event Grid](../azure-functions/functions-bindings-event-grid.md), and [Azure Queue Storage](/azure/azure-functions/functions-bindings-storage-queue), custom extensions for [RabbitMQ](https://github.com/azure/azure-functions-rabbitmq-extension), and [Apache Kafka](https://github.com/azure/azure-functions-kafka-extension). Most triggers will dynamically adapt to the throughput needs by scaling the number of concurrently executing instances up and down based on documented metrics.
With the Azure Functions consumption plan, the prebuilt triggers can even scale down to zero while no messages are available for replication, which means you incur no costs for keeping the configuration ready to scale back up. The key downside of using the consumption plan is that the latency for replication tasks "waking up" from this state is significantly higher than with the hosting plans where the infrastructure is kept running.
@@ -171,6 +171,6 @@ Next, you might want to read up how to set up a replicator application with Azur
- [Replication applications in Azure Functions](service-bus-federation-replicator-functions.md)
- [Replicating events between Service Bus entities](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/ServiceBusCopy)
- [Routing events to Azure Event Hubs](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/ServiceBusCopyToEventHub)
-- [Acquire events from Azure Event Hubs](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/EventHubsCopyToServiceBus)
+- [Acquire events from Azure Event Hubs](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/EventHubCopyToServiceBus)
[1]: ./media/service-bus-auto-forwarding/IC628632.gif
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-partitioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-partitioning.md
@@ -24,8 +24,9 @@ When a client wants to receive a message from a partitioned queue, or from a sub
The peek operation on a non-partitioned entity always returns the oldest message, but not on a partitioned entity. Instead, it returns the oldest message in one of the partitions whose message broker responded first. There is no guarantee that the returned message is the oldest one across all partitions. There is no additional cost when sending a message to, or receiving a message from, a partitioned queue or topic.
->[!NOTE]
-> The peek operation returns the oldest message from the partion based on its SequenceNumber. For partioned entities, the sequence number is issued relative to the partition. For more information, see [Message sequencing and timestamps](../service-bus-messaging/message-sequencing.md).
+
+> [!NOTE]
+> The peek operation returns the oldest message from the partition based on its sequence number. For partitioned entities, the sequence number is issued relative to the partition. For more information, see [Message sequencing and timestamps](../service-bus-messaging/message-sequencing.md).
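The note's point is easiest to see in a toy model. In this Python sketch (illustrative only; the data layout is invented), peek returns the lowest sequence number within whichever partition responds first, which need not hold the oldest message overall.

```python
import random

# Two partitions, each with its own partition-relative sequence numbers.
partitions = {
    0: [{"seq": 1, "enqueued": "09:00", "body": "a"}],
    1: [{"seq": 1, "enqueued": "08:00", "body": "b"}],  # older overall, but may not win
}

def peek(parts: dict):
    """Return the oldest message (by sequence number) from the first
    partition whose message broker responds."""
    non_empty = [msgs for msgs in parts.values() if msgs]
    if not non_empty:
        return None
    first_responder = random.choice(non_empty)  # stands in for broker response order
    return min(first_responder, key=lambda m: m["seq"])

print(peek(partitions))  # may print message "a" even though "b" is older
```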
## Enable partitioning
storage https://docs.microsoft.com/en-us/azure/storage/blobs/point-in-time-restore-manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/point-in-time-restore-manage.md
@@ -7,7 +7,7 @@ author: tamram
ms.service: storage ms.topic: how-to
-ms.date: 09/23/2020
+ms.date: 12/28/2020
ms.author: tamram ms.subservice: blobs ---
@@ -19,7 +19,7 @@ You can use point-in-time restore to restore one or more sets of block blobs to
To learn more about point-in-time restore, see [Point-in-time restore for block blobs](point-in-time-restore-overview.md). > [!CAUTION]
-> Point-in-time restore supports restoring operations on block blobs only. Operations on containers cannot be restored. If you delete a container from the storage account by calling the [Delete Container](/rest/api/storageservices/delete-container) operation, that container cannot be restored with a restore operation. Instead of deleting a container, delete individual blobs if you may want to restore them.
+> Point-in-time restore supports restoring operations on block blobs only. Operations on containers cannot be restored. If you delete a container from the storage account by calling the [Delete Container](/rest/api/storageservices/delete-container) operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you might want to restore them later.
## Enable and configure point-in-time restore
@@ -103,6 +103,8 @@ Only block blobs are restored. Page blobs and append blobs are not included in a
> When you perform a restore operation, Azure Storage blocks data operations on the blobs in the ranges being restored for the duration of the operation. Read, write, and delete operations are blocked in the primary location. For this reason, operations such as listing containers in the Azure portal may not perform as expected while the restore operation is underway. > > Read operations from the secondary location may proceed during the restore operation if the storage account is geo-replicated.
+>
+> The time that it takes to restore a set of data is based on the number of write and delete operations made during the restore period. For example, an account containing one million objects, with 3,000 objects added per day and 1,000 objects deleted per day, will require approximately two hours to restore to a point 30 days in the past. A retention period or restore point more than 90 days in the past is not recommended for an account with this rate of change.
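The arithmetic behind that example can be restated in a few lines. This is a back-of-the-envelope restatement of the note's own figures, not an official estimation formula.

```python
# Figures from the example above.
writes_per_day = 3_000
deletes_per_day = 1_000
days_back = 30
example_restore_hours = 2

ops_to_evaluate = (writes_per_day + deletes_per_day) * days_back  # 120,000 operations
implied_rate = ops_to_evaluate / example_restore_hours            # ~60,000 ops/hour

print(f"{ops_to_evaluate:,} operations implies ~{implied_rate:,.0f} ops/hour")
```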
### Restore all containers in the account
storage https://docs.microsoft.com/en-us/azure/storage/blobs/point-in-time-restore-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/point-in-time-restore-overview.md
@@ -7,7 +7,7 @@ author: tamram
ms.service: storage ms.topic: conceptual
-ms.date: 09/22/2020
+ms.date: 12/28/2020
ms.author: tamram ms.subservice: blobs ms.custom: devx-track-azurepowershell
@@ -39,7 +39,7 @@ The **Restore Blob Ranges** operation returns a restore ID that uniquely identif
> Read operations from the secondary location may proceed during the restore operation if the storage account is geo-replicated. > [!CAUTION]
-> Point-in-time restore supports restoring operations on block blobs only. Operations on containers cannot be restored. If you delete a container from the storage account by calling the [Delete Container](/rest/api/storageservices/delete-container) operation, that container cannot be restored with a restore operation. Instead of deleting a container, delete individual blobs if you may want to restore them.
+> Point-in-time restore supports restoring operations on block blobs only. Operations on containers cannot be restored. If you delete a container from the storage account by calling the [Delete Container](/rest/api/storageservices/delete-container) operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you might want to restore them later.
### Prerequisites for point-in-time restore
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-sas-overview.md
@@ -6,7 +6,7 @@ services: storage
author: tamram ms.service: storage ms.topic: conceptual
-ms.date: 11/20/2020
+ms.date: 12/28/2020
ms.author: tamram ms.reviewer: dineshm ms.subservice: common
@@ -107,7 +107,7 @@ The SAS token is a string that you generate on the client side, for example by u
Client applications provide the SAS URI to Azure Storage as part of a request. Then, the service checks the SAS parameters and the signature to verify that it is valid. If the service verifies that the signature is valid, then the request is authorized. Otherwise, the request is declined with error code 403 (Forbidden).
-Here's an example of a service SAS URI, showing the resource URI and the SAS token:
+Here's an example of a service SAS URI, showing the resource URI and the SAS token. Because the SAS token makes up the URI query string, the resource URI must be followed first by a question mark, and then by the SAS token:
![Components of a service SAS URI](./media/storage-sas-overview/sas-storage-uri.png)
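A quick sketch of that composition follows; the account, container, and token values are placeholders for illustration only, since a real SAS token is generated and signed by a storage client library or the Azure portal.

```python
# Hypothetical resource URI and SAS token, for illustration only.
resource_uri = "https://myaccount.blob.core.windows.net/container/blob.txt"
sas_token = "sv=2020-08-04&sr=b&sp=r&se=2021-01-01T00%3A00%3A00Z&sig=REDACTED"

# The SAS token is the query string, so append it after a single '?'.
sas_uri = f"{resource_uri}?{sas_token}"
print(sas_uri)
```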
storage https://docs.microsoft.com/en-us/azure/storage/queues/storage-nodejs-how-to-use-queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/queues/storage-nodejs-how-to-use-queues.md
@@ -345,4 +345,4 @@ To clear all messages from a queue without deleting it, call `clearMessages`.
Now that you've learned the basics of Queue Storage, follow these links to learn about more complex storage tasks. - Visit the [Azure Storage team blog](https://techcommunity.Microsoft.com/t5/Azure-storage/bg-p/azurestorageblog) to learn what's new-- Visit the [Azure Storage client library for JavaScript](https://github.com/Azure/Azure-SDK-for-js/tree/master/SDK/storage#Azure-storage-client-library-for-JavaScript) repository on GitHub
+- Visit the [Azure Storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage#Azure-storage-client-library-for-JavaScript) repository on GitHub
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/linux-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/linux-overview.md
@@ -19,7 +19,7 @@ The following partners have approved Windows Virtual Desktop clients for Linux d
|:------|:--------------------|:--------------| |![IGEL logo](./media/partners/igel.png)|[IGEL client documentation](https://www.igel.com/igel-solution-family/windows-virtual-desktop/)|[IGEL support](https://www.igel.com/support/)| |![NComputing logo](./media/partners/ncomputing.png)|[NComputing client documentation](https://www.ncomputing.com/microsoft)|[NComputing support](https://www.ncomputing.com/support/support-options)|
-|![Stratodesk logo](./media/partners/stratodesk.png)|[Stratodesk client documentation](https://www.stratodesk.com/kb/Microsoft_Windows_Virtual_Desktop_(WVD))|[Stratodesk support](https://www.stratodesk.com/support-3/)|
+|![Stratodesk logo](./media/partners/stratodesk.png)|[Stratodesk client documentation](https://www.stratodesk.com/kb/Microsoft_Windows_Virtual_Desktop_(WVD))|[Stratodesk support](https://www.stratodesk.com/support/)|
## Next steps
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/partners.md
@@ -56,7 +56,7 @@ Once you're ready for launch, you can use all the workflow scripts you created f
Automai lets you use the same scripts for performance testing, functional testing, performance monitoring, and even robotic process automation, all on one platform. - [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4B76N).-- [Go to the partner website](https://www.automai.com/wvd-testing-monitoring?hs_preview=EyZXkOWu-30742040580).
+- [Go to the partner website](https://www.automai.com/windows-virtual-desktop-performance-testing/).
## Cloudhouse
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/redhat/byos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/byos.md
@@ -214,4 +214,4 @@ For steps to apply Azure Disk Encryption, see [Azure Disk Encryption scenarios o
- To learn more about the Red Hat Update Infrastructure, see [Azure Red Hat Update Infrastructure](./redhat-rhui.md). - To learn more about all the Red Hat images in Azure, see the [documentation page](./redhat-images.md). - For information on Red Hat support policies for all versions of RHEL, see the [Red Hat Enterprise Linux life cycle](https://access.redhat.com/support/policy/updates/errata) page.-- For additional documentation on the RHEL Gold Images, see the [Red Hat documentation](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/using_red_hat_gold_images#con-gold-image-azure).
+- For additional documentation on the RHEL Gold Images, see the [Red Hat documentation](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/cloud-access-gold-images_cloud-access#proc_using-gold-images-azure_cloud-access).
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/redhat/jboss-eap-on-rhel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/jboss-eap-on-rhel.md
@@ -147,7 +147,7 @@ For details on PAYG VM pricing, see [Red Hat Enterprise Linux pricing](https://a
To use BYOS for RHEL OS, you need to have a valid Red Hat subscription with entitlements to use RHEL OS in Azure. Complete the following prerequisites before you deploy the RHEL OS with the BYOS model: 1. Ensure that you have RHEL OS and JBoss EAP entitlements attached to your Red Hat subscription.
-2. Authorize your Azure subscription ID to use RHEL BYOS images. Follow the [Red Hat Subscription Management documentation](https://access.redhat.com/documentation/en/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/con-enable-subs) to complete the process, which includes these steps:
+2. Authorize your Azure subscription ID to use RHEL BYOS images. Follow the [Red Hat Subscription Management documentation](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/indexazure_cloud-access) to complete the process, which includes these steps:
1. Enable Microsoft Azure as a provider in your Red Hat Cloud Access Dashboard.
@@ -155,7 +155,7 @@ To use BYOS for RHEL OS, you need to have a valid Red Hat subscription with enti
1. Enable new products for Cloud Access on Microsoft Azure.
- 1. Activate Red Hat Gold Images for your Azure subscription. For more information, see [Red Hat Gold Images on Microsoft Azure](https://access.redhat.com/documentation/en/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/using_red_hat_gold_images#con-gold-image-azure).
+ 1. Activate Red Hat Gold Images for your Azure subscription. For more information, see [Red Hat Gold Images on Microsoft Azure](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/cloud-access-gold-images_cloud-access#proc_using-gold-images-azure_cloud-access).
1. Wait for Red Hat Gold Images to be available in your Azure subscription. These images are typically available within three hours of submission.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/redhat/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/overview.md
@@ -32,7 +32,7 @@ You might want to use the pay-as-you-go images if you don't want to worry about
### Red Hat Gold Images Azure also offers Red Hat Gold Images (`rhel-byos`). These images might be useful to customers who have existing Red Hat subscriptions and want to use them in Azure. You're required to enable your existing Red Hat subscriptions for Red Hat Cloud Access before you can use them in Azure. Access to these images is granted automatically when your Red Hat subscriptions are enabled for Cloud Access and meet the eligibility requirements. Using these images allows a customer to avoid double billing that might be incurred from using the pay-as-you-go images.
-* Learn how to [enable your Red Hat subscriptions for Cloud Access with Azure](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/con-enable-subs).
+* Learn how to [enable your Red Hat subscriptions for Cloud Access with Azure](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/indexazure_cloud-access).
* Learn how to [locate Red Hat Gold Images in the Azure portal, the Azure CLI, or PowerShell cmdlet](./byos.md). > [!NOTE]