Updates from: 01/04/2022 02:06:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Swissid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-swissid.md
In this article, you learn how to provide sign-up and sign-in to customers with
To enable sign-in for users with a SwissID account in Azure AD B2C, you need to create an application. To create a SwissID application, follow these steps:
-1. Contact [SwissID Business Partner support](https://www.swissid.ch/en/b2b-kontakt.html).
+1. Contact [SwissID Business Partner support](https://www.swissid.ch/en/b2c-kontakt.html).
1. After you sign up with SwissID, provide information about your Azure AD B2C tenant:
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/session-behavior.md
During the sign-out, Azure AD B2C simultaneously sends an HTTP request to the re
To support single sign-out, the token issuer technical profiles for both JWT and SAML must specify:

- The protocol name, such as `<Protocol Name="OpenIdConnect" />`
-- The reference to the session technical profile, such as `<UseTechnicalProfileForSessionManagement ReferenceId="SM-OAuth-issuer" />`.
+- The reference to the session technical profile, such as `<UseTechnicalProfileForSessionManagement ReferenceId="SM-jwt-issuer" />`.
The following example illustrates the JWT and SAML token issuers with single sign-out:
<Protocol Name="OpenIdConnect" />
<OutputTokenFormat>JWT</OutputTokenFormat>
...
- <UseTechnicalProfileForSessionManagement ReferenceId="SM-OAuth-issuer" />
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-jwt-issuer" />
</TechnicalProfile>

<!-- Session management technical profile for OIDC based tokens -->
- <TechnicalProfile Id="SM-OAuth-issuer">
+ <TechnicalProfile Id="SM-jwt-issuer">
<DisplayName>Session Management Provider</DisplayName>
<Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.OAuthSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</TechnicalProfile>
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## December 2021
+
+### New articles
+
+- [TOTP display control](display-control-time-based-one-time-password.md)
+- [Set up sign-up and sign-in with a SwissID account using Azure Active Directory B2C](identity-provider-swissid.md)
+- [Set up sign-up and sign-in with a PingOne account using Azure Active Directory B2C](identity-provider-ping-one.md)
+- [Tutorial: Configure Haventec with Azure Active Directory B2C for single step, multifactor passwordless authentication](partner-haventec.md)
+- [Tutorial: Acquire an access token for calling a web API in Azure AD B2C](tutorial-acquire-access-token.md)
+- [Tutorial: Sign in and sign out users with Azure AD B2C in a Node.js web app](tutorial-authenticate-nodejs-web-app-msal.md)
+- [Tutorial: Call a web API protected with Azure AD B2C](tutorial-call-api-with-access-token.md)
+
+### Updated articles
+
+- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md)
+- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
+- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
+- [Display controls](display-controls.md)
+- [Azure AD B2C: Frequently asked questions (FAQ)](faq.yml)
+- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
+- [Define an Azure AD MFA technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md)
+- [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md)
+- [String claims transformations](string-transformations.md)
+
## November 2021

### Updated articles
active-directory What Is Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/what-is-application-proxy.md
Azure Active Directory (Azure AD) offers many capabilities for protecting users,
The ability to securely access internal apps from outside your network becomes even more critical in the modern workplace. With scenarios such as BYOD (Bring Your Own Device) and mobile devices, IT professionals are challenged to meet two goals:
-* Empower end users to be productive anytime and anywhere
-* Protect corporate assets at all times
+* Empower users to be productive anytime and anywhere.
+* Protect corporate assets at all times.
-Many organizations believe they are in control and protected when resources exist within the boundaries of their corporate networks. But in today's digital workplace, that boundary has expanded with managed mobile devices and resources and services in the cloud. Now you need to manage the complexity of protecting your users' identities and data stored on their devices and apps.
+Many organizations believe they are in control and protected when resources exist within the boundaries of their corporate networks. But in today's digital workplace, that boundary has expanded with managed mobile devices and resources and services in the cloud. You now need to manage the complexity of protecting your users' identities and data stored on their devices and apps.
Perhaps you're already using Azure AD to manage users in the cloud who need to access Microsoft 365 and other SaaS applications, as well as web apps hosted on-premises. If you already have Azure AD, you can leverage it as one control plane to allow seamless and secure access to your on-premises applications. Or, maybe you're still contemplating a move to the cloud. If so, you can begin your journey to the cloud by implementing Application Proxy and taking the first step towards building a strong identity foundation.
This article explains how Azure AD and Application Proxy give remote users a sin
## Remote access in the past
-Previously, your control plane for protecting internal resources from attackers while facilitating access by remote users was all in the DMZ, or perimeter network. But the VPN and reverse proxy solutions deployed in the DMZ used by external clients to access corporate resources aren't suited to the cloud world. They typically suffer from the following drawbacks:
+Previously, your control plane for protecting internal resources from attackers while facilitating access by remote users was all in the DMZ or perimeter network. But the VPN and reverse proxy solutions deployed in the DMZ used by external clients to access corporate resources aren't suited to the cloud world. They typically suffer from the following drawbacks:
* Hardware costs
* Maintaining security (patching, monitoring ports, etc.)
In today's digital workplace, users work anywhere with multiple devices and apps
* An identity provider to keep track of users and user-related information.
* Device directory to maintain a list of devices that have access to corporate resources. This directory includes corresponding device information (for example, type of device, integrity, etc.).
-* Policy evaluation service to determine if a user and device conforms to the policy set forth by security admins.
+* Policy evaluation service to determine if a user and device conform to the policy set forth by security admins.
* The ability to grant or deny access to organizational resources.
-With Application Proxy, Azure AD keeps track of users who need to access web apps published on-premises and in the cloud. It provides a central management point for those apps. While not required, it's recommended you also enable Azure AD Conditional Access. By defining conditions for how users authenticate and gain access, you further ensure the right people have access to applications.
+With Application Proxy, Azure AD keeps track of users who need to access web apps published on-premises and in the cloud. It provides a central management point for those apps. While not required, it's recommended you also enable Azure AD Conditional Access. By defining conditions for how users authenticate and gain access, you further ensure that the right people access your applications.
**Note:** It's important to understand that Azure AD Application Proxy is intended as a VPN or reverse proxy replacement for roaming (or remote) users who need access to internal resources. It's not intended for internal users on the corporate network. Internal users who unnecessarily use Application Proxy can introduce unexpected and undesirable performance issues.
With Application Proxy, Azure AD keeps track of users who need to access web app
### An overview of how App Proxy works
-Application Proxy is an Azure AD service you configure in the Azure portal. It enables you to publish an external public HTTP/HTTPS URL endpoint in the Azure Cloud, which connects to an internal application server URL in your organization. These on-premises web apps can be integrated with Azure AD to support single sign-on. End users can then access on-premises web apps in the same way they access Microsoft 365 and other SaaS apps.
+Application Proxy is an Azure AD service you configure in the Azure portal. It enables you to publish an external public HTTP/HTTPS URL endpoint in the Azure Cloud, which connects to an internal application server URL in your organization. These on-premises web apps can be integrated with Azure AD to support single sign-on. Users can then access on-premises web apps in the same way they access Microsoft 365 and other SaaS apps.
Components of this feature include the Application Proxy service, which runs in the cloud, the Application Proxy connector, which is a lightweight agent that runs on an on-premises server, and Azure AD, which is the identity provider. All three components work together to provide the user with a single sign-on experience to access on-premises web applications.
-After signing in, external users can access on-premises web applications by using a familiar URL or [My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) from their desktop or iOS/MAC devices. For example, App Proxy can provide remote access and single sign-on to Remote Desktop, SharePoint sites, Tableau, Qlik, Outlook on the web, and line-of-business (LOB) applications.
+After signing in, external users can access on-premises web applications by using a display URL or [My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) from their desktop or iOS/MAC devices. For example, App Proxy can provide remote access and single sign-on to Remote Desktop, SharePoint sites, Tableau, Qlik, Outlook on the web, and line-of-business (LOB) applications.
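To make that publish step concrete, here is a minimal sketch using Microsoft Graph via `az rest`; the Graph beta `onPremisesPublishing` property, the app object ID, and both URLs are assumptions for illustration rather than values from this article:

```azurecli
# Sketch only: set the external (public) and internal URLs on an existing app
# registration so Application Proxy can publish it. All values are placeholders.
az rest --method patch \
  --url "https://graph.microsoft.com/beta/applications/{app-object-id}" \
  --body '{"onPremisesPublishing": {"externalUrl": "https://expenses-contoso.msappproxy.net/", "internalUrl": "https://expenses.contoso.local/"}}'
```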
![Azure AD Application Proxy architecture](media/what-is-application-proxy/azure-ad-application-proxy-architecture.png)

### Authentication
-There are several ways to configure an application for single sign-on and the method you select depends on the authentication your application uses. Application Proxy supports the following types of applications:
+There are several ways to configure an application for single sign-on, and the method you select depends on the authentication your application uses. Application Proxy supports the following types of applications:
* Web applications
* Web APIs that you want to expose to rich applications on different devices
For more information on supported methods, see [Choosing a single sign-on method
The remote access solution offered by Application Proxy and Azure AD supports several security benefits that customers may take advantage of, including:
-* **Authenticated access**. Application Proxy is best suited to publish applications with [pre-authentication](./application-proxy-security.md#authenticated-access) to ensure that only authenticated connections hit your network. For applications published with pre-authentication, no traffic is allowed to pass through the App Proxy service to your on-premises environment, without a valid token. Pre-authentication, by its very nature, blocks a significant number of targeted attacks, as only authenticated identities can access the backend application.
-* **Conditional Access**. Richer policy controls can be applied before connections to your network are established. With Conditional Access, you can define restrictions on the traffic that you allow to hit your backend application. You create policies that restrict sign-ins based on location, strength of authentication, and user risk profile. As Conditional Access evolves, more controls are being added to provide additional security such as integration with Microsoft Defender for Cloud Apps. Defender for Cloud Apps integration enables you to configure an on-premises application for [real-time monitoring](./application-proxy-integrate-with-microsoft-cloud-application-security.md) by leveraging Conditional Access to monitor and control sessions in real-time based on Conditional Access policies.
+* **Authenticated access**. Application Proxy is best suited to publish applications with [pre-authentication](./application-proxy-security.md#authenticated-access) to ensure that only authenticated connections hit your network. When applications are published with pre-authentication, no traffic is allowed to pass through the App Proxy service to your on-premises environment without a valid token. Pre-authentication, by its very nature, blocks a significant number of targeted attacks, as only authenticated identities can access the backend application.
+* **Conditional Access**. Richer policy controls can be applied before connections to your network are established. With Conditional Access, you can define restrictions on the traffic that you allow to hit your backend application. You create policies that restrict sign-ins based on location, the strength of authentication, and user risk profile. As Conditional Access evolves, more controls are being added to provide additional security such as integration with Microsoft Defender for Cloud Apps. Defender for Cloud Apps integration enables you to configure an on-premises application for [real-time monitoring](./application-proxy-integrate-with-microsoft-cloud-application-security.md) by leveraging Conditional Access to monitor and control sessions in real-time based on Conditional Access policies.
* **Traffic termination**. All traffic to the backend application is terminated at the Application Proxy service in the cloud while the session is re-established with the backend server. This connection strategy means that your backend servers are not exposed to direct HTTP traffic. They are better protected against targeted DoS (denial-of-service) attacks because your firewall isn't under attack.
* **All access is outbound**. The Application Proxy connectors only use outbound connections to the Application Proxy service in the cloud over ports 80 and 443. With no inbound connections, there's no need to open firewall ports for incoming connections or components in the DMZ. All connections are outbound and over a secure channel.
* **Security Analytics and Machine Learning (ML) based intelligence**. Because it's part of Azure Active Directory, Application Proxy can leverage [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) (requires [Premium P2 licensing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing)). Azure AD Identity Protection combines machine-learning security intelligence with data feeds from Microsoft's [Digital Crimes Unit](https://news.microsoft.com/stories/cybercrime/index.html) and [Microsoft Security Response Center](https://www.microsoft.com/msrc) to proactively identify compromised accounts. Identity Protection offers real-time protection from high-risk sign-ins. It takes into consideration factors like accesses from infected devices, through anonymizing networks, or from atypical and unlikely locations to increase the risk profile of a session. This risk profile is used for real-time protection. Many of these reports and events are already available through an API for integration with your SIEM systems.
To learn more about migrating your apps to Azure AD, see the [Migrating Your App
## Architecture
-The following diagram illustrates in general how Azure AD authentication services and Application Proxy work together to provide single sign-on to on-premises applications to end users.
+The following diagram illustrates in general how Azure AD authentication services and Application Proxy work together to provide single sign-on to on-premises applications to users.
![Azure AD Application Proxy authentication flow](media/what-is-application-proxy/azure-ad-application-proxy-authentication-flow.png)
The following diagram illustrates in general how Azure AD authentication service
|**Component**|**Description**|
|:-|:-|
-|Endpoint|The endpoint is a URL or an [end-user portal](../manage-apps/end-user-experiences.md). Users can reach applications while outside of your network by accessing an external URL. Users within your network can access the application through a URL or an end-user portal. When users go to one of these endpoints, they authenticate in Azure AD and then are routed through the connector to the on-premises application.|
+|Endpoint|The endpoint is a URL or a [user portal](../manage-apps/end-user-experiences.md). Users can reach applications while outside of your network by accessing an external URL. Users within your network can access the application through a URL or a user portal. When users go to one of these endpoints, they authenticate in Azure AD and then are routed through the connector to the on-premises application.|
|Azure AD|Azure AD performs the authentication using the tenant directory stored in the cloud.|
|Application Proxy service|This Application Proxy service runs in the cloud as part of Azure AD. It passes the sign-on token from the user to the Application Proxy Connector. Application Proxy forwards any accessible headers on the request and sets the headers as per its protocol, to the client IP address. If the incoming request to the proxy already has that header, the client IP address is added to the end of the comma-separated list that is the value of the header.|
|Application Proxy connector|The connector is a lightweight agent that runs on a Windows Server inside your network. The connector manages communication between the Application Proxy service in the cloud and the on-premises application. The connector only uses outbound connections, so you don't have to open any inbound ports or put anything in the DMZ. The connectors are stateless and pull information from the cloud as necessary. For more information about connectors, like how they load-balance and authenticate, see [Understand Azure AD Application Proxy connectors](./application-proxy-connectors.md).|
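As a rough sketch of that header behavior (the header name below is an assumption for illustration; the article doesn't name it):

```bash
# Illustration only, with an assumed header name:
#   First hop (header absent):   X-Forwarded-For: 203.0.113.10
#   Second hop (header present): X-Forwarded-For: 203.0.113.10, 198.51.100.7
# The proxy appends the client IP to the end of the existing comma-separated list.
```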
For more information about choosing where to install your connectors and optimiz
Up to this point, we've focused on using Application Proxy to publish on-premises apps externally while enabling single sign-on to all your cloud and on-premises apps. However, there are other use cases for App Proxy that are worth mentioning. They include:

* **Securely publish REST APIs**. When you have business logic or APIs running on-premises or hosted on virtual machines in the cloud, Application Proxy provides a public endpoint for API access. API endpoint access lets you control authentication and authorization without requiring incoming ports. It provides additional security through Azure AD Premium features such as multi-factor authentication and device-based Conditional Access for desktops, iOS, MAC, and Android devices using Intune. To learn more, see [How to enable native client applications to interact with proxy applications](./application-proxy-configure-native-client-application.md) and [Protect an API by using OAuth 2.0 with Azure Active Directory and API Management](../../api-management/api-management-howto-protect-backend-with-aad.md).
-* **Remote Desktop Services** **(RDS)**. Standard RDS deployments require open inbound connections. However, the [RDS deployment with Application Proxy](./application-proxy-integrate-with-remote-desktop-services.md) has a permanent outbound connection from the server running the connector service. This way, you can offer more applications to end users by publishing on-premises applications through Remote Desktop Services. You can also reduce the attack surface of the deployment with a limited set of two-step verification and Conditional Access controls to RDS.
+* **Remote Desktop Services** **(RDS)**. Standard RDS deployments require open inbound connections. However, the [RDS deployment with Application Proxy](./application-proxy-integrate-with-remote-desktop-services.md) has a permanent outbound connection from the server running the connector service. This way, you can offer more applications to users by publishing on-premises applications through Remote Desktop Services. You can also reduce the attack surface of the deployment with a limited set of two-step verification and Conditional Access controls to RDS.
* **Publish applications that connect using WebSockets**. Support with [Qlik Sense](./application-proxy-qlik.md) is in Public Preview and will be expanded to other apps in the future.
* **Enable native client applications to interact with proxy applications**. You can use Azure AD Application Proxy to publish web apps, but it also can be used to publish [native client applications](./application-proxy-configure-native-client-application.md) that are configured with Microsoft Authentication Library (MSAL). Native client applications differ from web apps because they're installed on a device, while web apps are accessed through a browser.
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-javascript-spa.md
The `acquireTokenSilent` method handles token acquisition and renewal without an
```JavaScript
const graphConfig = {
- graphMeEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me",
- graphMailEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me/messages"
+ graphMeEndpoint: "Enter_the_Graph_Endpoint_Here/v1.0/me",
+ graphMailEndpoint: "Enter_the_Graph_Endpoint_Here/v1.0/me/messages"
};
```
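To sanity-check what those settings resolve to, here is a hedged example of calling the same endpoint directly; the token is a placeholder you would obtain from `acquireTokenSilent`, and `https://graph.microsoft.com/` is assumed as the typical value of `Enter_the_Graph_Endpoint_Here` for the global cloud, since the article leaves it as a placeholder:

```bash
# Placeholder token; in the SPA, MSAL's acquireTokenSilent supplies this value.
ACCESS_TOKEN="eyJ0eXAiOi..."

# The resolved graphMeEndpoint returns the signed-in user's profile as JSON.
curl -H "Authorization: Bearer ${ACCESS_TOKEN}" "https://graph.microsoft.com/v1.0/me"
```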
In the sample application created by this guide, the `callMSGraph()` method is u
```
1. In your browser, enter **http://localhost:3000** or **http://localhost:{port}**, where *port* is the port that your web server is listening to. You should see the contents of your *index.html* file and the **Sign In** button.
+
+> [!Important]
+> Enable popups and redirects for your site in your browser settings.
+ After the browser loads your *index.html* file, select **Sign In**. You're prompted to sign in with the Microsoft identity platform:

![The JavaScript SPA account sign-in window](media/active-directory-develop-guidedsetup-javascriptspa-test/javascriptspascreenshot1.png)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/whats-new-docs.md
Previously updated : 12/06/2021 Last updated : 01/03/2022
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## December 2021
+
+### New articles
+
+- [Build Zero Trust-ready apps using Microsoft identity platform features and tools](zero-trust-for-developers.md)
+- [Quickstart: Sign in users in single-page apps (SPA) using the auth code flow](single-page-app-quickstart.md)
+- [Run automated integration tests](test-automate-integration-testing.md)
+- [Secure identity in line-of-business application using Zero Trust principles](secure-line-of-business-apps.md)
+- [What are workload identities?](workload-identities-overview.md)
+
+### Updated articles
+
+- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
+- [Microsoft identity platform developer glossary](developer-glossary.md)
+- [Quickstart: Sign in and get an access token in an Angular SPA using the auth code flow](quickstart-v2-javascript-auth-code-angular.md)
+- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
+
## November 2021

### Updated articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Microsoft Graph API](microsoft-graph-intro.md)
- [Microsoft identity platform and the OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md)
- [What's new for authentication?](reference-breaking-changes.md)
-
-## September 2021
-
-### New articles
-
-- [Desktop app that calls web APIs: Acquire a token interactively](scenario-desktop-acquire-token-interactive.md)
-- [Desktop app that calls web APIs: Acquire a token using Device Code flow](scenario-desktop-acquire-token-device-code-flow.md)
-- [Desktop app that calls web APIs: Acquire a token using Integrated Windows Authentication](scenario-desktop-acquire-token-integrated-windows-authentication.md)
-- [Desktop app that calls web APIs: Acquire a token using Username and Password](scenario-desktop-acquire-token-username-password.md)
-- [Desktop app that calls web APIs: Acquire a token using WAM](scenario-desktop-acquire-token-wam.md)
-- [Implement role-based access control in apps](howto-implement-rbac-for-apps.md)
-- [Migrate public client applications from ADAL.NET to MSAL.NET](msal-net-migration-public-client.md)
-
-### Updated articles
-
-- [Enhance security with the principle of least privilege](secure-least-privileged-access.md)
-- [Migrate confidential client applications from ADAL.NET to MSAL.NET](msal-net-migration-confidential-client.md)
-- [Microsoft identity platform videos](identity-videos.md)
-- [National clouds](authentication-national-cloud.md)
-- [Shared device mode for Android devices](msal-android-shared-devices.md)
-- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
-- [Validation differences by supported account types (signInAudience)](supported-accounts-validation.md)
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
az vmss identity assign --name myVMSS --resource-group AzureADLinuxVM
2. Install the Azure AD extension on your virtual machine scale set.

```azurecli
-az vmss extension set --publisher Microsoft.Azure.ActiveDirectory --name Azure ADSSHLoginForLinux --resource-group AzureADLinuxVM --vmss-name myVMSS
+az vmss extension set --publisher Microsoft.Azure.ActiveDirectory --name AADSSHLoginForLinux --resource-group AzureADLinuxVM --vmss-name myVMSS
```

Virtual machine scale sets usually don't have public IP addresses, so you must have connectivity to them from another machine that can reach their Azure virtual network. This example shows how to use the private IP of a virtual machine scale set VM to connect from a machine in the same virtual network.
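A minimal sketch of that connection flow, assuming the resource names used earlier in this walkthrough and the Azure CLI `ssh` extension (the IP address is a placeholder taken from the first command's output):

```azurecli
# Look up the private IPs of the scale set instances (names from this walkthrough).
az vmss nic list --resource-group AzureADLinuxVM --vmss-name myVMSS \
  --query "[].ipConfigurations[].privateIpAddress" --output tsv

# From a machine in the same virtual network, sign in over SSH with your Azure AD identity.
az ssh vm --ip 10.0.0.5
```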
active-directory Users Search Enhanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-search-enhanced.md
Title: User management enhancements (preview) - Azure Active Directory | Microsoft Docs
+ Title: User management enhancements - Azure Active Directory | Microsoft Docs
description: Describes how Azure Active Directory enables user search, filtering, and more information about your users. documentationcenter: ''
Previously updated : 01/11/2020 Last updated : 01/03/2022
-# User management enhancements (preview) in Azure Active Directory
+# User management enhancements in Azure Active Directory
-This article describes how to use the user management enhancements preview in the Azure Active Directory (Azure AD) portal. The **All users** and **Deleted users** pages have been updated to provide more information and make it easier to find users. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article describes how to use the user management enhancements in the Azure Active Directory (Azure AD) portal. The **All users** and **Deleted users** pages have been updated to provide more information and make it easier to find users.
-Changes in the preview include:
+Enhancements include:
- More visible user properties including object ID, directory sync status, creation type, and identity issuer
-- Search now allows substring search and combined search of names, emails, and object IDs
+- Search allows substring search and combined search of names, emails, and object IDs
- Enhanced filtering by user type (member, guest, none), directory sync status, creation type, company name, and domain name
-- New sorting capabilities on properties like name and user principal name
-- A new total users count that updates with searches or filters
+- Sorting capabilities on properties like name and user principal name
+- Total users count that updates with searches or filters
> [!NOTE]
-> This preview is currently not available for Azure AD B2C tenants.
+> These enhancements are not currently available for Azure AD B2C tenants.
-## Find the preview
-
-The preview is turned on by default, so you can use it right away. You can check out the latest features and improvements by selecting **Preview features** on the **All users** page. All pages that have been updated as part of this preview will display a preview tag. If you are having any issues, you can switch back to the legacy experience:
-
-1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) and select **Users**.
-1. From the **Users – All users** page, select the banner at the top of the page.
-1. In the **Preview features** pane, turn **Enhanced user management** off.
-
- ![How and where to turn Enhanced User Management on and off](./media/users-search-enhanced/enable-preview.png)
-
-We appreciate your feedback so that we can improve our experience.
-
-## More user properties
+## User properties enhanced
We've made some changes to the columns available on the **All users** and **Deleted users** pages. In addition to the existing columns we provide for managing your list of users, we've added a few more columns.
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/user-properties.md
Previously updated : 10/21/2021 Last updated : 01/04/2022
Depending on the inviting organization's needs, an Azure AD B2B collaboration us
![Diagram depicting the four user states](media/user-properties/redemption-diagram.png)
-
Now, let's see what an Azure AD B2B collaboration user looks like in Azure AD.

### Before invitation redemption
-State 1 and State 2 accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your directory. This account doesnΓÇÖt have any credentials associated with it because authentication is performed by the guest user's identity provider. The **Source** property for the guest user account in your directory is set to **Invited user**.
+State 1 and State 2 accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your tenant. This account doesn't have any credentials associated with it because authentication is performed by the guest user's identity provider.
+
+The **Issuer** property for the guest user account in your directory is set to the host's organization domain until the guest redeems their invitation. In the portal, the **Invitation accepted** property in the invited user's Azure AD portal profile will be set to `No`, and querying for **externalUserState** using the Microsoft Graph API will return `Pending Acceptance`.
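For example, a hedged way to check that state from the command line; the user object ID is a placeholder, and `az rest` signs the Microsoft Graph call with your CLI login:

```azurecli
# Returns the guest's redemption state alongside basic profile fields.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/users/{user-id}?\$select=displayName,userType,externalUserState"
```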
![Screenshot showing user properties before offer redemption](media/user-properties/before-redemption.png)

### After invitation redemption
-After the guest user accepts the invitation, the **Source** property is updated based on the guest userΓÇÖs identity provider.
+After the guest user accepts the invitation, the **Issuer** property is updated based on the guest user's identity provider.
-For guest users in State 1, the **Source** is **External Azure Active Directory**.
+For guest users in State 1, the **issuer** is **External Azure AD**.
-![State 1 guest user after offer redemption](media/user-properties/after-redemption-state1.png)
+![State 1 guest user after offer redemption](media/user-properties/after-redemption-state-1.png)
-For guest users in State 2, the **Source** is **Microsoft Account**.
+For guest users in State 2, the **issuer** is **Microsoft Account**.
-![State 2 guest user after offer redemption](media/user-properties/after-redemption-state2.png)
+![State 2 guest user after offer redemption](media/user-properties/after-redemption-state-2.png)
-For guest users in State 3 and State 4, the **Source** property is set to **Azure Active Directory** or **Windows Server AD**, as described in the next section.
+For guest users in State 3 and State 4, the **issuer** property is set to the host's organization domain. The **Directory synced** property in the Azure portal, or **onPremisesSyncEnabled** in Microsoft Graph, can be used to distinguish between State 3 and State 4, with **Yes** indicating that the user is homed in the host's on-premises Active Directory.
## Key properties of the Azure AD B2B collaboration user

### UserType
This property indicates the relationship of the user to the host tenancy. This p
For pricing-related details, see [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory).
-### Source
-This property indicates how the user signs in.
+### Issuer
+This property indicates the user's primary identity provider. A user can have several identity providers, which can be viewed by selecting **issuer** in the user's profile or by querying the property via the Microsoft Graph API.
-- Invited User: This user has been invited but has not yet redeemed an invitation.
+Issuer property value | Sign-in state
+--- | ---
+External Azure AD organization | This user is homed in an external organization and authenticates by using an Azure AD account that belongs to the other organization. This type of sign-in corresponds to State 1.
+Microsoft account | This user is homed in a Microsoft account and authenticates by using a Microsoft account. This type of sign-in corresponds to State 2.
+{Host's domain} | This user authenticates by using an Azure AD account that belongs to this organization. This type of sign-in corresponds to State 4.
+google.com | This user has a Gmail account and has signed up by using self-service to the other organization. This type of sign-in corresponds to State 2.
+facebook.com | This user has a Facebook account and has signed up by using self-service to the other organization. This type of sign-in corresponds to State 2.
+mail | This user has an email address that does not match verified Azure AD or SAML/WS-Fed domains, and is not a Gmail address or a Microsoft account. This type of sign-in corresponds to State 4.
+phone | This user has an email address that does not match a verified Azure AD domain or a SAML/WS-Fed domain, and is not a Gmail address or Microsoft account. This type of sign-in corresponds to State 4.
+{issuer URI} | This user is homed in an external organization that does not use Azure Active Directory as its identity provider, but instead uses a SAML/WS-Fed-based identity provider. The issuer URI is shown when the issuer field is clicked. This type of sign-in corresponds to State 2.
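A hedged way to inspect these values for a given user via Microsoft Graph (the user object ID is a placeholder):

```azurecli
# Each entry in "identities" carries an "issuer" value like those in the table.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/users/{user-id}?\$select=displayName,identities"
```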
-- External Azure Active Directory: This user is homed in an external organization and authenticates by using an Azure AD account that belongs to the other organization. This type of sign-in corresponds to State 1.
+### Directory synced (or 'onPremisesSyncEnabled' in MS Graph)
-- Microsoft account: This user is homed in a Microsoft account and authenticates by using a Microsoft account. This type of sign-in corresponds to State 2.
+This property indicates if the user is being synced with on-premises Active Directory and is authenticated on-premises. If the value of this property is 'yes', this corresponds to State 3.
-- Windows Server AD: This user is signed in from on-premises Active Directory that belongs to this organization. This type of sign-in corresponds to State 3.
-
-- Azure Active Directory: This user authenticates by using an Azure AD account that belongs to this organization. This type of sign-in corresponds to State 4.
> [!NOTE]
- > Source and UserType are independent properties. A value of Source does not imply a particular value for UserType.
+ > Issuer and UserType are independent properties. A value of issuer does not imply a particular value for UserType.
## Can Azure AD B2B users be added as members instead of guests?
+
Typically, an Azure AD B2B user and guest user are synonymous. Therefore, an Azure AD B2B collaboration user is added as a user with UserType = Guest by default. However, in some cases, the partner organization is a member of a larger organization to which the host organization also belongs. If so, the host organization might want to treat users in the partner organization as members instead of guests. Use the Azure AD B2B Invitation Manager APIs to add or invite a user from the partner organization to the host organization as a member.

## Filter for guest users in the directory
Typically, an Azure AD B2B user and guest user are synonymous. Therefore, an Azu
![Screenshot showing the filter for guest users](media/user-properties/filter-guest-users.png)

## Convert UserType
-It's possible to convert UserType from Member to Guest and vice-versa by using PowerShell. However, the UserType property represents the user's relationship to the organization. Therefore, you should change this property only if the relationship of the user to the organization changes. If the relationship of the user changes, should the user principal name (UPN) change? Should the user continue to have access to the same resources? Should a mailbox be assigned?
+It's possible to convert UserType from Member to Guest and vice-versa by using PowerShell. However, the UserType property represents the user's relationship to the organization. Therefore, you should change this property only if the relationship of the user to the organization changes. If the relationship of the user changes, should the user principal name (UPN) change? Should the user continue to have access to the same resources? Should a mailbox be assigned?
## Remove guest user limitations

There may be cases where you want to give your guest users higher privileges. You can add a guest user to any role and even remove the default guest user restrictions in the directory to give a user the same privileges as members.
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Some applications might require the groups in a different format to how they are
- **Regex Pattern**: Use a regular expression (regex) to parse text strings according to the pattern you set in this field. If the pattern you outline in a regex pattern evaluates to true, then we will run the regex replacement pattern you outline below.
- **Regex replacement pattern**: Here, outline in regular expression (regex) notation how you would like to replace your string if your regex pattern outlined above evaluates to true. Use capture groups to match subexpressions in this replace regular expression.
-For more information about regex replace and capture groups see [The Regular Expression Engine - The Captured Group0(https://docs.microsoft.com/dotnet/standard/base-types/the-regular-expression-object-model?WT.mc_id=Portal-fx#the-captured-group)
+For more information about regex replace and capture groups, see [The Regular Expression Engine - The Captured Group](https://docs.microsoft.com/dotnet/standard/base-types/the-regular-expression-object-model?WT.mc_id=Portal-fx#the-captured-group).
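As a rough illustration of a capture-group replacement (the group-name format and pattern here are hypothetical examples, not Azure AD's own syntax):

```bash
# Pattern ^CN=([^,]+),.*$ captures the first RDN; replacement \1 keeps only it,
# so "CN=Sales,OU=Groups,DC=contoso,DC=com" becomes "Sales".
echo 'CN=Sales,OU=Groups,DC=contoso,DC=com' | sed -E 's/^CN=([^,]+),.*$/\1/'
```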
>[!NOTE]
> As per the Azure AD documentation, a restricted claim cannot be modified using policy. The data source cannot be changed, and no transformation is applied when generating these claims. The "Groups" claim is still a restricted claim; hence, you need to customize the groups by changing the name. If you select a restricted name for the name of your custom group claim, the claim will be ignored at runtime.
active-directory Configure Risk Based Step Up Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
For example, consent requests for newly registered multi-tenant apps that are no
When a risky consent request is detected, the consent prompt displays a message that indicates that admin approval is needed. If the [admin consent request workflow](configure-admin-consent-workflow.md) is enabled, the user can send the request to an admin for further review directly from the consent prompt. If the admin consent request workflow isn't enabled, the following message is displayed:
-> "**AADSTS90094**: \<clientAppDisplayName> needs permission to access resources in your organization that only an admin can grant. Request an admin to grant permission to this app before you can use it."
+> **AADSTS90094**: \<clientAppDisplayName> needs permission to access resources in your organization that only an admin can grant. Request an admin to grant permission to this app before you can use it.
+ In this case, an audit event is also logged with a category of "ApplicationManagement," an activity type of "Consent to application," and a status reason of "Risky application detected."

## Prerequisites
active-directory Workplace By Facebook Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workplace-by-facebook-provisioning-tutorial.md
POST https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronizat
9. Return to the first web browser window and select the Provisioning tab for your application. Your configuration will have been reset. You can confirm the upgrade has taken place by confirming the Job ID starts with "FacebookWorkplace".
10. Update the tenant URL in the Admin Credentials section to the following: https://scim.workplace.com/
![Screenshot of Admin Credentials in the Workplace by Facebook app in the Azure portal](./media/workplace-by-facebook-provisioning-tutorial/credentials.png)
+![Screenshot of Admin Credentials in the Workplace by Facebook app in the Azure portal](./media/workplace-by-facebook-provisioning-tutorial/provisionings.png)
11. Restore any previous changes you made to the application (Authentication details, Scoping filters, Custom attribute mappings) and re-enable provisioning.
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
az extension add --name aks-preview
az extension update --name aks-preview
```
-### Create a private AKS cluster with Custom Private DNS Zone
-
-```azurecli-interactive
-# Custom Private DNS Zone name should be in format "privatelink.<region>.azmk8s.io"
-az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <custom private dns zone ResourceId>
-```
-
-### Create a private AKS cluster with Custom Private DNS SubZone
+### Create a private AKS cluster with Custom Private DNS Zone or Private DNS SubZone
```azurecli-interactive
# Custom Private DNS Zone name should be in format "<subzone>.privatelink.<region>.azmk8s.io"
-az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <custom private dns zone ResourceId>
+az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <custom private dns zone or custom private dns subzone ResourceId>
```

### Create a private AKS cluster with Custom Private DNS Zone and Custom Subdomain
As mentioned, virtual network peering is one way to access your private cluster.
[express-route-or-vpn]: ../expressroute/expressroute-about-virtual-network-gateways.md
[devops-agents]: /azure/devops/pipelines/agents/agents
[availability-zones]: availability-zones.md
-[command-invoke]: command-invoke.md
+[command-invoke]: command-invoke.md
aks Spark Job https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/spark-job.md
Check out Spark documentation for more details.
[docker-hub]: https://docs.docker.com/docker-hub/
[java-install]: /azure/developer/java/fundamentals/java-support-on-azure
[maven-install]: https://maven.apache.org/install.html
-[sbt-install]: https://www.scala-sbt.org/1.0/docs/Setup.html
+[sbt-install]: https://www.scala-sbt.org/1.x/docs/Setup.html
[spark-docs]: https://spark.apache.org/docs/latest/running-on-kubernetes.html
[spark-kubernetes-earliest-version]: https://spark.apache.org/releases/spark-release-2-3-0.html
[spark-quickstart]: https://spark.apache.org/docs/latest/quick-start.html
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
Each number in the version indicates general compatibility with the previous ver
Aim to run the latest patch release of the minor version you're running. For example, if your production cluster is on **`1.17.7`** and **`1.17.8`** is the latest patch version available for the *1.17* series, you should upgrade to **`1.17.8`** as soon as possible to ensure your cluster is fully patched and supported.
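For example, a sketch of that patch upgrade with the Azure CLI (resource group and cluster names are placeholders):

```azurecli
# See which versions the cluster can move to, then apply the latest 1.17 patch.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.17.8
```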
+## Kubernetes Alias Minor Version
++
+> [!NOTE]
+> Alias Minor Version requires Azure CLI version 2.31.0 or above with the aks-preview extension installed. Please use `az upgrade` to install the latest version of the CLI.
+
+Azure Kubernetes Service allows for you to create a cluster without specifiying the exact patch version. When creating a cluster without specifying a patch, the cluster will run the minor version's latest patch. For example, if you create a cluster with **`1.21`**, your cluster will be running **`1.21.7`**, which is the latest patch version of *1.21*.
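A minimal sketch of creating such a cluster (resource group and cluster names are placeholders):

```azurecli
# No patch segment given: AKS resolves 1.21 to that minor version's latest patch.
az aks create --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.21
```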
+
+To see which patch you are on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The `currentKubernetesVersion` property shows the whole Kubernetes version.
+
+```
+{
+ "apiServerAccessProfile": null,
+ "autoScalerProfile": null,
+ "autoUpgradeProfile": null,
+ "azurePortalFqdn": "myaksclust-myresourcegroup.portal.hcp.eastus.azmk8s.io",
+ "currentKubernetesVersion": "1.21.7",
+}
+```
+
## Kubernetes version support policy

AKS defines a generally available version as a version enabled in all SLO or SLA measurements and available in all regions. AKS supports three GA minor versions of Kubernetes:
automation Automation Dsc Cd Chocolatey https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-cd-chocolatey.md
Package managers such as [apt-get](https://en.wikipedia.org/wiki/Advanced_Packag
introduction. In a nutshell, Chocolatey allows you to use the command line to install packages from a central repository onto a Windows operating system. You can create and manage your own repository, and Chocolatey can install packages from any number of repositories that you designate.
-[PowerShell DSC](/powershell/dsc/overview/overview) is a PowerShell tool that allows you to declare the configuration that you want for a machine. For example, if you want Chocolatey installed, IIS installed, port 80 opened, and version 1.0.0 of your
+[PowerShell DSC](/powershell/dsc/overview) is a PowerShell tool that allows you to declare the configuration that you want for a machine. For example, if you want Chocolatey installed, IIS installed, port 80 opened, and version 1.0.0 of your
website installed, the DSC Local Configuration Manager (LCM) implements that configuration. A DSC pull server holds a repository of configurations for your machines. The LCM on each machine checks in periodically to see if its configuration matches the stored configuration. It can either report
automation Automation Dsc Config Data At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-config-data-at-scale.md
to view the
## Next steps

-- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview/overview).
+- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview).
- Find out about PowerShell DSC resources in [DSC Resources](/powershell/dsc/resources/resources).
- For details of Local Configuration Manager configuration, see [Configuring the Local Configuration Manager](/powershell/dsc/managing-nodes/metaconfig).
automation Automation Dsc Config From Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-config-from-server.md
to be used with SharePointDSC configuration scripts.
Once the data files have been generated, you can use them with
-[DSC Configuration scripts](/powershell/dsc/overview/overview)
+[DSC Configuration scripts](/powershell/dsc/overview)
to generate MOF files and [upload the MOF files to Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
to view the
## Next steps

-- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview/overview).
+- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview).
- Find out about PowerShell DSC resources in [DSC Resources](/powershell/dsc/resources/resources).
- For details of Local Configuration Manager configuration, see [Configuring the Local Configuration Manager](/powershell/dsc/managing-nodes/metaconfig).
automation Automation Dsc Configuration Based On Stig https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-configuration-based-on-stig.md
to view the
## Next steps

-- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview/overview).
+- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview).
- Find out about PowerShell DSC resources in [DSC Resources](/powershell/dsc/resources/resources).
- For details of Local Configuration Manager configuration, see [Configuring the Local Configuration Manager](/powershell/dsc/managing-nodes/metaconfig).
automation Automation Dsc Create Composite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-create-composite.md
to view the
## Next steps

-- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview/overview).
+- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview).
- Find out about PowerShell DSC resources in [DSC Resources](/powershell/dsc/resources/resources).
- For details of Local Configuration Manager configuration, see [Configuring the Local Configuration Manager](/powershell/dsc/managing-nodes/metaconfig).
automation Automation Dsc Extension History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-extension-history.md
This article provides information about each version of the Azure DSC VM extensi
## Next steps

-- For more information about PowerShell DSC, see [PowerShell documentation center](/powershell/dsc/overview/overview).
+- For more information about PowerShell DSC, see [PowerShell documentation center](/powershell/dsc/overview).
- Examine the [Resource Manager template for the DSC extension](../virtual-machines/extensions/dsc-template.md).
- For other functionality and resources that you can manage with PowerShell DSC, browse the [PowerShell gallery](https://www.powershellgallery.com/packages?q=DscResource&x=0&y=0).
- For details about passing sensitive parameters into configurations, see [Manage credentials securely with the DSC extension handler](../virtual-machines/extensions/dsc-credentials.md).
automation Automation Dsc Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-getting-started.md
# Get started with Azure Automation State Configuration
-This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines to manage, and viewing reports. For an overview State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview/overview).
+This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines to manage, and viewing reports. For an overview of State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview).
> [!NOTE]
> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/policy/concepts/guest-configuration.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
If you no longer want a node to be managed by State Configuration, you can unreg
- For an overview, see [Azure Automation State Configuration overview](automation-dsc-overview.md).
- To enable the feature for VMs in your environment, see [Enable Azure Automation State Configuration](automation-dsc-onboarding.md).
-- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview/overview).
+- To understand PowerShell DSC, see [Windows PowerShell Desired State Configuration overview](/powershell/dsc/overview).
- For pricing information, see [Azure Automation State Configuration pricing](https://azure.microsoft.com/pricing/details/automation/).
- For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
automation Automation Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-overview.md
Automation can target virtual or physical Windows or Linux machines, in the clou
### Management of all your DSC artifacts
-Azure Automation State Configuration brings the same management layer to [PowerShell Desired State Configuration](/powershell/dsc/overview/overview) as it offers for PowerShell scripting. From the Azure portal or from PowerShell, you can manage all your DSC configurations, resources, and target nodes.
+Azure Automation State Configuration brings the same management layer to [PowerShell Desired State Configuration](/powershell/dsc/overview) as it offers for PowerShell scripting. From the Azure portal or from PowerShell, you can manage all your DSC configurations, resources, and target nodes.
![Screenshot of the Azure Automation page](./media/automation-dsc-overview/azure-automation-blade.png)
automation Tutorial Configure Servers Desired State https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/tutorial-configure-servers-desired-state.md
For this tutorial, we use a simple [DSC configuration](/powershell/dsc/configura
- An Azure Resource Manager VM (not classic) running Windows Server 2008 R2 or later. For instructions on creating a VM, see [Create your first Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md). - Azure PowerShell module version 3.6 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/azurerm/install-azurerm-ps).-- Familiarity with Desired State Configuration (DSC). For information about DSC, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview/overview).
+- Familiarity with Desired State Configuration (DSC). For information about DSC, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview).
## Support for partial configurations
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
After you deployed an Arc data controller enabled for Direct connectivity mode,
### Option 1: Deploy from the Azure Marketplace 1. Open a browser to the following URL [https://portal.azure.com](https://portal.azure.com)
-2. In the search window at the top of the page search for "*azure arc postgres*" in the Azure Market Place and select **Azure Database for PostgreSQL server groups - Azure Arc**.
+2. In the search window at the top of the page, search for "*azure arc postgres*" in the Azure Marketplace and select **Azure Arc-enabled PostgreSQL Hyperscale server groups**.
3. In the page that opens, click **+ Create** at the top left corner. 4. Fill in the form as you would when deploying any other Azure resource.
azure-arc Monitor Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/monitor-certificates.md
The following table describes the requirements for each certificate and key.
The GitHub repository directory includes example template files that identify the certificate specifications. - [/arc_data_services/deploy/scripts/monitoring/logsui-ssl.conf.tmpl](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/monitoring/logsui-ssl.conf.tmpl)-- [/arc_data_services/deploy/scripts/monitoring/metricsui-ssl.conf.tmpl](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/monitoring/metricssui-ssl.conf.tmpl)
+- [/arc_data_services/deploy/scripts/monitoring/metricsui-ssl.conf.tmpl](https://github.com/microsoft/azure_arc/blob/main/arc_data_services/deploy/scripts/monitoring/metricsui-ssl.conf.tmpl)
The Azure Arc samples GitHub repository provides an example you can use to generate a compliant certificate and private key for an endpoint.
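If you want to experiment before adopting the repository's example, a minimal sketch with `openssl` follows; the file names and the use of a filled-in template as an OpenSSL config are assumptions, and the repository example remains the authoritative reference.

```console
# A generic sketch, not the repository script: generate a self-signed
# certificate and private key from a filled-in copy of the logsui template.
# File names here are placeholders.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout logsui-key.pem -out logsui-cert.pem \
  -config logsui-ssl.conf
```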
azure-arc Restore Adventureworks Sample Db Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md
Run a command like this to download the files replace the value of the pod name
> Use the pod name of the Coordinator node of the Postgres Hyperscale server group. Its name is \<server group name\>c-0 (for example postgres01c-0, where c stands for Coordinator node). If you are not sure of the pod name, run the command `kubectl get pod` ```console
-kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/aks/arm_template/postgres_hs/AdventureWorks.sql"
+kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/cluster_api/capi_azure/arm_template/artifacts/AdventureWorks2019.sql"
#Example:
-#kubectl exec postgres02-0 -n arc -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/aks/arm_template/postgres_hs/AdventureWorks.sql"
+#kubectl exec postgres02-0 -n arc -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/cluster_api/capi_azure/arm_template/artifacts/AdventureWorks2019.sql"
```
-## Step 2: Import the AdventureWorks database
+## Import the AdventureWorks database
Similarly, you can run a kubectl exec command to use the psql CLI tool that is included in the PostgreSQL Hyperscale server group containers to create and load the database.
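A minimal sketch of that import step is below; the pod name, namespace, database name, and user are assumptions carried over from the download example above, so adjust them to your deployment.

```console
# Create the target database, then load the downloaded script with psql.
kubectl exec postgres02-0 -n arc -c postgres -- psql --username=postgres \
  -c 'CREATE DATABASE "adventureworks2019";'
kubectl exec postgres02-0 -n arc -c postgres -- psql --username=postgres \
  --dbname=adventureworks2019 --file=/tmp/AdventureWorks2019.sql
```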
azure-functions Functions Add Output Binding Storage Queue Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md
Extension bundles usage is enabled in the host.json file at the root of the proj
:::code language="json" source="~/functions-quickstart-java/functions-add-output-binding-storage-queue/host.json":::
+Now, you can add the storage output binding to your project.
+ ::: zone-end ::: zone pivot="programming-language-csharp"
Extension bundles usage is enabled in the host.json file at the root of the proj
::: zone-end
-Now, you can add the storage output binding to your project.
- ## Add an output binding In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the function.json file. The way you define these attributes depends on the language of your function app.
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-troubleshoot.md
description: Learn how to troubleshoot SQL insights in Azure Monitor.
Previously updated : 11/03/2021 Last updated : 1/3/2022 # Troubleshoot SQL insights (preview)
SQL insights uses the following query to retrieve this information:
```kusto InsightsMetrics
-    | extend Tags = todynamic(Tags)
-    | extend SqlInstance = tostring(Tags.sql_instance)
-    | where TimeGenerated > ago(10m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'
+ | extend Tags = todynamic(Tags)
+ | extend SqlInstance = tostring(Tags.sql_instance)
+ | where TimeGenerated > ago(10m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'
``` Check if any logs from Telegraf help identify the root cause of the problem. If there are log entries, you can select **Not collecting** and check the logs and troubleshooting info for common problems.
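If you prefer to run the same check from the command line, a sketch using Azure CLI follows; it assumes the `log-analytics` CLI extension is installed and that `<workspace-guid>` is your workspace (customer) ID.

```azurecli
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "InsightsMetrics | extend Tags = todynamic(Tags) | extend SqlInstance = tostring(Tags.sql_instance) | where TimeGenerated > ago(10m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'" \
  --output table
```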
To see error messages from the telegraf service, run it manually by using the fo
Check [prerequisites](../agents/azure-monitor-agent-install.md#prerequisites) for the Azure Monitor agent. -
-Service logs:
+Prior to Azure Monitor Agent v1.12, mdsd service logs were located in:
- `/var/log/mdsd.err` - `/var/log/mdsd.warn` - `/var/log/mdsd.info`
+From v1.12 onward, service logs are located in:
+- `/var/opt/microsoft/azuremonitoragent/log/`
+- `/etc/opt/microsoft/azuremonitoragent/`
+ To see recent errors: `tail -n 100 -f /var/log/mdsd.err` If you need to contact support, collect the following information: - Logs in `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/` - Log in `/var/log/waagent.log` -- Logs in `/var/log/mdsd*`
+- Logs in `/var/log/mdsd*`, or logs in `/var/opt/microsoft/azuremonitoragent/log/` and `/etc/opt/microsoft/azuremonitoragent/`.
- Files in `/etc/mdsd.d/` - File `/etc/default/mdsd`
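To gather those files in one step before opening a support case, a sketch follows; the archive name is a placeholder, and `--ignore-failed-read` is used because only the pre- or post-v1.12 paths will exist on a given machine.

```console
# Bundle agent logs and configuration for support; missing paths are skipped.
sudo tar --ignore-failed-read -czf ama-support-bundle.tar.gz \
  /var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/ \
  /var/log/waagent.log \
  /var/log/mdsd* /etc/mdsd.d/ /etc/default/mdsd \
  /var/opt/microsoft/azuremonitoragent/log/ \
  /etc/opt/microsoft/azuremonitoragent/
```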
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Last updated 12/01/2021
Log Analytics workspace data export in Azure Monitor allows you to continuously export data from selected tables in your Log Analytics workspace to an Azure storage account or Azure Event Hubs as it's collected. This article provides details on this feature and steps to configure data export in your workspaces. ## Overview
+Data stored in Log Analytics is available for the retention period defined in your workspace and is used in Azure Monitor and other Azure experiences. Export enables scenarios that these experiences don't cover:
+* Tamper-protected store compliance -- data can't be altered in Log Analytics once ingested, but it can be purged. Export to a storage account set with [immutability policies](../../storage/blobs/immutable-policy-configure-version-scope.md) to protect data from changes.
+* Integration with Azure services and other tools -- export to an event hub in near-real-time lets you integrate with services and tools of your choice.
+* Long-term, low-cost retention for compliance -- export to a storage account in the same region as your workspace, and replicate data to other regions using any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region), including GRS and GZRS.
+ Once data export is configured in your Log Analytics workspace, any new data sent to the selected tables in the workspace is automatically exported in near-real-time to your storage account or to your event hub. [![Data export overview](media/logs-data-export/data-export-overview.png "Screenshot of data export flow diagram.")](media/logs-data-export/data-export-overview.png#lightbox)
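For orientation before the per-tool steps below, a sketch of creating an export rule with Azure CLI is shown; the rule name, table list, and destination resource ID are placeholders.

```azurecli
az monitor log-analytics workspace data-export create \
  --resource-group resourceGroupName \
  --workspace-name workspaceName \
  --name ruleName \
  --tables SecurityEvent Heartbeat \
  --destination "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```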
Follow the steps, then click **Create**.
# [PowerShell](#tab/powershell)
-N/A
+Use the following command to create a data export rule to a storage account using PowerShell. A separate container is created for each table.
+
+```powershell
+$storageAccountResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name'
+New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $storageAccountResourceId
+```
# [Azure CLI](#tab/azure-cli)
Click a rule for configuration view.
# [PowerShell](#tab/powershell)
-N/A
+Use the following command to view the configuration of a data export rule using PowerShell.
+
+```powershell
+Get-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName'
+```
# [Azure CLI](#tab/azure-cli)
N/A
-## Disable an export rule
+## Disable or update an export rule
# [Azure portal](#tab/portal)
-Export rules can be disabled to let you stop the export when you don't need to retain data for a certain period such as when testing is being performed. In the **Log Analytics workspace** menu in the Azure portal, select **Data Export** from the **Settings** section and click the status toggle to disable or enable export rule.
+Export rules can be disabled to let you stop the export for a certain period, such as while testing is in progress. In the **Log Analytics workspace** menu in the Azure portal, select **Data Export** from the **Settings** section and click the status toggle to disable or enable the export rule.
[![export rule disable](media/logs-data-export/export-disable.png "Screenshot of disable data export rule.")](media/logs-data-export/export-disable.png#lightbox) # [PowerShell](#tab/powershell)
-N/A
+Export rules can be disabled to let you stop the export for a certain period, such as while testing is in progress. Use the following command to disable a data export rule using PowerShell.
+
+```powershell
+Update-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -Enable: $false
+```
# [Azure CLI](#tab/azure-cli)
-Export rules can be disabled to let you stop the export when you don't need to retain data for a certain period such as when testing is being performed. Use the following command to disable a data export rule using CLI.
+Export rules can be disabled to let you stop the export for a certain period, such as while testing is in progress. Use the following command to disable a data export rule using CLI.
```azurecli az monitor log-analytics workspace data-export update --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --enable false
In the **Log Analytics workspace** menu in the Azure portal, select *Data Export
# [PowerShell](#tab/powershell)
-N/A
+Use the following command to delete a data export rule using PowerShell.
+
+```powershell
+Remove-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName'
+```
# [Azure CLI](#tab/azure-cli)
In the **Log Analytics workspace** menu in the Azure portal, select **Data Expor
# [PowerShell](#tab/powershell)
-N/A
+Use the following command to view all data export rules in a workspace using PowerShell.
+
+```powershell
+Get-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName
+```
# [Azure CLI](#tab/azure-cli)
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/overview.md
Title: Bicep language for deploying Azure resources description: Describes the Bicep language for deploying infrastructure to Azure. It provides an improved authoring experience over using JSON to develop templates. Previously updated : 12/02/2021 Last updated : 01/03/2022 # What is Bicep?
-Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides concise syntax, reliable type safety, and support for code reuse. We believe Bicep offers the best authoring experience for your infrastructure-as-code solutions in Azure.
+Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.
-You can use Bicep instead of JSON to develop your Azure Resource Manager templates (ARM templates). The JSON syntax to create an ARM template can be verbose and require complicated expressions. Bicep syntax reduces that complexity and improves the development experience. Bicep is a transparent abstraction over ARM template JSON and doesn't lose any of the JSON template capabilities. During deployment, the Bicep CLI converts a Bicep file into ARM template JSON.
-
-Bicep isn't intended as a general programming language to write applications. A Bicep file declares Azure resources and resource properties, without writing a sequence of programming commands to create resources.
-
-Resource types, API versions, and properties that are valid in an ARM template are valid in a Bicep file.
+Bicep provides concise syntax, reliable type safety, and support for code reuse. We believe Bicep offers the best authoring experience for your infrastructure-as-code solutions in Azure.
-To track the status of the Bicep work, see the [Bicep project repository](https://github.com/Azure/bicep).
+## Benefits of Bicep versus other tools
-To learn about Bicep, see the following video.
+Bicep provides the following advantages over other infrastructure-as-code options:
-> [!VIDEO https://www.youtube.com/embed/sc1kJfcRQgY]
+- **Support for all resource types and API versions**: Bicep immediately supports all preview and GA versions for Azure services. As soon as a resource provider introduces new resource types and API versions, you can use them in your Bicep file. You don't have to wait for tools to be updated before using the new services.
+- **Simple syntax**: When compared to the equivalent JSON template, Bicep files are more concise and easier to read. Bicep requires no previous knowledge of programming languages. Bicep syntax is declarative and specifies which resources and resource properties you want to deploy.
+- **Authoring experience**: When you use VS Code to create your Bicep files, you get a first-class authoring experience. The editor provides rich type-safety, IntelliSense, and syntax validation.
+- **Modularity**: You can break your Bicep code into manageable parts by using [modules](./modules.md). The module deploys a set of related resources. Modules enable you to reuse code and simplify development. Add the module to a Bicep file anytime you need to deploy those resources.
+- **Integration with Azure services**: Bicep is integrated with Azure services such as Azure Policy, template specs, and Blueprints.
+- **No state or state files to manage**: All state is stored in Azure. Users can collaborate and have confidence their updates are handled as expected. Use the [what-if operation](./deploy-what-if.md) to preview changes before deploying your template.
+- **No cost and open source**: Bicep is completely free. You don't have to pay for premium capabilities. It's also supported by Microsoft support.
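As a quick illustration of the repeatable-deployment workflow described above, a sketch using Azure CLI follows; `main.bicep`, the resource group name, and the location are placeholders.

```azurecli
# Create a resource group, then deploy a Bicep file to it.
# Rerunning the deployment with the same file is repeatable and consistent.
az group create --name exampleRG --location eastus
az deployment group create --resource-group exampleRG --template-file main.bicep
```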
## Get started
To learn about the resources that are available in your Bicep file, see [Bicep r
Bicep examples can be found in the [Bicep GitHub repo](https://github.com/Azure/bicep/tree/main/docs/examples).
-## Benefits of Bicep versus other tools
+## About the language
-Bicep provides the following advantages over other options:
--- **Support for all resource types and API versions**: Bicep immediately supports all preview and GA versions for Azure services. As soon as a resource provider introduces new resources types and API versions, you can use them in your Bicep file. You don't have to wait for tools to be updated before using the new services.-- **Simple syntax**: When compared to the equivalent JSON template, Bicep files are more concise and easier to read. Bicep requires no previous knowledge of programming languages. Bicep syntax is declarative and specifies which resources and resource properties you want to deploy.-- **Authoring experience**: When you use VS Code to create your Bicep files, you get a first-class authoring experience. The editor provides rich type-safety, intellisense, and syntax validation.-- **Modularity**: You can break your Bicep code into manageable parts by using [modules](./modules.md). The module deploys a set of related resources. Modules enable you to reuse code and simplify development. Add the module to a Bicep file anytime you need to deploy those resources.-- **Integration with Azure services**: Bicep is integrated with Azure services such as Azure Policy, template specs, and Blueprints.-- **No state or state files to manage**: All state is stored in Azure. Users can collaborate and have confidence their updates are handled as expected. Use the [what-if operation](./deploy-what-if.md) to preview changes before deploying your template.-- **No cost and open source**: Bicep is completely free. You don't have to pay for premium capabilities. It's also supported by Microsoft support.
+Bicep isn't intended as a general programming language to write applications. A Bicep file declares Azure resources and resource properties, without writing a sequence of programming commands to create resources.
-## Bicep improvements
+To track the status of the Bicep work, see the [Bicep project repository](https://github.com/Azure/bicep).
-Bicep offers an easier and more concise syntax when compared to the equivalent JSON. You don't use bracketed expressions `[...]`. Instead, you directly call functions, and get values from parameters and variables. You give each deployed resource a symbolic name, which makes it easy to reference that resource in your template.
+To learn about Bicep, see the following video.
-For example, the following JSON returns an output value from a resource property.
+> [!VIDEO https://www.youtube.com/embed/sc1kJfcRQgY]
-```json
-"outputs": {
- "hostname": {
- "type": "string",
- "value": "[reference(resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))).dnsSettings.fqdn]"
- },
-}
-```
+You can use Bicep instead of JSON to develop your Azure Resource Manager templates (ARM templates). The JSON syntax to create an ARM template can be verbose and require complicated expressions. Bicep syntax reduces that complexity and improves the development experience. Bicep is a transparent abstraction over ARM template JSON and doesn't lose any of the JSON template capabilities. During deployment, the Bicep CLI converts a Bicep file into ARM template JSON.
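You can observe that conversion yourself; a sketch is below, where `main.bicep` and `azuredeploy.json` are placeholder file names.

```azurecli
az bicep build --file main.bicep            # emits main.json, the ARM template JSON
az bicep decompile --file azuredeploy.json  # best-effort conversion in the other direction
```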
-The equivalent output expression in Bicep is easier to write. The following example returns the same property by using the symbolic name **publicIP** for a resource that is defined within the template:
+Resource types, API versions, and properties that are valid in an ARM template are valid in a Bicep file.
-```bicep
-output hostname string = publicIP.properties.dnsSettings.fqdn
-```
+Bicep offers an easier and more concise syntax when compared to the equivalent JSON. You don't use bracketed expressions `[...]`. Instead, you directly call functions, and get values from parameters and variables. You give each deployed resource a symbolic name, which makes it easy to reference that resource in your template.
For a full comparison of the syntax, see [Comparing JSON and Bicep for templates](compare-template-syntax.md).
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
For Azure VM Linux backups, Azure Backup supports the list of Linux [distributio
- Azure Backup doesn't support Core OS Linux. - Azure Backup doesn't support 32-bit operating systems. - Other bring-your-own Linux distributions might work as long as the [Azure VM agent for Linux](../virtual-machines/extensions/agent-linux.md) is available on the VM, and as long as Python is supported.-- Azure Backup doesn't support a proxy-configured Linux VM if it doesn't have Python version 2.7 installed.
+- Azure Backup supports a proxy-configured Linux VM only if it has Python version 2.7 or higher installed.
- Azure Backup doesn't support backing up NFS files that are mounted from storage, or from any other NFS server, to Linux or Windows machines. It only backs up disks that are locally attached to the VM. ## Support matrix for managed pre-post scripts for Linux databases
On-premises/Azure VMs with MABS | ![Yes][green] | ![Yes][green]
[green]: ./media/backup-support-matrix/green.png [yellow]: ./media/backup-support-matrix/yellow.png
-[red]: ./media/backup-support-matrix/red.png
+[red]: ./media/backup-support-matrix/red.png
backup Tutorial Sap Hana Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-sap-hana-restore-cli.md
Title: Tutorial - SAP HANA DB restore on Azure using CLI description: In this tutorial, learn how to restore SAP HANA databases running on an Azure VM from an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 12/4/2019 Last updated : 12/23/2021 +++ # Tutorial: Restore SAP HANA databases in an Azure VM using Azure CLI
By the end of this tutorial you'll be able to:
This tutorial assumes you have an SAP HANA database running on Azure VM that's backed-up using Azure Backup. If you've used [Back up an SAP HANA database in Azure using CLI](tutorial-sap-hana-backup-cli.md) to back up your SAP HANA database, then you're using the following resources:
-* a resource group named *saphanaResourceGroup*
-* a vault named *saphanaVault*
-* protected container named *VMAppContainer;Compute;saphanaResourceGroup;saphanaVM*
-* backed-up database/item named *saphanadatabase;hxe;hxe*
-* resources in the *westus2* region
+* A resource group named *saphanaResourceGroup*
+* A vault named *saphanaVault*
+* Protected container named *VMAppContainer;Compute;saphanaResourceGroup;saphanaVM*
+* Backed-up database/item named *saphanadatabase;hxe;hxe*
+* Resources in the *westus2* region
## View restore points for a backed-up database
Name Resource
The response will give you the job name. This job name can be used to track the job status using the [az backup job show](/cli/azure/backup/job#az_backup_job_show) cmdlet.
+## Restore to secondary region
+
+To restore a database to the secondary region, specify a target vault and server located in the secondary region, in the restore configuration.
+
+```azurecli-interactive
+az backup recoveryconfig show --resource-group saphanaResourceGroup \
+ --vault-name saphanaVault \
+ --container-name "VMAppContainer;compute;hanasnapshotcvtmachines;hanasnapcvt01" \
+ --item-name "SAPHanaDatabase;h10;h10" \
+ --restore-mode AlternateWorkloadRestore \
+ --from-full-rp-name 293170069256531 \
+ --rp-name 293170069256531 \
+ --target-server-name targethanaserver \
+ --target-container-name "VMAppContainer;compute;saphanaTargetRG;targethanaserver" \
+ --target-item-name h10 \
+ --target-server-type HANAInstance \
+ --workload-type SAPHANA \
+ --target-resource-group saphanaTargetRG \
+ --target-vault-name targetVault \
+ --backup-management-type AzureWorkload
+```
+
+The following is the response to the above command, which is a recovery configuration object:
+
+```output
+{
+ "alternate_directory_paths": null,
+ "container_id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/saphanaTargetRG/providers/Microsoft.RecoveryServices/vaults/targetVault/backupFabrics/Azure/protectionContainers/vmappcontainer;compute;saphanaTargetRG;targethanaserver",
+ "container_uri": "VMAppContainer;compute;hanasnapshotcvtmachines;hanasnapcvt01",
+ "database_name": "SAPHanaDatabase;h10;h10",
+ "filepath": null,
+ "item_type": "SAPHana",
+ "item_uri": "SAPHanaDatabase;h10;h10",
+ "log_point_in_time": null,
+ "recovery_mode": null,
+ "recovery_point_id": "293170069256531",
+ "restore_mode": "AlternateLocation",
+ "source_resource_id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/saphanaResourceGroup/providers/Microsoft.Compute/virtualMachines/hanasnapcvt01",
+ "workload_type": "SAPHanaDatabase"
+}
+```
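The restore command in the next step reads the configuration from a file named `recoveryconfig.json`. A sketch of producing that file is below; it re-runs the command above with the semicolon-separated values quoted for the shell, and the output redirection is an assumption about how you choose to save the file.

```azurecli-interactive
az backup recoveryconfig show --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
    --container-name "VMAppContainer;compute;hanasnapshotcvtmachines;hanasnapcvt01" \
    --item-name "SAPHanaDatabase;h10;h10" \
    --restore-mode AlternateWorkloadRestore \
    --from-full-rp-name 293170069256531 \
    --rp-name 293170069256531 \
    --target-server-name targethanaserver \
    --target-container-name "VMAppContainer;compute;saphanaTargetRG;targethanaserver" \
    --target-item-name h10 \
    --target-server-type HANAInstance \
    --workload-type SAPHANA \
    --target-resource-group saphanaTargetRG \
    --target-vault-name targetVault \
    --backup-management-type AzureWorkload \
    --output json > recoveryconfig.json
```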
+
+Use this recovery configuration in the [az backup restore restore-azurewl](/cli/azure/backup/restore#az_backup_restore_restore_azurewl) cmdlet. Specify the `--use-secondary-region` flag to restore the database to the secondary region.
+
+```azurecli-interactive
+az backup restore restore-azurewl --resource-group saphanaResourceGroup \
+ --vault-name saphanaVault \
+ --recovery-config recoveryconfig.json \
+ --use-secondary-region \
+ --output table
+```
+
+The output will be as follows:
+
+```output
+Name                                  Operation           Status      Item Name            Backup Management Type    Start Time UTC                    Duration
+------------------------------------  ------------------  ----------  -------------------  ------------------------  --------------------------------  --------------
+00000000-0000-0000-0000-000000000000  CrossRegionRestore  InProgress  H10 [hanasnapcvt01]  AzureWorkload             2021-12-22T05:21:34.165617+00:00  0:00:05.665470
+```
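To track the restore, you can feed the job name from the output above to [az backup job show](/cli/azure/backup/job#az_backup_job_show), as referenced earlier in this tutorial. The GUID below is a placeholder, and cross-region restore jobs may additionally need the `--use-secondary-region` flag.

```azurecli-interactive
az backup job show --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
    --name 00000000-0000-0000-0000-000000000000 \
    --output table
```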
+ ## Restore as files To restore the backup data as files instead of a database, we'll use **RestoreAsFiles** as the restore mode. Then choose the restore point, which can either be a previous point-in-time or any of the previous restore points. Once the files are dumped to a specified path, you can take these files to any SAP HANA machine where you want to restore them as a database. Because you can move these files to any machine, you can now restore the data across subscriptions and regions.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/whats-new.md
Learn what's new in the service. These items include release notes, videos, blog
## Release notes
+### December 2021
+* [Updated text recognizer](https://github.com/microsoft/Recognizers-Text/releases/tag/dotnet-v1.8.1) to v1.8.1
+* Jio India West [publishing region](luis-reference-regions.md#other-publishing-regions)
+
### November 2021 * [Azure Role-based access control (RBAC) article](role-based-access-control.md)
cognitive-services Improve Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/improve-knowledge-base.md
Active Learning alters the Knowledge Base or Search Service after you approve th
## Turn on active learning
-In order to see suggested questions, you must [turn on active learning](../index.yml) for your QnA Maker resource.
+In order to see suggested questions, you must [turn on active learning](../How-To/use-active-learning.md#turn-on-active-learning-for-alternate-questions) for your QnA Maker resource.
## View suggested questions
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/quickstart.md
In this article, we use the Language studio to demonstrate key concepts of custo
## Next steps
-After you've created a text classification model, you can:
+After you've created an entity extraction model, you can:
-* [Use the runtime API to classify text](how-to/call-api.md)
+* [Use the runtime API to extract entities](how-to/call-api.md)
-When you start to create your own text classification projects, use the how-to articles to learn more about developing your model in greater detail:
+When you start to create your own entity extraction projects, use the how-to articles to learn more about developing your model in greater detail:
* [Data selection and schema design](how-to/design-schema.md) * [Tag data](how-to/tag-data.md)
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-endpoint.md
# Build a custom Teams endpoint
-> [!IMPORTANT]
-> To enable or disable the custom Teams endpoint experience, [complete and submit this form](https://forms.office.com/r/B8p5KqCH19).
-You can use Azure Communication Services to build custom Teams endpoints to communicate with the Microsoft Teams client or other custom Teams endpoints. With a custom Teams endpoint, you can customize a voice, video, chat, and screen-sharing experience for Teams users.
+You can use Azure Communication Services and Graph API to build custom Teams endpoints to communicate with the Microsoft Teams client or other custom Teams endpoints. With a custom Teams endpoint, you can customize a voice, video, chat, and screen-sharing experience for Teams users.
You can use the Azure Communication Services Identity SDK to exchange Azure Active Directory (Azure AD) access tokens of Teams users for Communication Identity access tokens. The diagrams in the next sections demonstrate multitenant use cases, where fictional company Fabrikam is the customer of fictional company Contoso.
Optionally, you can also use custom Teams endpoints to integrate chat capabiliti
| Permission | Display string | Description | Admin consent required | Microsoft account supported | |: |: |: |: |: |
-| _`https://auth.msft.communication.azure.com/VoIP`_ | Manage calls in Teams | Start, join, forward, transfer, or leave Teams calls and update call properties. | No | No |
+| _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_ | Manage calls in Teams | Start, join, forward, transfer, or leave Teams calls and update call properties. | No | No |
### Application permissions
None.
### Roles for granting consent on behalf of a company - Global admin-- Application admin (only in private preview)-- Cloud application admin (only in private preview)
+- Application admin
+- Cloud application admin
+
+For more details, see the [Azure Active Directory documentation](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference).
## Next steps
container-registry Github Action Scan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/github-action-scan.md
Title: Scan container images using GitHub Actions description: Learn how to scan the container images using Container Scan action--++
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
The well-defined schema representation creates a simple tabular representation o
```SQL SELECT CAST (num as float) as num
-FROM OPENROWSET(ΓÇïPROVIDER = 'CosmosDB',
+FROM OPENROWSET(PROVIDER = 'CosmosDB',
CONNECTION = '<your-connection>', OBJECT = 'IntToFloat', SERVER_CREDENTIAL = 'your-credential'
FROM OPENROWSET(ΓÇïPROVIDER = 'CosmosDB',
WITH (num varchar(100)) AS [IntToFloat] ```
- * Properties that don't follow the base schema data type won't be represented in analytical store. For example, consider the 2 documents below, and that the first one defined the analytical store base schema. The second document, where `id` is `2`, doesn't have a well-defined schema since property `"a"` is a string and the first document has `"a"` as a number. In this case, the analytical store registers the data type of `"a"` as `integer` for lifetime of the container. The second document will still be included in analytical store, but its `"a"` property will not.
+ * Properties that don't follow the base schema data type won't be represented in analytical store. For example, consider the documents below: the first one defines the analytical store base schema. The second document, where `id` is `"2"`, **doesn't** have a well-defined schema since property `"code"` is a string and the first document has `"code"` as a number. In this case, the analytical store registers the data type of `"code"` as `integer` for the lifetime of the container. The second document will still be included in analytical store, but its `"code"` property will not.
- * `{"id": "1", "a":123}`
- * `{"id": "2", "a": "str"}`
+ * `{"id": "1", "code":123}`
+ * `{"id": "2", "code": "123"}`
> [!NOTE]
- > This condition above doesn't apply for null properties. For example, `{"a":123} and {"a":null}` is still well defined.
+ > The condition above doesn't apply to null properties. For example, `{"a":123} and {"a":null}` is still well defined.
+
+> [!NOTE]
+ > The condition above doesn't change if you update `"code"` of document `"1"` to a string in your transactional store. In analytical store, `"code"` will be kept as `integer` since schema reset currently isn't supported.
* Array types must contain a single repeated type. For example, `{"a": ["str",12]}` is not a well-defined schema because the array contains a mix of integer and string types.
cosmos-db Monitor Server Side Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/monitor-server-side-latency.md
Last updated 09/16/2021
# How to monitor the server-side latency for operations in an Azure Cosmos DB container or account [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default, this feature does not require you to enable or configure anything explicitly. The server-side latency metric is used to view the server-side latency of an operation. Azure Cosmos DB provides SLA of less than 10 ms for point read/write operations with direct connectivity. For point read and write operations, the SLAs are calculated as detailed in the [SLA document](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/).
+Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default; this feature does not require you to enable or configure anything explicitly. The server-side latency direct and server-side latency gateway metrics are used to view the server-side latency of an operation in the two connection modes. Use the server-side latency gateway metric if your request operation is in gateway connectivity mode, and the server-side latency direct metric if it is in direct connectivity mode. Azure Cosmos DB provides an SLA of less than 10 ms for point read/write operations with direct connectivity. For point read and write operations, the SLAs are calculated as detailed in the [SLA document](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/).
-You can monitor server-side latency if you see unusually high latency for point operation such as:
+The following table indicates which API supports server-side latency metrics (Direct versus Gateway):
-* A GET or a SET operation with partition key and ID in direct connectivity mode
+|API |Server Side Latency Direct |Server Side Latency Gateway |
+|---|:-:|:-:|
+|SQL |✓ |✓ |
+|MongoDB | |✓ |
+|Cassandra | |✓ |
+|Gremlin | |✓ |
+|Table |✓ |✓ |
+
+You can monitor server-side latency metrics if you see unusually high latency for point operation such as:
+
+* A GET or a SET operation with partition key and ID
* A read or write operation or * A query You can look up the diagnostic log to see the size of the data returned. If you see sustained high latency for query operations, check the diagnostic log for high [throughput or RU/s](cosmosdb-monitor-logs-basic-queries.md) usage. Server-side latency shows the amount of time spent on the backend infrastructure before the data was returned to the client. It's important to look at this metric to rule out any backend latency issues.
-## View the server-side latency metric
+## View the server-side latency metrics
1. Sign in to the [Azure portal](https://portal.azure.com/).
You can look up the diagnostic log to see the size of the data returned. If you
:::image type="content" source="./media/monitor-account-key-updates/select-account-scope.png" alt-text="Select the account scope to view metrics" border="true":::
-1. Next select the **Server Side Latency** metric from the list of available metrics. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-cosmos-db-reference.md) article. In this example, let's select **Server Side Latency** and **Avg** as the aggregation value. In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the server-side latency per minute for the selected period.
+1. Next, select the **Server Side Latency Gateway** metric from the list of available metrics if your operation is in gateway connectivity mode, or the **Server Side Latency Direct** metric if it's in direct connectivity mode. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-cosmos-db-reference.md) article. In this example, let's select **Server Side Latency Gateway** and **Avg** as the aggregation value. In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. You can view metrics for up to the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the server-side latency in gateway connectivity mode at 5-minute granularity for the selected period.
- :::image type="content" source="./media/monitor-server-side-latency/server-side-latency-metric.png" alt-text="Choose the Server-Side Latency metric from the Azure portal" border="true":::
+ :::image type="content" source="./media/monitor-server-side-latency/server-side-latency-gateway-metric.png" alt-text="Choose the Server-Side Latency Gateway metric from the Azure portal" border="true" lightbox="./media/monitor-server-side-latency/server-side-latency-gateway-metric.png":::
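Outside the portal, a sketch of pulling the same metric with Azure CLI is below; the resource ID is a placeholder, and the metric name `ServerSideLatencyGateway` is an assumption based on the display name, so confirm the exact ID with `az monitor metrics list-definitions --resource <resource-id>`.

```azurecli
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>" \
  --metric ServerSideLatencyGateway \
  --aggregation Average \
  --interval PT5M \
  --output table
```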
## Filters for server-side latency
-You can also filter metrics and get the charts displayed by a specific **CollectionName**, **ConnectionMode**, **DatabaseName**, **OperationType**, **Region**, and **PublicAPIType**.
+You can also filter metrics and get the charts displayed by a specific **CollectionName**, **DatabaseName**, **OperationType**, **Region**, and **PublicAPIType**.
-To filter the metrics, select **Add filter** and choose the required property such as **PublicAPIType** and select the value **sql**. Add another filter for **OperationType**. The graph then displays the server-side latency for different operations during the selected period. The operations executed via Stored procedure are not logged so they are not available under the OperationType metric.
+To filter the metrics, select **Add filter**, choose a property such as **PublicAPIType**, and select the value **Sql**. Then select **Apply splitting** for **OperationType**. The graph displays the server-side latency for different operations in gateway connection mode during the selected period. Operations executed via stored procedures are not logged, so they are not available under the OperationType metric.
-The **Server Side Latency** metrics for each operation are displayed as shown in the following image:
+The **Server Side Latency Gateway** metrics for each operation are displayed as shown in the following image:
You can also group the metrics by using the **Apply splitting** option.
cost-management-billing Mca Section Invoice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mca-section-invoice.md
tags: billing
Previously updated : 09/15/2021 Last updated : 01/03/2022
To create an invoice section, you need to be a **billing profile owner** or a **
To create a billing profile, you need to be a **billing account owner** or a **billing account contributor**. For more information, see [Manage billing profiles for billing account](understand-mca-roles.md#manage-billing-profiles-for-billing-account).
+Adding additional billing profiles is supported only for direct Microsoft Customer Agreements (working with a Microsoft representative). If you don't see the **Add** option on the Billing profile page, the feature isn't available for your account. If you don't have a direct Microsoft Customer Agreement, you can contact the Digital Sales team by chat, phone, or ticket. For more information, see [Contact Azure Sales](https://azure.microsoft.com/overview/contact-azure-sales/#contact-sales).
+ > [!IMPORTANT] > > Creating additional billing profiles may impact your overall cost. For more information, see [Things to consider when adding new billing profiles](#things-to-consider-when-adding-new-billing-profiles).
To create a billing profile, you need to be a **billing account owner** or a **b
[![Screenshot that shows billing profile list with Add selected.](./media/mca-section-invoice/mca-list-profiles.png)](./media/mca-section-invoice/mca-list-profiles-zoomed-in.png#lightbox)
- > [!Note]
- >
- > If you don't see the Add button in the Billing profile page, the feature is not available for your account. Currently, it is only available for accounts that have been set up while working with a Microsoft representative.
- 4. Fill the form and click **Create**. [![Screenshot that shows billing profile creation page](./media/mca-section-invoice/mca-add-profile.png)](./media/mca-section-invoice/mca-add-profile-zoomed-in.png#lightbox)
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-reference-architectures.md
DDoS Protection Standard is designed [for services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). For other services, the default DDoS Protection Basic service applies. The following reference architectures are arranged by scenarios, with architecture patterns grouped together.
+> [!NOTE]
+> Protected resources include public IPs attached to an IaaS VM, Load Balancer (Classic & Standard Load Balancers), Application Gateway (including WAF) cluster, Firewall, Bastion, VPN Gateway, Service Fabric or an IaaS based Network Virtual Appliance (NVA). PaaS services (multitenant) are not supported at present. This includes Azure App Service Environment for Power Apps and API Management in a virtual network with a public IP.
+ ## Virtual machine (Windows/Linux) workloads ### Application running on load-balanced VMs
We recommend that you configure the Application Gateway WAF SKU (prevent mode) t
For more information about this reference architecture, see [this article](/azure/architecture/reference-architectures/app-service-web-app/multi-region).
-## Protecting on-premises resources
-
-You can leverage the scale, capacity, and efficiency of Azure DDoS Protection Standard to protect your on-premises resources, by hosting a public IP address in Azure and redirecting the traffic to the backend origin to your on-premises environment.
-
-![Protecting on-prem resources](./media/reference-architectures/ddos-on-prem.png)
-
-If you have a web application that receives traffic from the Internet, you can host the web application behind Application Gateway, then protect it with WAF against Layer 7 web attacks such as SQL injection. The backend origins of your application will be in your on-premises environment, which is connected over the VPN.
-
-The backend resources in the on-premises environment will not be exposed to the public internet. Only the AppGW/WAF public IP is exposed to the internet and the DNS name of your application maps to that public IP address.
-
-When DDoS Protection Standard is enabled on the virtual network which contains the AppGW/WAF, DDoS Protection Standard will defend your application by mitigating bad traffic and routing the supposed clean traffic to your application.
-
-This [article](../azure-vmware/protect-azure-vmware-solution-with-application-gateway.md) shows you how you can use DDoS Protection Standard alongside Application Gateway to protect a web app running on Azure VMware Solution.
- ## Mitigation for non-web PaaS services ### HDInsight on Azure
In this architecture, traffic destined to the HDInsight cluster from the interne
For more information on this reference architecture, see the [Extend Azure HDInsight using an Azure Virtual Network](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=%2fazure%2fvirtual-network%2ftoc.json) documentation. -
-> [!NOTE]
-> Azure App Service Environment for Power Apps or API management in a virtual network with a public IP are both not natively supported.
- ## Next steps - Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
ddos-protection Inline Protection Glb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/inline-protection-glb.md
# Inline L7 DDoS Protection with Gateway Load Balancer and Partner NVAs Azure DDoS Protection is always-on but not inline and takes 30-60 seconds from the time an attack is detected until it is mitigated. Azure DDoS Protection Standard also works at L3/4 (network layer) and does not inspect the packet payload, that is, the application layer (L7). + Workloads that are highly sensitive to latency and cannot tolerate 30-60 seconds of on-ramp time for DDoS protection to kick in require inline protection. Inline protection entails that all the traffic always goes through the DDoS protection pipeline. Further, for scenarios such as web protection or gaming workload protection (UDP), it becomes crucial to inspect the packet payload to mitigate against extremely low-volume attacks that exploit application-layer (L7) vulnerabilities. Partner NVAs deployed with Gateway Load Balancer and integrated with Azure DDoS Protection Standard offer comprehensive inline L7 DDoS protection for high-performance and high-availability scenarios. Inline L7 DDoS protection combined with Azure DDoS Protection Standard provides comprehensive L3-L7 protection against volumetric as well as low-volume DDoS attacks.
Enabling Azure DDoS Protection Standard on the VNet of the Standard Public Load
## Next steps - Learn more about [inline L7 DDoS protection partners](https://aka.ms/inlineddospartners) - Learn more about [Azure DDoS Protection Standard](./ddos-protection-overview.md)-- Learn more about [Gateway Load Balancer](../load-balancer/gateway-overview.md)
+- Learn more about [Gateway Load Balancer](../load-balancer/gateway-overview.md)
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection.md
You cannot move a virtual network to another resource group or subscription when
3. Select **DDoS protection**, under **SETTINGS**. 4. Select **Standard**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then select **Save**. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.
+### Configure an Azure DDoS Protection Plan using Azure Firewall Manager (preview)
+
+Azure Firewall Manager is a platform to manage and protect your network resources at scale. You can associate your virtual networks with a DDoS protection plan within Azure Firewall Manager. This functionality is currently available in public preview. See [Configure an Azure DDoS Protection Plan using Azure Firewall Manager](/azure/firewall-manager/configure-ddos).
++ ### Enable DDoS protection for all virtual networks This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d) will detect any virtual networks in a defined scope that do not have DDoS Protection Standard enabled, then optionally create a remediation task that will create the association to protect the VNet. See [Azure Policy built-in definitions for Azure DDoS Protection Standard](policy-reference.md) for full list of built-in policies.
dms Known Issues Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/known-issues-azure-postgresql-online.md
Known issues and limitations associated with online migrations from PostgreSQL t
- The source and target database schemas must match. ## Size limitations+ - You can migrate up to 1 TB of data from PostgreSQL to Azure DB for PostgreSQL using a single DMS service.
+- The number of tables you can migrate in one DMS activity is limited based on the number of characters in your table names. An upper limit of 7,500 characters applies to the combined length of the schema_name.table_name. If the combined length of the schema_name.table_name exceeds this limit, you likely will see the error *(400) Bad Request.Entity too large*. To avoid this error, try to migrate your tables by using multiple DMS activities, with each activity adhering to the 7,500-character limit.
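A sketch for checking your source database against that limit is below; the connection parameters are placeholders, and the query sums the combined schema-qualified name length across all user tables, so scope it to the tables you plan to include in a single activity.

```console
psql "host=<source-server> dbname=<database> user=<user>" -c "
SELECT sum(length(schemaname || '.' || tablename)) AS combined_length
FROM pg_catalog.pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');"
```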
+ ## Datatype limitations **Limitation**: If there's no primary key on tables, changes may not be synced to the target database.
expressroute Cross Connections Api Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/cross-connections-api-development.md
Develop against the [expressRouteCrossConnections API](/rest/api/expressroute/ex
#### Connectivity Management Workflow Once you receive the ExpressRoute service key from the target customer, follow the below workflow and sample API operations to configure ExpressRoute connectivity:
-1. **List expressRouteCrossConnection:** In order to manage ExpressRoute connectivity, you need to identify the *Name* and *ResourceGroup* of the target expressRouteCrossConnection resource, in order to form the GET API call. the *Name* of the expressRouteCrossConnection is the target service key of the customer's ExpressRoute circuit. In order to find the *ResourceGroupName*, you need to LIST all expressRouteCrossConnections in the provider subscription and search the results for the target service key. From here, you can record the *ResourceGroupName*
+1. **List expressRouteCrossConnection:** In order to manage ExpressRoute connectivity, you need to identify the *Name* and *ResourceGroup* of the target expressRouteCrossConnection resource. The *Name* of the expressRouteCrossConnection is the target service key of the customer's ExpressRoute circuit. In order to find the *ResourceGroupName*, you need to LIST all expressRouteCrossConnections in the provider subscription and search the results for the target service key. From here, you can record the *ResourceGroupName* and form the GET expressRouteCrossConnection API call.
- ```GET /subscriptions/<ProviderManagementSubscription>/providers/Microsoft.Network/expressRouteCrossConnections?api-version=2018-02-01 HTTP/1.1
+ ```
+ GET /subscriptions/<ProviderManagementSubscription>/providers/Microsoft.Network/expressRouteCrossConnections?api-version=2018-02-01 HTTP/1.1
Host: management.azure.com Authorization: Bearer eyJ0eXAiOiJKV... User-Agent: ARMClient/1.2.0.0
Once you receive the ExpressRoute service key from the target customer, follow t
2. **GET expressRouteCrossConnection:** Once you have identified both the *Name* and *ResourceGroupName* of the target expressRouteCrossConnection resource, you need to perform the GET expressRouteCrossConnection API call.
- ```GET /subscriptions/<ProviderManagementSubscription>/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/9ee700ad-50b2-4b98-a63a-4e52f855ac24?api-version=2018-02-01 HTTP/1.1
+ ```
+ GET /subscriptions/<ProviderManagementSubscription>/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/9ee700ad-50b2-4b98-a63a-4e52f855ac24?api-version=2018-02-01 HTTP/1.1
Host: management.azure.com Authorization: Bearer eyJ0eXAiOiJKV... User-Agent: ARMClient/1.2.0.0
Once you receive the ExpressRoute service key from the target customer, follow t
``` 3. **PUT expressRouteCrossConnection:** Once you provision layer-2 connectivity, update the *ServiceProviderProvisioningState* to **Provisioned**. At this point, the customer can configure Microsoft or Private Peering and create a connection from the ExpressRoute circuit to a virtual network gateway deployed in the customer's subscription.
- ```PUT /subscriptions/<ProviderManagementSubscription>/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/9ee700ad-50b2-4b98-a63a-4e52f855ac24?api-version=2018-02-01 HTTP/1.1
+ ```
+ PUT /subscriptions/<ProviderManagementSubscription>/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/9ee700ad-50b2-4b98-a63a-4e52f855ac24?api-version=2018-02-01 HTTP/1.1
Host: management.azure.com Authorization: Bearer eyJ0eXAiOiJKV... User-Agent: ARMClient/1.2.0.0
Once you receive the ExpressRoute service key from the target customer, follow t
4. **(Optional) PUT expressRouteCrossConnection to configure Private Peering** If you manage layer-3 BGP connectivity, you can enable Private Peering
- ```PUT /subscriptions/<ProviderManagementSubscription>/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/9ee700ad-50b2-4b98-a63a-4e52f855ac24/peerings/AzurePrivatePeering?api-version=2018-02-01 HTTP/1.1
+ ```
+ PUT /subscriptions/<ProviderManagementSubscription>/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/9ee700ad-50b2-4b98-a63a-4e52f855ac24/peerings/AzurePrivatePeering?api-version=2018-02-01 HTTP/1.1
Host: management.azure.com Authorization: Bearer eyJ0eXAiOiJKV... User-Agent: ARMClient/1.2.0.0
Once you receive the ExpressRoute service key from the target customer, follow t
5. **(Optional) PUT expressRouteCrossConnection to configure Microsoft Peering** If you manage layer-3 BGP connectivity, you can enable Microsoft Peering
- ```PUT /subscriptions/<ProviderManagementSubscription>/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/9ee700ad-50b2-4b98-a63a-4e52f855ac24/peerings/MicrosoftPeering?api-version=2018-02-01 HTTP/1.1
+ ```
+ PUT /subscriptions/<ProviderManagementSubscription>/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/9ee700ad-50b2-4b98-a63a-4e52f855ac24/peerings/MicrosoftPeering?api-version=2018-02-01 HTTP/1.1
Host: management.azure.com Authorization: Bearer eyJ0eXAiOiJKV... User-Agent: ARMClient/1.2.0.0
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | 10G, 100G | | | **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | 10G | Equinix | | **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | 10G, 100G | CenturyLink Cloud Connect, Megaport |
-| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | 10G, 100G | Aryaka Networks, Ascenty Data Centers, British Telecom, Equinix, Level 3 Communications, Neutrona Networks, Orange, Tata Communications, Telefonica, UOLDIVEO |
+| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | 10G, 100G | Aryaka Networks, Ascenty Data Centers, British Telecom, Equinix, InterCloud, Level 3 Communications, Neutrona Networks, Orange, Tata Communications, Telefonica, UOLDIVEO |
| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | 10G, 100G | |
| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | 10G, 100G | Aryaka Networks, Equinix, Level 3 Communications, Megaport, Telus, Zayo |
| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | 10G, 100G | KINX, KT, LG CNS, LGUplus, Equinix, Sejong Telecom, SK Telecom |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported | Supported | Chennai, Mumbai |
| **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 |
| **Intelsat** | Supported | Supported | Washington DC2 |
-| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported |Amsterdam, Chicago, Frankfurt, Hong Kong, London, New York, Paris, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich |
+| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported |Amsterdam, Chicago, Frankfurt, Hong Kong, London, New York, Paris, Sao Paulo, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich |
| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported |Chicago, Dallas, Silicon Valley, Washington DC |
| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported |Osaka, Tokyo |
| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported |Cape Town, Johannesburg, London |
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes.md
If you would like to subscribe on release notes, watch releases on [this GitHub
## Release date: 12/27/2021
-This release applies for both HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region over several days.
+This release applies to HDInsight 4.0. HDInsight releases are made available to all regions over several days. The release date here indicates the first region release date. If you don't see the changes below, wait for the release to go live in your region over the next several days.
The OS versions for this release are:
- HDInsight 4.0: Ubuntu 18.04.5 LTS
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
Previously updated : 10/06/2021 Last updated : 01/03/2022
In this article, we'll cover some of the nuances of the RESTful interactions of
Azure API for FHIR supports create, conditional create, update, and conditional update as defined by the FHIR specification. One useful header in these scenarios is the [If-Match](https://www.hl7.org/fhir/http.html#concurrency) header. The `If-Match` header validates the version being updated before the update is made. If the `ETag` doesn't match the expected `ETag`, the service returns the error message *412 Precondition Failed*.
-## Delete
+## Delete and Conditional Delete
-[Delete](https://www.hl7.org/fhir/http.html#delete) defined by the FHIR specification requires that after deleting, subsequent non-version specific reads of a resource returns a 410 HTTP status code, and the resource is no longer found through searching. Additionally, Azure API for FHIR enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass a parameter settings `hardDelete` to true (`DELETE {server}/{resource}/{id}?hardDelete=true`). If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
+The FHIR service offers two delete types. There is [Delete](https://www.hl7.org/fhir/http.html#delete), which is also known as Hard + Soft Delete, and [Conditional Delete](https://www.hl7.org/fhir/http.html#3.1.0.7.1).
+
+### Delete (Hard + Soft Delete)
+
+Delete defined by the FHIR specification requires that after deleting a resource, subsequent non-version-specific reads of the resource return a 410 HTTP status code, so the resource is no longer found through searching. Additionally, the FHIR service enables you to fully delete the resource, including all history. To fully delete the resource, set the `hardDelete` parameter to true (`DELETE {{FHIR_URL}}/{resource}/{id}?hardDelete=true`). If you don't pass this parameter, or if you set `hardDelete` to false, the historic versions of the resource remain available.
> [!NOTE]
-> If you want to delete only the history, Azure API for FHIR supports a custom operations, `$purge-history`, which allows you to delete the history off of a resource.
+> If you only want to delete the history, the FHIR service supports a custom operation called `$purge-history`. This operation allows you to delete the history off of a resource.
+
+### Conditional Delete
+
+Conditional Delete allows you to pass search criteria to delete a resource. By default, Conditional Delete deletes one item at a time. You can also specify the `_count` parameter to delete up to 100 items at a time. Below are some examples of using Conditional Delete.
+
+To delete a single item using Conditional Delete, you must specify search criteria that returns a single item.
-## Conditional delete
+`DELETE https://{{FHIR_URL}}/Patient?identifier=1032704`
-In addition to delete, Azure API for FHIR supports conditional delete, which allows you to pass a search criteria to delete a resource. By default, the conditional delete will allow you to delete one item at a time. You can also specify the `_count` parameter to delete up to 100 items at a time. Below are some examples of using conditional delete.
+You can do the same search but include `hardDelete=true` to also delete all history.
-To delete a single item using conditional delete, you must specify search criteria that returns a single item.
+`DELETE https://{{FHIR_URL}}/Patient?identifier=1032704&hardDelete=true`
-DELETE `https://{{hostname}}/Patient?identifier=1032704`
+To delete multiple resources, include the `_count=100` parameter, which deletes up to 100 resources that match the search criteria.
-You can do the same search but include hardDelete=true to also delete all history.
+`DELETE https://{{FHIR_URL}}/Patient?identifier=1032704&_count=100`
+
+### Recovery of deleted files
-DELETE `https://{{hostname}}/Patient?identifier=1032704&hardDelete=true`
+If you don't use the hard delete parameter, then the record(s) in the FHIR service should still exist. The record(s) can be found by doing a history search on the resource and looking for the last version with data.
+
+If the ID of the resource that was deleted is known, use the following URL pattern:
-If you want to delete multiple resources, you can include `_count=100`, which will delete up to 100 resources that match the search criteria.
+`<FHIR_URL>/<resource-type>/<resource-id>/_history`
-DELETE `https://{{hostname}}/Patient?identifier=1032704&_count=100`
+For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/123456789/_history`
+
+If the ID of the resource is not known, do a history search on the entire resource type:
+
+`<FHIR_URL>/<resource-type>/_history`
+
+For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/_history`
+
+After you've found the record you want to restore, use the `PUT` operation to recreate the resource with the same ID, or use the `POST` operation to make a new resource with the same information.
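For instance, a minimal restore flow might look like the following sketch; the resource ID and payload are illustrative, and in practice you'd copy the body from the last history entry that still contains data.

```
GET https://{{FHIR_URL}}/Patient/123456789/_history

PUT https://{{FHIR_URL}}/Patient/123456789
Content-Type: application/fhir+json

{
  "resourceType": "Patient",
  "id": "123456789",
  "identifier": [ { "value": "1032704" } ]
}
```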
+
+> [!NOTE]
+> There is no time-based expiration for history/soft delete data. The only way to remove history/soft deleted data is with a hard delete or the purge history operation.
## Patch and Conditional Patch
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-faq.md
Previously updated : 12/28/2021 Last updated : 12/30/2021
For more information, see [Supported FHIR features](fhir-features-supported.md).
The FHIR service is our implementation of the FHIR specification that sits in the Azure Healthcare APIs, which allows you to have a FHIR service and a DICOM service within a single workspace. The Azure API for FHIR was our initial GA product and is still available as a stand-alone product. The main feature differences are:
-* The FHIR service has a limit of 4TB and is in public preview while the Azure API for FHIR supports more than 4TB and is GA.
+* The FHIR service has a limit of 4 TB and is in public preview while the Azure API for FHIR supports more than 4 TB and is GA.
* The FHIR service supports [transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
* The Azure API for FHIR has more platform features (such as private link, customer-managed keys, and logging) that are not yet available in the FHIR service in the Azure Healthcare APIs. More details will follow on these features by GA.
When you run the FHIR Server for Azure, you have direct access to the underlying
### In which regions is the FHIR service available?
-We are expanding the global footprints of the Healthcare APIs continually based on customer demands and is available in multiple geo-regions.
+The FHIR service is available in all regions where the Azure Healthcare APIs are available. You can see availability on the [Products by Region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir) page.
### Where can I see what is releasing into the FHIR service?
We support sorting by string and dateTime fields in the FHIR service. For more i
### Does the FHIR service support any terminology operations?
-No, Azure API for FHIR doesn't support terminology operations today.
+No, the FHIR service doesn't support terminology operations today.
+
+### What are the differences between delete types in the FHIR service?
+
+There are two basic delete types supported within the FHIR service: [Delete and Conditional Delete](././../fhir/fhir-rest-api-capabilities.md#delete-and-conditional-delete).
+* With Delete, you can choose to do a soft delete (most common type) and still be able to recover historic versions of your record.
+* With Conditional Delete, you can pass search criteria to delete a resource one item at a time or several at a time.
+* If you pass the `hardDelete` parameter with either Delete or Conditional Delete, all the records and history are deleted and unrecoverable.
## Using the FHIR service
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-rest-api-capabilities.md
Previously updated : 10/06/2021 Last updated : 01/03/2022
In this article, we'll cover some of the nuances of the RESTful interactions of the Azure Healthcare APIs FHIR service (hereafter called the FHIR service).

## Conditional create/update

The FHIR service supports create, conditional create, update, and conditional update as defined by the FHIR specification. One useful header in these scenarios is the [If-Match](https://www.hl7.org/fhir/http.html#concurrency) header. The `If-Match` header validates the version being updated before the update is made. If the `ETag` doesn't match the expected `ETag`, the service returns the error message *412 Precondition Failed*.
-## Delete
+## Delete and Conditional Delete
+
+The FHIR service offers two delete types. There is [Delete](https://www.hl7.org/fhir/http.html#delete), which is also known as Hard + Soft Delete, and [Conditional Delete](https://www.hl7.org/fhir/http.html#3.1.0.7.1).
+
+### Delete (Hard + Soft Delete)
-[Delete](https://www.hl7.org/fhir/http.html#delete) defined by the FHIR specification requires that after deleting, subsequent non-version specific reads of a resource returns a 410 HTTP status code, and the resource is no longer found through searching. Additionally, the FHIR service enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass a parameter settings `hardDelete` to true (`DELETE {server}/{resource}/{id}?hardDelete=true`). If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
+Delete defined by the FHIR specification requires that after deleting a resource, subsequent non-version-specific reads of the resource return a 410 HTTP status code, so the resource is no longer found through searching. Additionally, the FHIR service enables you to fully delete the resource, including all history. To fully delete the resource, set the `hardDelete` parameter to true (`DELETE {{FHIR_URL}}/{resource}/{id}?hardDelete=true`). If you don't pass this parameter, or if you set `hardDelete` to false, the historic versions of the resource remain available.
> [!NOTE]
-> If you want to delete only the history, the FHIR service supports a custom operations, `$purge-history`, which allows you to delete the history off of a resource.
+> If you only want to delete the history, the FHIR service supports a custom operation called `$purge-history`. This operation allows you to delete the history off of a resource.
+
+### Conditional Delete
+
+Conditional Delete allows you to pass search criteria to delete a resource. By default, Conditional Delete deletes one item at a time. You can also specify the `_count` parameter to delete up to 100 items at a time. Below are some examples of using Conditional Delete.
-## Conditional delete
+To delete a single item using Conditional Delete, you must specify search criteria that returns a single item.
-In addition to delete, the FHIR service supports conditional delete, which allows you to pass a search criteria to delete a resource. By default, the conditional delete allows you to delete one item at a time. You can also specify the `_count` parameter to delete up to 100 items at a time. Below are some examples of using conditional delete.
+`DELETE https://{{FHIR_URL}}/Patient?identifier=1032704`
-To delete a single item using conditional delete, you must specify search criteria that returns a single item.
+You can do the same search but include `hardDelete=true` to also delete all history.
-DELETE `https://{{hostname}}/Patient?identifier=1032704`
+`DELETE https://{{FHIR_URL}}/Patient?identifier=1032704&hardDelete=true`
-You can do the same search but include hardDelete=true to also delete all history.
+To delete multiple resources, include the `_count=100` parameter, which deletes up to 100 resources that match the search criteria.
-DELETE `https://{{hostname}}/Patient?identifier=1032704&hardDelete=true`
+`DELETE https://{{FHIR_URL}}/Patient?identifier=1032704&_count=100`
+
+### Recovery of deleted files
-If you want to delete multiple resources, you can include `_count=100`, which will delete up to 100 resources that match the search criteria.
+If you don't use the hard delete parameter, then the record(s) in the FHIR service should still exist. The record(s) can be found by doing a history search on the resource and looking for the last version with data.
+
+If the ID of the resource that was deleted is known, use the following URL pattern:
-DELETE `https://{{hostname}}/Patient?identifier=1032704&_count=100`
+`<FHIR_URL>/<resource-type>/<resource-id>/_history`
+
+For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/123456789/_history`
+
+If the ID of the resource is not known, do a history search on the entire resource type:
+
+`<FHIR_URL>/<resource-type>/_history`
+
+For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/_history`
+
+After you've found the record you want to restore, use the `PUT` operation to recreate the resource with the same ID, or use the `POST` operation to make a new resource with the same information.
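As a sketch of the alternative `POST` path, which creates a new resource with the same information but a server-assigned ID (payload illustrative):

```
POST https://{{FHIR_URL}}/Patient
Content-Type: application/fhir+json

{
  "resourceType": "Patient",
  "identifier": [ { "value": "1032704" } ]
}
```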
+
+> [!NOTE]
+> There is no time-based expiration for history/soft delete data. The only way to remove history/soft deleted data is with a hard delete or the purge history operation.
## Patch and Conditional Patch
iot-dps Iot Dps Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/iot-dps-mqtt-support.md
DPS is not a full-featured MQTT broker and does not support all the behaviors sp
All device communication with DPS must be secured using TLS/SSL. Therefore, DPS doesn't support non-secure connections over port 1883.
- > [!NOTE]
+ > [!NOTE]
> DPS does not currently support devices using TPM [attestation mechanism](./concepts-service.md#attestation-mechanism) over the MQTT protocol.

## Connecting to DPS
If a device cannot use the device SDKs, it can still connect to the public devic
* For the **ClientId** field, use **registrationId**.
-* For the **Username** field, use `{idScope}/registrations/{registration_id}/api-version=2019-03-31`, where `{idScope}` is the [idScope](./concepts-service.md#id-scope) of the DPS.
+* For the **Username** field, use `{idScope}/registrations/{registration_id}/api-version=2019-03-31`, where `{idScope}` is the [ID scope](./concepts-service.md#id-scope) of the DPS and `{registration_id}` is the [Registration ID](./concepts-service.md#registration-id) for your device.
+
+ > [!NOTE]
+ > If you use X.509 certificate authentication, the registration ID is provided by the subject common name (CN) of your device leaf (end-entity) certificate. `{registration_id}` in the **Username** field must match the common name.
* For the **Password** field, use a SAS token. The format of the SAS token is the same as for both the HTTPS and AMQP protocols:
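As a sketch, a DPS SAS token takes the following shape, where `{idScope}` and `{registration_id}` are the values described above, `{signature}` is the HMAC-SHA256 signature string, and `{expiry}` is the Unix expiry time; all values below are placeholders:

```
SharedAccessSignature sr={idScope}%2Fregistrations%2F{registration_id}&sig={signature}&se={expiry}&skn=registration
```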
iot-edge How To Create Test Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-create-test-certificates.md
description: Create test certificates and learn how to install them on an Azure
Previously updated : 10/25/2021 Last updated : 01/03/2022
Before proceeding with the steps in this section, follow the steps in the [Set u
This command creates several certificate and key files. The following certificate and key pair needs to be copied over to an IoT Edge device and referenced in the config file:
- * `<WRKDIR>\certs\iot-edge-device-<CA cert name>-full-chain.cert.pem`
- * `<WRKDIR>\private\iot-edge-device-<CA cert name>.key.pem`
+ * `<WRKDIR>\certs\iot-edge-device-ca-<CA cert name>-full-chain.cert.pem`
+ * `<WRKDIR>\private\iot-edge-device-ca-<CA cert name>.key.pem`
The name passed to the **New-CACertsEdgeDevice** command should not be the same as the hostname parameter in the config file, or the device's ID in IoT Hub.
The name passed to the **New-CACertsEdgeDevice** command should not be the same
This script command creates several certificate and key files. The following certificate and key pair needs to be copied over to an IoT Edge device and referenced in the config file:
- * `<WRKDIR>/certs/iot-edge-device-<CA cert name>-full-chain.cert.pem`
- * `<WRKDIR>/private/iot-edge-device-<CA cert name>.key.pem`
+ * `<WRKDIR>/certs/iot-edge-device-ca-<CA cert name>-full-chain.cert.pem`
+ * `<WRKDIR>/private/iot-edge-device-ca-<CA cert name>.key.pem`
The name passed to the **create_edge_device_ca_certificate** command should not be the same as the hostname parameter in the config file, or the device's ID in IoT Hub.
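As a hedged sketch of how the copied pair might then be referenced on the device (an IoT Edge 1.2-style `config.toml` fragment; the paths and cert name below are illustrative assumptions, not output of the scripts):

```toml
# Edge CA certificate and private key copied from <WRKDIR> (illustrative paths)
[edge_ca]
cert = "file:///var/secrets/iot-edge-device-ca-mycert-full-chain.cert.pem"
pk = "file:///var/secrets/iot-edge-device-ca-mycert.key.pem"
```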
iot-edge How To Create Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-create-transparent-gateway.md
If you don't have your own certificate authority and want to use demo certificat
2. Create a root CA certificate. At the end of those instructions, you'll have a root CA certificate file: * `<path>/certs/azure-iot-test-only.root.ca.cert.pem`. 3. Create IoT Edge device CA certificates. At the end of those instructions, you'll have a device CA certificate and its private key:
- * `<path>/certs/iot-edge-device-<cert name>-full-chain.cert.pem` and
- * `<path>/private/iot-edge-device-<cert name>.key.pem`
+ * `<path>/certs/iot-edge-device-ca-<cert name>-full-chain.cert.pem` and
+ * `<path>/private/iot-edge-device-ca-<cert name>.key.pem`
If you created the certificates on a different machine, copy them over to your IoT Edge device then proceed with the next steps.
For a gateway scenario to work, at least one of the IoT Edge hub's supported pro
## Next steps
-Now that you have an IoT Edge device set up as a transparent gateway, you need to configure your downstream devices to trust the gateway and send messages to it. Continue on to [Authenticate a downstream device to Azure IoT Hub](how-to-authenticate-downstream-device.md) for the next steps in setting up your transparent gateway scenario.
+Now that you have an IoT Edge device set up as a transparent gateway, you need to configure your downstream devices to trust the gateway and send messages to it. Continue on to [Authenticate a downstream device to Azure IoT Hub](how-to-authenticate-downstream-device.md) for the next steps in setting up your transparent gateway scenario.
iot-hub-device-update Components Enumerator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/components-enumerator.md
- Title: 'Register components with Device Update: Contoso Virtual Vacuum component enumerator | Microsoft Docs'
-description: Follow a Contoso Virtual Vacuum example to implement your own component enumerator by using proxy update.
-- Previously updated : 12/3/2021----
-# Register components with Device Update: Contoso Virtual Vacuum component enumerator
-
-This article shows an example implementation of the Contoso Virtual Vacuum component enumerator. You can reference this example to implement a custom component enumerator for your Internet of Things (IoT) devices. A *component* is an identity beneath the device level that has a composition relationship with the host device.
-
-## What is Contoso Virtual Vacuum?
-
-Contoso Virtual Vacuum is a virtual IoT device that we use to demonstrate the *proxy update* feature.
-
-Proxy update enables updating multiple components on the same IoT device or multiple sensors connected to the IoT device with a single over-the-air deployment. Proxy update supports an installation order for updating components. It also supports multiple-step updating with pre-installation, installation, and post-installation capabilities.
-
-Use cases where proxy updates are applicable include:
-- Targeting specific update files to partitions on the device.
-- Targeting specific update files to apps or components on the device.
-- Targeting specific update files to sensors connected to IoT devices over a network protocol (for example, USB or CAN bus).
-
-The Device Update Agent runs on the host device. It can send each update to a specific component or to a group of components of the same hardware class (that is, requiring the same software or firmware update).
-
-## Virtual Vacuum components
-
-For this demonstration, Contoso Virtual Vacuum consists of five logical components:
-- Host firmware
-- Host boot file system
-- Host root file system
-- Three motors (left wheel, right wheel, and vacuum)
-- Two cameras (front and rear)
-
-We used the following directory structure to simulate the components:
-
-```sh
-/usr/local/contoso-devices/vacuum-1/hostfw
-/usr/local/contoso-devices/vacuum-1/bootfs
-/usr/local/contoso-devices/vacuum-1/rootfs
-/usr/local/contoso-devices/vacuum-1/motors/0 /* left motor */
-/usr/local/contoso-devices/vacuum-1/motors/1 /* right motor */
-/usr/local/contoso-devices/vacuum-1/motors/2 /* vacuum motor */
-/usr/local/contoso-devices/vacuum-1/cameras/0 /* front camera */
-/usr/local/contoso-devices/vacuum-1/cameras/1 /* rear camera */
-```
-
-Each component's directory contains a JSON file that stores a mock software version number of each component. Example JSON files are *firmware.json* and *diskimage.json*.
-
-> [!NOTE]
-> For this demo, to update the components' firmware, we'll copy *firmware.json* or *diskimage.json* (update payload) to the targeted components' directory.
-
-Here's an example *firmware.json* file:
-
-```json
-{
- "version": "0.5",
- "description": "This component is generated for testing purposes."
-}
-```
-
-> [!NOTE]
-> Contoso Virtual Vacuum contains software or firmware versions for the purpose of demonstrating proxy update. It doesn't provide any other functionality.
-
-## What is a component enumerator?
-
-A component enumerator is a Device Update Agent extension that provides information about every component that you need for an over-the-air update via a host device's Azure IoT Hub connection.
-
-The Device Update Agent is device and component agnostic. By itself, the agent doesn't know anything about components on (or connected to) a host device at the time of the update.
-
-To enable proxy updates, device builders must identify all updateable components on the device and assign a unique name to each component. Also, a group name can be assigned to components of the same hardware class, so that the same update can be installed onto all components in the same group. The update content handler can then install and apply the update to the correct components.
--
-Here are the responsibilities of each part of the proxy update flow:
--- **Device builder**
- - Design and build the device.
- - Integrate the Device Update Agent and its dependencies.
- - Implement a device-specific component enumerator extension and register with the Device Update Agent.
-
- The component enumerator uses the information from a component inventory or a configuration file to augment static component data (Device Update required) with dynamic data (for example, firmware version, connection status, and hardware identity).
- - Create a proxy update that contains one or more child updates that target one or more components on (or connected to) the device.
- - Send the update to the solution operator.
-- **Solution operator**
- - Import the update (and manifest) to the Device Update service.
- - Deploy the update to a group of devices.
-- **Device Update Agent**
- - Get update information from Azure IoT Hub (via device twin or module twin).
- - Invoke a *steps handler* to process the proxy update intended for one or more components on the device.
-
- This example has two updates: `host-fw-1.1` and `motors-fw-1.1`. For each child update, the parent steps handler invokes a child steps handler to enumerate all components that match the `Compatibilities` properties specified in the child update's manifest file. Next, the handler downloads, installs, and applies the child update to all targeted components.
-
- To get the matching components, the child update calls a `SelectComponents` API provided by the component enumerator. If there are no matching components, the child update is skipped.
- - Collect all update results from parent and child updates, and report those results to Azure IoT Hub.
-- **Child steps handler**
- - Iterate through a list of component instances that are compatible with the child update content.
-
-In production, device builders can use [existing handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or implement a custom handler that invokes any installer needed for an over-the-air update.
-
-## Implement a component enumerator for the Device Update Agent (C language)
-
-### Requirements
-
-Implement all APIs declared in component_enumerator_extension.hpp:
-
-| Function | Arguments | Returns |
-||||
-|`char* GetAllComponents()`|None|A JSON string that contains an array of *all* `ComponentInfo` values. For more information, see [Example return values](#example-return-values).|
-|`char* SelectComponents(char* selector)`|A JSON string that contains one or more name/value pairs used for selecting update target components| A JSON string that contains an array of `ComponentInfo` values. For more information, see [Example return values](#example-return-values).|
-|`void FreeComponentsDataString(char* string)`|A pointer to string buffer previously returned by `GetAllComponents` or `SelectComponents` functions|None|
-
-### ComponentInfo
-
-The `ComponentInfo` JSON string must include the following properties:
-
-| Name | Type | Description |
-||||
-|`id`| string | A component's unique identity (device scope). Examples include hardware serial number, disk partition ID, and unique file path of the component.|
-|`name`| string| A component's logical name. This is the name that a device builder assigns to a component that's available in every device of the same `device` class.<br/><br/>For example, every Contoso Virtual Vacuum device contains a motor that drives a left wheel. Contoso assigned *left motor* as a common (logical) name for this motor to easily refer to this component, instead of hardware ID, which can be globally unique.|
-|`group`|string|A group that this component belongs to.<br/><br/>For example, all motors could belong to a *motors* group.|
-|`manufacturer`|string|For a physical hardware component, this is a manufacturer or vendor name.<br/><br/>For a logical component, such as a disk partition or directory, it can be any device builder's defined value.|
-|`model`|string|For a physical hardware component, this is a model name.<br/><br/>For a logical component, such as a disk partition or directory, this can be any device builder's defined value.|
-|`properties`|object| A JSON object that contains any optional device-specific properties.|
-
-Here's an example of `ComponentInfo` code:
-
-```json
-{
- "id": "contoso-motor-serial-00000",
- "name": "left-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/0",
- "firmwareDataFile": "firmware.json",
- "status": "connected",
- "version" : "motor-fw-1.0"
- }
-}
-```
-
-### Example return values
-
-Following is a JSON document returned from the `GetAllComponents` function. It's based on the example implementation of the Contoso component enumerator.
-
-```json
-{
- "components": [
- {
- "id": "hostfw",
- "name": "hostfw",
- "group": "firmware",
- "manufacturer": "contoso",
- "model": "virtual-firmware",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/hostfw",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "host-fw-1.0"
- }
- },
- {
- "id": "bootfs",
- "name": "bootfs",
- "group": "boot-image",
- "manufacturer": "contoso",
- "model": "virtual-disk",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/bootfs",
- "firmwareDataFile": "diskimage.json",
- "status": "ok",
- "version" : "boot-fs-1.0"
- }
- },
- {
- "id": "rootfs",
- "name": "rootfs",
- "group": "os-image",
- "manufacturer": "contoso",
- "model": "virtual-os",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/rootfs",
- "firmwareDataFile": "diskimage.json",
- "status": "ok",
- "version" : "root-fs-1.0"
- }
- },
- {
- "id": "contoso-motor-serial-00000",
- "name": "left-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/0",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "motor-fw-1.0"
- }
- },
- {
- "id": "contoso-motor-serial-00001",
- "name": "right-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/1",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "motor-fw-1.0"
- }
- },
- {
- "id": "contoso-motor-serial-00002",
- "name": "vacuum-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/2",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "motor-fw-1.0"
- }
- },
- {
- "id": "contoso-camera-serial-00000",
- "name": "front-camera",
- "group": "cameras",
- "manufacturer": "contoso",
- "model": "virtual-camera",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/camera\/0",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "camera-fw-1.0"
- }
- },
- {
- "id": "contoso-camera-serial-00001",
- "name": "rear-camera",
- "group": "cameras",
- "manufacturer": "contoso",
- "model": "virtual-camera",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/camera\/1",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "camera-fw-1.0"
- }
- }
- ]
-}
-```
-
-The following JSON document is returned from the `SelectComponents` function. It's based on the example implementation of the Contoso component enumerator.
-
-Here's the input parameter for selecting the *motors* component group:
-
-```json
-{
- "group" : "motors"
-}
-```
-
-Here's the output of the parameter. All components belong to the *motors* group.
-
-```json
-{
- "components": [
- {
- "id": "contoso-motor-serial-00000",
- "name": "left-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/0",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "motor-fw-1.0"
- }
- },
- {
- "id": "contoso-motor-serial-00001",
- "name": "right-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/1",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "motor-fw-1.0"
- }
- },
- {
- "id": "contoso-motor-serial-00002",
- "name": "vacuum-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/2",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "motor-fw-1.0"
- }
- }
- ]
-}
-```
-
-Here's the input parameter for selecting a single component named *hostfw*:
-
-```json
-{
- "name" : "hostfw"
-}
-```
-
-Here's the parameter's output for the *hostfw* component:
-
-```json
-{
- "components": [
- {
- "id": "hostfw",
- "name": "hostfw",
- "group": "firmware",
- "manufacturer": "contoso",
- "model": "virtual-firmware",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/hostfw",
- "firmwareDataFile": "firmware.json",
- "status": "ok",
- "version" : "host-fw-1.0"
- }
- }
- ]
-}
-```
-
-> [!NOTE]
-> The preceding example demonstrated that, if needed, it's possible to send a newer update to any instance of a component that's selected by `name` property. For example, deploy the `motor-fw-2.0` update to *vacuum-motor* while continuing to use `motor-fw-1.0` on *left-motor* and *right-motor*.
-
-## Inventory file
-
-The example implementation shown earlier for the Contoso component enumerator will read the device-specific components' information from the *component-inventory.json* file. Note that this example implementation is only for demonstration purposes.
-
-In a production scenario, some properties should be retrieved directly from the actual components. These properties include `id`, `manufacturer`, and `model`.
-
-The device builder defines the `name` and `group` properties. These values should never change after they're defined. The `name` property must be unique within the device.
-
-#### Example component-inventory.json file
-
-> [!NOTE]
-> The content in this file looks almost the same as the returned value from the `GetAllComponents` function. However, `ComponentInfo` in this file doesn't contain `version` and `status` properties. The component enumerator will populate these properties at runtime.
-
-For example, for *hostfw*, the value of the property `properties.version` will be populated from the specified (mock) `firmwareDataFile` value (*/usr/local/contoso-devices/vacuum-1/hostfw/firmware.json*).
-
-```json
-{
- "components": [
- {
- "id": "hostfw",
- "name": "hostfw",
- "group": "firmware",
- "manufacturer": "contoso",
- "model": "virtual-firmware",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/hostfw",
- "firmwareDataFile": "firmware.json",
- }
- },
- {
- "id": "bootfs",
- "name": "bootfs",
- "group": "boot-image",
- "manufacturer": "contoso",
- "model": "virtual-disk",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/bootfs",
- "firmwareDataFile": "diskimage.json",
- }
- },
- {
- "id": "rootfs",
- "name": "rootfs",
- "group": "os-image",
- "manufacturer": "contoso",
- "model": "virtual-os",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/rootfs",
- "firmwareDataFile": "diskimage.json",
- }
- },
- {
- "id": "contoso-motor-serial-00000",
- "name": "left-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/0",
- "firmwareDataFile": "firmware.json",
- }
- },
- {
- "id": "contoso-motor-serial-00001",
- "name": "right-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/1",
- "firmwareDataFile": "firmware.json",
- }
- },
- {
- "id": "contoso-motor-serial-00002",
- "name": "vacuum-motor",
- "group": "motors",
- "manufacturer": "contoso",
- "model": "virtual-motor",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/2",
- "firmwareDataFile": "firmware.json",
- }
- },
- {
- "id": "contoso-camera-serial-00000",
- "name": "front-camera",
- "group": "cameras",
- "manufacturer": "contoso",
- "model": "virtual-camera",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/camera\/0",
- "firmwareDataFile": "firmware.json",
- }
- },
- {
- "id": "contoso-camera-serial-00001",
- "name": "rear-camera",
- "group": "cameras",
- "manufacturer": "contoso",
- "model": "virtual-camera",
- "properties": {
- "path": "\/usr\/local\/contoso-devices\/vacuum-1\/camera\/1",
- "firmwareDataFile": "firmware.json",
- }
- }
- ]
-}
-```
-
-## Next steps
-
-This example is written in C++. You can choose to use C if you prefer.
iot-hub-device-update Device Update Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-agent-overview.md
allowing for messaging to flow between the Device Update Agent and Device Update
The Interface layer is made up of the [ADU Core Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/adu_core_interface) and the [Device Information Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/device_info_interface).
-These interfaces rely on a configuration file for the device specific values that need to be reported to the Device Update services. [Learn More](device-update-configuration-file.md) about the configuration file.
+These interfaces rely on a configuration file for default values. The default values include aduc_manufacturer and aduc_model for the AzureDeviceUpdateCore interface, and model and manufacturer for the DeviceInformation interface. [Learn more](device-update-configuration-file.md) about the configuration file.
### ADU Core Interface
The Device Information Interface is used to implement the `Azure IoT PnP DeviceI
## The Platform Layer
-The Linux Platform Layer integrates with [Delivery Optimization](https://github.com/microsoft/do-client) for
+There are two implementations of the Platform Layer. The Simulator Platform
+Layer has a trivial implementation of the update actions and is used for quickly
+testing and evaluating Device Update for IoT Hub services and setup. When the Device Update Agent is built with
+the Simulator Platform Layer, we refer to it as the Device Update Simulator Agent or just
+simulator. [Learn More](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-run-agent.md) about how to use the simulator
+agent. The Linux Platform Layer integrates with [Delivery Optimization](https://github.com/microsoft/do-client) for
downloads and is used in our Raspberry Pi reference image, and all clients that run on Linux systems.
+### Simulator Platform Layer
+
+The Simulator Platform Layer implementation can be found in the
+`src/platform_layers/simulator_platform_layer` and can be used for
+testing and evaluating Device Update for IoT Hub. Many of the actions in the
+"simulator" implementation are mocked to reduce physical changes to experiment with Device Update for IoT Hub. An end to end
+"simulated" update can be performed using this Platform Layer. [Learn
+More](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-run-agent.md) about running the simulator agent.
### Linux Platform Layer

The Linux Platform Layer implementation can be found in the
-`src/platform_layers/linux_platform_layer` and it integrates with the [Delivery Optimization Client](https://github.com/microsoft/do-client/releases) for downloads.
+`src/platform_layers/linux_platform_layer` and it integrates with the [Delivery Optimization Client](https://github.com/microsoft/do-client/releases) for downloads and is used in our Raspberry Pi reference image, and all clients that run on Linux systems.
This layer can integrate with different Update Handlers to implement the
-installers. For instance, the `SWUpdate` update handler, 'Apt' update handler, and 'Script' update handler.
+installer. For
+instance, the `SWUpdate` Update Handler invokes a shell script to call into the
+`SWUpdate` executable to perform an update.
## Update Handlers

Update Handlers are components that handle content or installer-specific parts
-of the update. You can either use [existing Device Update handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or implement a custom Content Handler that invokes any installer needed for your use case.
+of the update. Update Handler implementations are in `src/content_handlers`.
-## Self-upgrade Device update agent
+### Simulator Update Handler
-We have added many new capabilities to the Device Update agent in the latest Public Preview Refresh agent (version 0.8.0).
+The Simulator Update Handler is used by the Simulator Platform Layer and can
+be used with the Linux Platform Layer to fake interactions with a Content
+Handler. The Simulator Update Handler implements the Update Handler APIs with
+mostly no-ops. The implementation of the Simulator Update Handler can be found below:
+* [Image update simulator](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/swupdate_handler/inc/aduc/swupdate_simulator_handler.hpp)
+* [Package update apt simulator](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/apt_handler/inc/aduc/apt_simulator_handler.hpp)
-If you are using the Device Update agent versions 0.6.0 or 0.7.0 please upgrade to the latest agent version 0.8.0.
+> [!NOTE]
+> The InstalledCriteria field in the AzureDeviceUpdateCore PnP interface should be the SHA-256 hash of the content. This is the same hash that is present in the [Import Manifest
+Object](import-update.md#create-a-device-update-import-manifest). [Learn More](device-update-plug-and-play.md) about `installedCriteria` and the `AzureDeviceUpdateCore` interface.
-You can check installed version of the Device Update agent and the Delivery Optimization agent in the Device Properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). [Learn more about device properties under ADU Core Interface](device-update-plug-and-play.md#device-properties).
+### `SWUpdate` Update Handler
-## Next Steps
-[Understand Device Update agent configuration file](device-update-configuration-file.md)
+The `SWUpdate` Update Handler integrates with the `SWUpdate` command-line
+executable and other shell commands to implement A/B updates specifically for
+the Raspberry Pi reference image. Find the latest Raspberry Pi reference image [here](https://github.com/Azure/iot-hub-device-update/releases). The `SWUpdate` Update Handler is implemented in [src/content_handlers/swupdate_content_handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/swupdate_handler).
-You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
+### APT Update Handler
-- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build you own images for other architecture as needed.
-
-- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
-
-- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
-
-- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
+The APT Update Handler processes an APT-specific Update Manifest and invokes APT to
+install or update the specified Debian package(s).
-- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
+## Self-update Device Update agent
+
+The Device Update agent and its dependencies can be updated through the Device Update for IoT Hub pipeline. If you are using an image-based update, include the latest Device Update agent in your new image. If you are using a package-based update, include the Device Update agent and its desired version in the APT manifest like any other package, as sketched below. [Learn more](device-update-apt-manifest.md) about the APT manifest. You can check the installed version of the Device Update agent and the Delivery Optimization agent in the Device Properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). [Learn more about device properties under ADU Core Interface](device-update-plug-and-play.md#device-properties).
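A hedged sketch of such an APT manifest follows; the manifest name, agent package name, and version values are illustrative assumptions rather than required values.

```json
{
  "name": "sample-du-agent-update",
  "version": "1.0.0",
  "packages": [
    {
      "name": "deviceupdate-agent",
      "version": "0.8.0"
    }
  ]
}
```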
+
+## Next steps
+[Understand Device Update agent configuration file](device-update-configuration-file.md)
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-agent-provisioning.md
The Device Update Module agent can run alongside other system processes and [IoT Edge modules](../iot-edge/iot-edge-modules.md) that connect to your IoT Hub as part of the same logical device. This section describes how to provision the Device Update agent as a module identity.
-## Changes to Device Update agent at Public Preview Refresh
-
-We have added many new capabilities to the Device Update agent in the latest Public Preview Refresh agent (version 0.8.0).
-
-If you are using the Device Update agent versions 0.6.0 or 0.7.0 please upgrade to the latest agent version 0.8.0.
-
-You can check installed version of the Device Update agent and the Delivery Optimization agent in the Device Properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). [Learn more about device properties under ADU Core Interface](device-update-plug-and-play.md#device-properties).
## Module identity vs device identity
If you are migrating from a device level agent to adding the agent as a Module i
## Support for Device Update
-The following IoT device over the air update types are currently supported with Device Update:
+The following IoT device types are currently supported with Device Update:
* Linux devices (IoT Edge and Non-IoT Edge devices):
- * [Image )
- * [Package update](device-update-ubuntu-agent.md)
- * [Proxy update for downstream devices](device-update-howto-proxy-updates.md)
-
+ * Image A/B update:
+ - Yocto - ARM64 (reference image), extensible via open source to [build your own images](device-update-agent-provisioning.md#how-to-build-and-run-device-update-agent) for other architectures as needed.
+ - Ubuntu 18.04 simulator
+
+ * Package Agent supported builds for the following platforms/architectures:
+ - Ubuntu Server 18.04 x64 Package Agent
+ - Debian 9
+
* Constrained devices:
  * AzureRTOS Device Update agent samples: [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
Follow these instructions to provision the Device Update agent on [IoT Edge enab
1. Install the Device Update image update agent.
- We provide sample images in the [Assets here](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+ We provide sample images in the [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
1. Install the Device Update package update agent.
Follow these instructions to provision the Device Update agent on your IoT Linux
1. Install the latest [IoT Identity Service](https://github.com/Azure/iot-identity-service/blob/main/docs-dev/packaging.md#installing-and-configuring-the-package) on your IoT device using this command:

    > [!Note]
    > The IoT Identity service registers module identities with IoT Hub by using symmetric keys currently.

    ```shell
    sudo apt-get install aziot-identity-service
    ```
Follow these instructions to provision the Device Update agent on your IoT Linux
sudo aziotctl config apply ```
-1. Finally install the Device Update agent. We provide sample images in [Assets here](https://github.com/Azure/iot-hub-device-update/releases), the swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. See example of [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+1. Finally install the Device Update agent. We provide sample images in [Artifacts](https://github.com/Azure/iot-hub-device-update/releases). The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. See example of [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
1. You are now ready to start the Device Update agent on your IoT device.
Follow these instructions to provision the Device Update agent on your IoT Linux
The Device Update agent can also be configured without the IoT Identity service for testing or on constrained devices. Follow the steps below to provision the Device Update agent using a connection string (from the module or device).
-1. We provide sample images in the [Assets here](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+1. We provide sample images in the [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
1. Log onto the machine or IoT Edge device/IoT device.
The Device Update agent can also be configured without the IoT Identity service
1. Enter the below in the terminal window:
- - [For Ubuntu agent](device-update-ubuntu-agent.md) use: sudo nano /etc/adu/du-config.json
- - [For Yocto reference image](device-update-raspberry-pi.md) use: sudo nano /adu/du-config.json
+ - [For Package updates](device-update-ubuntu-agent.md) use: sudo nano /etc/adu/adu-conf.txt
+ - [For Image updates](device-update-raspberry-pi.md) use: sudo nano /adu/adu-conf.txt
- 1. Copy the primary connection string
+ 1. You should see a window open with some text in it. The first time you provision the Device Update agent on the IoT device, delete the entire string following 'connection_string='; it is just placeholder text.
- - If Device Update agent is configured as a module copy the module's primary connection string.
- - Otherwise copy the device's primary connection string.
-
- 3. Enter the copied primary connection string to the 'connectionData' field's value in the du-config.json file. Then save and close the file.
-
+ 1. In the terminal, replace \<your-connection-string\> with the connection string of the device for your instance of Device Update agent. Select Enter and then **Save.** It should look like this example:
+
+ ```text
+ connection_string=<ADD CONNECTION STRING HERE>
+ ```
+
+ > [!Important]
+ > Do not add quotes around the connection string.
+
1. Now you are ready to start the Device Update agent on your IoT device.

## How to start the Device Update Agent
If you run into issues, review the Device Update for IoT Hub [Troubleshooting Gu
## Next steps
-You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
+You can use the following pre-built images and binaries for a simple demonstration of Device Update for IoT Hub:
- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build your own images for other architectures as needed.
-
-- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
-
-- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
-
+ - [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
+- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
+ - [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Device Update Configuration File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-configuration-file.md
Title: Understand Device Update for Azure IoT Hub Configuration File| Microsoft
description: Understand Device Update for Azure IoT Hub Configuration File. Previously updated : 12/13/2021 Last updated : 2/12/2021 # Device Update for IoT Hub Configuration File
-The "du-config.json" is a file that contains the below configurations for the Device Update agent. The Device Update Agent will then read these values and report them to the Device Update Service.
+The "adu-conf.txt" is an optional file that can be created to manage the following configurations.
* AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["manufacturer"]
* AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["model"]
* DeviceInformation.manufacturer
* DeviceInformation.model
-* connectionData
-* connectionType
-
+* Device Connection String (if it is not known by the Device Update Agent).
+
+## Purpose
+The Device Update Agent will first try to get the `manufacturer` and `model` values from the device to use for the [Interface Layer](device-update-agent-overview.md#the-interface-layer). If that fails, the Device Update Agent will next look for the "adu-conf.txt" file and use the values from there. If neither attempt is successful, the Device Update Agent will use [default](https://github.com/Azure/iot-hub-device-update/blob/main/CMakeLists.txt) values.
+
+Learn more about [ADU Core Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/adu_core_interface) and [Device Information Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/device_info_interface).
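To see which values the agent would pick up from the file, a quick check with standard shell tools (a sketch; it assumes the `key = value` layout shown in the example file later in this article):

```sh
# Print the manufacturer/model fields the agent would read from adu-conf.txt.
grep -E '^(aduc_manufacturer|aduc_model|manufacturer|model)' /adu/adu-conf.txt
```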
+ ## File location
-When installing Debian agent on an IoT Device with a Linux OS, modify the '/etc/adu/du-config.json' file to update values. For a Yocto build system, in the partition or disk called 'adu' create a json file called '/adu/du-config.json'.
+Within the Linux system, in the partition or disk named `adu`, create a text file named "adu-conf.txt" with the following fields.
## List of fields

|Name|Description|
|--|--|
-|connectionType|Possible values are "string", when connecting the device to IoT Hub manually for testing purposes. For production scenarios, use the value "AIS" when using the IoT Identity Service to connect the device to IoT Hub. See [understand IoT Identity Service configurations](https://azure.github.io/iot-identity-service/configuration.html)|
-|connectionData|If connectionType = "string", add your IoT device's device or module connection string here. If connectionType = "AIS", add the value that you set up as 'principal' in the [IoT Identity Service's TOML file](https://azure.github.io/iot-identity-service/configuration.html). For example, you can name the Device Update module as "iotHubDeviceUpdate" for the 'connectionData' and 'principal'.|
+|connection_string|Pre-provisioned connection string the device can use to connect to the IoT Hub. Note: Not required if you are provisioning Device Update Agent through the [Azure IoT Identity Service](https://azure.github.io/iot-identity-service/)|
|aduc_manufacturer|Reported by the `AzureDeviceUpdateCore:4.ClientMetadata:4` interface to classify the device for targeting the update deployment.|
|aduc_model|Reported by the `AzureDeviceUpdateCore:4.ClientMetadata:4` interface to classify the device for targeting the update deployment.|
|manufacturer|Reported by the Device Update Agent as part of the `DeviceInformation` interface.|
|model|Reported by the Device Update Agent as part of the `DeviceInformation` interface.|
-|SchemaVersion|The schema version that maps the current configuration file format version|
-|aduShellTrustedUsers|The list of users that can launch the 'adu-shell' program. Note, 'adu-shell' is a "broker" program that performs various update actions as 'root'. The Device Update default content update handlers invoke 'adu-shell' to do tasks that require "super user" privilege. Examples of tasks that require this privilege are "apt-get install" or executing a privileged script.|
-## Example "du-conf.json" file contents
+## Example "adu-conf.txt" file contents
```markdown
-{
- "schemaVersion": "1.1",
- "aduShellTrustedUsers": [
- "adu",
- "do"
- ],
- "manufacturer": <Place your device info manufacturer here>,
- "model": <Place your device info model here>,
- "agents": [
- {
- "name": <Place your agent name here>,
- "runas": "adu",
- "connectionSource": {
- "connectionType": "string", //or ΓÇ£AISΓÇ¥
- "connectionData": <Place your Azure IoT device connection string here>
- },
- "manufacturer": <Place your device property manufacturer here>,
- "model": <Place your device property model here>
- }
- ]
-}
-
+connection_string = `HostName=<yourIoTHubName>;DeviceId=<yourDeviceId>;SharedAccessKey=<yourSharedAccessKey>`
+aduc_manufacturer = <value to send through `AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["manufacturer"]`>
+aduc_model = <value to send through `AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["model"]`>
+manufacturer = <value to send through `DeviceInformation.manufacturer`>
+model = <value to send through `DeviceInformation.model`>
```
iot-hub-device-update Device Update Howto Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-howto-proxy-updates.md
- Title: Complete a proxy update by using Device Update for Azure IoT Hub | Microsoft Docs
-description: Get started with Device Update for Azure IoT Hub by using the Device Update binary agent for proxy updates.
-- Previously updated : 11/12/2021----
-# Tutorial: Complete a proxy update by using Device Update for Azure IoT Hub
-
-If you haven't already done so, review [Using proxy updates with Device Update for Azure IoT Hub](device-update-proxy-updates.md).
-
-## Set up a test device or virtual machine
-
-This tutorial uses an Ubuntu Server 18.04 LTS virtual machine (VM) as an example.
-
-### Install the Device Update Agent and dependencies
-
-1. Register *packages.microsoft.com* in an APT package repository:
-
- ```sh
- curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ~/microsoft-prod.list
-
- sudo cp ~/microsoft-prod.list /etc/apt/sources.list.d/
-
- curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > ~/microsoft.gpg
-
- sudo cp ~/microsoft.gpg /etc/apt/trusted.gpg.d/
-
- sudo apt-get update
- ```
-
-2. Install the **deviceupdate-agent** on the IoT device. Download the latest Device Update Debian file from *packages.microsoft.com*:
-
- ```sh
- sudo apt-get install deviceupdate-agent
- ```
-
-3. Copy the downloaded Debian file to the test VM. If you're using PowerShell on your computer, run the following shell command:
-
- ```sh
- scp <path to the .deb file> tester@<your vm's ip address>:~
- ```
-
- Then remote into your VM and run the following shell command in the *home* folder:
-
- ```sh
- #go to home folder
- cd ~
- #install latest Device Update agent
- sudo apt-get install ./<debian file name from the previous step>
- ```
-
-4. Go to Azure IoT Hub and copy the primary connection string for your IoT device's Device Update module. Replace any default value for the `connectionData` field with the primary connection string in the *du-config.json* file:
-
- ```sh
- sudo nano /etc/adu/du-config.json
- ```
-
- > [!NOTE]
- > You can copy the primary connection string for the device instead, but we recommend that you use the string for the Device Update module. For information about setting up the module, see [Device Update Agent provisioning](device-update-agent-provisioning.md).
-
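Before restarting the agent, you can confirm that the edit took effect. A minimal sketch with standard tools (jq, if installed, also validates the JSON):

```sh
# Show the connectionData line you just edited, then sanity-check the file's syntax.
grep -n '"connectionData"' /etc/adu/du-config.json
jq . /etc/adu/du-config.json > /dev/null && echo "du-config.json is valid JSON"
```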
-5. Ensure that */etc/adu/du-diagnostics-config.json* contains the correct settings for log collection. For example:
-
- ```sh
- {
- "logComponents":[
- {
- "componentName":"adu",
- "logPath":"/var/log/adu/"
- },
- {
- "componentName":"do",
- "logPath":"/var/log/deliveryoptimization-agent/"
- }
- ],
- "maxKilobytesToUploadPerLogPath":50
- }
- ```
-
-6. Restart the Device Update agent:
-
- ```sh
- sudo systemctl restart adu-agent
- ```
-
-### Set up mock components
-
-For testing and demonstration purposes, we'll create the following mock components on the device:
-- Three motors
-- Two cameras
-- "hostfs"
-- "rootfs"
-
-> [!IMPORTANT]
-> The preceding component configuration is based on the implementation of an example component enumerator extension called *libcontoso-component-enumerator.so*. It also requires this mock component inventory data file: */usr/local/contoso-devices/components-inventory.json*.
-
-1. Copy the demo folder to your home directory on the test VM. Then, run the following command to copy required files to the right locations:
-
- ```markup
- `~/demo/tools/reset-demo-components.sh`
- ```
-
- The `reset-demo-components.sh` command takes the following steps on your behalf:
-
- 1. It copies components-inventory.json and adds it to the */usr/local/contoso-devices* folder.
-
- 2. It copies the Contoso component enumerator extension (*libcontoso-component-enumerator.so*) from the [Assets folder](https://github.com/Azure/iot-hub-device-update/releases) and adds it to the */var/lib/adu/extensions/sources* folder.
-
- 3. It registers the extension:
-
- ```sh
- sudo /usr/bin/AducIotAgent -E /var/lib/adu/extensions/sources/libcontoso-component-enumerator.so
- ```
-
-2. To verify that the VM is set up to support proxy updates, view and record the current software version of each component by using the following command:
-
- ```markup
- ~/demo/show-demo-components.sh
- ```
-
-## Import an example update
-
-If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT hub. Then start the following procedure.
-
-1. From the [latest Device Update release](https://github.com/Azure/iot-hub-device-update/releases), under **Assets**, download the import manifests and images for proxy updates.
-2. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, select **Device Management** > **Updates**.
-3. Select the **Updates** tab.
-4. Select **+ Import New Update**.
-5. Select **+ Select from storage container**, and then choose your storage account and container.
-
- :::image type="content" source="media/understand-device-update/one-import.png" alt-text="Screenshot that shows the button for selecting to import from a storage container." lightbox="media/understand-device-update/one-import.png":::
-6. Select **Upload** to add the files that you downloaded in step 1.
-7. Upload the parent import manifest, child import manifest, and payload files to your container.
-
- The following example shows sample files uploaded to update cameras connected to a smart vacuum cleaner device. It also includes a pre-installation script to turn off the cameras before the over-the-air update.
-
- In the example, the parent import manifest is *contoso.Virtual-Vacuum-virtual-camera.1.4.importmanifest.json*. The child import manifest with details for updating the camera is *Contoso.Virtual-Vacuum.3.3.importmanifest.json*. Note that both manifest file names follow the required format and end with *.importmanifest.json*.
-
- :::image type="content" source="media/understand-device-update/two-containers.png" alt-text="Screenshot that shows sample files uploaded to update cameras connected to a smart vacuum cleaner device." lightbox="media/understand-device-update/two-containers.png":::
-
-8. Choose **Select**.
-9. The UI now shows the list of files that will be imported to Device Update. Select **Import update**.
-
- :::image type="content" source="media/understand-device-update/three-confirm-import.png" alt-text="Screenshot that shows listed files and the button for importing an update." lightbox="media/understand-device-update/three-confirm-import.png":::
-
-10. The import process begins, and the screen changes to the **Import History** section. Select **Refresh** to view progress until the import process finishes. Depending on the size of the update, the import might finish in a few minutes or take longer.
-11. When the **Status** column indicates that the import has succeeded, select the **Available Updates** tab. You should see your imported update in the list now.
-
- :::image type="content" source="media/understand-device-update/four-update-added.png" alt-text="Screenshot that shows the imported update added to the list." lightbox="media/understand-device-update/four-update-added.png":::
-
-[Learn more](import-update.md) about importing updates.
-
-## Create an update group
-
-1. Select the **Groups and Deployments** tab at the top of the page.
-
-2. Select the **+ Add Group** button to create a new group by selecting the **IoT Hub** tag. Then select **Create group**. Note that you can also deploy the update to an existing group.
-
-[Learn more](create-update-group.md) about adding tags and creating update groups.
-
-## Deploy an update
-
-1. In the **Groups and Deployments** view, confirm that the new update is available for your device group. You might need to refresh the page once. The following example shows the view for the example smart vacuum device:
-
- :::image type="content" source="media/understand-device-update/five-groups.png" alt-text="Screenshot that shows an available update." lightbox="media/understand-device-update/five-groups.png":::
-
-2. Select **Deploy**.
-
-3. Confirm that the correct group is selected as the target group. Select the option to schedule your deployment or the option to start immediately, and then select **Create**.
-
- :::image type="content" source="media/understand-device-update/six-deploy.png" alt-text="Screenshot that shows options for creating a deployment." lightbox="media/understand-device-update/six-deploy.png":::
-
-4. View the compliance chart. You should see that the update is now in progress.
-
-5. After your device is successfully updated, confirm that your compliance chart and deployment details are updated to reflect that success.
-
- :::image type="content" source="media/understand-device-update/seven-results.png" alt-text="Screenshot that shows the results of a successful update." lightbox="media/understand-device-update/seven-results.png":::
-
-## Monitor an update deployment
-
-1. Select the **Groups and Deployments** tab at the top of the page.
-
-2. Select the group that you created to view the deployment details.
-
-You've now completed a successful end-to-end proxy update by using Device Update for IoT Hub.
-
-## Clean up resources
-
-When you no longer need them, clean up your Device Update account, instance, IoT hub, and IoT device.
-
-## Next steps
-
-You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
--- [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ reference image](device-update-raspberry-pi.md) (extensible via open source to build your own images for other architectures as needed)
-
-- [Device Update for Azure IoT Hub tutorial using the package agent on Ubuntu Server 18.04 x64](device-update-ubuntu-agent.md)
-
-- [Device Update for Azure IoT Hub tutorial using the Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
-
-- [Device Update for Azure IoT Hub tutorial using the Azure real-time operating system](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Device Update Multi Step Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-multi-step-updates.md
- Title: Using multiple steps for Updates with Device Update for Azure IoT Hub| Microsoft Docs
-description: Using multiple steps for Updates with Device Update for Azure IoT Hub
-- Previously updated : 11/12/2021----
-# Multi-Step Ordered Execution
-Based on customer requests we have added the ability to run pre-install and post-install tasks when deploying an over-the-air update. This capability is called Multi-Step Ordered Execution (MSOE) and is part of the Public Preview Refresh Update Manifest v4 schema.
-
-See the [Update Manifest](update-manifest.md) documentation before reviewing the following changes as part of the Public Preview Refresh release.
-
-With MSOE, we have introduced two types of steps:
-
-- Inline Step (Default)
-- Reference Step
-
-Example Update Manifest with one Inline Step:
-
-```json
-{
- "updateId": {...},
- "isDeployable": true,
- "compatibility": [
- {
- "deviceManufacturer": "du-device",
- "deviceModel": "e2e-test"
- }
- ],
- "instructions": {
- "steps": [
- {
- "description": "Example APT update that install libcurl4-doc on a host device.",
- "handler": "microsoft/apt:1",
- "files": [
- "apt-manifest-1.0.json"
- ],
- "handlerProperties": {
- "installedCriteria": "apt-update-test-1.0"
- }
- }
- ]
- },
- "manifestVersion": "4.0",
- "importedDateTime": "2021-11-16T14:54:55.8858676Z",
- "createdDateTime": "2021-11-16T14:50:47.3511877Z"
-}
-```
-
-Example Update Manifest with two Inline Steps:
-
-```json
-{
- "updateId": {...},
- "isDeployable": true,
- "compatibility": [
- {
- "deviceManufacturer": "du-device",
- "deviceModel": "e2e-test"
- }
- ],
- "instructions": {
- "steps": [
- {
- "description": "Install libcurl4-doc on host device",
- "handler": "microsoft/apt:1",
- "files": [
- "apt-manifest-1.0.json"
- ],
- "handlerProperties": {
- "installedCriteria": "apt-update-test-2.2"
- }
- },
- {
- "description": "Install tree on host device",
- "handler": "microsoft/apt:1",
- "files": [
- "apt-manifest-tree-1.0.json"
- ],
- "handlerProperties": {
- "installedCriteria": "apt-update-test-tree-2.2"
- }
- }
- ]
- },
- "manifestVersion": "4.0",
- "importedDateTime": "2021-11-16T20:21:33.6514738Z",
- "createdDateTime": "2021-11-16T20:19:29.4019035Z"
-}
-```
-
-Example Update Manifest with one Reference Step:
-- Parent Update
-
-```json
-{
- "updateId": {...},
- "isDeployable": true,
- "compatibility": [
- {
- "deviceManufacturer": "du-device",
- "deviceModel": "e2e-test"
- }
- ],
- "instructions": {
- "steps": [
- {
- "type": "reference",
- "description": "Cameras Firmware Update",
- "updateId": {
- "provider": "contoso",
- "name": "virtual-camera",
- "version": "1.2"
- }
- }
- ]
- },
- "manifestVersion": "4.0",
- "importedDateTime": "2021-11-17T07:26:14.7484389Z",
- "createdDateTime": "2021-11-17T07:22:10.6014567Z"
-}
-```
-- Child Update
-
-```json
-{
- "updateId": {
- "provider": "contoso",
- "name": "virtual-camera",
- "version": "1.2"
- },
- "isDeployable": false,
- "compatibility": [
- {
- "group": "cameras"
- }
- ],
- "instructions": {
- "steps": [
- {
- "description": "Cameras Update - pre-install step",
- "handler": "microsoft/script:1",
- "files": [
- "contoso-camera-installscript.sh"
- ],
- "handlerProperties": {
- "scriptFileName": "contoso-camera-installscript.sh",
- "arguments": "--pre-install-sim-success --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
- "installedCriteria": "contoso-virtual-camera-1.2-step-0"
- }
- },
- {
- "description": "Cameras Update - firmware installation (failure - missing file)",
- "handler": "microsoft/script:1",
- "files": [
- "contoso-camera-installscript.sh",
- "camera-firmware-1.1.json"
- ],
- "handlerProperties": {
- "scriptFileName": "missing-contoso-camera-installscript.sh",
- "arguments": "--firmware-file camera-firmware-1.1.json --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
- "installedCriteria": "contoso-virtual-camera-1.2-step-1"
- }
- },
- {
- "description": "Cameras Update - post-install step",
- "handler": "microsoft/script:1",
- "files": [
- "contoso-camera-installscript.sh"
- ],
- "handlerProperties": {
- "scriptFileName": "contoso-camera-installscript.sh",
- "arguments": "--post-install-sim-success --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
- "installedCriteria": "contoso-virtual-camera-1.2-stop-2"
- }
- }
- ]
- },
- "referencedBy": [
- {
- "provider": "DU-Client-Eng",
- "name": "MSOE-Update-Demo",
- "version": "3.1"
- }
- ],
- "manifestVersion": "4.0",
- "importedDateTime": "2021-11-17T07:26:14.7376536Z",
- "createdDateTime": "2021-11-17T07:22:09.2232968Z",
- "etag": "\"ad7a553d-24a8-492b-9885-9af424d44d58\""
-}
-```
-
-## Parent Update vs. Child Update
-
-For Public Preview Refresh, we will refer to the top-level Update Manifest as `Parent Update` and refer to an Update Manifest specified in a Reference Step as `Child Update`.
-
-Currently, a `Child Update` must not contain any reference steps. This restriction is validated at import time; if it is not followed, the import will fail.
-
-### Inline Step In Parent Update
-
-Inline step(s) specified in `Parent Update` will be applied to the Host Device. In this case, the ADUC_WorkflowData object that is passed to a Step Handler (also known as an Update Content Handler) will not contain the `Selected Components` data. The handler for this type of step should *not* be a `Component-Aware` handler.
-
-### Reference Step In Parent Update
-
-Reference step(s) specified in `Parent Update` will be applied to a component on, or components connected to, the Host Device. A **Reference Step** is a step that contains the update identifier of another update, called a `Child Update`. When processing a Reference Step, the Steps Handler will download the Detached Update Manifest file specified in the Reference Step data, then validate the file's integrity.
-
-Next, the Steps Handler will parse the Child Update Manifest and create an ADUC_Workflow object (also known as Child Workflow Data) by combining the data from the Child Update Manifest with the File URLs information from the Parent Update Manifest. This Child Workflow Data also has a 'level' property set to '1'.
-
-> [!NOTE]
-> For Update Manifest version v4, the Child Update cannot contain any Reference Steps.
-
-## Detached Update Manifest
-
-To avoid deployment failures because of IoT Hub twin data size limits, any large Update Manifest will be delivered in the form of a JSON data file, also called a 'Detached Update Manifest'.
-
-If an update with large content is imported into Device Update for IoT Hub, the generated Update Manifest will contain another payload file called `Detached Update Manifest`, which contains the full data of the Update Manifest.
-
-The `UpdateManifest` property in the Device or Module Twin will contain the Detached Update Manifest file information.
-
-When processing PnP Property Changed Event, the Device Update Agent will automatically download the Detached Update Manifest file, and create ADUC_WorkflowData object that contains the full Update Manifest data.
-
-
iot-hub-device-update Device Update Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-proxy-updates.md
- Title: Using Proxy Updates with Device Update for Azure IoT Hub| Microsoft Docs
-description: Using Proxy Updates with Device Update for Azure IoT Hub
-- Previously updated : 11/12/2021----
-# Proxy Updates and multi-component updating
-
-Proxy Updates can support updating multiple **components** on a target IoT device connected to IoT Hub. With Proxy updates, you can (1) target over-the-air updates to multiple components on the IoT device or (2) target over-the-air updates to multiple sensors connected to the IoT device. Use cases where proxy updates are applicable include:
-
-* Targeting specific update files to different partitions on the device.
-* Targeting specific update files to different apps/components on the device
-* Targeting specific update files to sensors connected to an IoT device. These sensors could be connected to the IoT device over a network protocol (for example, USB, CANbus, and so on).
-
-## Pre-requisite
-In order to update a component or components that are connected to a target IoT device, the device builder must register a custom **Component Enumerator Extension** that is built specifically for their IoT devices. The Component Enumerator Extension is required so that the Device Update Agent can map a **'child update'** to the specific component, or group of components, for which the update is intended. See [Contoso Component Enumerator](components-enumerator.md) for an example of how to implement and register a custom Component Enumerator extension.
-
-> [!NOTE]
-> Device Update *service* does not know anything about **component(s)** on the target device. Only the Device Update agent does the above mapping.
-
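For reference, the registration itself is a single agent invocation; the Contoso tutorial elsewhere in this digest uses the following command (the extension path is specific to that example):

```sh
# Register a custom component enumerator extension with the Device Update agent.
sudo /usr/bin/AducIotAgent -E /var/lib/adu/extensions/sources/libcontoso-component-enumerator.so
```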
-## Example Proxy update
-In the following example, we will demonstrate how to do a Proxy update by using the multi-step ordered execution feature introduced in the Public Preview Refresh release. The multi-step ordered execution feature allows for granular update controls, including an install order and pre-install, install, and post-install steps. Use cases include, for example, a required pre-install check that validates the device state before starting an update. Learn more about [multi-step ordered execution](device-update-multi-step-updates.md).
-
-See this tutorial on how to do a [Proxy update using the Device Update agent](device-update-howto-proxy-updates.md) with sample updates for components connected to a Contoso Virtual Vacuum device.
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-raspberry-pi.md
In this tutorial you will learn how to:
> * Create a device group > * Deploy an image update > * Monitor the update deployment- Note: Image updates in this tutorial have been validated on the Raspberry Pi 3 B+ board. ## Prerequisites
IoT Hub, a connection string will be generated for the device.
ssh raspberrypi3 -l root ``` 4. Enter 'root' as the login, and leave the password empty.
-5. After you successfully ssh into the device, run
+5. After you successfully ssh into the device, run the following commands.
+
Replace `<device connection string>` with your connection string.
```markdown
- /etc/adu/du-config.json
+ echo "connection_string=<device connection string>" > /adu/adu-conf.txt
+ echo "aduc_manufacturer=ADUTeam" >> /adu/adu-conf.txt
+ echo "aduc_model=RefDevice" >> /adu/adu-conf.txt
```
- and add your connection string within the double quotes.
## Connect the device in Device Update IoT Hub
Use that version number in the Import Update step below.
3. Select the Updates tab. 4. Select "+ Import New Update". 5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the _sample import manifest_ you downloaded in step 1 above. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the _sample update file_ that you downloaded in step 1 above.-
+
:::image type="content" source="media/import-update/select-update-files.png" alt-text="Screenshot showing update file selection." lightbox="media/import-update/select-update-files.png"::: 6. Select the folder icon or text box under "Select a storage container". Then select the appropriate storage account.
-7. If you've already created a container, you can reuse it. (Otherwise, select "+ Container" to create a new storage container for updates.). Select the container you wish to use and click "Select".
+7. If you've already created a container, you can reuse it. (Otherwise, select "+ Container" to create a new storage container for updates.) Select the container you wish to use and click "Select".
:::image type="content" source="media/import-update/container.png" alt-text="Screenshot showing container selection." lightbox="media/import-update/container.png":::
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-simulator.md
In this tutorial you will learn how to:
> * Create a device group > * Deploy an image update > * Monitor the update deployment- ## Prerequisites * If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT Hub.
-## Install Device Update Agent to test it as a simulator
+### Download and install
-1. Follow the instructions to [Install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true).
- > [!NOTE]
- > The Device Update agent doesn't depend on IoT Edge. But, it does rely on the IoT Identity Service daemon that is installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
- >
- > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent will not be registered as an authorized component to establish a connection to IoT Hub.
+* Az (Azure PowerShell) cmdlets for PowerShell:
+ * Open PowerShell > install the Az module (answer "Y" to any prompts asking to install from an "untrusted" source)
-1. Then, install the Device Update agent .deb packages.
+```powershell
+PS> Install-Module Az -Scope CurrentUser
+```
- ```bash
- sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
- ```
-
-1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the command below.
+### Enable WSL on your Windows device (Windows Subsystem for Linux)
- ```markdown
- /etc/adu/du-config.json
- ```
-
-1. Set up the agent to run as a simulator. Run the following command on the IoT device so that the Device Update agent will invoke the simulator handler to process a package update with APT ('microsoft/apt:1').
+1. Open PowerShell as Administrator on your machine and run the following command (you might be asked to restart after each step; restart when asked):
- ```sh
- sudo /usr/bin/AducIotAgent --register-content-handler <full path to the handler file> --update-type <update type name>
+```powershell
+PS> Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform
+PS> Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
+```
- # For example sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_simulator_1.so --update-type 'microsoft/apt:1'
- ```
+ (*You may be prompted to restart after this step*)
-1. Restart the Device Update agent by running the command below.
+2. Go to the Microsoft Store on the web and install [Ubuntu 18.04 LTS](https://www.microsoft.com/p/ubuntu-1804-lts/9n9tngvndl3q?activetab=pivot:overviewtab).
- ```markdown
- sudo systemctl restart adu-agent
- ```
-
+3. Start "Ubuntu 18.04 LTS" and install.
+
+4. When installed, you'll be asked to set a root name (username) and password. Be sure to use a memorable username and password.
+
+5. In PowerShell, run the following command to set Ubuntu to be the default Linux distribution:
+
+```powershell
+PS> wsl --setdefault Ubuntu-18.04
+```
+
+6. List all Linux distributions, making sure that Ubuntu is the default one.
+
+```powershell
+PS> wsl --list
+```
+
+7. You should see: **Ubuntu-18.04 (Default)**
+
+## Download Device Update Ubuntu (18.04 x64) Simulator Reference Agent
+
+The Ubuntu reference agent can be downloaded from the *Assets* section of the release notes [here](https://github.com/Azure/iot-hub-device-update/releases).
+
+There are two versions of the agent. For this tutorial, since you're exercising the image-based scenario, use AducIotAgentSim-microsoft-swupdate. If you were going to exercise the package-based scenario instead, you would use AducIotAgentSim-microsoft-apt.
+
+## Install Device Update Agent simulator
+
+1. Start Ubuntu WSL and enter the following command (note the space and the dot at the end).
+
+```shell
+explorer.exe .
+```
+
+2. Copy AducIotAgentSim-microsoft-swupdate (or AducIotAgentSim-microsoft-apt) from the local folder where you downloaded it (available under /mnt in WSL) to your home folder in WSL.
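For example, if the file landed in your Windows Downloads folder, the copy looks like this (a sketch; the Windows username is illustrative):

```sh
# Windows drives are mounted under /mnt in WSL.
cp /mnt/c/Users/<your-windows-username>/Downloads/AducIotAgentSim-microsoft-swupdate ~/
```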
+
+3. Run the following command to make the binaries executable.
+
+```shell
+sudo chmod u+x AducIotAgentSim-microsoft-swupdate
+```
+
+ or
+
+```shell
+sudo chmod u+x AducIotAgentSim-microsoft-apt
+```
Device Update for Azure IoT Hub software is subject to the following license terms: * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md) * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE) Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device update for IoT Hub agent.
-> [!NOTE]
-> After your testing with the simulator run the below command to invoke the APT handler and [deploy over-the-air Package Updates](device-update-ubuntu-agent.md)
-
-```sh
-# sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
-```
- ## Add device to Azure IoT Hub Once the Device Update Agent is running on an IoT device, the device needs to be added to the Azure IoT Hub. From within Azure IoT Hub, a connection string will be generated for a particular device.
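If you prefer the command line over the portal, the Azure IoT extension for the Azure CLI can register a device and fetch its connection string. A sketch with illustrative names:

```sh
# Create a device identity, then print its connection string (requires the azure-iot CLI extension).
az iot hub device-identity create -n <yourIoTHubName> -d <yourDeviceId>
az iot hub device-identity connection-string show -n <yourIoTHubName> -d <yourDeviceId>
```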
Start Device Update Agent on your new Software Devices.
1. Start Ubuntu. 2. Run the Device Update Agent and specify the device connection string from the previous section wrapped with apostrophes:
- Replace `<device connection string>` with your connection string
- ```shell
- sudo ./AducIotAgentSim-microsoft-swupdate "<device connection string>"
- ```
+Replace `<device connection string>` with your connection string
+```shell
+sudo ./AducIotAgentSim-microsoft-swupdate "<device connection string>"
+```
- or
+or
- ```shell
- ./AducIotAgentSim-microsoft-apt -c '<device connection string>'
- ```
+```shell
+./AducIotAgentSim-microsoft-apt -c '<device connection string>'
+```
3. Scroll up and look for the string indicating that the device is in "Idle" state. An "Idle" state signifies that the device is ready for service commands:
- ```markdown
- Agent running. [main]
- ```
+```markdown
+Agent running. [main]
+```
## Add a tag to your device
Start Device Update Agent on your new Software Devices.
4. Add a new Device Update tag value as shown below.
- ```JSON
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- }
- ```
+```JSON
+ "tags": {
+ "ADUGroup": "<CustomTagValue>"
+ }
+```
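The same tag can also be applied from the command line. A sketch using the Azure IoT extension for the Azure CLI (the tag value is illustrative; older extension versions may require the generic `--set` syntax instead of `--tags`):

```sh
# Add or update the ADUGroup tag on the device twin.
az iot hub device-twin update -n <yourIoTHubName> -d <yourDeviceId> --tags '{"ADUGroup": "<CustomTagValue>"}'
```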
## Import update
You have now completed a successful end-to-end image update using Device Update
When no longer needed, clean up your device update account, instance, IoT Hub and IoT device.
-> [!NOTE]
-> After your testing with the simulator run the below command to invoke the APT handler and [deploy over-the-air Package Updates](device-update-ubuntu-agent.md)
-
-```sh
-# sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
-```
- ## Next steps
-You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
-- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build your own images for other architectures as needed.
-
-- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
-
-- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
-
-- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
-
-- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
-
-[Troubleshooting](troubleshoot-device-update.md)
+> [!div class="nextstepaction"]
+> [Troubleshooting](troubleshoot-device-update.md)
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-ubuntu-agent.md
In this tutorial you will learn how to:
> * Create a device group > * Deploy a package update > * Monitor the update deployment- ## Prerequisites * If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT Hub. * The [connection string for an IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true#view-registered-devices-and-retrieve-provisioning-information).
-* If you used the [Simulator agent tutorial](device-update-simulator.md) for testing prior to this, run the following command to invoke the APT handler so that you can deploy over-the-air package updates in this tutorial.
-
-```sh
-# sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
-```
## Prepare a device ### Using the Automated Deploy to Azure Button
For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/usi
> [!div class="mx-imgBorder"] > [![Screenshot showing the iotedge-vm-deploy template](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)- **Subscription**: The active Azure subscription to deploy the virtual machine into. **Resource group**: An existing or newly created Resource Group to contain the virtual machine and it's associated resources.
For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/usi
> [!div class="mx-imgBorder"] > [![Screenshot showing the dns name of the iotedge vm](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)- > [!TIP] > If you want to SSH into this VM after setup, use the associated **DNS Name** with the command: `ssh <adminUsername>@<DNS_Name>`-
-### Manually prepare a device
+### (Optional) Manually prepare a device
Similar to the steps automated by the [cloud-init script](https://github.com/Azure/iotedge-vm-deploy/blob/1.2.0-rc4/cloud-init.txt), following are manual steps to install and configure the device. These steps can be used to prepare a physical device. 1. Follow the instructions to [Install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true). > [!NOTE]
- > The Device Update agent doesn't depend on IoT Edge. But, it does rely on the IoT Identity Service daemon that is installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
+ > The Device Update package agent doesn't depend on IoT Edge. But, it does rely on the IoT Identity Service daemon that is installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
> > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent will not be registered as an authorized component to establish a connection to IoT Hub.- 1. Then, install the Device Update agent .deb packages. ```bash sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt ```
-
-1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the command below.
-
- ```markdown
- /etc/adu/du-config.json
- ```
-
-1. Finally restart the Device Update agent by running the command below.
-
- ```markdown
- sudo systemctl restart adu-agent
- ```
Device Update for Azure IoT Hub software packages are subject to the following license terms: * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
This update will update the `aziot-identity-service` and the `aziot-edge` packag
> [!TIP] > By default the Start date/time is 24 hrs from your current time. Be sure to select a different date/time if you want the deployment to begin earlier.- 1. Select Deploy update. 1. View the compliance chart. You should see the update is now in progress.
When no longer needed, clean up your device update account, instance, IoT Hub, a
## Next steps
-You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
-- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build your own images for other architectures as needed.
-
-- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
-
-- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
-
-- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
+> [!div class="nextstepaction"]
+> [Image Update on Raspberry Pi 3 B+ tutorial](device-update-raspberry-pi.md)
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-d2c.md
There are two storage services IoT Hub can route messages to: [Azure Blob Storag
IoT Hub supports writing data to Azure Storage in the [Apache Avro](https://avro.apache.org/) format and the JSON format. The default is AVRO. When using JSON encoding, you must set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these values are case-insensitive. If the content encoding is not set, then IoT Hub will write the messages in base 64 encoded format.
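For example, with the Azure IoT extension for the Azure CLI you can set both system properties on a test message by using the `$.ct` and `$.ce` shortcuts (device and hub names are illustrative):

```sh
# Send a JSON-encoded test message so IoT Hub writes it to storage as JSON rather than base 64.
az iot device send-d2c-message -n <yourIoTHubName> -d <yourDeviceId> \
  --data '{"temperature": 21.5}' \
  --props '$.ct=application/json;$.ce=utf-8'
```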
-The encoding format can be only set when the blob storage endpoint is configured; it can't be edited for an existing endpoint. To switch encoding formats for an existing endpoint, you'll need to delete end endoint and re-create it with the format you want. One helpful strategy might be to create a new custom endpoint with your desired encoding format and add a parallel route to that endpoint. In this way, you can verify your data before deleting the existing endpoint.
+The encoding format can be only set when the blob storage endpoint is configured; it can't be edited for an existing endpoint. To switch encoding formats for an existing endpoint, you'll need to delete the endpoint and re-create it with the format you want. One helpful strategy might be to create a new custom endpoint with your desired encoding format and add a parallel route to that endpoint. In this way, you can verify your data before deleting the existing endpoint.
You can select the encoding format using the IoT Hub Create or Update REST API, specifically the [RoutingStorageContainerProperties](/rest/api/iothub/iothubresource/createorupdate#routingstoragecontainerproperties), the [Azure portal](https://portal.azure.com), [Azure CLI](/cli/azure/iot/hub/routing-endpoint), or [Azure PowerShell](/powershell/module/az.iothub/add-aziothubroutingendpoint). The following image shows how to select the encoding format in the Azure portal.
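As a sketch of the CLI path, the storage endpoint's encoding is set at creation time with the `--encoding` flag (resource names and the connection string are illustrative):

```sh
# Create a blob storage endpoint that writes routed messages in JSON format.
az iot hub routing-endpoint create \
  --hub-name <yourIoTHubName> --resource-group <yourResourceGroup> \
  --endpoint-name jsonStorageEndpoint --endpoint-type azurestoragecontainer \
  --endpoint-resource-group <yourResourceGroup> --endpoint-subscription-id <yourSubscriptionId> \
  --connection-string "<yourStorageConnectionString>" \
  --container-name <yourContainerName> --encoding json
```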
Use the [troubleshooting guide for routing](troubleshoot-message-routing.md) for
* [How to send device-to-cloud messages](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
-* For information about the SDKs you can use to send device-to-cloud messages, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+* For information about the SDKs you can use to send device-to-cloud messages, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
load-balancer Load Balancer Multiple Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-multiple-ip.md
In this section, you'll create two virtual machines to host the IIS websites.
In this section, you'll change the private IP address of the existing NIC of each virtual machine to **Static**. Next, you'll add a new NIC resource to each virtual machine with a **Static** private IP address configuration.
+For more information on configuring floating IP in the virtual machine configuration, see [Floating IP Guest OS configuration](load-balancer-floating-ip.md#floating-ip-guest-os-configuration).
+ 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. 2. Select **myVM1**.
load-testing How To Appservice Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-appservice-insights.md
Title: Get more insights when you test Azure App Service workloads
+ Title: Get more insights from App Service diagnostics
-description: 'Learn how to get more insights by using Azure App Service diagnostics when you test App Service workloads.'
+description: 'Learn how to get detailed insights from App Service diagnostics and Azure Load Testing for App Service workloads.'
-# Get more insights when you load-test Azure App Service workloads
+# Get detailed insights from App Service diagnostics and Azure Load Testing Preview for Azure App Service workloads
In this article, you'll learn how to gain more insights from Azure App Service workloads by using Azure Load Testing Preview and Azure App Service diagnostics.
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-define-test-criteria.md
Last updated 11/30/2021
-# Define test criteria for load tests by using Azure Load Testing Preview
+# Define pass/fail criteria for load tests by using Azure Load Testing Preview
In this article, you'll learn how to define pass/fail criteria for your load tests with Azure Load Testing Preview.
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-export-test-results.md
Title: Export load test results for reporting
-description: Learn how to export load test results in Azure Load Testing for use in third-party tools.
+description: Learn how to export load test results in Azure Load Testing and use them for reporting in third-party tools.
Last updated 11/30/2021
-# Export test results in Azure Load Testing Preview for use in third-party tools
+# Export test results from Azure Load Testing Preview for use in third-party tools
In this article, you'll learn how to download the test results from Azure Load Testing Preview in the Azure portal. You might use these results for reporting in third-party tools.
load-testing How To Find Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-find-download-logs.md
Title: Download Apache JMeter logs in Azure Load Testing
+ Title: Download Apache JMeter logs for troubleshooting
description: Learn how you can troubleshoot Apache JMeter script problems by downloading the Azure Load Testing logs in the Azure portal.
load-testing How To High Scale Load https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-high-scale-load.md
-# Configure a load test for high-scale load testing
+# Configure Azure Load Testing Preview for high-scale load
In this article, learn how to set up a load test for high-scale load by using Azure Load Testing Preview. To simulate a large number of virtual users, you'll configure the test engine instances.
You can apply the following formula: RPS = (# of VUs) * (1/latency).
For example, if application latency is 20 milliseconds (ms), and you're generating a load of 2,000 VUs, you can achieve around 100,000 RPS.
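A quick way to plug your own numbers into the formula (a sketch with standard awk):

```sh
# RPS = VUs * (1 / latency in seconds); 2,000 VUs at 20 ms latency ~ 100,000 RPS.
awk 'BEGIN { vus = 2000; latency = 0.020; printf "max RPS ~ %d\n", vus * (1 / latency) }'
```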
+Apache JMeter only reports requests that made it to the server and back, either successful or not. If Apache JMeter is unable to connect to your application, the actual number of requests per second will be lower than the maximum value. Possible causes might be that the server is too busy to handle the request, or that a TLS/SSL certificate is missing. To diagnose connection problems, you can check the **Errors** chart in the load testing dashboard and [download the load test log files](./how-to-find-download-logs.md).
+ ## Test engine instances In Azure Load Testing, *test engine* instances are responsible for executing a test plan. If you use an Apache JMeter script to create the test plan, each test engine executes the Apache JMeter script.
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-read-csv-data.md
Title: Read CSV data in a JMeter load test
+ Title: Read CSV data in an Apache JMeter load test
-description: Learn how to read data from a CSV file in JMeter and Azure Load Testing.
+description: Learn how to read external data from a CSV file in Apache JMeter and Azure Load Testing.
load-testing How To Use A Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-use-a-managed-identity.md
Title: Use a managed identity for Azure Load Testing
+ Title: Use managed identity to access Azure key vault
description: Learn how to enable managed identity for Azure Load Testing and use it to read secrets from your Azure key vault.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
To create a compute instance you'll need permissions for the following actions:
* [Access the compute instance terminal](how-to-access-terminal.md) * [Create and manage files](how-to-manage-files.md)
+* [Update the compute instance to the latest VM image](concept-vulnerability-management.md#compute-instance)
* [Submit a training run](how-to-set-up-training-targets.md)
machine-learning Monitor Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/monitor-azure-machine-learning.md
When you connect multiple Azure Machine Learning workspaces to the same Log Anal
by Workspace=tostring(split(_ResourceId, "/")[8]), ClusterName, ClusterType, VmSize, VmPriority ```
+### Create a workspace monitoring dashboard by using a template
+
+A dashboard is a focused and organized view of your cloud resources in the Azure portal. For more information about creating dashboards, see [Create, view, and manage metric alerts using Azure Monitor](../azure-portal/azure-portal-dashboards.md).
+
+To deploy a sample dashboard, you can use a publicly available [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-monitoring-dashboard). The sample dashboard is based on [Kusto queries](../machine-learning/monitor-azure-machine-learning.md#sample-kusto-queries), so you must enable [Log Analytics data collection](../machine-learning/monitor-azure-machine-learning.md#collection-and-routing) for your Azure Machine Learning workspace before you deploy the dashboard.
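As a sketch, the template can also be deployed with the Azure CLI; the template URI below assumes the repository's usual azuredeploy.json layout, and the resource group name is illustrative:

```sh
# Deploy the sample monitoring dashboard template into an existing resource group.
az deployment group create \
  --resource-group <yourResourceGroup> \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-monitoring-dashboard/azuredeploy.json
```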
+ ## Alerts You can access alerts for Azure Machine Learning by opening **Alerts** from the **Azure Monitor** menu. See [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md) for details on creating alerts.
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
In this tutorial, you learn the following tasks:
## Prerequisites -- Azure subscription. If you don't have one, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/t.com/free/).
+- Azure subscription. If you don't have one, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
- Install [Visual Studio Code](https://code.visualstudio.com/docs/setup/setup-overview), a lightweight, cross-platform code editor. - Azure Machine Learning Studio Visual Studio Code extension. For install instructions see the [Setup Azure Machine Learning Visual Studio Code extension guide](./how-to-setup-vs-code.md) - CLI (v2) (preview). For installation instructions, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md)
For next steps, see:
* [Connect Visual Studio Code to a compute instance](how-to-set-up-vs-code-remote.md) for a full development experience. * For a walkthrough of how to edit, run, and debug code locally, see the [Python hello-world tutorial](https://code.visualstudio.com/docs/Python/Python-tutorial). * [Run Jupyter Notebooks in Visual Studio Code](how-to-manage-resources-vscode.md) using a remote Jupyter server.
-* For a walkthrough of how to train with Azure Machine Learning outside of Visual Studio Code, see [Tutorial: Train models with Azure Machine Learning](tutorial-train-models-with-aml.md).
+* For a walkthrough of how to train with Azure Machine Learning outside of Visual Studio Code, see [Tutorial: Train models with Azure Machine Learning](tutorial-train-models-with-aml.md).
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-server-parameters.md
Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-
### innodb_file_per_table > [!NOTE]
-> `innodb_file_per_table` can only be updated in the General Purpose and Memory Optimized pricing tiers on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage).
+> `innodb_file_per_table` can only be updated in the General Purpose and Memory Optimized pricing tiers on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage) and [general purpose storage v1](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb).
MySQL stores the InnoDB table in different tablespaces based on the configuration you provided during the table creation. The [system tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-system-tablespace.html) is the storage area for the InnoDB data dictionary. A [file-per-table tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html) contains data and indexes for a single InnoDB table, and is stored in the file system in its own data file. This behavior is controlled by the `innodb_file_per_table` server parameter. Setting `innodb_file_per_table` to `OFF` causes InnoDB to create tables in the system tablespace. Otherwise, InnoDB creates tables in file-per-table tablespaces.
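To confirm the current value on your server, a sketch with the standard MySQL client (server and admin names are illustrative):

```sh
# Query the server parameter; expect ON unless you've set it to OFF.
mysql -h <yourServerName>.mysql.database.azure.com -u <yourAdminUser>@<yourServerName> -p \
  -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"
```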
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/fundamentals/networking-overview.md
This section describes networking services in Azure that help protect your netwo
### <a name="privatelink"></a>Azure Private Link [Azure Private Link](../../private-link/private-link-overview.md) enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network.
-Traffic between your virtual network and the service travels the Microsoft backbone network. Exposing your service to the public internet is no longer necessary. You can create your own private link service in your virtual network and deliver it to your customers.
+Traffic between your virtual network and the service travels through the Microsoft backbone network. Exposing your service to the public internet is no longer necessary. You can create your own private link service in your virtual network and deliver it to your customers.
:::image type="content" source="./media/networking-overview/private-endpoint.png" alt-text="Private endpoint overview":::
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[BUI](https://www.bui.co.za/)|[a2zManaged Cloud Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.a2zmanagement?tab=Overview)||[BUI Managed Azure vWAN using VMware SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_managed_vwan?tab=Overview)||[BUI CyberSoC](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.buicybersoc_msp?tab=Overview)| |[Coevolve](https://www.coevolve.com/services/azure-networking-services/)|||[Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.coevolve-managed-azure-vwan?tab=Overview);[Managed VMware SD-WAN Virtual Edge](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.managed-vmware-sdwan-edge?tab=Overview)||| |[Colt](https://www.colt.net/why-colt/partner-hub/microsoft/)|[Network optimization on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)|||||
-|[Equinix](https://www.equinix.com/)|[Cloud Optimized WAN Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.cloudoptimizedwan?tab=Overview)|[ExpressRoute Connectivity Strategy Workshop](https://www.equinix.se/resources/data-sheets/expressroute-strategy-workshop); [Equinix Cloud Exchange Fabric](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.equinix_ecx_fabric?tab=Overview)||||
+|[Equinix](https://www.equinix.com/)|Cloud Optimized WAN Workshop|[ExpressRoute Connectivity Strategy Workshop](https://www.equinix.se/resources/data-sheets/expressroute-strategy-workshop); [Equinix Cloud Exchange Fabric](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.equinix_ecx_fabric?tab=Overview)||||
|[Federated Wireless](https://www.federatedwireless.com/caas/)||||[Federated Wireless Connectivity-as-a-Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/federatedwireless1580839623708.fw_caas?tab=Overview)|
|[HCL](https://www.hcltech.com/)|[HCL Cloud Network Transformation- One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.clo?tab=Overview)|[1-Hour Briefing of HCL Azure ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazureexpressroute?tab=Overview)|[HCL Azure Virtual WAN Services - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazurevitualwan?search=vWAN&page=1)|[HCL Azure Private LTE offering - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclazureprivatelteoffering)|
|[IIJ](https://www.iij.ad.jp/biz/cloudex/)|[ExpressRoute implementation: 1-Hour Briefing](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/internet_initiative_japan_inc.iij_cxm_consulting)|[ExpressRoute: 2-Week Implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/internet_initiative_japan_inc.iij_cxmer_consulting)||||
openshift Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-upgrade.md
From the web console in the previous step, set the **Channel** to the correct ch
Select a version to update to, and select **Update**. You'll see the update status change to: `Update to <product-version> in progress`. You can review the progress of the cluster update by watching the progress bars for the Operators and nodes.

## Next steps

-- [Learn to upgrade an ARO cluster using the OC CLI](https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-between-minor.html)
+- [Learn to upgrade an ARO cluster using the OC CLI](https://docs.openshift.com/container-platform/4.5/updating/updating-cluster-between-minor.html)
- You can find information about available OpenShift Container Platform advisories and updates in the [errata section](https://access.redhat.com/downloads/content/290/ver=4.6/rhel8/4.6.0/x86_64/product-errata) of the Customer Portal.
openshift Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/migration.md
For information on configuring these storage types, see [Configuring persistent
### Registry
-Azure Red Hat OpenShift 4 can build images from your source code, deploy them, and manage their lifecycle. To enable this, Azure Red Hat OpenShift provides 4 an [internal, integrated container image registry](https://docs.openshift.com/container-platform/4.6/registry/registry-options.html) that can be deployed in your Azure Red Hat OpenShift environment to locally manage images.
+Azure Red Hat OpenShift 4 can build images from your source code, deploy them, and manage their lifecycle. To enable this, Azure Red Hat OpenShift 4 provides an [internal, integrated container image registry](https://docs.openshift.com/container-platform/4.5/registry/registry-options.html) that can be deployed in your Azure Red Hat OpenShift environment to locally manage images.
-If you're using external registries such as [Azure Container Registry](../container-registry/index.yml), [Red Hat Quay registries](https://docs.openshift.com/container-platform/4.6/registry/registry-options.html#registry-quay-overview_registry-options), or an [authentication enabled Red Hat registry](https://docs.openshift.com/container-platform/4.6/registry/registry-options.html#registry-authentication-enabled-registry-overview_registry-options), follow steps to supply credentials to the cluster to allow the cluster to access the repositories.
+If you're using external registries such as [Azure Container Registry](../container-registry/index.yml), [Red Hat Quay registries](https://docs.openshift.com/container-platform/4.5/registry/registry-options.html#registry-quay-overview_registry-options), or an [authentication enabled Red Hat registry](https://docs.openshift.com/container-platform/4.5/registry/registry-options.html#registry-authentication-enabled-registry-overview_registry-options), follow steps to supply credentials to the cluster to allow the cluster to access the repositories.
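One common way to supply such credentials is a pull secret of type `kubernetes.io/dockerconfigjson`. Below is a minimal hedged sketch of the manifest shape; the secret name, namespace, and payload are placeholders, and the exact namespace and linkage depend on the OpenShift procedure you follow:

```json
{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "registry-pull-secret",
    "namespace": "my-project"
  },
  "type": "kubernetes.io/dockerconfigjson",
  "data": {
    ".dockerconfigjson": "<base64-encoded Docker config containing the registry credentials>"
  }
}
```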
### Monitoring
openshift Responsibility Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/responsibility-matrix.md
The customer is responsible for the applications, workloads, and data that they
<li>If a customer adds Red Hat, community, third party, their own, or other services to the cluster by using Operators or external images, the customer is responsible for these services and for working with the appropriate provider (including Red Hat) to troubleshoot any issues.
-<li>Use the provided tools and features to <a href="https://docs.openshift.com/aro/4/architecture/understanding-development.html#application-types">configure and deploy</a>; <a href="https://docs.openshift.com/aro/4/applications/deployments/deployment-strategies.html">keep up-to-date</a>; <a href="https://docs.openshift.com/dedicated/4/applications/working-with-quotas.html">set up resource requests and limits</a>; <a href="https://docs.openshift.com/dedicated/4/getting_started/scaling-your-cluster.html">size the cluster to have enough resources to run apps</a>; <a href="https://docs.openshift.com/dedicated/4/administering_a_cluster/cluster-admin-role.html">set up permissions</a>; integrate with other services; <a href="https://docs.openshift.com/aro/4/openshift_images/images-understand.html">manage any image streams or templates that the customer deploys</a>; <a href="https://docs.openshift.com/dedicated/4/cloud_infrastructure_access/dedicated-understanding-aws.html">externally serve</a>; save, back up, and restore data; and otherwise manage their highly available and resilient workloads.
+<li>Use the provided tools and features to <a href="https://docs.openshift.com/aro/4/architecture/understanding-development.html#application-types">configure and deploy</a>; <a href="https://docs.openshift.com/aro/4/applications/deployments/deployment-strategies.html">keep up-to-date</a>; <a href="https://docs.openshift.com/dedicated/4/applications/working-with-quotas.html">set up resource requests and limits</a>; <a href="https://docs.openshift.com/dedicated/4/getting_started/scaling-your-cluster.html">size the cluster to have enough resources to run apps</a>; <a href="https://docs.openshift.com/dedicated/4/administering_a_cluster/cluster-admin-role.html">set up permissions</a>; integrate with other services; <a href="https://docs.openshift.com/container-platform/4.5/openshift_images/images-understand.html">manage any image streams or templates that the customer deploys</a>; <a href="https://docs.openshift.com/dedicated/4/cloud_infrastructure_access/dedicated-understanding-aws.html">externally serve</a>; save, back up, and restore data; and otherwise manage their highly available and resilient workloads.
<li>Maintain responsibility for monitoring the applications run on Azure Red Hat OpenShift; including installing and operating software to gather metrics and create alerts. </li>
resource-mover Modify Target Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/modify-target-settings.md
To modify a setting:
1. In the **Across regions** page > **Destination configuration** column, click the link for the resource entry.
2. In **Configuration settings**, you can create a new VM in the destination region.
3. Assign a new availability zone, availability set, or SKU to the destination VM.
+4. Modify or add a tag name or value for the destination VM.
-Changes are only made for the resource you're editing. You need to update any dependent resource separately.
+ ![Extension resource tag for VM](media\modify-target-settings\extension-resources-tag-vm.png)
+
+5. Choose to **Retain** or **Do not retain** the user-assigned managed identity.
+
+ ![Extension resource umi for VM](media\modify-target-settings\extension-resources-umi-vm.png)
+> [!NOTE]
+> **Retain** assigns the user-assigned managed identity to the newly created destination resource, while **Do not retain** leaves it unassigned. In either case, the user-assigned managed identity itself is not moved to the destination region as a resource.
+
+Changes are only made for the resource you're editing. You need to update any dependent resource separately.
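To make the tag and **Retain** behavior concrete, the destination VM that gets created would carry the tags and the user-assigned identity in its ARM representation, roughly as in this hedged sketch (the names are hypothetical and the required VM properties are elided):

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2021-07-01",
  "name": "target-vm",
  "location": "eastus2",
  "tags": {
    "environment": "prod"
  },
  "identity": {
    "type": "UserAssigned",
    "userAssignedIdentities": {
      // hypothetical identity retained on the destination VM
      "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', 'app-identity')]": {}
    }
  },
  "properties": {
    ...
  }
}
```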
## Modify SQL settings
search Cognitive Search Debug Session https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-debug-session.md
The visual editor is organized into tabs and panes. This section introduces the
The **Skill Graph** provides a visual hierarchy of the skillset and its order of execution from top to bottom. Skills that are dependent upon the output of other skills are positioned lower in the graph. Skills at the same level in the hierarchy can execute in parallel. Color coded labels of skills in the graph indicate the types of skills that are being executed in the skillset (TEXT or VISION).
-Selecting a skill in the graph will display the details of that instance of the skill in the right pane, including it's definition, errors or warnings, and execution history. The **Skill Graph** is where you will select which skill to debug or enhance. When you select a skill, its details will be displayed in the skill details pane to the right of the graph.
+Selecting a skill in the graph will display the details of that instance of the skill in the right pane, including its definition, errors or warnings, and execution history. The **Skill Graph** is where you will select which skill to debug or enhance. The details pane to the right is where you edit and explore.
:::image type="content" source="media/cognitive-search-debug/skills-graph.png" alt-text="Screenshot of Skills Graph tab." border="true":::

### Skill details pane
-When you select an object in the **Skill Graph**, the adjacent pane displays a set of areas for working with it. An illustration of the details pane can be found in the previous screenshot.
+When you select an object in the **Skill Graph**, the adjacent pane provides interactive work areas in a tabbed layout. An illustration of the details pane can be found in the previous screenshot.
-In this pane, select a skill to review and edit its composition through **Skill Settings**, **Skill JSON Editor**, and **Executions**:
+Skill details includes the following areas:
-+ Skill Settings shows a formatted version of the skill definition.
-+ Skill JSON Editor shows the raw JSON document of the definition.
-+ Executions shows the number of times a skill was executed.
-+ Errors and warnings shows the messages generated upon session start or refresh.
++ **Skill Settings** shows a formatted version of the skill definition.
++ **Skill JSON Editor** shows the raw JSON document of the definition.
++ **Executions** shows the number of times a skill was executed.
++ **Errors and warnings** shows the messages generated upon session start or refresh.
-On the Executions pane, select the **`</>`** symbol to open the [**Expression Evaluator**](#expression-evaluator) used for viewing and editing the expressions of the skills inputs and outputs.
+On Executions or Skill Settings, select the **`</>`** symbol to open the [**Expression Evaluator**](#expression-evaluator) used for viewing and editing the expressions of the skill's inputs and outputs.
Nested input controls in Skill Settings can be used to build complex shapes for [projections](knowledge-store-projection-overview.md), [output field mappings](cognitive-search-output-field-mapping.md) for a complex type field, or an input to a skill. When used with the Expression Evaluator, nested inputs provide an easy test and validate expression builder.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-how-to-debug-skillset.md
Last updated 12/31/2021
Start a debug session to identify and resolve errors, validate changes, and push changes to a published skillset in your Azure Cognitive Search service.
-A debug session is a cached indexer and skillset execution, scoped to a single document, that you can edit and test interactively. If you are unfamiliar with how a debug session works, see [Debug sessions in Azure Cognitive Search](cognitive-search-debug-session.md).
+A debug session is a cached indexer and skillset execution, scoped to a single document, in which you can edit and test your changes interactively. If you are unfamiliar with how a debug session works, see [Debug sessions in Azure Cognitive Search](cognitive-search-debug-session.md). To practice a debug workflow with a sample document, see [Tutorial: Debug sessions](cognitive-search-tutorial-debug-sessions.md).
> [!Important]
-> Debug sessions is a preview portal feature, provided under [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Debug sessions is a preview portal feature, provided under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
Debug sessions work with all generally available data sources and most preview d
1. Select the indexer that drives the skillset you want to debug. Copies of both the indexer and skillset are used to create the session.
-1. Choose a document. The session will default to the first document in the data source, but you can also choose which document to step through.
+1. Choose a document. The session will default to the first document in the data source, but you can also choose which document to step through.
+
+ If your document resides in a blob container in the same storage account used to cache your debug session, you can copy the document URL from the blob property page in the portal.
+
+ :::image type="content" source="media/cognitive-search-debug/copy-blob-url.png" alt-text="Screenshot of the URI property in blob storage." border="true":::
1. Optionally, specify any indexer execution settings that should be used to create the session. Any indexer options that you specify in a debug session have no effect on the indexer itself.
Debug sessions work with all generally available data sources and most preview d
The debug session begins by executing the indexer and skillset on the selected document. The document's content and metadata created will be visible and available in the session.
-## Check field mappings
+## Start with errors and warnings
+
+Indexer execution history in the portal gives you the full error and warning list for all documents. In a debug session, the errors and warnings will be limited to one document. You'll work through this list, make your changes, and then return to the list to verify whether issues are resolved.
+
+To view the messages, select a skill in **AI Enrichment > Skill Graph** and then select **Errors/Warnings** in the details pane.
+
+As a best practice, resolve problems with inputs before moving on to outputs.
+
+To verify that a modification resolves an error, follow these steps:
+
+1. Select **Save** in Skill Details to preserve your changes.
+
+1. Select **Run** in the session window to invoke skillset execution using the modified definition.
-If you're debugging a skillset that isn't producing output, one of the first things to check is the field maps that specify how content moves out of the pipeline and into a search index.
+1. Return to **Errors/Warnings** to see if the count is reduced. The list will not be refreshed until you open the tab.
+
+## View content of enrichment nodes
+
+AI enrichment pipelines extract or infer information and structure from source documents, creating an enriched document in the process. An enriched document is first created during document cracking and populated with a root node (`/document`) plus nodes for any content that is directly ported from the data source (such as a document key) and metadata. Additional nodes are created by skills during skill execution, where each skill output adds a new node to the enrichment tree.
+
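For intuition, an enrichment tree can be pictured as a JSON-like structure such as the following. This is purely illustrative rather than a literal serialization; `content` and `normalized_images` are standard nodes, while `keyphrases` stands in for a hypothetical skill output:

```json
{
  "document": {
    "metadata_storage_name": "contract.pdf",
    "content": "Text extracted during document cracking...",
    "normalized_images": [
      { "data": "<image bytes>", "text": "Text recognized by OCR" }
    ],
    "keyphrases": [ "service agreement", "renewal date" ]
  }
}
```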
+Enriched documents are internal, but a debug session gives you access to the content produced during skill execution. To view the content or output of each skill, follow these steps:
1. Start with the default views: **AI enrichment > Skill Graph**, with the graph type set to **Dependency Graph**.
-1. Select **Field Mappings** near the top. You should find at least the document key that uniquely identifies and associates each search document in the search index with it's source document in the data source.
+1. Select a skill.
- If you are importing raw content straight from the data source, bypassing enrichment, you should find those fields in **Field Mappings**.
+1. In the details pane to the right, select **Executions**, select an OUTPUT, and then open the Expression Evaluator (**`</>`**) to view the expression and its result.
-1. Select **Output Field Mappings** at the bottom of the graph. Here you will find mappings from skill outputs to target fields in the search index. Unless you used the Import Data wizard, output field mappings are defined manually and could be incomplete or mistyped.
+ :::image type="content" source="media/cognitive-search-debug/enriched-doc-output-expression.png" alt-text="Screenshot of a skill execution showing output values." border="true":::
- Verify that the fields in **Output Field Mappings** exist in the search index as specified, checking for spelling and [enrichment node path syntax](cognitive-search-concept-annotations-syntax.md).
+1. Alternatively, open **AI enrichment > Enriched Data Structure** to scroll down the list of nodes. The list includes potential and actual nodes, with a column for output, and another column that indicates the upstream object used to produce the output.
- :::image type="content" source="media/cognitive-search-debug/output-field-mappings.png" alt-text="Screenshot of the Output Field Mappings node and details." border="true":::
+ :::image type="content" source="media/cognitive-search-debug/enriched-doc-output.png" alt-text="Screenshot of enriched document showing output values." border="true":::
-## Check skills
+## Edit skill definitions
If the field mappings are correct, check individual skills for configuration and content. If a skill fails to produce output, it might be missing a property or parameter, which can be determined through error and validation messages.
The following steps show you how to get information about a skill.
1. In **AI enrichment > Skill Graph**, select a skill. The Skill Details pane opens to the right.
-1. Select **Executions** to show which inputs and outputs were used during skill execution.
+1. Edit a skill definition using either approach:
- :::image type="content" source="media/cognitive-search-debug/skill-input-output-detection.png" alt-text="Screenshot of Skill graph, details, and execution tab inputs and outputs." border="true":::
+ + **Skill Settings** if you prefer a visual editor
+ + **Skill JSON Editor** to edit the JSON document directly
-1. Select **`</>`** Expression Evaluator to show the values returned by the skill.
+1. Check the [path syntax for referencing nodes](cognitive-search-concept-annotations-syntax.md) in an enrichment tree. Inputs are usually one of the following (a sample skill definition follows this list):
+
+ + `/document/content` for chunks of text. This node is populated from the blob's content property.
+ + `/document/merged_content` for chunks of text in skillsets that include the Text Merge skill.
+ + `/document/normalized_images/*` for text that is recognized or inferred from images.
+
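The following is a hedged sketch of how these paths appear in a skill definition. The key phrase skill and its input names are real, but `/document/language` assumes an upstream language detection skill in the same skillset:

```json
{
  "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/merged_content" },
    { "name": "languageCode", "source": "/document/language" }
  ],
  "outputs": [
    { "name": "keyPhrases", "targetName": "keyphrases" }
  ]
}
```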
+## Check field mappings
+
+If skills produce output but the search index is empty, check the field mappings that specify how content moves out of the pipeline and into a search index.
+
+1. Start with the default views: **AI enrichment > Skill Graph**, with the graph type set to **Dependency Graph**.
+
+1. Select **Field Mappings** near the top. You should find at least the document key that uniquely identifies and associates each search document in the search index with its source document in the data source.
+
+ If you are importing raw content straight from the data source, bypassing enrichment, you should find those fields in **Field Mappings**.
+
+1. Select **Output Field Mappings** at the bottom of the graph. Here you will find mappings from skill outputs to target fields in the search index. Unless you used the Import Data wizard, output field mappings are defined manually and could be incomplete or mistyped. A JSON example follows these steps.
+
+ Verify that the fields in **Output Field Mappings** exist in the search index as specified, checking for spelling and [enrichment node path syntax](cognitive-search-concept-annotations-syntax.md).
+
+ :::image type="content" source="media/cognitive-search-debug/output-field-mappings.png" alt-text="Screenshot of the Output Field Mappings node and details." border="true":::
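For reference, the **Output Field Mappings** pane mirrors the `outputFieldMappings` section of the indexer definition, which takes roughly the following JSON shape (a hedged fragment; the field names are hypothetical):

```json
{
  "outputFieldMappings": [
    { "sourceFieldName": "/document/keyphrases", "targetFieldName": "keyphrases" },
    { "sourceFieldName": "/document/normalized_images/*/text", "targetFieldName": "ocrText" }
  ]
}
```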
## Next steps

Now that you understand the layout and capabilities of the Debug Sessions visual editor, try the tutorial for a hands-on experience.

> [!div class="nextstepaction"]
-> [Explore Debug sessions feature tutorial](./cognitive-search-tutorial-debug-sessions.md)
+> [Tutorial: Explore Debug sessions](./cognitive-search-tutorial-debug-sessions.md)
search Cognitive Search Tutorial Debug Sessions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-tutorial-debug-sessions.md
Title: Debug skillsets
+ Title: 'Tutorial: Debug skillsets'
description: Debug sessions (preview) is an Azure portal tool used to find, diagnose, and repair problems in a skillset.
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-sharepoint-online.md
When a system-assigned managed identity is enabled, Azure creates an identity fo
If the SharePoint Online site is in the same tenant as the search service you will need to enable the system-assigned managed identity for the search service. If the SharePoint Online site is in a different tenant from the search service, system-assigned managed identity doesn't need to be enabled.
-![Enable system assigned managed identity](media/search-howto-index-sharepoint-online/enable-managed-identity.png "Enable system assigned managed identity")
After selecting **Save** you will see an Object ID that has been assigned to your search service.
-![System assigned managed identity](media/search-howto-index-sharepoint-online/system-assigned-managed-identity.png "System assigned managed identity")
### Step 2: Decide which permissions the indexer requires
The SharePoint Online indexer will use this Azure AD application for authenticat
+ **Delegated - Sites.Read.All**
+ **Delegated - User.Read**
- ![Delegated API permissions](media/search-howto-index-sharepoint-online/delegated-api-permissions.png "Delegated API permissions")
+ :::image type="content" source="media/search-howto-index-sharepoint-online/delegated-api-permissions.png" alt-text="Delegated API permissions":::
Delegated permissions allow the search client to connect to SharePoint Online under the security identity of the current user.
The SharePoint Online indexer will use this Azure AD application for authenticat
+ **Application - Files.Read.All**
+ **Application - Sites.Read.All**
- ![Application API permissions](media/search-howto-index-sharepoint-online/application-api-permissions.png "Application API permissions")
+ :::image type="content" source="media/search-howto-index-sharepoint-online/application-api-permissions.png" alt-text="Application API permissions":::
Using application permissions means that the indexer will access the SharePoint site in a service context. So when you run the indexer it will have access to all content in the SharePoint Online tenant, which requires tenant admin approval. A client secret is also required for authentication. Setting up the client secret is described later in this article.
The SharePoint Online indexer will use this Azure AD application for authenticat
Tenant admin consent is required when using application API permissions. Some tenants are locked down in such a way that tenant admin consent is required for delegated API permissions as well. If either of these are the case, you'll need to have a tenant admin grant consent for this Azure AD application before creating the indexer.
- ![Azure AD app grant admin consent](media/search-howto-index-sharepoint-online/aad-app-grant-admin-consent.png "Azure AD app grant admin consent")
+ :::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-grant-admin-consent.png" alt-text="Azure AD app grant admin consent":::
1. Select the **Authentication** tab. Set **Allow public client flows** to **Yes** then select **Save**.
1. Select **+ Add a platform**, then **Mobile and desktop applications**, then check `https://login.microsoftonline.com/common/oauth2/nativeclient`, then **Configure**.
- ![Azure AD app authentication configuration](media/search-howto-index-sharepoint-online/aad-app-authentication-configuration.png "Azure AD app authentication configuration")
+ :::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-authentication-configuration.png" alt-text="Azure AD app authentication configuration":::
1. (Application API Permissions only) To authenticate to the Azure AD application using application permissions, the indexer requires a client secret.

+ Select **Certificates & Secrets** from the menu on the left, then **Client secrets**, then **New client secret**
- ![New client secret](media/search-howto-index-sharepoint-online/application-client-secret.png "New client secret")
+ :::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret.png" alt-text="New client secret":::
+ In the menu that pops up, enter a description for the new client secret. Adjust the expiration date if necessary. If the secret expires it will need to be recreated and the indexer needs to be updated with the new secret.
- ![Setup client secret](media/search-howto-index-sharepoint-online/application-client-secret-setup.png "Setup client secret")
+ :::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret-setup.png" alt-text="Setup client secret":::
+ The new client secret will appear in the secret list. Once you navigate away from the page the secret will no longer be visible, so copy it using the copy button and save it in a secure location.
- ![Copy client secret](media/search-howto-index-sharepoint-online/application-client-secret-copy.png "Copy client secret")
+ :::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret-copy.png" alt-text="Copy client secret":::
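With the application (client) ID and this client secret in hand, the data source you create later takes roughly the following REST shape. This is a hedged sketch: the names and site URL are placeholders, and the connection-string keys should be confirmed against the current preview documentation:

```json
{
  "name": "sharepoint-datasource",
  "type": "sharepoint",
  "credentials": {
    "connectionString": "SharePointOnlineEndpoint=https://contoso.sharepoint.com/sites/mysite;ApplicationId=<client id>;ApplicationSecret=<client secret>;TenantId=<tenant id>"
  },
  "container": {
    "name": "defaultSiteLibrary"
  }
}
```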
<a name="create-data-source"></a>
There are a few steps to creating the indexer:
1. Provide the code from the error message.
- ![Enter device code](media/search-howto-index-sharepoint-online/enter-device-code.png "Enter device code")
+ :::image type="content" source="media/search-howto-index-sharepoint-online/enter-device-code.png" alt-text="Enter device code":::
1. The SharePoint indexer will access the SharePoint content as the signed-in user. The user that logs in during this step will be that signed-in user. So, if you log in with a user account that doesn't have access to a document in the Document Library that you want to index, the indexer won't have access to that document.
There are a few steps to creating the indexer:
1. Approve the permissions that are being requested.
- ![Approve API permissions](media/search-howto-index-sharepoint-online/aad-app-approve-api-permissions.png "Approve API permissions")
+ :::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-approve-api-permissions.png" alt-text="Approve API permissions":::
1. Resend the indexer create request. This time the request should succeed.
There are a few steps to creating the indexer:
} ```
+> [!NOTE]
+> If the Azure AD application requires admin approval and was not approved before logging in, you may see the following screen. [Admin approval](../active-directory/manage-apps/grant-admin-consent.md) is required to continue.
+
### Step 7: Check the indexer status

After the indexer has been created you can check the indexer status by making the following request.
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-what-is-azure-search.md
Previously updated : 09/28/2021 Last updated : 01/03/2022

# What is Azure Cognitive Search?

Azure Cognitive Search ([formerly known as "Azure Search"](whats-new.md#new-service-name)) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
-Search is foundational to any app that surfaces text content to users, with common scenarios including catalog or document search, online retail, or knowledge mining for data science.
+Search is foundational to any app that surfaces text content to users, with common scenarios including catalog or document search, online retail, or data exploration.
When you create a search service, you'll work with the following capabilities:
Azure Cognitive Search is well suited for the following application scenarios:
+ Easily implement search-related features: relevance tuning, faceted navigation, filters (including geo-spatial search), synonym mapping, and autocomplete.
-+ Transform large undifferentiated text or image files, or application files stored in Azure Blob Storage or Cosmos DB, into searchable JSON documents. This is achieved during index through [cognitive skills](cognitive-search-concept-intro.md) that add external processing.
++ Transform large undifferentiated text or image files, or application files stored in Azure Blob Storage or Cosmos DB, into searchable JSON documents. This is achieved during indexing through [cognitive skills](cognitive-search-concept-intro.md) that add external processing.
+ Add linguistic or custom text analysis. If you have non-English content, Azure Cognitive Search supports both Lucene analyzers and Microsoft's natural language processors. You can also configure analyzers to achieve specialized processing of raw content, such as filtering out diacritics, or recognizing and preserving patterns in strings.
For more information about specific functionality, see [Features of Azure Cognit
An end-to-end exploration of core search features can be accomplished in four steps:
-1. [**Choose a tier**](search-sku-tier.md). One free search service is allowed per subscription. All quickstarts can be completed on the free tier. For more capacity and capabilities, you will need a [billable tier](https://azure.microsoft.com/pricing/details/search/).
+1. [**Decide on a tier**](search-sku-tier.md). One free search service is allowed per subscription. All quickstarts can be completed on the free tier. For more capacity and capabilities, you will need a [billable tier](https://azure.microsoft.com/pricing/details/search/).
1. [**Create a search service**](search-create-service-portal.md) in the Azure portal.
-1. [**Start with Import data wizard**](search-get-started-portal.md). Choose a built-in sample data source to create, load, and query an index in minutes.
+1. [**Start with Import data wizard**](search-get-started-portal.md). Choose a built-in sample or a supported data source to create, load, and query an index in minutes.
1. [**Finish with Search Explorer**](search-explorer.md), using a portal client to query the search index you just created.
-Alternatively, you can create, load, and query a search index in separate steps:
+Alternatively, you can create, load, and query a search index in atomic steps:
1. [**Create a search index**](search-what-is-an-index.md) using the portal, [REST API](/rest/api/searchservice/create-index), [.NET SDK](search-howto-dotnet-sdk.md), or another SDK. The index schema defines the structure of searchable content.
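For example, a minimal index schema submitted in the body of a Create Index request might look like the following sketch (the index and field names are hypothetical):

```json
{
  "name": "hotels-sample",
  "fields": [
    { "name": "hotelId", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "description", "type": "Edm.String", "searchable": true, "analyzer": "en.lucene" },
    { "name": "category", "type": "Edm.String", "filterable": true, "facetable": true }
  ]
}
```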
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/dns-normalization-schema.md
The fields below are specific to DNS events, although many are similar to fields
| <a name=UrlCategory></a>**UrlCategory** | Optional | String | A DNS event source may also look up the category of the requested Domains. The field is called **_UrlCategory_** to align with the Microsoft Sentinel network schema. <br><br>**_DomainCategory_** is added as an alias that's fitting to DNS. <br><br>Example: `Educational \\ Phishing` |
| **DomainCategory** | Optional | Alias | Alias to [UrlCategory](#UrlCategory). |
| **ThreatCategory** | Optional | String | If a DNS event source also provides DNS security, it may also evaluate the DNS event. For example, it may search for the IP address or domain in a threat intelligence database, and may assign the domain or IP address with a Threat Category. |
-| **EventSeverity** | Optional | String | If a DNS event source also provides DNS security, it may evaluate the DNS event. For example, it may search for the IP address or domain in a threat intelligence database, and may assign a severity based on the evaluation. <br><br>Example: `Informational`|
| <a name="dnsnetworkduration"></a>**DnsNetworkDuration** | Optional | Integer | The amount of time, in milliseconds, for the completion of the DNS request.<br><br>Example: `1500` |
| **Duration** | Alias | | Alias to [DnsNetworkDuration](#dnsnetworkduration) |
| **DnsFlagsAuthenticated** | Optional | Boolean | The DNS `AD` flag, which is related to DNSSEC, indicates in a response that all data included in the answer and authority sections of the response have been verified by the server according to the policies of that server. See [RFC 3655 Section 6.1](https://tools.ietf.org/html/rfc3655#section-6.1) for more information. |
sentinel Fusion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/fusion.md
Microsoft Sentinel uses Fusion, a correlation engine based on scalable machine l
Customized for your environment, this detection technology not only reduces [false positive](false-positives.md) rates but can also detect attacks with limited or missing information.
-Since Fusion correlates multiple signals from various products to detect advanced multistage attacks, successful Fusion detections are presented as **Fusion incidents** on the Microsoft Sentinel **Incidents** page and not as **alerts**, and are stored in the *Incidents* table in **Logs** and not in the *SecurityAlerts* table.
+Since Fusion correlates multiple signals from various products to detect advanced multistage attacks, successful Fusion detections are presented as **Fusion incidents** on the Microsoft Sentinel **Incidents** page and not as **alerts**, and are stored in the *SecurityIncident* table in **Logs** and not in the *SecurityAlert* table.
### Configure Fusion
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/network-normalization-schema.md
Fields common to all schemas are described in the [ASIM schema overview](normali
| Field | Class | Type | Description |
| ----- | ----- | ---- | ----------- |
| **EventCount** | Mandatory | Integer | Netflow sources support aggregation, and the **EventCount** field should be set to the value of the Netflow **FLOWS** field. For other sources, the value is typically set to `1`. |
-| **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, supported values include:<br>- `NetworkConnection`<br>- `NetworkSession` |
+| **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, the value should be `NetworkSession`. |
| **EventSubType** | Optional | String | Additional description of the event type, if applicable. <br> For Network Session records, supported values include:<br>- `Start`<br>- `End` |
| **EventSchema** | Mandatory | String | The name of the schema documented here is `NetworkSession`. |
| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.1`. |
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-about-schemas.md
The following fields are defined by ASIM for all schemas:
| **EventOriginalUid** | Optional | String | A unique ID of the original record, if provided by the source.<br><br>Example: `69f37748-ddcd-4331-bf0f-b137f1ea83b` |
| **EventOriginalType** | Optional | String | The original event type or ID, if provided by the source. For example, this field will be used to store the original Windows event ID.<br><br>Example: `4624` |
| <a name="eventoriginalresultdetails"></a>**EventOriginalResultDetails** | Optional | String | The original result details provided by the source. This value is used to derive [EventResultDetails](#eventresultdetails), which should have only one of the values documented for each schema. |
+| <a name="eventseverity"></a>**EventSeverity** | Enumerated | String | The severity of the event. Valid values are: `Informational`, `Low`, `Medium`, or `High`. |
+| <a name="eventoriginalseverity"></a>**EventOriginalSeverity** | Optional | String | The original severity as provided by the source. This value is used to derive [EventSeverity](#eventseverity). |
| <a name="eventproduct"></a>**EventProduct** | Mandatory | String | The product generating the event. <br><br>Example: `Sysmon`<br><br>**Note**: This field might not be available in the source record. In such cases, this field must be set by the parser. |
| **EventProductVersion** | Optional | String | The version of the product generating the event. <br><br>Example: `12.1` |
| <a name="eventvendor"></a>**EventVendor** | Mandatory | String | The vendor of the product generating the event. <br><br>Example: `Microsoft` <br><br>**Note**: This field might not be available in the source record. In such cases, this field must be set by the parser. |
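To make the severity fields concrete, a normalized record could carry values like the following. The sample is rendered as JSON purely for illustration, and the mapping of `warning` to `Low` is an assumed parser decision:

```json
{
  "EventVendor": "Microsoft",
  "EventProduct": "Sysmon",
  "EventOriginalSeverity": "warning",
  "EventSeverity": "Low"
}
```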
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-availability-zones.md
Last updated 5/10/2021

# Deploy a Service Fabric managed cluster across availability zones

Availability Zones in Azure are a high-availability offering that protects your applications and data from datacenter failures. An Availability Zone is a unique physical location equipped with independent power, cooling, and networking within an Azure region. Service Fabric managed cluster supports deployments that span across multiple Availability Zones to provide zone resiliency. This configuration will ensure high-availability of the critical system services and your applications to protect from single-points-of-failure. Azure Availability Zones are only available in select regions. For more information, see [Azure Availability Zones Overview](../availability-zones/az-overview.md).
To enable a zone resilient Azure Service Fabric managed cluster, you must includ
"zonalResiliency": "true" } ```+
+## Migrate an existing non-zone resilient cluster to Zone Resilient (Preview)
+Existing Service Fabric managed clusters that do not span availability zones can now be migrated in place to span them. Supported scenarios include clusters created in regions that have three availability zones, as well as clusters in regions where three availability zones are made available post-deployment.
+
+>[!NOTE]
+>Availability Zone spanning is only available on Standard SKU clusters and requires three availability zones in the region.
+
+>[!NOTE]
+>Migration to a zone resilient configuration can cause a brief loss of external connectivity through the load balancer, but will not affect cluster health. This occurs when a new Public IP needs to be created in order to make the networking resilient to Zone failures. Please plan the migration accordingly.
+
+The following steps are required to migrate a cluster to be zone resilient:
+
+* Use apiVersion 2021-11-01-preview or higher
+* Add a new primary node type to the cluster with the zones parameter in the node type set to ["1", "2", "3"], as shown below:
+```json
+{
+ "apiVersion": "2021-11-01-preview",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ ...
+ "isPrimary": true,
+ "zones": ["1", "2", "3"]
+ ...
+ }
+}
+```
+
+* A brief period of unreachability to the cluster can occur during the step above.
+* Add new secondary node type(s) with same zones parameter as required. Skip if you have no secondary node type.
+* Migrate existing services from the old node types to the new ones. [Using placement properties is recommended](./service-fabric-cluster-resource-manager-cluster-description.md).
+* Remove the old node types from the cluster using [Portal or cmdlet](./how-to-managed-cluster-modify-node-type.md). Make sure to remove old node types from your template.
+* Set `zonalResiliency` to `true` in the cluster ARM template, as in the snippet below, and do a deployment to mark the cluster as zone resilient and ensure all new node type deployments span across availability zones.
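A hedged sketch of that final step, following the same conventions as the node type snippet above:

```json
{
  "apiVersion": "2021-11-01-preview",
  "type": "Microsoft.ServiceFabric/managedclusters",
  "name": "[parameters('clusterName')]",
  "location": "[resourcegroup().location]",
  "properties": {
    ...
    "zonalResiliency": true
  }
}
```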
[sf-architecture]: ./media/service-fabric-cross-availability-zones/sf-cross-az-topology.png
[sf-multi-az-arch]: ./media/service-fabric-cross-availability-zones/sf-multi-az-topology.png
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
This article summarizes support and prerequisites for disaster recovery of Azure
**CLI** | Not currently supported
-## Resource support
+## Resource move/migrate support
**Resource action** | **Details** |
Secure transfer option | Supported
Write accelerator enabled disks | Not supported |
Tags | Supported | User-generated tags are replicated every 24 hours.
Soft delete | Not supported | Soft delete is not supported because once it is enabled on a storage account, it increases cost. ASR performs very frequent creates/deletes of log files while replicating, causing costs to increase.
+iSCSI disks | Not supported | ASR may be used to migrate or fail over iSCSI disks into Azure. However, iSCSI disks are not supported for Azure to Azure replication and failover/failback.
>[!IMPORTANT]
> To avoid performance issues, make sure that you follow VM disk scalability and performance targets for [managed disks](../virtual-machines/disks-scalability-targets.md). If you use default settings, Site Recovery creates the required disks and storage accounts, based on the source configuration. If you customize and select your own settings, follow the disk scalability and performance targets for your source VMs.
storage Quickstart Blobs Javascript Browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/quickstart-blobs-javascript-browser.md
Additional resources:
- [Node.js](https://nodejs.org)
- [Microsoft Visual Studio Code](https://code.visualstudio.com)
- A Visual Studio Code extension for browser debugging, such as:
- - [Debugger for Microsoft Edge](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-edge)
+ - [Debugger for Microsoft Edge](https://devblogs.microsoft.com/visualstudio/debug-javascript-in-microsoft-edge-from-visual-studio/)
  - [Debugger for Chrome](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome)
  - [Debugger for Firefox](https://marketplace.visualstudio.com/items?itemName=firefox-devtools.vscode-firefox-debug)
storage Storage Quickstart Blobs Javascript Client Libraries Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-quickstart-blobs-javascript-client-libraries-legacy.md
In this quickstart, you learn to manage blobs by using JavaScript code running e
- An Azure Storage account. [Create a storage account](../common/storage-account-create.md). - A local web server. This article uses [Node.js](https://nodejs.org) to open a basic server. - [Visual Studio Code](https://code.visualstudio.com).-- A VS Code extension for browser debugging, such as [Debugger for Chrome](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome) or [Debugger for Microsoft Edge](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-edge).
+- A VS Code extension for browser debugging, such as [Debugger for Chrome](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome) or [Debugger for Microsoft Edge](https://devblogs.microsoft.com/visualstudio/debug-javascript-in-microsoft-edge-from-visual-studio/).
## Setting up storage account CORS rules
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-redundancy.md
LRS is a good choice for the following scenarios:
Zone-redundant storage (ZRS) replicates your Azure Storage data synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS offers durability for Azure Storage data objects of at least 99.9999999999% (12 9's) over a given year.
-With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS re-pointing. These updates may affect your application if you access data before the updates have completed. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
+With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. No remounting of Azure file shares from the connected clients is required. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS re-pointing. These updates may affect your application if you access data before the updates have completed. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
A write request to a storage account that is using ZRS happens synchronously. The write operation returns successfully only after the data is written to all replicas across the three availability zones.
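For instance, choosing ZRS at provisioning time is a matter of the SKU name. A minimal ARM sketch, assuming a hypothetical account name and a region that supports availability zones:

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2021-06-01",
  "name": "mystorageaccount",
  "location": "westeurope",
  "sku": { "name": "Standard_ZRS" },
  "kind": "StorageV2",
  "properties": {}
}
```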
storage File Sync Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-monitoring.md
description: Review how to monitor your Azure File Sync deployment by using Azur
Previously updated : 04/13/2021 Last updated : 01/03/2022
The following metrics for Azure File Sync are available in Azure Monitor:
| Metric name | Description | |-|-|
-| Bytes synced | Size of data transferred (upload and download).<br><br>Unit: Bytes<br>Aggregation Type: Sum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
+| Bytes synced | Size of data transferred (upload and download).<br><br>Unit: Bytes<br>Aggregation Type: Average, Sum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
| Cloud tiering cache hit rate | Percentage of bytes, not whole files, that have been served from the cache vs. recalled from the cloud.<br><br>Unit: Percentage<br>Aggregation Type: Average<br>Applicable dimensions: Server Endpoint Name, Server Name, Sync Group Name |
-| Cloud tiering recall size | Size of data recalled.<br><br>Unit: Bytes<br>Aggregation Type: Sum<br>Applicable dimensions: Server Name, Sync Group Name |
-| Cloud tiering recall size by application | Size of data recalled by application.<br><br>Unit: Bytes<br>Aggregation Type: Sum<br>Applicable dimensions: Application Name, Server Name, Sync Group Name |
+| Cloud tiering recall size | Size of data recalled.<br><br>Unit: Bytes<br>Aggregation Type: Average, Sum<br>Applicable dimensions: Server Name, Sync Group Name |
+| Cloud tiering recall size by application | Size of data recalled by application.<br><br>Unit: Bytes<br>Aggregation Type: Average, Sum<br>Applicable dimensions: Application Name, Server Name, Sync Group Name |
| Cloud tiering recall success rate | Percentage of recall requests that were successful.<br><br>Unit: Percentage<br>Aggregation Type: Average<br>Applicable dimensions: Server Endpoint Name, Server Name, Sync Group Name |
-| Cloud tiering recall throughput | Size of data recall throughput.<br><br>Unit: Bytes<br>Aggregation Type: Sum<br>Applicable dimensions: Server Name, Sync Group Name |
-| Files not syncing | Count of files that are failing to sync.<br><br>Unit: Count<br>Aggregation Types: Average, Sum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
-| Files synced | Count of files transferred (upload and download).<br><br>Unit: Count<br>Aggregation Type: Sum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
-| Server cache size | Size of data cached on the server.<br><br>Unit: Bytes<br>Aggregation Type: Average<br>Applicable dimension: Server Endpoint Name, Server Name, Sync Group Name |
-| Server online status | Count of heartbeats received from the server.<br><br>Unit: Count<br>Aggregation Type: Maximum<br>Applicable dimension: Server Name |
-| Sync session result | Sync session result (1=successful sync session; 0=failed sync session)<br><br>Unit: Count<br>Aggregation Types: Maximum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
+| Cloud tiering recall throughput | Size of data recall throughput.<br><br>Unit: Bytes<br>Aggregation Type: Average, Sum, Maximum, Minimum<br>Applicable dimensions: Server Name, Sync Group Name |
+| Files not syncing | Count of files that are failing to sync.<br><br>Unit: Count<br>Aggregation Types: Average<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
+| Files synced | Count of files transferred (upload and download).<br><br>Unit: Count<br>Aggregation Type: Average, Sum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
+| Server cache size | Size of data cached on the server.<br><br>Unit: Bytes<br>Aggregation Type: Average, Maximum, Minimum<br>Applicable dimension: Server Endpoint Name, Server Name, Sync Group Name |
+| Server online status | Count of heartbeats received from the server.<br><br>Unit: Count<br>Aggregation Type: Average, Count, Sum, Maximum, Minimum<br>Applicable dimension: Server Name |
+| Sync session result | Sync session result (1=successful sync session; 0=failed sync session)<br><br>Unit: Count<br>Aggregation Types: Average, Count, Sum, Maximum, Minimum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
### Alerts
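For example, a metric alert on sync errors might be declared as in the following hedged ARM sketch. The scope and alert names are placeholders, and the internal metric name `StorageSyncSyncSessionPerItemErrorsCount` (the "Files not syncing" metric) is an assumption to verify against the metric picker in the portal:

```json
{
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "files-not-syncing-alert",
  "location": "global",
  "properties": {
    "severity": 2,
    "enabled": true,
    "scopes": [
      "[resourceId('Microsoft.StorageSync/storageSyncServices', 'my-sync-service')]"
    ],
    "evaluationFrequency": "PT15M",
    "windowSize": "PT1H",
    "criteria": {
      "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
      "allOf": [
        {
          "name": "FilesNotSyncing",
          // assumed internal metric name; confirm before deploying
          "metricName": "StorageSyncSyncSessionPerItemErrorsCount",
          "operator": "GreaterThan",
          "threshold": 0,
          "timeAggregation": "Average"
        }
      ]
    },
    "actions": []
  }
}
```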
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-planning.md
The following regions require you to request access to Azure Storage before you
- South Africa West
- UAE Central
-To request access for these regions, follow the process in [this document](https://azure.microsoft.com/global-infrastructure/geographies/).
+To request access for these regions, follow the process in [this document](https://docs.microsoft.com/troubleshoot/azure/general/region-access-request-process).
## Redundancy

[!INCLUDE [storage-files-redundancy-overview](../../../includes/storage-files-redundancy-overview.md)]
storage File Sync Server Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-server-registration.md
description: Learn how to register and unregister a Windows Server with an Azure
Previously updated : 04/13/2021 Last updated : 01/03/2022
For example, you can create a new throttle limit to ensure that Azure File Sync
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
New-StorageSyncNetworkLimit -Day Monday, Tuesday, Wednesday, Thursday, Friday -StartHour 9 -EndHour 17 -LimitKbps 10000
```
+> [!NOTE]
+> To apply the network limit for 24 hours, use 0 for the -StartHour and -EndHour parameters.
You can see your limit by using the following cmdlet:
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/files-nfs-protocol.md
The status of items that appear in this table may change over time as support c
| [GRS or GZRS redundancy types](storage-files-planning.md#redundancy)| ⛔ |
| [AzCopy](../common/storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json)| ⛔ |
| Azure Storage Explorer| ⛔ |
-| Create NFS shares on existing storage accounts*| ⛔ |
| Support for more than 16 groups| ⛔ |

## Regional availability
storage Storage Files How To Create Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-how-to-create-nfs-shares.md
Now that you have created a FileStorage account and configured the networking, y
1. For **Protocol** select **NFS**.
1. For **Root Squash** make a selection. (A declarative example of these options follows the steps.)
- - Root squash (default) - Access for the remote superuser (root) is mapped to UID (65534) and GID (65534).
- - No root squash - Remote superuser (root) receives access as root.
+ - Root squash - Access for the remote superuser (root) is mapped to UID (65534) and GID (65534).
+ - No root squash (default) - Remote superuser (root) receives access as root.
- All squash - All user access is mapped to UID (65534) and GID (65534).
1. Select **Create**.
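These options can also be set declaratively when the share is created. A hedged ARM sketch, assuming a premium FileStorage account named `mypremiumaccount`:

```json
{
  "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
  "apiVersion": "2021-06-01",
  "name": "mypremiumaccount/default/nfsshare",
  "properties": {
    "enabledProtocols": "NFS",
    "rootSquash": "NoRootSquash",
    "shareQuota": 1024
  }
}
```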
synapse-analytics Quickstart Apache Spark Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-apache-spark-notebook.md
Title: 'Quickstart: Create a serverless Apache Spark pool using web tools' description: This quickstart shows how to use the web tools to create a serverless Apache Spark pool in Azure Synapse Analytics and how to run a Spark SQL query. --++
synapse-analytics Apache Spark Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-autoscale.md
Title: Automatically scale Apache Spark instances description: Use the Azure Synapse autoscale feature to automatically scale Apache Spark Instances--++
synapse-analytics Apache Spark Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-concepts.md
Title: Apache Spark core concepts description: Introduction to core concepts for Apache Spark in Azure Synapse Analytics. -+ Last updated 04/15/2020 -+
synapse-analytics Apache Spark Delta Lake Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-delta-lake-overview.md
Title: Overview of how to use Linux Foundation Delta Lake in Apache Spark for Azure Synapse Analytics description: Learn how to use Delta Lake in Apache Spark for Azure Synapse Analytics, to create, and use tables with ACID properties. -+ Last updated 07/28/2020-+ zone_pivot_groups: programming-languages-spark-all-minus-sql
synapse-analytics Apache Spark History Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-history-server.md
Title: Use the extended Spark history server to debug apps description: Use the extended Spark history server to debug and diagnose Spark applications in Azure Synapse Analytics. -+ Last updated 10/15/2020 -+
synapse-analytics Apache Spark Machine Learning Mllib Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-machine-learning-mllib-notebook.md
Title: 'Tutorial: Build a machine learning app with Apache Spark MLlib' description: A tutorial on how to use Apache Spark MLlib to create a machine learning app that analyzes a dataset by using classification through logistic regression. -+ Last updated 04/15/2020-+
synapse-analytics Apache Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-overview.md
Title: What is Apache Spark description: This article provides an introduction to Apache Spark in Azure Synapse Analytics and the different scenarios in which you can use Spark. -+ Last updated 04/15/2020 -+
synapse-analytics Apache Spark Performance Hyperspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-performance-hyperspace.md
Title: Hyperspace indexes for Apache Spark description: Performance optimization for Apache Spark using Hyperspace indexes -+ Last updated 08/12/2020 -+ zone_pivot_groups: programming-languages-spark-all-minus-sql
synapse-analytics Apache Spark Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-performance.md
Title: Optimize Spark jobs for performance description: This article provides an introduction to Apache Spark in Azure Synapse Analytics. -+ Last updated 04/15/2020-+
synapse-analytics Apache Spark What Is Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-what-is-delta-lake.md
Title: What is Delta Lake description: Overview of Delta Lake and how it works as part of Azure Synapse Analytics -+ Last updated 04/15/2020 -+
traffic-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/cli-samples.md
Title: Azure CLI Samples for Traffic Manager| Microsoft Docs
description: Learn about an Azure CLI script you can use to direct traffic across multiple regions for high application availability. documentationcenter: virtual-network-+ Last updated 10/23/2018-+
traffic-manager Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/powershell-samples.md
Title: Azure PowerShell samples for Traffic Manager| Microsoft Docs
description: With this sample, use Azure PowerShell to deploy and configure Azure Traffic Manager. documentationcenter: traffic-manager-+ Last updated 10/23/2018-+ # Azure PowerShell samples for Traffic Manager
traffic-manager Quickstart Create Traffic Manager Profile Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/quickstart-create-traffic-manager-profile-cli.md
Title: 'Quickstart: Create a profile for HA of applications - Azure CLI - Azure Traffic Manager' description: This quickstart article describes how to create a Traffic Manager profile to build a highly available web application by using Azure CLI. -+ na Last updated 04/19/2021-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
traffic-manager Quickstart Create Traffic Manager Profile Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/quickstart-create-traffic-manager-profile-powershell.md
Title: 'Quickstart: Create a profile for high availability of applications - Azure PowerShell - Azure Traffic Manager' description: This quickstart article describes how to create a Traffic Manager profile to build a highly available web application. --++ Last updated 04/19/2021
traffic-manager Quickstart Create Traffic Manager Profile Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/quickstart-create-traffic-manager-profile-template.md
Title: 'Quickstart: Create a Traffic Manager by using Azure Resource Manager template (ARM template)' description: This quickstart article describes how to create an Azure Traffic Manager profile by using Azure Resource Manager template (ARM template). --++ Last updated 09/01/2020
traffic-manager Quickstart Create Traffic Manager Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/quickstart-create-traffic-manager-profile.md
Title: 'Quickstart: Create a profile for HA of applications - Azure portal - Azure Traffic Manager' description: This quickstart article describes how to create a Traffic Manager profile to build a highly available web application using the Azure portal. --++ Last updated 04/19/2021
traffic-manager Traffic Manager Cli Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/scripts/traffic-manager-cli-websites-high-availability.md
Title: Route traffic for HA of applications - Azure CLI - Traffic Manager
description: Azure CLI script sample - Route traffic for high availability of applications documentationcenter: traffic-manager-+ tags: azure-infrastructure
na Last updated 04/26/2018-+ # Route traffic for high availability of applications using Azure CLI
traffic-manager Traffic Manager Powershell Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/scripts/traffic-manager-powershell-websites-high-availability.md
Title: Route traffic for HA of applications - Azure PowerShell - Traffic Manager
description: Azure PowerShell script sample - Route traffic for high availability of applications documentationcenter: traffic-manager-+ editor: tags: azure-infrastructure
na Last updated 04/26/2018-+
traffic-manager Traffic Manager Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-FAQs.md
Title: Azure Traffic Manager - FAQs
description: This article provides answers to frequently asked questions about Traffic Manager documentationcenter: ''-+ na Last updated 03/03/2021-+
traffic-manager Traffic Manager Configure Geographic Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-configure-geographic-routing-method.md
Title: 'Tutorial: Configure geographic traffic routing with Azure Traffic Manager' description: This tutorial explains how to configure the geographic traffic routing method using Azure Traffic Manager -+ na Last updated 10/15/2020-+ # Tutorial: Configure the geographic traffic routing method using Traffic Manager
traffic-manager Traffic Manager Configure Multivalue Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-configure-multivalue-routing-method.md
Title: Configure multivalue traffic routing - Azure Traffic Manager
description: This article explains how to configure Traffic Manager to route traffic to A/AAAA endpoints. documentationcenter: ''-+ na Last updated 09/10/2018-+ # Configure MultiValue routing method in Traffic Manager
traffic-manager Traffic Manager Configure Performance Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-configure-performance-routing-method.md
description: This article explains how to configure Traffic Manager to route tra
documentationcenter: ''-+ na Last updated 03/20/2017-+ # Configure the performance traffic routing method
traffic-manager Traffic Manager Configure Priority Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-configure-priority-routing-method.md
Title: 'Tutorial: Configure priority traffic routing with Azure Traffic Manager'
description: This tutorial explains how to configure the priority traffic routing method in Traffic Manager documentationcenter: ''-+ na Last updated 10/16/2020-+ # Tutorial: Configure priority traffic routing method in Traffic Manager
traffic-manager Traffic Manager Configure Subnet Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-configure-subnet-routing-method.md
Title: Configure subnet traffic routing - Azure Traffic Manager
description: This article explains how to configure Traffic Manager to route traffic from specific subnets. documentationcenter: ''-+ na Last updated 09/17/2018-+ # Direct traffic to specific endpoints based on user subnet using Traffic Manager
traffic-manager Traffic Manager Configure Weighted Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-configure-weighted-routing-method.md
Title: 'Tutorial: Configure weighted round-robin traffic routing with Azure Traf
description: This tutorial explains how to load balance traffic using a round-robin method in Traffic Manager documentationcenter: ''-+ na Last updated 10/19/2020-+ # Tutorial: Configure the weighted traffic routing method in Traffic Manager
traffic-manager Traffic Manager Create Rum Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-create-rum-visual-studio.md
Title: Real User Measurements with Visual Studio Mobile Center - Azure Traffic M
description: Set up your mobile application developed using Visual Studio Mobile Center to send Real User Measurements to Traffic Manager documentationcenter: traffic-manager-+ ms.devlang: java
Last updated 03/16/2018-+
traffic-manager Traffic Manager Create Rum Web Pages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-create-rum-web-pages.md
Title: Real User Measurements with web pages - Azure Traffic Manager
description: In this article, learn how to set up your web pages to send Real User Measurements to Azure Traffic Manager. documentationcenter: traffic-manager-+ Last updated 04/06/2021-+ # How to send Real User Measurements to Azure Traffic Manager using web pages
traffic-manager Traffic Manager Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-diagnostic-logs.md
Title: Enable resource logging in Azure Traffic Manager description: Learn how to enable resource logging for your Traffic Manager profile and access the log files that are created as a result. -+ na Last updated 01/25/2019-+ # Enable resource logging in Azure Traffic Manager
traffic-manager Traffic Manager Endpoint Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-endpoint-types.md
Title: Traffic Manager Endpoint Types | Microsoft Docs
description: This article explains different types of endpoints that can be used with Azure Traffic Manager documentationcenter: ''-+ na Last updated 01/21/2021-+ # Traffic Manager endpoints
traffic-manager Traffic Manager Geographic Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-geographic-regions.md
Title: Country/Region hierarchy used by geographic routing - Azure Traffic Manag
description: This article lists Country/Region hierarchy used by Azure Traffic Manager Geographic routing type documentationcenter: ''-+ na Last updated 03/22/2017-+ # Country/Region hierarchy used by Azure Traffic Manager for geographic traffic routing method
traffic-manager Traffic Manager How It Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-how-it-works.md
Title: How Azure Traffic Manager works | Microsoft Docs
description: This article will help you understand how Traffic Manager routes traffic for high performance and availability of your web applications documentationcenter: ''-+ na Last updated 03/05/2019-+ # How Traffic Manager Works
traffic-manager Traffic Manager Load Balancing Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-load-balancing-azure.md
Title: Using load-balancing services in Azure | Microsoft Docs
description: 'This tutorial shows you how to create a scenario by using the Azure load-balancing portfolio: Traffic Manager, Application Gateway, and Load Balancer.' documentationcenter: ''-+ na Last updated 10/27/2016-+ # Using load-balancing services in Azure
traffic-manager Traffic Manager Manage Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-manage-endpoints.md
Title: Manage endpoints in Azure Traffic Manager | Microsoft Docs
description: This article will help you add, remove, enable and disable endpoints from Azure Traffic Manager. documentationcenter: ''-+ na Last updated 05/08/2017-+ # Add, disable, enable, or delete endpoints
traffic-manager Traffic Manager Manage Profiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-manage-profiles.md
Title: Manage Azure Traffic Manager profiles | Microsoft Docs
description: This article helps you create, disable, enable, and delete an Azure Traffic Manager profile. documentationcenter: ''-+ na Last updated 05/10/2017-+ # Manage an Azure Traffic Manager profile
traffic-manager Traffic Manager Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-metrics-alerts.md
Title: Metrics and Alerts in Azure Traffic Manager description: In this article, learn the metrics and alerts available for Traffic Manager in Azure. -+ na Last updated 06/11/2018-+ # Traffic Manager metrics and alerts
traffic-manager Traffic Manager Nested Profiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-nested-profiles.md
description: This article explains the 'Nested Profiles' feature of Azure Traffic Manager documentationcenter: ''-+ na Last updated 10/22/2018-+ # Nested Traffic Manager profiles
traffic-manager Traffic Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-overview.md
Title: Azure Traffic Manager | Microsoft Docs description: This article provides an overview of Azure Traffic Manager. Find out if it's the right choice for load-balancing user traffic for your application. -+ na Last updated 01/19/2021-+ #Customer intent: As an IT admin, I want to learn about Traffic Manager and what I can use it for.
traffic-manager Traffic Manager Performance Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-performance-considerations.md
Title: Performance considerations for Azure Traffic Manager | Microsoft Docs
description: Understand performance on Traffic Manager and how to test performance of your website when using Traffic Manager documentationcenter: ''-+ na Last updated 03/16/2017-+ # Performance considerations for Traffic Manager
traffic-manager Traffic Manager Point Internet Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-point-internet-domain.md
Title: Point an Internet domain to Traffic Manager - Azure Traffic Manager description: This article will help you point your company domain name to a Traffic Manager domain name. -+ na Last updated 10/11/2016-+ # Point a company Internet domain to an Azure Traffic Manager domain
traffic-manager Traffic Manager Powershell Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-powershell-arm.md
Title: Using PowerShell to manage Traffic Manager in Azure
description: With this learning path, get started using Azure PowerShell for Traffic Manager. documentationcenter: na-+ na Last updated 03/16/2017-+
traffic-manager Traffic Manager Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-routing-methods.md
Title: Azure Traffic Manager - traffic routing methods description: This article helps you understand the different traffic routing methods used by Traffic Manager -+ na Last updated 01/21/2021-+ # Traffic Manager routing methods
traffic-manager Traffic Manager Rum Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-rum-overview.md
Title: Real User Measurements in Azure Traffic Manager
description: In this introduction, learn how Azure Traffic Manager Real User Measurements work. documentationcenter: traffic-manager-+ Last updated 03/16/2018-+
traffic-manager Traffic Manager Subnet Override Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-subnet-override-cli.md
Title: Azure Traffic Manager subnet override using Azure CLI | Microsoft Docs
description: This article will help you understand how Traffic Manager subnet override can be used to override the routing method of a Traffic Manager profile to direct traffic to an endpoint based upon the end-user IP address via predefined IP range to endpoint mappings. documentationcenter: ''-+ Last updated 09/18/2019-+ # Traffic Manager subnet override using Azure CLI
traffic-manager Traffic Manager Subnet Override Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-subnet-override-powershell.md
Title: Azure Traffic Manager subnet override using Azure PowerShell | Microsoft
description: This article will help you understand how Traffic Manager subnet override is used to override the routing method of a Traffic Manager profile to direct traffic to an endpoint based upon the end-user IP address via predefined IP range to endpoint mappings using Azure PowerShell. documentationcenter: ''-+ Last updated 09/18/2019-+ # Traffic Manager subnet override using Azure PowerShell
traffic-manager Traffic Manager Testing Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-testing-settings.md
Title: Verify Azure Traffic Manager settings description: In this article, learn how to verify your Traffic Manager settings and test the traffic routing method. -+ na Last updated 03/16/2017-+ # Verify Traffic Manager settings
traffic-manager Traffic Manager Traffic View Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-traffic-view-overview.md
Title: Traffic View in Azure Traffic Manager
description: In this introduction, learn how Traffic Manager Traffic View works. documentationcenter: traffic-manager-+ Last updated 01/22/2021-+
traffic-manager Traffic Manager Troubleshooting Degraded https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-troubleshooting-degraded.md
Title: Troubleshooting degraded status on Azure Traffic Manager
description: How to troubleshoot Traffic Manager profiles when they show a degraded status. documentationcenter: ''-+ na Last updated 05/03/2017-+ # Troubleshooting degraded state on Azure Traffic Manager
traffic-manager Tutorial Traffic Manager Improve Website Response https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/tutorial-traffic-manager-improve-website-response.md
Title: Tutorial - Improve website response with Azure Traffic Manager description: This tutorial article describes how to create a Traffic Manager profile to build a highly responsive website. -+ # Customer intent: As an IT Admin, I want to route traffic so I can improve website response by choosing the endpoint with lowest latency. na Last updated 10/19/2020-+ # Tutorial: Improve website response using Traffic Manager
traffic-manager Tutorial Traffic Manager Subnet Routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/tutorial-traffic-manager-subnet-routing.md
Title: Tutorial - Configure subnet traffic routing with Azure Traffic Manager
description: This tutorial explains how to configure Traffic Manager to route traffic from user subnets to specific endpoints. documentationcenter: ''-+ na Last updated 03/08/2021-+ # Tutorial: Direct traffic to specific endpoints based on user subnet using Traffic Manager
traffic-manager Tutorial Traffic Manager Weighted Endpoint Routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/tutorial-traffic-manager-weighted-endpoint-routing.md
Title: Tutorial: Route traffic to weighted endpoints - Azure Traffic Manager description: This tutorial article describes how to route traffic to weighted endpoints by using Traffic Manager. -+ # Customer intent: As an IT Admin, I want to distribute traffic based on the weight assigned to a website endpoint so that I can control the user traffic to a given website. Last updated 10/19/2020-+ # Tutorial: Control traffic routing with weighted endpoints by using Traffic Manager
virtual-machines Dcv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dcv2-series.md
The DCsv2-series virtual machines help protect the confidentiality and integrity
These machines are backed by 3.7 GHz Intel® Xeon E-2288G (Coffee Lake) with SGX technology. With Intel® Turbo Boost Max Technology 3.0, these machines can go up to 5.0 GHz. > [!NOTE]
-> Hyperthreading is disabled for added security posture. Pricing is based on the superior performance of physical vs virtual cores, as well as the unique security capabilities of DC-series.
+> Hyperthreading is disabled for added security posture. Pricing is the same as Dv5 and Dsv5-series per physical core.
Example confidential use cases include: databases, blockchain, multiparty data analytics, fraud detection, anti-money laundering, usage analytics, intelligence analysis and machine learning.
virtual-machines Dcv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dcv3-series.md
These machines are powered by the latest 3rd Generation Intel® Xeon Scalable pr
With this generation, CPU cores have increased 6x (up to a maximum of 48 physical cores), encrypted memory (EPC) has increased 1500x to 256 GB, and regular memory has increased 12x to 384 GB. All these changes substantially improve performance gen-on-gen and unlock entirely new scenarios. > [!NOTE]
-> Hyperthreading is disabled for added security posture. Pricing is based on the superior performance of physical vs virtual cores, as well as the unique security capabilities of DC-series.
+> Hyperthreading is disabled for added security posture. Pricing is the same as Dv5 and Dsv5-series per physical core.
We offer two variants, depending on whether the workload benefits from a local disk. Whether or not you choose a VM with a local disk, you can attach remote persistent disk storage to all VMs. As always, remote disk options (such as the VM boot disk) are billed separately from the VMs.
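Before choosing a variant, it can help to check which DC-series sizes a region actually offers. One option is to query size availability with the Azure CLI; a sketch, with the region as a placeholder:

```azurecli
# List DC-series sizes (including any restrictions) available in a region.
az vm list-skus --location westeurope --size Standard_DC --all --output table
```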
virtual-machines Dedicated Host General Purpose Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dedicated-host-general-purpose-skus.md
This document goes through the hardware specifications and VM packings for all g
The sizes and hardware types available for dedicated hosts vary by region. Refer to the host [pricing page](https://aka.ms/ADHPricing) to learn more.
+## Dadsv5
+### Dadsv5-Type1
+
+The Dadsv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dadsv5-Type1 runs [Dadsv5-series](dasv5-dadsv5-series.md#dadsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a Dadsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||--|-|
+| 64 | 112 | 768 GiB | D2ads v5 | 32 |
+| | | | D4ads v5 | 27 |
+| | | | D8ads v5 | 14 |
+| | | | D16ads v5 | 7 |
+| | | | D32ads v5 | 3 |
+| | | | D48ads v5 | 2 |
+| | | | D64ads v5 | 1 |
+| | | | D96ads v5 | 1 |
++
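As a hedged sketch of how a host of this SKU might be provisioned with the Azure CLI (resource names and fault-domain settings here are placeholder assumptions):

```azurecli
# Create a host group, then place a Dadsv5-Type1 dedicated host in it.
az vm host group create \
    --resource-group myResourceGroup \
    --name myHostGroup \
    --platform-fault-domain-count 1

az vm host create \
    --resource-group myResourceGroup \
    --host-group myHostGroup \
    --name myDadsv5Host \
    --sku Dadsv5-Type1 \
    --platform-fault-domain 0
```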
+## Dasv5
+### Dasv5-Type1
+
+The Dasv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dasv5-Type1 runs [Dasv5-series](dasv5-dadsv5-series.md#dasv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a Dasv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||-|-|
+| 64 | 112 | 768 GiB | D2as v5 | 32 |
+| | | | D4as v5 | 28 |
+| | | | D8as v5 | 14 |
+| | | | D16as v5 | 7 |
+| | | | D32as v5 | 3 |
+| | | | D48as v5 | 2 |
+| | | | D64as v5 | 1 |
+| | | | D96as v5 | 1 |
+
+## Ddsv5
+### Ddsv5-Type1
+
+The Ddsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Ddsv5-Type1 runs [Ddsv5-series](ddv5-ddsv5-series.md#ddsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a Ddsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||-|-|
+| 64 | 96 | 768 GiB | D2ds v5 | 32 |
+| | | | D4ds v5 | 22 |
+| | | | D8ds v5 | 11 |
+| | | | D16ds v5 | 5 |
+| | | | D32ds v5 | 2 |
+| | | | D48ds v5 | 1 |
+| | | | D64ds v5 | 1 |
+| | | | D96ds v5 | 1 |
+
+## Dsv5
+### Dsv5-Type1
+
+The Dsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 100 vCPUs, and 768 GiB of RAM. The Dsv5-Type1 runs [Dsv5-series](dv5-dsv5-series.md#dsv5-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 64 | 100 | 768 GiB | D2s v5 | 32 |
+| | | | D4s v5 | 25 |
+| | | | D8s v5 | 12 |
+| | | | D16s v5 | 6 |
+| | | | D32s v5 | 3 |
+| | | | D48s v5 | 2 |
+| | | | D64s v5 | 1 |
+| | | | D96s v5 | 1 |
+
## Dasv4
### Dasv4-Type1

The Dasv4-Type1 is a Dedicated Host SKU utilizing AMD's 2.35 GHz EPYC™ 7452 processor. It offers 64 physical cores, 96 vCPUs, and 672 GiB of RAM. The Dasv4-Type1 runs [Dasv4-series](dav4-dasv4-series.md#dasv4-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
virtual-machines Dedicated Host Memory Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dedicated-host-memory-optimized-skus.md
This document goes through the hardware specifications and VM packings for all m
The sizes and hardware types available for dedicated hosts vary by region. Refer to the host [pricing page](https://aka.ms/ADHPricing) to learn more.
+## Eadsv5
+### Eadsv5-Type1
+
+The Eadsv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Eadsv5-Type1 runs [Eadsv5-series](easv5-eadsv5-series.md#eadsv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Eadsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||--|-|
+| 64 | 96 | 768 GiB | E2ads v5 | 32 |
+| | | | E4ads v5 | 21 |
+| | | | E8ads v5 | 10 |
+| | | | E16ads v5 | 5 |
+| | | | E20ads v5 | 4 |
+| | | | E32ads v5 | 2 |
+| | | | E48ads v5 | 1 |
+| | | | E64ads v5 | 1 |
+## Easv5
+### Easv5-Type1
+
+The Easv5-Type1 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Easv5-Type1 runs [Easv5-series](easv5-eadsv5-series.md#easv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Easv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||-|-|
+| 64 | 96 | 768 GiB | E2as v5 | 32 |
+| | | | E4as v5 | 21 |
+| | | | E8as v5 | 10 |
+| | | | E16as v5 | 5 |
+| | | | E20as v5 | 4 |
+| | | | E32as v5 | 2 |
+| | | | E48as v5 | 1 |
+| | | | E64as v5 | 1 |
+
+## Edsv5
+### Edsv5-Type1
+
+The Edsv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Edsv5-Type1 runs [Edsv5-series](edv5-edsv5-series.md#edsv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Edsv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--||-|-|
+| 64 | 96 | 768 GiB | E2ds v5 | 32 |
+| | | | E4ds v5 | 21 |
+| | | | E8ds v5 | 10 |
+| | | | E16ds v5 | 5 |
+| | | | E20ds v5 | 4 |
+| | | | E32ds v5 | 2 |
+| | | | E48ds v5 | 1 |
+| | | | E64ds v5 | 1 |
+
+## Esv5
+### Esv5-Type1
+
+The Esv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Esv5-Type1 runs [Esv5-series](ev5-esv5-series.md#esv5-series) VMs.
+
+The following packing configuration outlines the max packing of uniform VMs you can put onto an Esv5-Type1 host.
+
+| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
+|-|--|||-|
+| 64 | 96 | 768 GiB | E2s v5 | 32 |
+| | | | E4s v5 | 21 |
+| | | | E8s v5 | 10 |
+| | | | E16s v5 | 5 |
+| | | | E20s v5 | 4 |
+| | | | E32s v5 | 2 |
+| | | | E48s v5 | 1 |
+| | | | E64s v5 | 1 |
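As with the general purpose SKUs, a host of one of these memory-optimized types can be provisioned with the Azure CLI; a minimal sketch, assuming an existing host group and using placeholder names:

```azurecli
# Add an Easv5-Type1 dedicated host to an existing host group.
az vm host create \
    --resource-group myResourceGroup \
    --host-group myHostGroup \
    --name myEasv5Host \
    --sku Easv5-Type1 \
    --platform-fault-domain 0
```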
+
## Easv4
### Easv4-Type1
virtual-machines Dv2 Dsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv2-dsv2-series.md
Dv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® X
[VM Generation Support](generation-2.md): Generation 1<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max temp storage throughput: IOPS/Read MBps/Write MBps | Max data disks | Throughput: IOPS | Max NICs | Expected network bandwidth (Mbps) |
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/dsc-overview.md
The portal collects the following input:
- **Data Collection**: Determines if the extension will collect telemetry. For more information, see [Azure DSC extension data collection](https://devblogs.microsoft.com/powershell/azure-dsc-extension-data-collection-2/). -- **Version**: Specifies the version of the DSC extension to install. For information about versions, see [DSC extension version history](/powershell/dsc/getting-started/azuredscexthistory).
+- **Version**: Specifies the version of the DSC extension to install. For information about versions, see [DSC extension version history](/azure/automation/automation-dsc-extension-history).
- **Auto Upgrade Minor Version**: This field maps to the **AutoUpdate** switch in the cmdlets and enables the extension to automatically update to the latest version during installation. **Yes** will instruct the extension handler to use the latest available version and **No** will force the **Version** specified to be installed. Selecting neither **Yes** nor **No** is the same as selecting **No**.
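Outside the portal, the same inputs map onto an extension deployment. A sketch with the Azure CLI; the version number and the settings keys shown follow the extension's documented schema, but treat the exact values as assumptions to verify:

```azurecli
# Deploy the DSC extension to an existing Windows VM.
# az vm extension set enables minor-version auto-upgrade by default,
# matching "Auto Upgrade Minor Version: Yes" in the portal.
az vm extension set \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --publisher Microsoft.Powershell \
    --name DSC \
    --version 2.77 \
    --settings '{"configuration": {"url": "https://example.com/MyConfig.ps1.zip", "script": "MyConfig.ps1", "function": "Main"}}'
```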
virtual-machines Infrastructure Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/infrastructure-automation.md
Learn more details about cloud-init on Azure:
## PowerShell DSC
-[PowerShell Desired State Configuration (DSC)](/powershell/dsc/overview/overview) is a management platform to define the configuration of target machines. DSC can also be used on Linux through the [Open Management Infrastructure (OMI) server](https://collaboration.opengroup.org/omi/).
+[PowerShell Desired State Configuration (DSC)](/powershell/dsc/overview) is a management platform to define the configuration of target machines. DSC can also be used on Linux through the [Open Management Infrastructure (OMI) server](https://collaboration.opengroup.org/omi/).
DSC configurations define what to install on a machine and how to configure the host. A Local Configuration Manager (LCM) engine runs on each target node and processes requested actions based on pushed configurations. A pull server is a web service that runs on a central host to store the DSC configurations and associated resources. The pull server communicates with the LCM engine on each target host to provide the required configurations and report on compliance.
Learn how to:
- [Create an Azure Image Builder template](./linux/image-builder-json.md). ## Next steps
-There are many different options to use infrastructure automation tools in Azure. You have the freedom to use the solution that best fits your needs and environment. To get started and try some of the tools built-in to Azure, see how to automate the customization of a [Linux](./linux/tutorial-automate-vm-deployment.md) or [Windows](./windows/tutorial-automate-vm-deployment.md) VM.
+There are many different options to use infrastructure automation tools in Azure. You have the freedom to use the solution that best fits your needs and environment. To get started and try some of the tools built-in to Azure, see how to automate the customization of a [Linux](./linux/tutorial-automate-vm-deployment.md) or [Windows](./windows/tutorial-automate-vm-deployment.md) VM.
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/run-command-managed.md
az vm run-command create --name "myRunCommand" --vm-name "myVM" --resource-group
This command will return a full list of previously deployed Run Commands along with their properties. ```azurecli-interactive
-az vm run-command list --name "myVM" --resource-group "myRG"
+az vm run-command list --vm-name "myVM" --resource-group "myRG"
``` ### Get execution status and results
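A sketch of retrieving the execution state of a managed Run Command; the `--instance-view` flag is an assumption to verify against your CLI version:

```azurecli-interactive
az vm run-command show --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --instance-view
```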
virtual-machines Create Vm Specialized Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/create-vm-specialized-portal.md
**Applies to:** :heavy_check_mark: Windows VMs
-There are several ways to create a virtual machine (VM) in Azure:
+There are several ways to create a virtual machine (VM) in Azure:
-- If you already have a virtual hard disk (VHD) to use or you want to copy the VHD from an existing VM to use, you can create a new VM by *attaching* the VHD to the new VM as an OS disk.
+- If you already have a virtual hard disk (VHD) to use or you want to copy the VHD from an existing VM to use, you can create a new VM by *attaching* the VHD to the new VM as an OS disk.
- You can create a new VM from the VHD of a VM that has been deleted. For example, if you have an Azure VM that isn't working correctly, you can delete the VM and use its VHD to create a new VM. You can either reuse the same VHD or create a copy of the VHD by creating a snapshot and then creating a new managed disk from the snapshot. Although creating a snapshot takes a few more steps, it preserves the original VHD and provides you with a fallback. - Take a classic VM and use the VHD to create a new VM that uses the Resource Manager deployment model and managed disks. For the best results, **Stop** the classic VM in the Azure portal before creating the snapshot.
-
-- You can create an Azure VM from an on-premises VHD by uploading the on-premises VHD and attaching it to a new VM. You use PowerShell or another tool to upload the VHD to a storage account, and then you create a managed disk from the VHD. For more information, see [Upload a specialized VHD](create-vm-specialized.md#option-2-upload-a-specialized-vhd). +
+- You can create an Azure VM from an on-premises VHD by uploading the on-premises VHD and attaching it to a new VM. You use PowerShell or another tool to upload the VHD to a storage account, and then you create a managed disk from the VHD. For more information, see [Upload a specialized VHD](create-vm-specialized.md#option-2-upload-a-specialized-vhd).
> [!IMPORTANT]
->
+>
> When you use a specialized disk to create a new VM, the new VM retains the computer name of the original VM. Other computer-specific information (e.g. CMID) is also kept and, in some cases, this duplicate information could cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on. > Thus, don't use a specialized disk if you want to create multiple VMs. Instead, for larger deployments, [create an image](capture-image-resource.md) and then [use that image to create multiple VMs](create-vm-generalized-managed.md).
-We recommend that you limit the number of concurrent deployments to 20 VMs from a single snapshot or VHD.
+We recommend that you limit the number of concurrent deployments to 20 VMs from a single snapshot or VHD.
## Copy a disk
Create a snapshot and then create a disk from the snapshot. This strategy allows
1. From the [Azure portal](https://portal.azure.com), on the left menu, select **All services**. 2. In the **All services** search box, enter **disks** and then select **Disks** to display the list of available disks. 3. Select the disk that you would like to use. The **Disk** page for that disk appears.
-4. From the menu at the top, select **Create snapshot**.
+4. From the menu at the top, select **Create snapshot**.
5. Choose a **Resource group** for the snapshot. You can use either an existing resource group or create a new one. 6. Enter a **Name** for the snapshot. 7. For **Snapshot type**, choose either **Full** or **Incremental**.
After you have the managed disk VHD that you want to use, you can create the VM
5. On the **Basics** page for the new VM, enter a **Virtual machine name** and either select an existing **Resource group** or create a new one. 6. For **Size**, select **Change size** to access the **Size** page. 7. Select a VM size row and then choose **Select**.
-8. On the **Networking** page, you can either let the portal create all new resources or you can select an existing **Virtual network** and **Network security group**. The portal always creates a new network interface and public IP address for the new VM.
-9. On the **Management** page, make any changes to the monitoring options.
-10. On the **Guest config** page, add any extensions as needed.
-11. When you're done, select **Review + create**.
-12. If the VM configuration passes validation, select **Create** to start the deployment.
+8. On the **Disks** page, you may notice that the "OS Disk Type" cannot be changed. This preselected value is configured at the point of snapshot or VHD creation and carries over to the new VM. If you need to modify the disk type, take a new snapshot from an existing VM or disk.
+9. On the **Networking** page, you can either let the portal create all new resources or you can select an existing **Virtual network** and **Network security group**. The portal always creates a new network interface and public IP address for the new VM.
+10. On the **Management** page, make any changes to the monitoring options.
+11. On the **Guest config** page, add any extensions as needed.
+12. When you're done, select **Review + create**.
+13. If the VM configuration passes validation, select **Create** to start the deployment.
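For reference, the snapshot-and-disk flow from this article can be approximated with the Azure CLI; a sketch with placeholder resource names, not a replacement for the portal steps above:

```azurecli
# Snapshot an existing OS disk, create a managed disk from the snapshot,
# then create a VM that attaches the new disk as its OS disk.
az snapshot create \
    --resource-group myResourceGroup \
    --name mySnapshot \
    --source myOsDisk

az disk create \
    --resource-group myResourceGroup \
    --name myNewOsDisk \
    --source mySnapshot

az vm create \
    --resource-group myResourceGroup \
    --name myNewVM \
    --attach-os-disk myNewOsDisk \
    --os-type windows
```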
## Next steps You can also use PowerShell to [upload a VHD to Azure and create a specialized VM](create-vm-specialized.md).--
virtual-machines Automation Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md
The control plane deployment for the [SAP deployment automation framework on Azu
## Prepare the deployment credentials
-The SAP Deployment Frameworks uses Service Principals when doing the deployment. You can create the Service Principal for the Control Plane deployment using the following steps using an account with permissions to create Service Principals:
+The SAP deployment automation framework uses Service Principals for its deployments. You can create the Service Principal for the Control Plane deployment with the following steps, using an account with permissions to create Service Principals:
```azurecli
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscrip
> - password > - tenant
-Assign the correct permissions to the Service Principal:
+Optionally assign the following permissions to the Service Principal:
```azurecli az role assignment create --assignee <appId> --role "User Access Administrator"
You can copy the sample configuration files to start testing the deployment auto
```bash cd ~/Azure_SAP_Automated_Deployment
-cp -R sap-automation/samples/WORKSPACES WORKSPACES
+cp -Rp sap-automation/samples/WORKSPACES WORKSPACES
```
New-SAPAutomationRegion -DeployerParameterfile .\DEPLOYER\MGMT-WEEU-DEP00-INFRAS
## Next step > [!div class="nextstepaction"]
-> [Configure SAP Workload Zone](automation-deploy-workload-zone.md)
+> [Configure SAP Workload Zone](automation-configure-workload-zone.md)
virtual-machines Automation Deploy System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-deploy-system.md
location="westeurope"
network_name="SAP01" # sid is a mandatory field that defines the SAP Application SID
-sid="RH7"
+sid="S15"
app_tier_vm_sizing="Production" app_tier_use_DHCP=true
You can copy the sample configuration files to start testing the deployment auto
```bash cd ~/Azure_SAP_Automated_Deployment
-cp -R sap-automation/deploy/samples/WORKSPACES WORKSPACES
+cp -Rp sap-automation/deploy/samples/WORKSPACES WORKSPACES
```
New-SAPSystem -Parameterfile DEV-WEEU-SAP01-X01.tfvars
### Output files
-The deployment will create a Ansible hosts file (`SID_hosts.yaml`) and an Ansible parameter file (`sap-parameters.yaml`) that are required input for thee Ansible playbooks.
+The deployment will create an Ansible hosts file (`SID_hosts.yaml`) and an Ansible parameter file (`sap-parameters.yaml`) that are required input for the Ansible playbooks.
## Next steps > [!div class="nextstepaction"]
virtual-machines Automation Deployment Framework https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-deployment-framework.md
# SAP deployment automation framework on Azure
-The [SAP deployment automation framework on Azure](https://github.com/Azure/sap-automation) is an open-source orchestration tool for
-deploying, installing and maintaining SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB on any of the SAP-supported operating system versions and deploy them into any Azure region.
+The [SAP deployment automation framework on Azure](https://github.com/Azure/sap-automation) is an open-source orchestration tool for deploying, installing and maintaining SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB on any of the SAP-supported operating system versions and deploy them into any Azure region. The framework uses [Terraform](https://www.terraform.io/) for infrastructure deployment, and [Ansible](https://www.ansible.com/) for the operating system and application configuration.
+
+Hashicorp [Terraform](https://www.terraform.io/) is an open-source tool for provisioning and managing cloud infrastructure.
+
+[Ansible](https://www.ansible.com/) is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. Using Ansible, you can automate deployment and configuration of resources in your environment.
The [automation framework](https://github.com/Azure/sap-automation) has two main components: - Deployment infrastructure (control plane) - SAP Infrastructure (SAP Workload)
-The dependency between the control plane and the application plane is illustrated in the diagram below.
+You will use the control plane of the SAP deployment automation framework to deploy the SAP infrastructure and the SAP application infrastructure. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas)-defined infrastructure that hosts the SAP applications.
+
+The dependency between the control plane and the application plane is illustrated in the diagram below. In a typical deployment, a single control plane is used to manage multiple SAP deployments.
:::image type="content" source="./media/automation-deployment-framework/control-plane-sap-infrastructure.png" alt-text="Diagram showing the SAP deployment automation framework's dependency between the control plane and application plane.":::
The following diagram shows the key components of the control plane and workload
:::image type="content" source="./media/automation-deployment-framework/automation-diagram-full.png" alt-text="Diagram showing the SAP deployment automation framework environment.":::
-The framework uses [Terraform](https://www.terraform.io/) for infrastructure deployment, and [Ansible](https://www.ansible.com/) for the operating system and application configuration.
-
-Hashicorp [Terraform](https://www.terraform.io/) is an open-source tool for provisioning and managing cloud infrastructure. It codifies infrastructure in configuration files that describe the topology of cloud resources.
-
-[Ansible](https://www.ansible.com/) is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. Using Ansible, you can automate deployment and configuration of resources in your environment.
- > [!NOTE] > This automation framework is based on Microsoft best practices and principles for SAP on Azure. Review the [get-started guide for SAP on Azure virtual machines (Azure VMs)](get-started.md) to understand how to use certified virtual machines and storage solutions for stability, reliability, and performance. >
The application configuration will be performed from the Ansible Controller in t
## About the control plane
-The control plane houses the infrastructure from which other environments will be deployed. Once the
-control plane is deployed, it rarely needs to be redeployed, if ever.
+The control plane houses the infrastructure from which other environments will be deployed. Once the control plane is deployed, it rarely needs to be redeployed, if ever.
The control plane provides the following services - Terraform Deployment Infrastructure