Updates from: 06/25/2021 03:10:49
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Within the [SCIM 2.0 protocol specification](http://www.simplecloud.info/#Specif
|Modify users or groups with PATCH requests|[section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Supporting PATCH ensures that groups and users are provisioned in a performant manner.|
|Retrieve a known resource for a user or group created earlier|[section 3.4.1](https://tools.ietf.org/html/rfc7644#section-3.4.1)|
|Query users or groups|[section 3.4.2](https://tools.ietf.org/html/rfc7644#section-3.4.2). By default, users are retrieved by their `id` and queried by their `username` and `externalId`, and groups are queried by `displayName`.|
-|Query user by ID and by manager|section 3.4.2|
-|Query groups by ID and by member|section 3.4.2|
|The filter [excludedAttributes=members](#get-group) when querying the group resource|section 3.4.2.5|
|Accept a single bearer token for authentication and authorization of AAD to your application.||
|Soft-deleting a user `active=false` and restoring the user `active=true`|The user object should be returned in a request whether or not the user is active. The only time the user should not be returned is when it is hard deleted from the application.|
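To make the PATCH and soft-delete requirements above concrete, here is a minimal Python sketch of the kind of request a provisioning client sends to deactivate a user. The endpoint URL and bearer token are placeholders, not values from this article.

```python
import requests

# Placeholders for illustration only; not values from this article.
SCIM_BASE = "https://scim.example.com/scim/v2"
TOKEN = "<bearer-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/scim+json"}

def soft_delete_user(user_id: str) -> None:
    """Send the RFC 7644 PatchOp that deactivates a user (restore by sending True)."""
    patch_body = {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{"op": "Replace", "path": "active", "value": False}],
    }
    resp = requests.patch(f"{SCIM_BASE}/Users/{user_id}", json=patch_body, headers=HEADERS)
    resp.raise_for_status()
```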
Within the [SCIM 2.0 protocol specification](http://www.simplecloud.info/#Specif
Use the following general guidelines when implementing a SCIM endpoint to ensure compatibility with AAD:
+##### General:
* `id` is a required property for all resources. Every response that returns a resource should ensure each resource has this property, except for `ListResponse` with zero members.
-* Response to a query/filter request should always be a `ListResponse`.
-* Groups are optional, but only supported if the SCIM implementation supports **PATCH** requests.
+* Values sent should be stored in the same format they were sent in. Invalid values should be rejected with a descriptive, actionable error message. Transformations of data should not happen between data being sent by Azure AD and data being stored in the SCIM application. (e.g. A phone number sent as 55555555555 should not be saved/returned as +5 (555) 555-5555)
* It isn't necessary to include the entire resource in the **PATCH** response.
-* Microsoft AAD only uses the following operators: `eq`, `and`
* Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). AAD emits the values of `op` as **Add**, **Replace**, and **Remove**.
* Microsoft AAD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com).
+* Support HTTPS on your SCIM endpoint.
+* Custom complex and multivalued attributes are supported, but AAD does not have many complex data structures to pull data from in these cases. Simple paired name/value type complex attributes can be mapped to easily, but flowing data to complex attributes with three or more subattributes is not well supported at this time.
+
+##### Retrieving Resources:
+* Response to a query/filter request should always be a `ListResponse`.
+* Microsoft AAD only uses the following operators: `eq`, `and`
* The attribute that the resources can be queried on should be set as a matching attribute on the application in the [Azure portal](https://portal.azure.com). For more information, see [Customizing User Provisioning Attribute Mappings](customize-application-attributes.md).
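As a concrete illustration of the retrieval bullets above, the sketch below (hypothetical endpoint and token) issues a filtered query that combines attribute comparisons with only `eq` and `and`, and shows the `ListResponse` envelope the endpoint should return even for a single match:

```python
import requests

SCIM_BASE = "https://scim.example.com/scim/v2"        # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <bearer-token>"}  # placeholder token

# Filters only combine attribute comparisons with `eq` and `and`.
params = {"filter": 'userName eq "alice@contoso.com"'}
resp = requests.get(f"{SCIM_BASE}/Users", params=params, headers=HEADERS)

# Even a single match (or zero matches) must come back as a ListResponse, e.g.:
# {
#   "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
#   "totalResults": 1,
#   "Resources": [{"id": "...", "userName": "alice@contoso.com"}]
# }
print(resp.json().get("totalResults"))
```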
+##### /Users:
* The entitlements attribute is not supported.
-* Support HTTPS on your SCIM endpoint.
-* [Schema discovery](#schema-discovery)
- * Schema discovery is not currently supported on the custom application, but it is being used on certain gallery applications. Going forward, schema discovery will be used as the sole method to add additional attributes to an existing connector.
- * If a value is not present, do not send null values.
- * Property values should be camel cased (e.g. readWrite).
- * Must return a list response.
- * The /schemas request will be made by the Azure AD SCIM client every time someone saves the provisioning configuration in the Azure Portal or every time a user lands on the edit provisioning page in the Azure Portal. Any additional attributes discovered will be surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only leads to additional target attributes being added. It will not result in attributes being removed.
+* Any attributes that are considered for user uniqueness must be usable as part of a filtered query. (e.g. if user uniqueness is evaluated for both userName and emails[type eq "work"], a GET to /Users with a filter must allow for both _userName eq "user@contoso.com"_ and _emails[type eq "work"] eq "user@contoso.com"_ queries.)
+
+##### /Groups:
+* Groups are optional, but only supported if the SCIM implementation supports **PATCH** requests.
+* Groups must have uniqueness on the 'displayName' value for the purpose of matching between Azure Active Directory and the SCIM application. This is not a requirement of the SCIM protocol, but is a requirement for integrating a SCIM service with Azure Active Directory.
+
+##### /Schemas (Schema discovery):
+
+* [Sample request/response](#schema-discovery)
+* Schema discovery is not currently supported on the custom non-gallery SCIM application, but it is being used on certain gallery applications. Going forward, schema discovery will be used as the sole method to add additional attributes to the schema of an existing gallery SCIM application.
+* If a value is not present, do not send null values.
+* Property values should be camel cased (e.g. readWrite).
+* Must return a list response.
+* The /schemas request will be made by the Azure AD SCIM client every time someone saves the provisioning configuration in the Azure Portal or every time a user lands on the edit provisioning page in the Azure Portal. Any additional attributes discovered will be surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only leads to additional target attributes being added. It will not result in attributes being removed.
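For orientation, the snippet below sketches the general shape of a `/Schemas` list response that follows the bullets above (list response, camel-cased property values, no null values). The attribute set shown is illustrative only, not the complete Azure AD schema.

```python
# A trimmed, illustrative ListResponse for /Schemas; not the complete Azure AD schema.
schemas_response = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
    "totalResults": 1,
    "Resources": [
        {
            "id": "urn:ietf:params:scim:schemas:core:2.0:User",
            "name": "User",
            "attributes": [
                {
                    "name": "userName",
                    "type": "string",
                    "multiValued": False,
                    "required": True,
                    "mutability": "readWrite",  # camel-cased property values
                    "uniqueness": "server",
                },
                # Attributes with no value are omitted entirely; null is never sent.
            ],
        }
    ],
}
```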
+ ### User provisioning and deprovisioning
The SCIM spec doesn't define a SCIM-specific scheme for authentication and autho
|Authorization method|Pros|Cons|Support|
|--|--|--|--|
-|Username and password (not recommended or supported by Azure AD)|Easy to implement|Insecure - [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984)|Supported on a case-by-case basis for gallery apps. Not supported for non-gallery apps.|
+|Username and password (not recommended or supported by Azure AD)|Easy to implement|Insecure - [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984)|Not supported for new gallery or non-gallery apps.|
|Long-lived bearer token|Long-lived tokens do not require a user to be present. They are easy for admins to use when setting up provisioning.|Long-lived tokens can be hard to share with an admin without using insecure methods such as email.|Supported for gallery and non-gallery apps.|
|OAuth authorization code grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. A real user must be present during initial authorization, adding a level of accountability.|Requires a user to be present. If the user leaves the organization, the token is invalid and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog, in addition to support for configurable auth / token URLs on the gallery app.|
|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be completely automated, and new tokens can be silently requested without user interaction.||Not supported for gallery and non-gallery apps. Support is in our backlog.|
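As a rough illustration of the long-lived bearer token row above, the following Flask-style sketch shows one way a SCIM endpoint could accept the single bearer token that Azure AD presents. The token value, route, and framework choice are assumptions for the example, not requirements from this article.

```python
import hmac
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
EXPECTED_TOKEN = "<long-lived-bearer-token>"  # placeholder, shared via the provisioning UI

@app.before_request
def require_bearer_token():
    # Azure AD sends the secret token you configured as: Authorization: Bearer <token>
    supplied = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    if not hmac.compare_digest(supplied.encode(), EXPECTED_TOKEN.encode()):
        abort(401)

@app.get("/scim/v2/Users")
def list_users():
    # Minimal empty ListResponse so connection-test style calls succeed.
    return jsonify({
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
        "totalResults": 0,
        "Resources": [],
    })
```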
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-licensing.md
Previously updated : 06/07/2021 Last updated : 06/24/2021
# Features and licenses for Azure AD Multi-Factor Authentication
-To protect user accounts in your organization, multi-factor authentication should be used. This feature is especially important for accounts that have privileged access to resources. Basic multi-factor authentication features are available to Microsoft 365 and Azure Active Directory (Azure AD) administrators for no extra cost. If you want to upgrade the features for your admins or extend multi-factor authentication to the rest of your users, you can purchase Azure AD Multi-Factor Authentication in several ways.
+To protect user accounts in your organization, multi-factor authentication should be used. This feature is especially important for accounts that have privileged access to resources. Basic multi-factor authentication features are available to Microsoft 365 and Azure Active Directory (Azure AD) global administrators for no extra cost. If you want to upgrade the features for your admins or extend multi-factor authentication to the rest of your users, you can purchase Azure AD Multi-Factor Authentication in several ways.
> [!IMPORTANT]
> This article details the different ways that Azure AD Multi-Factor Authentication can be licensed and used. For specific details about pricing and billing, see the [Azure AD Multi-Factor Authentication pricing page](https://azure.microsoft.com/pricing/details/multi-factor-authentication/).
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Continuous access evaluation is implemented by enabling services, like Exchange
- Administrator explicitly revokes all refresh tokens for a user
- High user risk detected by Azure AD Identity Protection
-This process enables the scenario where users lose access to organizational SharePoint Online files, email, calendar, or tasks, and Teams from Microsoft 365 client apps within mins after one of these critical events.
+This process enables the scenario where users lose access to organizational SharePoint Online files, email, calendar, or tasks, and Teams from Microsoft 365 client apps within minutes after one of these critical events.
> [!NOTE]
> Teams and SharePoint Online do not support user risk events yet.

### Conditional Access policy evaluation (preview)
-Exchange and SharePoint are able to synchronize key Conditional Access policies so they can be evaluated within the service itself.
+Exchange Online, SharePoint Online, Teams, and MS Graph are able to synchronize key Conditional Access policies so they can be evaluated within the service itself.
This process enables the scenario where users lose access to organizational files, email, calendar, or tasks from Microsoft 365 client apps or SharePoint Online immediately after network location changes.
This process enables the scenario where users lose access to organizational file
| : | :: | :: | :: | :: | :: |
| **SharePoint Online** | Supported | Supported | Supported | Supported | Supported |
+| | Teams web | Teams Win32 | Teams iOS | Teams Android | Teams Mac |
+| : | :: | :: | :: | :: | :: |
+| **Teams Service** | Supported | Supported | Supported | Supported | Supported |
+| **SharePoint Online** | Supported | Supported | Supported | Supported | Supported |
+| **Exchange Online** | Supported | Supported | Supported | Supported | Supported |
+ ### Client-side claim challenge

Before continuous access evaluation, clients would always try to replay the access token from their cache as long as it was not expired. With CAE, we are introducing a new case where a resource provider can reject a token even when it is not expired. In order to inform clients to bypass their cache even though the cached tokens have not expired, we introduce a mechanism called **claim challenge** to indicate that the token was rejected and a new access token needs to be issued by Azure AD. CAE requires a client update to understand claim challenge. The latest versions of the following applications support claim challenge:
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-certificate-credentials.md
Previously updated : 12/3/2020 Last updated : 06/23/2021 -+ # Microsoft identity platform application authentication certificate credentials
One form of credential that an application can use for authentication is a [JSON
## Assertion format
-To compute the assertion, you can use one of the many JWT libraries in the language of your choice - [MSAL supports this using `.WithCertificate()`](msal-net-client-assertions.md). The information is carried by the token in its [Header](#header), [Claims](#claims-payload), and [Signature](#signature).
+To compute the assertion, you can use one of the many JWT libraries in the language of your choice - [MSAL supports this using `.WithCertificate()`](msal-net-client-assertions.md). The information is carried by the token in its Header, Claims, and Signature.
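As the paragraph notes, MSAL's `.WithCertificate()` builds the assertion for you; the sketch below only illustrates what such a library produces, using PyJWT with placeholder tenant, client ID, key, and thumbprint values.

```python
import time
import uuid
import jwt  # PyJWT

# Placeholders: tenant, client ID, key, and thumbprint are assumptions for this sketch.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<application-client-id>"
PRIVATE_KEY = open("cert-private-key.pem").read()
X5T = "<base64url-encoded SHA-1 thumbprint of the certificate>"

now = int(time.time())
claims = {
    "aud": f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    "iss": CLIENT_ID,          # the client assertion is issued by the app itself
    "sub": CLIENT_ID,
    "jti": str(uuid.uuid4()),  # unique identifier to prevent replay
    "nbf": now,
    "exp": now + 600,          # short-lived assertion
}

assertion = jwt.encode(claims, PRIVATE_KEY, algorithm="RS256", headers={"x5t": X5T})
print(assertion)
```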
### Header
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-configure-publisher-domain.md
Previously updated : 07/23/2020 Last updated : 06/23/2021 -+ # How to: Configure an application's publisher domain
If your app isn't registered in a tenant, you'll only see the option to verify a
```
1. Replace the placeholder *{YOUR-APP-ID-HERE}* with the application (client) ID that corresponds to your app.
1. Host the file at: `https://{YOUR-DOMAIN-HERE}.com/.well-known/microsoft-identity-association.json`. Replace the placeholder *{YOUR-DOMAIN-HERE}* to match the verified domain.
1. Click the **Verify and save domain** button.

You're not required to maintain the resources that are used for verification after a domain has been verified. When the verification is finished, you can remove the hosted file.
You're not required to maintain the resources that are used for verification aft
If your tenant has verified domains, select one of the domains from the **Select a verified domain** dropdown.

> [!NOTE]
-> The expected `Content-Type` header that should be returned is `application/json`. You may get an error as mentioned below if you use anything else, like `application/json; charset=utf-8`:
->
+> The expected `Content-Type` header that should be returned is `application/json`. You may get an error if you use anything else, like `application/json; charset=utf-8`:
+>
> `Verification of publisher domain failed. Error getting JSON file from https:///.well-known/microsoft-identity-association. The server returned an unexpected content type header value.` >
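To avoid the content-type error described in the note, the hosted file can be served with a `Content-Type` of exactly `application/json`. The following Flask sketch is one illustrative way to do that; the route and framework are assumptions, and `{YOUR-APP-ID-HERE}` remains a placeholder.

```python
import json
from flask import Flask, Response

app = Flask(__name__)

# {YOUR-APP-ID-HERE} stays a placeholder; substitute your application (client) ID.
ASSOCIATION = {"associatedApplications": [{"applicationId": "{YOUR-APP-ID-HERE}"}]}

@app.get("/.well-known/microsoft-identity-association.json")
def identity_association():
    # Return exactly `application/json`; appending `; charset=utf-8` can fail verification.
    return Response(json.dumps(ASSOCIATION), content_type="application/json")
```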
The behavior for new applications created after May 21, 2019 will depend on the
## Implications on redirect URIs
-Applications that sign in users with any work or school account, or personal Microsoft accounts ([multi-tenant](single-and-multi-tenant-apps.md)) are subject to few restrictions when specifying redirect URIs.
+Applications that sign in users with any work or school account, or personal Microsoft accounts (multi-tenant) are subject to few restrictions when specifying redirect URIs.
### Single root domain restriction
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-create-service-principal-portal.md
Title: Create an Azure AD app & service principal in the portal
+ Title: Create an Azure AD app and service principal in the portal
-description: Create a new Azure Active Directory app & service principal to manage access to resources with role-based access control in Azure Resource Manager.
+description: Create a new Azure Active Directory app and service principal to manage access to resources with role-based access control in Azure Resource Manager.
Last updated 06/16/2021 --+ # How to: Use the portal to create an Azure AD application and service principal that can access resources
This article shows you how to use the portal to create the service principal in
> Instead of creating a service principal, consider using managed identities for Azure resources for your application identity. If your code runs on a service that supports managed identities and accesses resources that support Azure AD authentication, managed identities are a better option for you. To learn more about managed identities for Azure resources, including which services currently support it, see [What is managed identities for Azure resources?](../managed-identities-azure-resources/overview.md).

## App registration, app objects, and service principals

There is no way to directly create a service principal using the Azure portal. When you register an application through the Azure portal, an application object and service principal are automatically created in your home directory or tenant. For more information on the relationship between app registration, application objects, and service principals, read [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md).

## Permissions required for registering an app
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-register-app.md
Last updated 06/14/2021 -+ # Customer intent: As developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue ID and/or access tokens to client applications that request them.
There are some restrictions on the format of the redirect URIs you add to an app
## Add credentials
-Credentials are used by [confidential client applications](msal-client-applications.md) that access a web API. Examples of confidential clients are [web apps](scenario-web-app-call-api-overview.md), other [web APIs](scenario-protected-web-api-overview.md), or [service-type and daemon-type applications](scenario-daemon-overview.md). Credentials allow your application to authenticate as itself, requiring no interaction from a user at runtime.
+Credentials are used by [confidential client applications](msal-client-applications.md) that access a web API. Examples of confidential clients are web apps, other web APIs, or service-type and daemon-type applications. Credentials allow your application to authenticate as itself, requiring no interaction from a user at runtime.
You can add both certificates and client secrets (a string) as credentials to your confidential client app registration.
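As a hedged illustration of how a confidential client uses either credential type at runtime, the sketch below uses MSAL for Python with placeholder tenant, client ID, secret, and certificate values.

```python
import msal

AUTHORITY = "https://login.microsoftonline.com/<tenant-id>"  # placeholder tenant
CLIENT_ID = "<application-client-id>"                        # placeholder app registration

# A client secret string added to the app registration:
app_with_secret = msal.ConfidentialClientApplication(
    CLIENT_ID, authority=AUTHORITY, client_credential="<client-secret-value>"
)

# Or a certificate (private key plus thumbprint) uploaded to the app registration:
app_with_cert = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=AUTHORITY,
    client_credential={
        "private_key": open("cert-private-key.pem").read(),
        "thumbprint": "<certificate-thumbprint>",
    },
)

# Either client can then authenticate as itself, with no user present:
result = app_with_secret.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
print("access_token" in result)
```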
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
Title: "Quickstart: Add sign-in with Microsoft to an ASP.NET web app | Azure"
+ Title: "Quickstart: ASP.NET web app that signs in users"
-description: In this quickstart, learn how to implement Microsoft sign-in on an ASP.NET web app by using OpenID Connect.
+description: Download and run a code sample that shows how an ASP.NET web app can sign in Azure AD users.
Last updated 09/25/2020
-#Customer intent: As an application developer, I want to know how to write an ASP.NET web app that can sign in personal accounts, as well as work and school accounts, from any Azure Active Directory instance.
+# Customer intent: As an application developer, I want to see a sample ASP.NET web app that can sign in Azure AD users.
-# Quickstart: Add Microsoft identity platform sign-in to an ASP.NET web app
+# Quickstart: ASP.NET web app that signs in Azure AD users
-In this quickstart, you download and run a code sample that demonstrates how an ASP.NET web app can sign in users from any Azure Active Directory (Azure AD) organization.
+In this quickstart, you download and run a code sample that demonstrates an ASP.NET web application that can sign in users with Azure Active Directory (Azure AD) accounts.
> [!div renderon="docs"]
> The following diagram shows how the sample app works:
active-directory Quickstart V2 Java Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-webapp.md
To run the web application from an IDE, select run, and then go to the home page
If you want to deploy the web sample to Tomcat, make a couple changes to the source code.
-1. Open *ms-identity-java-webapp/pom.xml*.
- - Under `<name>msal-web-sample</name>`, add `<packaging>war</packaging>`.
-
-2. Open *ms-identity-java-webapp/src/main/java/com.microsoft.azure.msalwebsample/MsalWebSampleApplication*.
+1. Open *ms-identity-java-webapp/src/main/java/com.microsoft.azure.msalwebsample/MsalWebSampleApplication*.
- Delete all source code and replace it with this code:
If you want to deploy the web sample to Tomcat, make a couple changes to the sou
} ```
-3. Tomcat's default HTTP port is 8080, but you need an HTTPS connection over port 8443. To configure this setting:
+2. Tomcat's default HTTP port is 8080, but you need an HTTPS connection over port 8443. To configure this setting:
   - Go to *tomcat/conf/server.xml*.
   - Search for the `<connector>` tag, and replace the existing connector with this connector:
If you want to deploy the web sample to Tomcat, make a couple changes to the sou
clientAuth="false" sslProtocol="TLS"/> ```
-4. Open a Command Prompt window. Go to the root folder of this sample (where the pom.xml file is located), and run `mvn package` to build the project.
+3. Open a Command Prompt window. Go to the root folder of this sample (where the pom.xml file is located), and run `mvn package` to build the project.
   - This command will generate a *msal-web-sample-0.1.0.war* file in your */targets* directory.
   - Rename this file to *msal4jsample.war*.
   - Deploy the WAR file by using Tomcat or any other J2EE container solution.
   - To deploy the msal4jsample.war file, copy it to the */webapps/* directory in your Tomcat installation, and then start the Tomcat server.
-5. After the file is deployed, go to https://localhost:8443/msal4jsample by using a browser.
+4. After the file is deployed, go to https://localhost:8443/msal4jsample by using a browser.
> [!IMPORTANT]
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-howto-app-gallery-listing.md
Title: Publish your app to the Azure Active Directory app gallery
-description: Learn how to list an application that supports single sign-on in the Azure Active Directory app gallery.
+description: Learn how to list an application that supports single sign-on in the Azure Active Directory app gallery. Publishing to the app gallery makes it easier for customers to find and add your app to their tenant.
Previously updated : 06/10/2021 Last updated : 06/23/2021 -+ # Publish your app to the Azure AD app gallery
-You can publish your app in the Azure AD app gallery. When your app is published, it will show up as an option for customers when they are adding apps to their tenant.
+You can publish your app in the Azure Active Directory (Azure AD) app gallery. When your app is published, it will show up as an option for customers when they are [adding apps to their tenant](/en-us/azure/active-directory/manage-apps/add-application-portal).
+
+The steps to publish your app in the Azure AD app gallery are:
+1. Prerequisites
+1. Choose the right single sign-on standard for your app.
+1. Implement single sign-on in your app.
+1. Implement SCIM user provisioning in your app (optional)
+1. Create your Azure tenant and test your app.
+1. Create and publish documentation.
+1. Submit your app.
+1. Join the Microsoft partner network.
+
+## What is the Azure AD application gallery?
+
+The [Azure AD app gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps?page=1) is a catalog of thousands of apps that make it easy to deploy and configure single sign-on (SSO) and automated user provisioning.
Some of the benefits of adding your app to the Azure AD gallery include:
Some of the benefits of adding your app to the Azure AD gallery include:
- A quick search finds your application in the gallery.
- Free, Basic, and Premium Azure AD customers can all use this integration.
- Mutual customers get a step-by-step configuration tutorial.
+- Customers who use the System for Cross-domain Identity Management ([SCIM](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/Provisioning-with-SCIM-getting-started/ba-p/880010)) can use provisioning for the same app.
In addition, there are many benefits when your customers use Azure AD as an identity provider for your app. Some of these include:
In addition, there are many benefits when your customers use Azure AD as an iden
- Add security and convenience when users sign on to applications by using Azure AD SSO and removing the need for separate credentials.

> [!TIP]
-> When you offer your application for use by other companies through a purchase or subscription, you make your application available to customers within their own Azure tenants. This is known as creating a multi-tenant application. For an overview of this concept, see [Multitenant Applications in Azure](../../dotnet-develop-multitenant-applications.md) and [Tenancy in Azure Active Directory](single-and-multi-tenant-apps.md).
-
-> [!IMPORTANT]
-> To publish your app in the Azure AD gallery you must agree to specific terms and conditions. Before you begin, make sure to read and agree to the [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/).
-
-The steps to publishing your app in the Azure AD app gallery are:
-1. Choose the right single sign-on standard for your app.
-2. Implement single sign-on in your app.
-3. Create your Azure tenant and test your app.
-4. Create and publish documentation.
-5. Submit your app.
-6. Join the Microsoft partner network.
-
-## What is the Azure AD application gallery?
-- Customers find the best possible single sign-on experience.
-- Configuration of the application is simple and minimal.
-- A quick search finds your application in the gallery.
-- Free, Basic, and Premium Azure AD customers can all use this integration.
-- Mutual customers get a step-by-step configuration tutorial.
-- Customers who use the System for Cross-domain Identity Management ([SCIM](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/Provisioning-with-SCIM-getting-started/ba-p/880010)) can use provisioning for the same app.
+> When you offer your application for use by other companies through a purchase or subscription, you make your application available to customers within their own Azure tenants. This is known as creating a multi-tenant application. For an overview of this concept, see [Tenancy in Azure Active Directory](single-and-multi-tenant-apps.md).
## Prerequisites
+To publish your app in the Azure AD gallery you must first read and agree to specific [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/).
You need a permanent account for testing with at least two users registered.

- For federated applications (Open ID and SAML/WS-Fed), the application must support the software-as-a-service (SaaS) model for getting listed in the Azure AD app gallery. The enterprise gallery applications must support multiple customer configurations and not any specific customer.
+- For Open ID Connect, the application must be multitenanted and the [Azure AD consent framework](../develop/consent-framework.md) must be properly implemented for the application. The user can send the sign-in request to a common endpoint so that any customer can provide consent to the application. You can control user access based on the tenant ID and the user's UPN received in the token.
- For SAML 2.0/WS-Fed, your application must have the capability to do the SAML/WS-Fed SSO integration in SP or IDP mode. Make sure this capability is working correctly before you submit the request.
- For password SSO, make sure that your application supports form authentication so that password vaulting can be done to get single sign-on to work as expected.
- You need a permanent account for testing with at least two users registered.
-**How to get Azure AD for developers?**
- You can get a free test account with all the premium Azure AD features, free for 90 days and extendable as long as you do dev work with it: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).

## Step 1 - Choose the right single sign-on standard for your app
For OAuth and OIDC, see [guidance on authentication patterns](v2-app-types.md) a
For SAML and WS-Fed, your application must have the capability to do SSO integration in SP or IDP mode. Make sure this capability is working correctly before you submit the request.
-To learn more about authentication, see [What is authentication?](../azuread-dev/v1-authentication-scenarios.md).
+To learn more about authentication, see [What is authentication?](authentication-vs-authorization.md).
> [!IMPORTANT]
> For federated applications (OpenID and SAML/WS-Fed), the app must support the Software as a Service (SaaS) model. Azure AD gallery applications must support multiple customer configurations and should not be specific to any single customer.
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-self-service-management.md
Previously updated : 05/18/2021 Last updated : 06/23/2021
Groups created in | Security group default behavior | Microsoft 365 group defaul
1. Select **Groups**, and then select **General** settings.
- ![Azure Active Directory groups general settings](./media/groups-self-service-management/groups-settings-general.png)
+ ![Azure Active Directory groups general settings.](./media/groups-self-service-management/groups-settings-general.png)
1. Set **Owners can manage group membership requests in the Access Panel** to **Yes**. 1. Set **Restrict user ability to access groups features in the Access Panel** to **No**.
-1. If you set **Users can create security groups in Azure portals, API or PowerShell** or **Users can create Microsoft 365 groups in Azure portals, API or PowerShell** to
+1. Set **Users can create security groups in Azure portals, API or PowerShell** to **Yes** or **No**.
- - **Yes**: All users in your Azure AD organization are allowed to create new security groups and add members to these groups in Azure portals, API or PowerShell. These new groups would also show up in the Access Panel for all other users. If the policy setting on the group allows it, other users can create requests to join these groups.
- - **No**: Users can't create groups and can't change existing groups for which they are an owner. However, they can still manage the memberships of those groups and approve requests from other users to join their groups.
+ For more information about this setting, see the next section [Group settings](#group-settings).
- These settings were recently changed to add support for API and PowerShell. For more information about this change, see the next section [Groups setting change](#groups-setting-change).
+1. Set **Users can create Microsoft 365 groups in Azure portals, API or PowerShell** to **Yes** or **No**.
+
+ For more information about this setting, see the next section [Group settings](#group-settings).
You can also use **Owners who can assign members as group owners in the Azure portal** to achieve more granular access control over self-service group management for your users.
When users can create groups, all users in your organization are allowed to crea
> [!NOTE]
> An Azure Active Directory Premium (P1 or P2) license is required for users to request to join a security group or Microsoft 365 group and for owners to approve or deny membership requests. Without an Azure Active Directory Premium license, users can still manage their groups in the Access Panel, but they can't create a group that requires owner approval in the Access Panel, and they can't request to join a group.
-## Groups setting change
-
-The current security groups and Microsoft 365 groups settings are being deprecated and replaced. The current settings are being replaced because they only control group creation in Azure portals and do not apply to API or PowerShell. The new settings control group creation in Azure portals, and also API and PowerShell.
-
-| Deprecated setting | New setting |
-| | |
-| Users can create security groups in Azure portals | Users can create security groups in Azure portals, API or PowerShell |
-| Users can create Microsoft 365 groups in Azure portals | Users can create Microsoft 365 groups in Azure portals, API or PowerShell |
+## Group settings
-Until the current setting is fully deprecated, both settings will appear in the Azure portals. You should configure this new setting before the end of **May 2021**. To configure the security groups settings, you must be assigned the Global Administrator or Privileged Role Administrator role.
+The group settings enable you to control who can create security and Microsoft 365 groups.
-![Azure Active Directory security groups setting change](./media/groups-self-service-management/security-groups-setting.png)
+![Azure Active Directory security groups setting change.](./media/groups-self-service-management/security-groups-setting.png)
-The following table helps you decide which values to choose.
+ The following table helps you decide which values to choose.
-| If you want this ... | Choose these values |
-| | |
-| Users can create groups using Azure portals, API or PowerShell | Set both settings to **Yes**. Changes can take up to 15 minutes to take effect. |
-| Users **can't** create groups using Azure portals, API or PowerShell | Set both settings to **No**. Changes can take up to 15 minutes to take effect. |
-| Users can create groups using Azure portals, but not using API or PowerShell | Not supported |
-| Users can create groups using API or PowerShell, but not using Azure portals | Not supported |
+| Setting | Value | Effect on your tenant |
+| | :: | |
+| Users can create security groups in Azure portals, API or PowerShell | Yes | All users in your Azure AD organization are allowed to create new security groups and add members to these groups in Azure portals, API, or PowerShell. These new groups would also show up in the Access Panel for all other users. If the policy setting on the group allows it, other users can create requests to join these groups. |
+| | No | Users can't create security groups and can't change existing groups for which they are an owner. However, they can still manage the memberships of those groups and approve requests from other users to join their groups. |
+| Users can create Microsoft 365 groups in Azure portals, API or PowerShell | Yes | All users in your Azure AD organization are allowed to create new Microsoft 365 groups and add members to these groups in Azure portals, API, or PowerShell. These new groups would also show up in the Access Panel for all other users. If the policy setting on the group allows it, other users can create requests to join these groups. |
+| | No | Users can't create Microsoft 365 groups and can't change existing groups for which they are an owner. However, they can still manage the memberships of those groups and approve requests from other users to join their groups. |
-The following table lists what happens for different values for these settings. It's not recommended to have the deprecated setting and the new setting set to different values.
+Here are some additional details about these group settings.
-| Users can create groups using Azure portals | Users can create groups using Azure portals, API or PowerShell | Effect on your tenant |
-| :: | :: | |
-| Yes | Yes | Users can create groups using Azure portals, API or PowerShell. Changes can take up to 15 minutes to take effect.|
-| No | No | Users **can't** create groups using Azure portals, API or PowerShell. Changes can take up to 15 minutes to take effect. |
-| Yes | No | Users **can't** create groups using Azure portals, API or PowerShell. It's not recommended to have these settings set to different values. Changes can take up to 15 minutes to take effect. |
-| No | Yes | Until the **Users can create groups using Azure portals** setting is fully deprecated in **June 2021**, users can create groups using API or PowerShell, but not Azure portals. Starting sometime in **June 2021**, the **Users can create groups using Azure portals, API or PowerShell** setting will take effect and users can create groups using Azure portals, API or PowerShell. |
+- These settings can take up to 15 minutes to take effect.
+- If you want to enable some, but not all, of your users to create groups, you can assign those users a role that can create groups, such as [Groups Administrator](../roles/permissions-reference.md#groups-administrator).
+- These settings are for users and don't impact service principals. For example, if you have a service principal with permissions to create groups, even if you set these settings to **No**, the service principal will still be able to create groups.
## Next steps
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| DYNAMICS 365 TEAM MEMBERS | DYN365_TEAM_MEMBERS | 7ac9fe77-66b7-4e5e-9e46-10eed1cff547 | DYNAMICS_365_FOR_RETAIL_TEAM_MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYN365_ENTERPRISE_TALENT_ATTRACT_TEAMMEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_ENTERPRISE_TALENT_ONBOARD_TEAMMEMBER (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS_365_FOR_TALENT_TEAM_MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYN365_TEAM_MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYNAMICS 365 TEAM MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72) | | DYNAMICS 365 UNF OPS PLAN ENT EDITION | Dynamics_365_for_Operations | ccba3cfe-71ef-423a-bd87-b6df3dce59a9 | DDYN365_CDS_DYN_P2 (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYN365_TALENT_ENTERPRISE (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>Dynamics_365_for_Operations (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>Dynamics_365_for_Retail (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS_365_HIRING_FREE_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa) | COMMON DATA SERVICE (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYNAMICS 365 FOR TALENT (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>DYNAMICS 365 FOR_OPERATIONS (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>DYNAMICS 365 FOR RETAIL (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS 365 HIRING FREE PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW FOR DYNAMICS 365(b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa) | | ENTERPRISE MOBILITY + SECURITY E3 | EMS | efccb6f7-5641-4e0e-bd10-b4976e1bf68e | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM 
(8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
-| ENTERPRISE MOBILITY + SECURITY E5 | EMSPREMIUM | b05e124f-c7cc-45a0-a6aa-8cf78c946968 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>MICROSOFT CLOUD APP SECURITY (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>AZURE ADVANCED THREAT PROTECTION (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>AZURE INFORMATION PROTECTION PREMIUM P2 (5689bec4-755d-4753-8b61-40975025187c) |
+| ENTERPRISE MOBILITY + SECURITY E5 | EMSPREMIUM | b05e124f-c7cc-45a0-a6aa-8cf78c946968 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>AZURE INFORMATION PROTECTION PREMIUM P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT CLOUD APP SECURITY (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR IDENTITY (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| EXCHANGE ONLINE (PLAN 1) | EXCHANGESTANDARD | 4b9405b0-7788-4568-add1-99614e613b69 | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c) | EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)| | EXCHANGE ONLINE (PLAN 2) | EXCHANGEENTERPRISE | 19ec0d23-8335-4cbd-94ac-6050e30712fa | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0) | EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0) | | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE | EXCHANGEARCHIVE_ADDON | ee02fd1b-340e-4a4b-b355-4a514e4c8943 | EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793) | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793) |
active-directory B2b Tutorial Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md
Previously updated : 04/10/2019 Last updated : 06/22/2021 -+
Example:
![Diagram showing a guest user signing into a company's apps](media/tutorial-mfa/aad-b2b-mfa-example.png)
-1. An admin or employee at Company A invites a guest user to use a cloud or on-premises application that is configured to require MFA for access.
-2. The guest user signs in with their own work, school, or social identity.
-3. The user is asked to complete an MFA challenge.
-4. The user sets up MFA with Company A and chooses their MFA option. The user is allowed access to the application.
+1. An admin or employee at Company A invites a guest user to use a cloud or on-premises application that is configured to require MFA for access.
+1. The guest user signs in with their own work, school, or social identity.
+1. The user is asked to complete an MFA challenge.
+1. The user sets up MFA with Company A and chooses their MFA option. The user is allowed access to the application.
In this tutorial, you will:

> [!div class="checklist"]
-> * Test the sign-in experience before MFA setup.
-> * Create a Conditional Access policy that requires MFA for access to a cloud app in your environment. In this tutorial, we'll use the Microsoft Azure Management app to illustrate the process.
-> * Use the What If tool to simulate MFA sign-in.
-> * Test your Conditional Access policy.
-> * Clean up the test user and policy.
+> - Test the sign-in experience before MFA setup.
+> - Create a Conditional Access policy that requires MFA for access to a cloud app in your environment. In this tutorial, we'll use the Microsoft Azure Management app to illustrate the process.
+> - Use the What If tool to simulate MFA sign-in.
+> - Test your Conditional Access policy.
+> - Clean up the test user and policy.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure
To complete the scenario in this tutorial, you need:
+- **Access to Azure AD Premium edition**, which includes Conditional Access policy capabilities. To enforce MFA, you need to create an Azure AD Conditional Access policy. Note that MFA policies are always enforced at your organization, regardless of whether the partner has MFA capabilities. If you set up MFA for your organization, you'll need to make sure you have sufficient Azure AD Premium licenses for your guest users.
+- **A valid external email account** that you can add to your tenant directory as a guest user and use to sign in. If you don't know how to create a guest account, see [Add a B2B guest user in the Azure portal](add-users-administrator.md).
## Create a test guest user in Azure AD

1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator.
2. In the left pane, select **Azure Active Directory**.
-3. Under **Manage**, select **Users**.
-4. Select **New guest user**.
+3. Under **Manage**, select **Users**.
+4. Select **New guest user**.
![Screenshot showing where to select the New guest user option](media/tutorial-mfa/tutorial-mfa-user-3.png)
-5. Under **User name**, enter the email address of the external user. Optionally, include a welcome message.
+5. Under **User name**, enter the email address of the external user. Optionally, include a welcome message.
![Screenshot showing where to enter the guest invitation message](media/tutorial-mfa/tutorial-mfa-user-4.png)
-6. Select **Invite** to automatically send the invitation to the guest user. A **Successfully invited user** message appears.
-7. After you send the invitation, the user account is automatically added to the directory as a guest.
+6. Select **Invite** to automatically send the invitation to the guest user. A **Successfully invited user** message appears.
+7. After you send the invitation, the user account is automatically added to the directory as a guest.
## Test the sign-in experience before MFA setup
-1. Use your test user name and password to sign in to your [Azure portal](https://portal.azure.com/).
-2. Note that you're able to access the Azure portal using just your sign-in credentials. No additional authentication is required.
-3. Sign out.
+
+1. Use your test user name and password to sign in to your [Azure portal](https://portal.azure.com/).
+1. Note that you're able to access the Azure portal using just your sign-in credentials. No additional authentication is required.
+1. Sign out.
## Create a Conditional Access policy that requires MFA
-1. Sign in to your [Azure portal](https://portal.azure.com/) as a security administrator or a Conditional Access administrator.
-2. In the Azure portal, select **Azure Active Directory**.
-3. On the **Azure Active Directory** page, in the **Security** section, select **Conditional Access**.
-4. On the **Conditional Access** page, in the toolbar on the top, select **New policy**.
-5. On the **New** page, in the **Name** textbox, type **Require MFA for B2B portal access**.
-6. In the **Assignments** section, select **Users and groups**.
-7. On the **Users and groups** page, choose **Select users and groups**, and then select **All guest users (preview)**.
+
+1. Sign in to your [Azure portal](https://portal.azure.com/) as a security administrator or a Conditional Access administrator.
+2. In the Azure portal, select **Azure Active Directory**.
+3. On the **Azure Active Directory** page, in the **Security** section, select **Conditional Access**.
+4. On the **Conditional Access** page, in the toolbar on the top, select **New policy**.
+5. On the **New** page, in the **Name** textbox, type **Require MFA for B2B portal access**.
+6. In the **Assignments** section, select **Users and groups**.
+7. On the **Users and groups** page, choose **Select users and groups**, and then select **All guest users (preview)**.
![Screenshot showing selecting all guest users](media/tutorial-mfa/tutorial-mfa-policy-6.png)
-9. Select **Done**.
+9. Select **Done**.
10. On the **New** page, in the **Assignments** section, select **Cloud apps**.
-11. On the **Cloud apps** page, choose **Select apps**, and then choose **Select**.
+11. On the **Cloud apps** page, choose **Select apps**, and then choose **Select**.
![Screenshot showing the Cloud apps page and the Select option](media/tutorial-mfa/tutorial-mfa-policy-10.png)
To complete the scenario in this tutorial, you need:
## Use the What If option to simulate sign-in
-1. On the **Conditional Access - Policies** page, select **What If**.
+1. On the **Conditional Access - Policies** page, select **What If**.
![Screenshot that highlights where to select the What if option on the Conditional Access - Policies page.](media/tutorial-mfa/tutorial-mfa-whatif-1.png)
-2. Select **User**, choose your test guest user, and then choose **Select**.
+2. Select **User**, choose your test guest user, and then choose **Select**.
![Screenshot showing a guest user selected](media/tutorial-mfa/tutorial-mfa-whatif-2.png)
-3. Select **Cloud apps**.
-4. On the **Cloud apps** page, choose **Select apps** and then click **Select**. In the applications list, select **Microsoft Azure Management**, and then click **Select**.
+3. Select **Cloud apps**.
+4. On the **Cloud apps** page, choose **Select apps** and then click **Select**. In the applications list, select **Microsoft Azure Management**, and then click **Select**.
![Screenshot showing the Microsoft Azure Management app selected](media/tutorial-mfa/tutorial-mfa-whatif-3.png)
-5. On the **Cloud apps** page, select **Done**.
-6. Select **What If**, and verify that your new policy appears under **Evaluation results** on the **Policies that will apply** tab.
+5. On the **Cloud apps** page, select **Done**.
+6. Select **What If**, and verify that your new policy appears under **Evaluation results** on the **Policies that will apply** tab.
![Screenshot showing where to select the What if option](media/tutorial-mfa/tutorial-mfa-whatif-4.png)

## Test your Conditional Access policy
-1. Use your test user name and password to sign in to your [Azure portal](https://portal.azure.com/).
-2. You should see a request for additional authentication methods. Note that it could take some time for the policy to take effect.
+
+1. Use your test user name and password to sign in to your [Azure portal](https://portal.azure.com/).
+2. You should see a request for additional authentication methods. Note that it could take some time for the policy to take effect.
![Screenshot showing the More information required message](media/tutorial-mfa/mfa-required.png)
-
-3. Sign out.
+
+3. Sign out.
## Clean up resources

When no longer needed, remove the test user and the test Conditional Access policy.
-1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator.
-2. In the left pane, select **Azure Active Directory**.
-3. Under **Manage**, select **Users**.
-4. Select the test user, and then select **Delete user**.
-5. In the left pane, select **Azure Active Directory**.
-6. Under **Security**, select **Conditional Access**.
-7. In the **Policy Name** list, select the context menu (…) for your test policy, and then select **Delete**. Select **Yes** to confirm.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator.
+2. In the left pane, select **Azure Active Directory**.
+3. Under **Manage**, select **Users**.
+4. Select the test user, and then select **Delete user**.
+5. In the left pane, select **Azure Active Directory**.
+6. Under **Security**, select **Conditional Access**.
+7. In the **Policy Name** list, select the context menu (…) for your test policy, and then select **Delete**. Select **Yes** to confirm.
## Next steps

In this tutorial, you've created a Conditional Access policy that requires guest users to use MFA when signing in to one of your cloud apps. To learn more about adding guest users for collaboration, see [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md).
active-directory Self Service Sign Up User Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-user-flow.md
Last updated 03/02/2021
-+
For applications you build, you can create user flows that allow a user to sign
### Add identity providers (optional)
-Azure AD is the default identity provider for self-service sign-up. This means that users are able to sign up by default with an Azure AD account. In your self-service sign-up user flows, you can also include social identity providers like Google and Facebook, Microsoft Account (Preview), and Email One-time Passcode (Preview).
+Azure AD is the default identity provider for self-service sign-up. This means that users are able to sign up by default with an Azure AD account. In your self-service sign-up user flows, you can also include social identity providers like Google and Facebook, Microsoft Account (Preview), and Email One-time Passcode (Preview). For more information, see these articles:
- [Microsoft Account (Preview) identity provider](microsoft-account.md) - [Email one-time passcode authentication](one-time-passcode.md)
active-directory How To Connect Health Ad Fs Sign In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-ad-fs-sign-in.md
The Azure AD Connect Health agent for AD FS correlates event IDs from AD FS depe
***Why do I see NotSet or NotApplicable in the Application ID/Name for some AD FS sign-ins?*** The AD FS Sign-In Report will display OAuth Ids in the Application ID field for OAuth sign-ins. In the WS-Fed, WS-Trust sign-in scenarios, the application ID will be NotSet or NotApplicable and the Resource IDs and Relying Party identifiers will be present in the Resource ID field.
+***Why do I see Resource ID and Resource Name fields as "Not Set"?***
+The ResourceId/Name fields will be "NotSet" in some error cases, such as "Username and Password incorrect", and in WS-Trust-based failed sign-ins.
+ ***Are there any more known issues with the report in preview?*** The report has a known issue where the "Authentication Requirement" field in the "Basic Info" tab is populated as a single-factor authentication value for AD FS sign-ins, regardless of the sign-in's actual requirement. Additionally, the Authentication Details tab will display "Primary or Secondary" under the Requirement field, with a fix in progress to differentiate Primary or Secondary authentication types.
The report has a known issue where the "Authentication Requirement" field in the
## Related links * [Azure AD Connect Health](./whatis-azure-ad-connect.md) * [Azure AD Connect Health Agent Installation](how-to-connect-health-agent-install.md)
-* [Risky IP report](how-to-connect-health-adfs-risky-ip.md)
+* [Risky IP report](how-to-connect-health-adfs-risky-ip.md)
active-directory How To Connect Monitor Federation Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-monitor-federation-changes.md
Follow these steps to set up alerts to monitor the trust relationship:
After the environment is configured, the data flows as follows:
-1. Azure AD Logs get populated per the activity in the tenant.
-2. The log information flows to the Azure Log Analytics workspace.
-3. A background job from Azure Monitor executes the log query based on the configuration of the Alert Rule in the configuration step (2) above.
+ 1. Azure AD Logs get populated per the activity in the tenant.
+ 2. The log information flows to the Azure Log Analytics workspace.
+ 3. A background job from Azure Monitor executes the log query based on the configuration of the Alert Rule in the configuration step (2) above.
``` AuditLogs | extend TargetResource = parse_json(TargetResources)
After the environment is configured, the data flows as follows:
4. If the result of the query matches the alert logic (that is, the number of results is greater than or equal to 1), then the action group kicks in. Let's assume that it kicked in, so the flow continues in step 5. 5. Notification is sent to the action group selected while configuring the alert.
+ > [!NOTE]
+ > In addition to setting up alerts, we recommend periodically reviewing the configured domains within your Azure AD tenant and removing any stale, unrecognized, or suspicious domains.
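One lightweight way to do that review is to list the tenant's domains and their authentication type through Microsoft Graph. The following Azure CLI sketch is a minimal example, assuming you are signed in with an account that can read directory data; it is not part of the alerting setup itself.

```azurecli
# List configured domains; review authenticationType (Managed or Federated)
# and flag any domain you don't recognize.
az rest --method GET --url "https://graph.microsoft.com/v1.0/domains" \
  --query "value[].{domain:id, authenticationType:authenticationType, isVerified:isVerified}" --output table
```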
+++ ## Next steps
active-directory How To Connect Single Object Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-single-object-sync.md
Previously updated : 03/19/2021 Last updated : 06/24/2021
The HTML report has the following:
## Prerequisites
-In order to use the Single Object Sync tool, you will need to use the 2021 March release of Azure AD Connect or later.
+In order to use the Single Object Sync tool, you will need the following:
+ - 2021 March release ([1.6.4.0](reference-connect-version-history.md#1640)) of Azure AD Connect or later.
+ - [PowerShell 5.0](https://docs.microsoft.com/powershell/scripting/windows-powershell/whats-new/what-s-new-in-windows-powershell-50?view=powershell-7.1)
### Run the Single Object Sync tool
active-directory Add Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal.md
Previously updated : 10/29/2019 Last updated : 06/23/2021
To add an application to your Azure AD tenant:
2. In the **Azure Active Directory** pane, select **Enterprise applications**. The **All applications** pane opens and displays a random sample of the applications in your Azure AD tenant. 3. In the **Enterprise applications** pane, select **New application**. ![Select New application to add a gallery app to your tenant](media/add-application-portal/new-application.png)
-4. Switch to the new gallery preview experience: In the banner at the top of the **Add an application page**, select the link that says **Click here to try out the new and improved app gallery**.
-5. The **Browse Azure AD Gallery (Preview)** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning.
+4. Switch to the gallery experience: In the banner at the top of the **Add an application** page, select the link that says **Click here to try out the new and improved app gallery**.
+5. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning.
![Search for an app by name or category](media/add-application-portal/browse-gallery.png) 6. You can browse the gallery for the application you want to add, or search for the application by entering its name in the search box. Then select the application from the results. 7. The next step depends on the way the developer of the application implemented single sign-on (SSO). Single sign-on can be implemented by app developers in four ways. The four ways are SAML, OpenID Connect, Password, and Linked. When you add an app, you can choose to filter and see only apps using a particular SSO implementation as shown in the screenshot. For example, a popular standard to implement SSO is called Security Assertion Markup Language (SAML). Another standard that is popular is called OpenId Connect (OIDC). The way you configure SSO with these standards is different so take note of the type of SSO that is implemented by the app that you are adding.
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-user-consent-groups.md
You can use the Azure AD PowerShell Preview module, [AzureADPreview](/powershell
+> [!NOTE]
+> "User can consent to apps accessing company data on their behalf" setting, when turned off, does not disable the "Users can consent to apps accessing company data for groups they own" option
+ ## Next steps To learn more:
To learn more:
* [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md) To get help or find answers to your questions:
-* [Azure AD on Microsoft Q&A ](/answers/topics/azure-active-directory.html)
+* [Azure AD on Microsoft Q&A ](/answers/topics/azure-active-directory.html)
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
na Previously updated : 06/11/2021 Last updated : 06/23/2021
Each JSON download consists of four different files:
![Download files](./media/concept-all-sign-ins/download-files.png "Download files")
+## Return log data with Microsoft Graph
+In addition to using the Azure portal, you can query sign-in logs using the Microsoft Graph API to return different types of sign-in information. To avoid potential performance issues, scope your query to just the data you care about.
+
+The following example scopes the query by the number of records, by a specific time period, and by type of sign-in event:
+
+```msgraph-interactive
+GET https://graph.microsoft.com/beta/auditLogs/signIns?$top=100&$filter=createdDateTime ge 2020-09-10T06:00:00Z and createdDateTime le 2020-09-17T06:00:00Z and signInEventTypes/any(t: t eq 'nonInteractiveUser')
+```
+
+The query parameters in the example provide the following results:
+
+- The [$top](/graph/query-parameters#top-parameter) parameter returns the top 100 results.
+- The [$filter](/graph/query-parameters#filter-parameter) parameter limits the time frame for results to return and uses the signInEventTypes property to return only non-interactive user sign-ins.
+
+The following values are available for filtering by different sign-in types, as shown in the example after this list:
+
+- interactiveUser
+- nonInteractiveUser
+- servicePrincipal
+- managedIdentity
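For example, swapping the value used with the signInEventTypes filter returns a different slice of the log. The following Azure CLI sketch is a minimal illustration, assuming the signed-in account has permission to read the audit logs (for example, AuditLog.Read.All).

```azurecli
# Return the 10 most recent service principal sign-ins from the beta endpoint,
# using the same filter syntax as the request above.
az rest --method GET \
  --url "https://graph.microsoft.com/beta/auditLogs/signIns?\$top=10&\$filter=signInEventTypes/any(t: t eq 'servicePrincipal')"
```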
## Next steps
active-directory Tutorial Azure Monitor Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md
na Previously updated : 04/18/2019 Last updated : 06/23/2021
To use this feature, you need:
7. Select **OK** to exit the event hub configuration.
-8. Do either or both of the following:
- * To send audit logs to the event hub, select the **AuditLogs** check box.
- * To send sign-in logs to the event hub, select the **SignInLogs** check box.
+8. Do any combination of the following:
+ - To send audit logs to the event hub, select the **AuditLogs** check box.
+ - To send interactive user sign-in logs to the event hub, select the **SignInLogs** check box.
+ - To send non-interactive user sign-in logs to the event hub, select the **NonInteractiveUserSignInLogs** check box.
+ - To send service principal sign-in logs to the event hub, select the **ServicePrincipalSignInLogs** check box.
+ - To send managed identity sign-in logs to the event hub, select the **ManagedIdentitySignInLogs** check box.
+ - To send provisioning logs to the event hub, select the **ProvisioningLogs** check box.
+ - To send the sign-in logs that an AD FS Connect Health agent sends to Azure AD, select the **ADFSSignInLogs** check box.
-9. Select **Save** to save the setting.
+ >[!Note]
+ >Some sign-in categories contain large amounts of log data depending on your tenant's configuration. In general, the non-interactive user sign-ins and service principal sign-ins can be 5 to 10 times larger than the interactive user sign-ins.
- ![Diagnostics settings](./media/quickstart-azure-monitor-stream-logs-to-event-hub/DiagnosticSettings.png)
+9. Select **Save** to save the setting.
10. After about 15 minutes, verify that events are displayed in your event hub. To do so, go to the event hub from the portal and verify that the **incoming messages** count is greater than zero.
active-directory Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/best-practices.md
Follow these steps to help you find the right role.
## 2. Use Privileged Identity Management to grant just-in-time access
-One of the principles of least privilege is that access should be granted only for a specific period of time. [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) lets you grant just-in-time access to your administrators. Microsoft recommends that you enable PIM in Azure AD. Using PIM, a user can be made an eligible member of an Azure AD role. The can then activate their role for a limited timeframe every time the needs to use it. Privileged access is automatically removed when the timeframe expires. You can also [configure PIM settings](../privileged-identity-management/pim-how-to-change-default-settings.md) to require approval or receive notification emails when someone activates their role assignment. Notifications provide an alert when new users are added to highly privileged roles.
+One of the principles of least privilege is that access should be granted only for a specific period of time. [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) lets you grant just-in-time access to your administrators. Microsoft recommends that you enable PIM in Azure AD. Using PIM, a user can be made an eligible member of an Azure AD role where they can then activate the role for a limited time when needed. Privileged access is automatically removed when the timeframe expires. You can also [configure PIM settings](../privileged-identity-management/pim-how-to-change-default-settings.md) to require approval or receive notification emails when someone activates their role assignment. Notifications provide an alert when new users are added to highly privileged roles.
## 3. Turn on multi-factor authentication for all your administrator accounts
Avoid using on-premises synced accounts for Azure AD role assignments. If your o
## Next steps -- [Securing privileged access for hybrid and cloud deployments in Azure AD](security-planning.md)
+- [Securing privileged access for hybrid and cloud deployments in Azure AD](security-planning.md)
active-directory Checkproof Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/checkproof-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure CheckProof for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to CheckProof.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: b036510b-bf7a-4284-ac17-41a5b10e2b55
+++
+ na
+ms.devlang: na
+ Last updated : 06/21/2021+++
+# Tutorial: Configure CheckProof for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both CheckProof and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [CheckProof](https://checkproof.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in CheckProof
+> * Remove users in CheckProof when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and CheckProof
+> * Provision groups and group memberships in CheckProof
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/checkproof-tutorial) to CheckProof (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A CheckProof account with **SCIM Provisioning** function enabled.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and CheckProof](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure CheckProof to support provisioning with Azure AD
+
+1. Log in to your [CheckProof admin account](https://admin.checkproof.com/login).
+
+2. Navigate to **Settings** > **Company Settings**.
+
+ ![provision](media/checkproof-provisioning-tutorial/settings.png)
+
+3. Click on the **PROVISIONING** tab.
+
+4. The **Provisioning URL** and **Provisioning Secret Token** will be displayed. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your CheckProof application in the Azure portal.
+
+ ![tenant](media/checkproof-provisioning-tutorial/token.png)
+
+## Step 3. Add CheckProof from the Azure AD application gallery
+
+Add CheckProof from the Azure AD application gallery to start managing provisioning to CheckProof. If you have previously set up CheckProof for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to CheckProof, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to CheckProof
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in CheckProof based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for CheckProof in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **CheckProof**.
+
+ ![The CheckProof link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your CheckProof Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to CheckProof. If the connection fails, ensure your CheckProof account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to CheckProof**.
+
+9. Review the user attributes that are synchronized from Azure AD to CheckProof in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in CheckProof for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the CheckProof API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |active|Boolean|
+ |roles|String|
+ |displayName|String|
+ |emails[type eq "work"].value|String|
+ |preferredLanguage|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |phoneNumbers[type eq "mobile"].value|String|
+ |externalId|String|
+
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to CheckProof**.
+
+11. Review the group attributes that are synchronized from Azure AD to CheckProof in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in CheckProof for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ |---|---|---|
+ |displayName|String|&check;|
+ |externalId|String|
+ |members|Reference|
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for CheckProof, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to CheckProof by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
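If you want to check provisioning activity outside the portal, the provisioning logs are also exposed through the Microsoft Graph beta endpoint. The sketch below is a minimal Azure CLI example, assuming the signed-in account has permission to read audit logs; it is not specific to CheckProof.

```azurecli
# Return the 20 most recent provisioning log entries and review them for failures.
az rest --method GET --url "https://graph.microsoft.com/beta/auditLogs/provisioning?\$top=20"
```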
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory H5mag Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/h5mag-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure H5mag for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to H5mag.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 87b4715b-c4b4-4e4b-aa25-21dfc5135a0a
+++
+ na
+ms.devlang: na
+ Last updated : 06/21/2021+++
+# Tutorial: Configure H5mag for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both H5mag and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [H5mag](https://www.h5mag.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in H5mag
+> * Remove users in H5mag when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and H5mag
+> * Single sign-on to H5mag (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in [H5mag](https://account.h5mag.com) with an Enterprise license. If your account needs an upgrade to an Enterprise license, reach out to `support@h5mag.com`.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and H5mag](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure H5mag to support provisioning with Azure AD
+
+1. Log in to your [H5mag environment](https://account.h5mag.com/login) and navigate to **[Account](https://account.h5mag.com/account)** -> **[Provisioning & SSO](https://account.h5mag.com/account/provisioning)**.
+
+2. Click on the **Generate Token** button. The provisioning URL and API Token will be displayed. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your H5mag application in the Azure portal.
+
+3. Click on the **Save** button to store the generated token.
+
+4. If you want to redirect your users to the Microsoft login page when they attempt to log in using H5mag's own system, you can set an SSO redirect on this page as well by selecting **Microsoft 365 / Azure AD** in the SSO Provider options.
+
+## Step 3. Add H5mag from the Azure AD application gallery
+
+Add H5mag from the Azure AD application gallery to start managing provisioning to H5mag. If you have previously set up H5mag for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to H5mag, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to H5mag
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in H5mag based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for H5mag in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **H5mag**.
+
+ ![The H5mag link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your H5mag Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to H5mag. If the connection fails, ensure your H5mag account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to H5mag**.
+
+9. Review the user attributes that are synchronized from Azure AD to H5mag in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in H5mag for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the H5mag API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |externalId|String|
+ |active|Boolean|
+ |displayName|String|
+ |emails[type eq "work"].value|String|
+ |preferredLanguage|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |name.formatted|String|
+ |locale|String|
+ |timezone|String|
+ |userType|String|
+
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for H5mag, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users and/or groups that you would like to provision to H5mag by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-ip-restrictions.md
You can't use service endpoints to restrict access to apps that run in an App Se
With service endpoints, you can configure your app with application gateways or other web application firewall (WAF) devices. You can also configure multi-tier applications with secure back ends. For more information, see [Networking features and App Service](networking-features.md) and [Application Gateway integration with service endpoints](networking/app-gateway-with-service-endpoints.md). > [!NOTE]
-> - Service endpoints aren't currently supported for web apps that use IP Secure Sockets Layer (SSL) virtual IP (VIP).
+> - Service endpoints aren't currently supported for web apps that use IP-based TLS/SSL bindings with a virtual IP (VIP).
> #### Set a service tag-based rule
app-service App Service Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-plan-manage.md
You can move an app to another App Service plan, as long as the source plan and
> [!IMPORTANT] > If you're moving an app from a higher-tiered plan to a lower-tiered plan, such as from **D1** to **F1**, the app may lose certain capabilities in the target plan. For example, if your app uses TLS/SSL certificates, you might see this error message: >
- > `Cannot update the site with hostname '<app_name>' because its current SSL configuration 'SNI based SSL enabled' is not allowed in the target compute mode. Allowed SSL configuration is 'Disabled'.`
+ > `Cannot update the site with hostname '<app_name>' because its current TLS/SSL configuration 'SNI based SSL enabled' is not allowed in the target compute mode. Allowed TLS/SSL configuration is 'Disabled'.`
5. When finished, select **OK**.
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-common.md
Here, you can configure some common settings for the app. Some settings require
- **Platform settings**: Lets you configure settings for the hosting platform, including: - **Bitness**: 32-bit or 64-bit. (Defaults to 32-bit for App Service created in the portal.) - **WebSocket protocol**: For [ASP.NET SignalR] or [socket.io](https://socket.io/), for example.
- - **Always On**: Keeps the app loaded even when there's no traffic. It's required for continuous WebJobs or for WebJobs that are triggered using a CRON expression.
- > [!NOTE]
- > With the Always On feature, the front end load balancer sends a request to the application root. This application endpoint of the App Service can't be configured.
+ - **Always On**: Keeps the app loaded even when there's no traffic. When **Always On** is not turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When **Always On** is turned on, the front-end load balancer sends a GET request to the application root every five minutes. The continuous ping prevents the app from being unloaded.
+
+ Always On is required for continuous WebJobs or for WebJobs that are triggered using a CRON expression.
- **Managed pipeline version**: The IIS [pipeline mode]. Set it to **Classic** if you have a legacy app that requires an older version of IIS. - **HTTP version**: Set to **2.0** to enable support for [HTTPS/2](https://wikipedia.org/wiki/HTTP/2) protocol. > [!NOTE]
app-service Configure Domain Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-domain-traffic-manager.md
After the records for your domain name have propagated, use the browser to verif
## Next steps > [!div class="nextstepaction"]
-> [Secure a custom DNS name with an SSL binding in Azure App Service](configure-ssl-bindings.md)
+> [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-dotnetcore.md
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
## Detect HTTPS session
-In App Service, [SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to know if the user requests are encrypted or not, configure the Forwarded Headers Middleware in *Startup.cs*:
+In App Service, [TLS/SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to know if the user requests are encrypted or not, configure the Forwarded Headers Middleware in *Startup.cs*:
- Configure the middleware with [ForwardedHeadersOptions](/dotnet/api/microsoft.aspnetcore.builder.forwardedheadersoptions) to forward the `X-Forwarded-For` and `X-Forwarded-Proto` headers in `Startup.ConfigureServices`. - Add private IP address ranges to the known networks, so that the middleware can trust the App Service load balancer.
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
Azure App Service for Linux supports out of the box tuning and customization thr
- [Configure app settings](configure-common.md#configure-app-settings) - [Set up a custom domain](app-service-web-tutorial-custom-domain.md)-- [Configure SSL bindings](configure-ssl-bindings.md)
+- [Configure TLS/SSL bindings](configure-ssl-bindings.md)
- [Add a CDN](../cdn/cdn-add-to-web-app.md) - [Configure the Kudu site](https://github.com/projectkudu/kudu/wiki/Configurable-settings#linux-on-app-service-settings)
To disable this feature, create an Application Setting named `WEBSITE_AUTH_SKIP_
### Configure TLS/SSL
-Follow the instructions in the [Secure a custom DNS name with an SSL binding in Azure App Service](configure-ssl-bindings.md) to upload an existing SSL certificate and bind it to your application's domain name. By default your application will still allow HTTP connections-follow the specific steps in the tutorial to enforce SSL and TLS.
+Follow the instructions in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md) to upload an existing TLS/SSL certificate and bind it to your application's domain name. By default, your application will still allow HTTP connections. Follow the specific steps in the tutorial to enforce TLS/SSL.
### Use KeyVault References
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-nodejs.md
fi
## Detect HTTPS session
-In App Service, [SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header.
+In App Service, [TLS/SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header.
Popular web frameworks let you access the `X-Forwarded-*` information in your standard app pattern. In [Express](https://expressjs.com/), you can use [trust proxies](https://expressjs.com/guide/behind-proxies.html). For example:
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-php.md
If you would rather not use *.htaccess* rewrite, you can deploy your Laravel app
## Detect HTTPS session
-In App Service, [SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header.
+In App Service, [TLS/SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header.
```php if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
db_server = os.environ['DATABASE_SERVER']
## Detect HTTPS session
-In App Service, [SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) (wikipedia.org) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header.
+In App Service, [TLS/SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) (wikipedia.org) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header.
```python if 'X-Forwarded-Proto' in request.headers and request.headers['X-Forwarded-Proto'] == 'https':
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-bindings.md
Your inbound IP address can change when you delete a binding, even if that bindi
By default, anyone can still access your app using HTTP. You can redirect all HTTP requests to the HTTPS port.
-In your app page, in the left navigation, select **SSL settings**. Then, in **HTTPS Only**, select **On**.
+In your app page, in the left navigation, select **TLS/SSL settings**. Then, in **HTTPS Only**, select **On**.
![Enforce HTTPS](./media/configure-ssl-bindings/enforce-https.png)
When the operation is complete, navigate to any of the HTTP URLs that point to y
Your app allows [TLS](https://wikipedia.org/wiki/Transport_Layer_Security) 1.2 by default, which is the recommended TLS level by industry standards, such as [PCI DSS](https://wikipedia.org/wiki/Payment_Card_Industry_Data_Security_Standard). To enforce different TLS versions, follow these steps:
-In your app page, in the left navigation, select **SSL settings**. Then, in **TLS version**, select the minimum TLS version you want. This setting controls the inbound calls only.
+In your app page, in the left navigation, select **TLS/SSL settings**. Then, in **TLS version**, select the minimum TLS version you want. This setting controls the inbound calls only.
![Enforce TLS 1.1 or 1.2](./media/configure-ssl-bindings/enforce-tls1-2.png)
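Both of these settings can also be scripted. The following Azure CLI sketch uses placeholder resource group and app names.

```azurecli
# Redirect all HTTP requests to HTTPS for the app.
az webapp update --resource-group <resource-group> --name <app-name> --set httpsOnly=true

# Enforce TLS 1.2 as the minimum inbound TLS version.
az webapp config set --resource-group <resource-group> --name <app-name> --min-tls-version 1.2
```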
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/create-from-template.md
However, just like apps that run on the public multitenant service, developers c
## App Service Environment v1 ## App Service Environment has two versions: ASEv1 and ASEv2. The preceding information was based on ASEv2. This section shows you the differences between ASEv1 and ASEv2.
-In ASEv1, you manage all of the resources manually. That includes the front ends, workers, and IP addresses used for IP-based SSL. Before you can scale out your App Service plan, you must scale out the worker pool that you want to host it.
+In ASEv1, you manage all of the resources manually. That includes the front ends, workers, and IP addresses used for IP-based TLS/SSL binding. Before you can scale out your App Service plan, you must scale out the worker pool that you want to host it.
ASEv1 uses a different pricing model from ASEv2. In ASEv1, you pay for each vCPU allocated. That includes vCPUs that are used for front ends or workers that aren't hosting any workloads. In ASEv1, the default maximum-scale size of an ASE is 55 total hosts. That includes workers and front ends. One advantage to ASEv1 is that it can be deployed in a classic virtual network and a Resource Manager virtual network. To learn more about ASEv1, see [App Service Environment v1 introduction][ASEv1Intro].
app-service Create Ilb Ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/create-ilb-ase.md
With an ILB ASE, you can do things such as:
There are some things that you can't do when you use an ILB ASE: -- Use IP-based SSL.
+- Use IP-based TLS/SSL binding.
- Assign IP addresses to specific apps. - Buy and use a certificate with an app through the Azure portal. You can obtain certificates directly from a certificate authority and use them with your apps. You can't obtain them through the Azure portal.
To configure DNS in Azure DNS Private zones:
The DNS settings for your ASE default domain suffix do not restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an ILB ASE. If you then want to create a zone named contoso.net, you could do so and point it to the ILB IP address. The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at &lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net.
-The zone named .&lt;asename&gt;.appserviceenvironment.net is globally unique. Before May 2019, customers were able to specify the domain suffix of the ILB ASE. If you wanted to use .contoso.com for the domain suffix, you were able do so and that would include the scm site. There were challenges with that model including; managing the default SSL certificate, lack of single sign-on with the scm site, and the requirement to use a wildcard certificate. The ILB ASE default certificate upgrade process was also disruptive and caused application restarts. To solve these problems, the ILB ASE behavior was changed to use a domain suffix based on the name of the ASE and with a Microsoft owned suffix. The change to the ILB ASE behavior only affects ILB ASEs made after May 2019. Pre-existing ILB ASEs must still manage the default certificate of the ASE and their DNS configuration.
+The zone named .&lt;asename&gt;.appserviceenvironment.net is globally unique. Before May 2019, customers were able to specify the domain suffix of the ILB ASE. If you wanted to use .contoso.com for the domain suffix, you were able to do so, and that would include the scm site. There were challenges with that model, including managing the default TLS/SSL certificate, lack of single sign-on with the scm site, and the requirement to use a wildcard certificate. The ILB ASE default certificate upgrade process was also disruptive and caused application restarts. To solve these problems, the ILB ASE behavior was changed to use a domain suffix based on the name of the ASE and with a Microsoft-owned suffix. The change to the ILB ASE behavior only affects ILB ASEs made after May 2019. Pre-existing ILB ASEs must still manage the default certificate of the ASE and their DNS configuration.
## Publish with an ILB ASE
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/intro.md
For more information on how ASEs work with virtual networks and on-premises netw
App Service Environment has two versions: ASEv1 and ASEv2. The preceding information was based on ASEv2. This section shows you the differences between ASEv1 and ASEv2.
-In ASEv1, you need to manage all of the resources manually. That includes the front ends, workers, and IP addresses used for IP-based SSL. Before you can scale out your App Service plan, you need to first scale out the worker pool where you want to host it.
+In ASEv1, you need to manage all of the resources manually. That includes the front ends, workers, and IP addresses used for IP-based TLS/SSL bindings. Before you can scale out your App Service plan, you need to first scale out the worker pool where you want to host it.
ASEv1 uses a different pricing model from ASEv2. In ASEv1, you pay for each vCPU allocated. That includes vCPUs used for front ends or workers that aren't hosting any workloads. In ASEv1, the default maximum-scale size of an ASE is 55 total hosts. That includes workers and front ends. One advantage to ASEv1 is that it can be deployed in a classic virtual network and a Resource Manager virtual network. To learn more about ASEv1, see [App Service Environment v1 introduction][ASEv1Intro].
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/network-info.md
An ASE has a few IP addresses to be aware of. They are:
- **Public inbound IP address**: Used for app traffic in an External ASE, and management traffic in both an External ASE and an ILB ASE. - **Outbound public IP**: Used as the "from" IP for outbound connections from the ASE that leave the VNet, which aren't routed down a VPN. - **ILB IP address**: The ILB IP address only exists in an ILB ASE.-- **App-assigned IP-based SSL addresses**: Only possible with an External ASE and when IP-based SSL is configured.
+- **App-assigned IP-based TLS/SSL addresses**: Only possible with an External ASE and when an IP-based TLS/SSL binding is configured.
All these IP addresses are visible in the Azure portal from the ASE UI. If you have an ILB ASE, the IP for the ILB is listed.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview.md
There are a few features that are not available in ASEv3 that were available in
- use remote debug with your apps - upgrade yet from ASEv2 - monitor your traffic with Network Watcher or NSG Flow-- configure IP-based SSL with your apps
+- configure an IP-based TLS/SSL binding with your apps
## Pricing
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using-an-ase.md
In an ASE, you can scale an App Service plan up to 100 instances. An ASE can hav
## IP addresses
-App Service can allocate a dedicated IP address to an app. This capability is available after you configure IP-based SSL, as described in [Bind an existing custom TLS/SSL certificate to Azure App Service][ConfigureSSL]. In an ILB ASE, you can't add more IP addresses to be used for IP-based SSL.
+App Service can allocate a dedicated IP address to an app. This capability is available after you configure an IP-based TLS/SSL binding, as described in [Bind an existing custom TLS/SSL certificate to Azure App Service][ConfigureSSL]. In an ILB ASE, you can't add more IP addresses to be used for IP-based TLS/SSL bindings.
-With an External ASE, you can configure IP-based SSL for your app in the same way as in the multitenant App Service. There's always one spare address in the ASE, up to 30 IP addresses. Each time you use one, another is added so that an address is always readily available. A time delay is required to allocate another IP address. That delay prevents adding IP addresses in quick succession.
+With an External ASE, you can configure an IP-based TLS/SSL binding for your app in the same way as in the multitenant App Service. There's always one spare address in the ASE, up to 30 IP addresses. Each time you use one, another is added so that an address is always readily available. A time delay is required to allocate another IP address. That delay prevents adding IP addresses in quick succession.
## Front-end scaling
To configure DNS in Azure DNS Private zones:
The DNS settings for your ASE default domain suffix do not restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an ILB ASE. If you then want to create a zone named *contoso.net*, you could do so and point it to the ILB IP address. The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
-The zone named *.&lt;asename&gt;.appserviceenvironment.net* is globally unique. Before May 2019, customers were able to specify the domain suffix of the ILB ASE. If you wanted to use *.contoso.com* for the domain suffix, you were able do so and that would include the scm site. There were challenges with that model including; managing the default SSL certificate, lack of single sign-on with the scm site, and the requirement to use a wildcard certificate. The ILB ASE default certificate upgrade process was also disruptive and caused application restarts. To solve these problems, the ILB ASE behavior was changed to use a domain suffix based on the name of the ASE and with a Microsoft owned suffix. The change to the ILB ASE behavior only affects ILB ASEs made after May 2019. Pre-existing ILB ASEs must still manage the default certificate of the ASE and their DNS configuration.
+The zone named *.&lt;asename&gt;.appserviceenvironment.net* is globally unique. Before May 2019, customers were able to specify the domain suffix of the ILB ASE. If you wanted to use *.contoso.com* for the domain suffix, you were able to do so, and that would include the scm site. There were challenges with that model, including managing the default TLS/SSL certificate, lack of single sign-on with the scm site, and the requirement to use a wildcard certificate. The ILB ASE default certificate upgrade process was also disruptive and caused application restarts. To solve these problems, the ILB ASE behavior was changed to use a domain suffix based on the name of the ASE and with a Microsoft-owned suffix. The change to the ILB ASE behavior only affects ILB ASEs made after May 2019. Pre-existing ILB ASEs must still manage the default certificate of the ASE and their DNS configuration.
## Publishing
app-service Ip Address Change Ssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/ip-address-change-ssl.md
Title: Prepare for SSL IP address change
-description: If your SSL IP address is going to be changed, learn what to do so that your app continues to work after the change.
+ Title: Prepare for TLS/SSL IP address change
+description: If your TLS/SSL IP address is going to be changed, learn what to do so that your app continues to work after the change.
Last updated 06/28/2018
-# How to prepare for an SSL IP address change
+# How to prepare for a TLS/SSL IP address change
-If you received a notification that the SSL IP address of your Azure App Service app is changing, follow the instructions in this article to release existing SSL IP address and assign a new one.
+If you received a notification that the TLS/SSL IP address of your Azure App Service app is changing, follow the instructions in this article to release the existing TLS/SSL IP address and assign a new one.
-## Release SSL IP addresses and assign new ones
+## Release TLS/SSL IP addresses and assign new ones
1. Open the [Azure portal](https://portal.azure.com).
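If you prefer to script the change rather than use the portal, the rebinding can be sketched with the Azure CLI as shown below; the thumbprint, app name, and resource group are placeholders, and this is only an outline of the idea, not the article's official procedure:

```azurecli
# Remove the existing IP-based binding so the current inbound TLS/SSL IP address can be released (placeholder values).
az webapp config ssl unbind --certificate-thumbprint <thumbprint> --name <app-name> --resource-group <resource-group>

# Re-create the binding; selecting the IP-based type again allocates a new inbound TLS/SSL IP address.
az webapp config ssl bind --certificate-thumbprint <thumbprint> --ssl-type IP --name <app-name> --resource-group <resource-group>
```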
app-service Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/manage-disaster-recovery.md
Identify all the App Service resources that the impacted app currently uses. For
- [App Service plans](overview-hosting-plans.md) - [Deployment slots](deploy-staging-slots.md) - [Custom domains purchased in Azure](manage-custom-dns-buy-domain.md)-- [SSL certificates](configure-ssl-certificate.md)
+- [TLS/SSL certificates](configure-ssl-certificate.md)
- [Azure Virtual Network integration](web-sites-integrate-with-vnet.md) - [Hybrid connections](app-service-hybrid-connections.md). - [Managed identities](overview-managed-identity.md)
app-service Manage Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/manage-move-across-regions.md
Identify all the App Service resources that you're currently using. For example:
- [App Service plans](overview-hosting-plans.md) - [Deployment slots](deploy-staging-slots.md) - [Custom domains purchased in Azure](manage-custom-dns-buy-domain.md)-- [SSL certificates](configure-ssl-certificate.md)
+- [TLS/SSL certificates](configure-ssl-certificate.md)
- [Azure Virtual Network integration](web-sites-integrate-with-vnet.md) - [Hybrid connections](app-service-hybrid-connections.md). - [Managed identities](overview-managed-identity.md)
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-java.md
The deployment process to Azure App Service will use your Azure credentials from
Run the Maven command below to configure the deployment. This command will help you to set up the App Service operating system, Java version, and Tomcat version. ```azurecli-interactive
-mvn com.microsoft.azure:azure-webapp-maven-plugin:1.16.0:config
+mvn com.microsoft.azure:azure-webapp-maven-plugin:1.16.1:config
``` ::: zone pivot="platform-windows"
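Once the config goal has written the plugin settings into the project's `pom.xml`, the quickstart flow typically builds and deploys in a single step, for example:

```azurecli-interactive
# Build the project and deploy it to Azure App Service using the settings written by the config goal.
mvn package azure-webapp:deploy
```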
automanage Automanage Windows Server Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server-services-overview.md
+
+ Title: Automanage for Windows Server Services (preview)
+description: Overview of Automanage for Windows Server Services and capabilities with Windows Server Azure Edition
+++++ Last updated : 06/23/2021+++
+# Automanage for Windows Server Services (preview)
+
+Automanage for Windows Server Services brings new capabilities specifically to Windows Server Azure Edition. These capabilities include:
+- Hotpatch
+- SMB over QUIC
+- Extended Network
+
+> [!IMPORTANT]
+> Automanage for Windows Server Services is currently in Public Preview. An opt-in procedure is needed to use the Hotpatch capability described below.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Automanage for Windows Server capabilities can be found in one or more of these Windows Server Azure Edition images:
+
+> [!NOTE]
+> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview, and _Windows Server 2022 Datacenter: Azure Edition (Desktop experience)_ is not yet supported in all regions. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
+
+- Windows Server 2019 Datacenter: Azure Edition (Core)
+- Windows Server 2022 Datacenter: Azure Edition (Desktop Experience)
+- Windows Server 2022 Datacenter: Azure Edition (Core)
+
+Capabilities vary by image, see [getting started](#getting-started-with-windows-server-azure-edition) for more detail.
+
+## Automanage for Windows Server capabilities
+
+### Hotpatch
+
+Hotpatch is available in public preview on the following images:
+
+> [!NOTE]
+> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
+
+- Windows Server 2019 Datacenter: Azure Edition (Core)
+- Windows Server 2022 Datacenter: Azure Edition (Core)
+
+Hotpatch gives you the ability to apply security updates on your VM without rebooting. Additionally, Automanage for Windows Server automates the onboarding, configuration, and orchestration of Hotpatching. To learn more, see [Hotpatch](automanage-hotpatch.md).
+
+### SMB over QUIC
+
+SMB over QUIC is available in public preview on the following images:
+
+> [!NOTE]
+> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview, and _Windows Server 2022 Datacenter: Azure Edition (Desktop experience)_ is not yet supported in all regions. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
+
+- Windows Server 2022 Datacenter: Azure Edition (Desktop experience)
+- Windows Server 2022 Datacenter: Azure Edition (Core)
+
+SMB over QUIC enables users to access files when working remotely without a VPN, by tunneling SMB traffic over the QUIC protocol. To learn more, see [SMB over QUIC](https://aka.ms/smboverquic).
+
+### Azure Extended Network
+
+Azure Extended Network is available in public preview on the following images:
+
+> [!NOTE]
+> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview, and _Windows Server 2022 Datacenter: Azure Edition (Desktop experience)_ is not yet supported in all regions. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
+
+- Windows Server 2022 Datacenter: Azure Edition (Desktop experience)
+- Windows Server 2022 Datacenter: Azure Edition (Core)
+
+Azure Extended Network enables you to stretch an on-premises subnet into Azure to let on-premises virtual machines keep their original on-premises private IP addresses when migrating to Azure. To learn more, see [Azure Extended Network](https://docs.microsoft.com/windows-server/manage/windows-admin-center/azure/azure-extended-network).
++
+## Getting started with Windows Server Azure Edition
+
+> [!NOTE]
+> Not all images and regions are available yet in Public Preview. See table below for more information about availability.
+
+It's important to consider up front, which Automanage for Windows Server capabilities you would like to use, then choose a corresponding VM image that supports all of those capabilities. Some of the Windows Server Azure Edition images support only a subset of capabilities. See the table below for a matrix of capabilities and images.
+
+### Deciding which image to use
+
+|Image|Capabilities|Preview state|Regions|On date|
+|--|--|--|--|--|
+| Windows Server 2019 Datacenter: Azure Edition (Core) | Hotpatch | Public preview | (all) | March 12, 2021 |
+| Windows Server 2022 Datacenter: Azure Edition (Desktop experience) | SMB over QUIC, Extended Network | Public preview in some regions | North Europe, South Central US, West Central US | June 22, 2021 |
+| Windows Server 2022 Datacenter: Azure Edition (Core) | Hotpatch, SMB over QUIC, Extended Network | Public preview to start | (all) | July 12, 2021 |
+
+### Creating a VM
+
+> [!NOTE]
+> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview, and _Windows Server 2022 Datacenter: Azure Edition (Desktop experience)_ is not yet supported in all regions. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
+
+To start using Automanage for Windows Server capabilities on a new VM, use your preferred method to create an Azure VM, and select the Windows Server Azure Edition image that corresponds to the set of [capabilities](#getting-started-with-windows-server-azure-edition) that you would like to use. Configuration of those capabilities may be needed during VM creation. You can learn more about VM configuration in the individual capability topics (such as [Hotpatch](automanage-hotpatch.md)).
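As one illustration, creating such a VM with the Azure CLI might look like the sketch below. The image URN is a placeholder for whichever Windows Server Azure Edition image you selected, and the hotpatching flags apply only to images that support the Hotpatch capability:

```azurecli
# Create a VM from a Windows Server Azure Edition image (placeholder URN) and opt in to hotpatching where supported.
az vm create \
  --resource-group <resource-group> \
  --name <vm-name> \
  --image <windows-server-azure-edition-image-urn> \
  --admin-username <admin-username> \
  --patch-mode AutomaticByPlatform \
  --enable-hotpatching true
```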
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Azure Automanage](automanage-virtual-machines.md)
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/deploy-updates.md
Title: How to create update deployments for Azure Automation Update Management
description: This article describes how to schedule update deployments and review their status. Previously updated : 06/22/2021 Last updated : 06/24/2021
To schedule a new update deployment, perform the following steps. Depending on t
> Deploying updates by update classification doesn't work on RTM versions of CentOS. To properly deploy updates for CentOS, select all classifications to make sure updates are applied. There's currently no supported method to enable native classification-data availability on CentOS. See the following for more information about [Update classifications](overview.md#update-classifications). >[!NOTE]
- > Deploying updates by update classification may not work correctly for Linux distros supported by Update Management. This is a result of an issue identified with the naming schema of the OVAL file and this prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment if your update schedules for Linux has the classification set as **Critical and security updates**.
+ > Deploying updates by update classification may not work correctly for Linux distros supported by Update Management. This is the result of an issue identified with the naming schema of the OVAL file, which prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment. If you have the classification set to **Critical** and **Security**, the update deployment works as expected. Only the *classification of updates* during an assessment is affected.
> > Update Management for Windows Server machines is unaffected; update classification and deployments are unchanged.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
Title: Azure Automation Update Management overview
description: This article provides an overview of the Update Management feature that implements updates for your Windows and Linux machines. Previously updated : 06/22/2021 Last updated : 06/24/2021
When you schedule an update to run on a Linux machine, that for example is confi
Categorization is done for Linux updates as **Security** or **Others** based on the OVAL files, which includes updates addressing security issues or vulnerabilities. But when the update schedule is run, it executes on the Linux machine using the appropriate package manager like YUM, APT, or ZYPPER to install them. The package manager for the Linux distro may have a different mechanism to classify updates, where the results may differ from the ones obtained from OVAL files by Update Management. To manually check the machine and understand which updates are security relevant by your package manager, see [Troubleshoot Linux update deployment](../troubleshoot/update-management.md#updates-linux-installed-different). >[!NOTE]
-> Deploying updates by update classification may not work correctly for Linux distros supported by Update Management. This is a result of an issue identified with the naming schema of the OVAL file and this prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment if your update schedules for Linux has the classification set as **Critical and security updates**.
+> Deploying updates by update classification may not work correctly for Linux distros supported by Update Management. This is the result of an issue identified with the naming schema of the OVAL file, which prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment. If you have the classification set to **Critical** and **Security**, the update deployment works as expected. Only the *classification of updates* during an assessment is affected.
> > Update Management for Windows Server machines is unaffected; update classification and deployments are unchanged.
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-overview.md
Azure Arc enabled SQL Managed Instance is an Azure SQL data service that can be
Azure Arc enabled SQL Managed Instance has near 100% compatibility with the latest SQL Server database engine, and enables existing SQL Server customers to lift and shift their applications to Azure Arc data services with minimal application and database changes while maintaining data sovereignty. At the same time, SQL Managed Instance includes built-in management capabilities that drastically reduce management overhead.
+To learn more about these capabilities, you can also refer to this Data Exposed episode.
+> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/What-is-Azure-Arc-Enabled-SQL-Managed-Instance--Data-Exposed/player?format=ny]
+ ## Next steps Learn more about [Features and Capabilities of Azure Arc enabled SQL Managed Instance](managed-instance-features.md)
azure-arc What Is Azure Arc Enabled Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale.md
Read more details at:
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
+To learn more about these capabilities, you can also refer to this Data Exposed episode.
+> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/What-is-Azure-Arc-Enabled-PostgreSQL-Hyperscale--Data-Exposed/player?format=ny]
+ ## Compare solutions This section describes how Azure Arc enabled PostgreSQL Hyperscale differs from Azure Database for PostgreSQL Hyperscale (Citus)?
azure-arc Conceptual Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-azure-rbac.md
In order to route all authorization access checks to the authorization service i
The `apiserver` of the cluster is configured to use [webhook token authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) and [webhook authorization](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) so that `TokenAccessReview` and `SubjectAccessReview` requests are routed to the guard webhook server. The `TokenAccessReview` and `SubjectAccessReview` requests are triggered by requests for Kubernetes resources sent to the `apiserver`.
-Guard then makes a `checkAccess` call on the authorization service in Azure to see if the requesting Azure AD entity has access to the resource of concern.
+Guard then makes a `checkAccess` call on the authorization service in Azure to see if the requesting Azure AD entity has access to the resource of concern.
If a role assignment that permits this access exists, then an `allowed` response is sent from the authorization service to guard. Guard, in turn, sends an `allowed` response to the `apiserver`, enabling the calling entity to access the requested Kubernetes resource.
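For context, the access that guard checks for is granted through ordinary Azure role assignments scoped to the connected cluster resource. A hedged sketch using one of the built-in Azure Arc enabled Kubernetes roles follows; the assignee and scope values are placeholders:

```azurecli
# Grant an Azure AD user or group read access to Kubernetes resources on an Arc enabled cluster (placeholder values).
az role assignment create \
  --role "Azure Arc Kubernetes Viewer" \
  --assignee <azure-ad-object-id> \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>
```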
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
In this quickstart, you'll learn the benefits of Azure Arc enabled Kubernetes an
> [!IMPORTANT] > Azure Arc agents require both of the following protocols/ports/outbound URLs to function: > * TCP on port 443: `https://:443`
-> * TCP on port 9418: `git://:9418`
| Endpoint (DNS) | Description | | -- | - |
In this quickstart, you'll learn the benefits of Azure Arc enabled Kubernetes an
| `https://<region>.dp.kubernetesconfiguration.azure.com` (for Azure Cloud), `https://<region>.dp.kubernetesconfiguration.azure.us` (for Azure US Government) | Data plane endpoint for the agent to push status and fetch configuration information. | | `https://login.microsoftonline.com` (for Azure Cloud), `https://login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. | | `https://mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
+| `https://gbl.his.arc.azure.com` | Required to get the regional endpoint for pulling system-assigned Managed Service Identity (MSI) certificates. |
| `https://<region-code>.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Service Identity (MSI) certificates. `<region-code>` mapping for Azure cloud regions: `eus` (East US), `weu` (West Europe), `wcus` (West Central US), `scus` (South Central US), `sea` (South East Asia), `uks` (UK South), `wus2` (West US 2), `ae` (Australia East), `eus2` (East US 2), `ne` (North Europe), `fc` (France Central). | ## 1. Register providers for Azure Arc enabled Kubernetes
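A sketch of that provider registration step with the Azure CLI is shown below; the namespaces listed are the ones commonly required for Arc enabled Kubernetes, so verify them against the full article before relying on this:

```azurecli
# Register the resource providers used by Azure Arc enabled Kubernetes (one-time step per subscription).
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration

# Registration can take several minutes; check the state with:
az provider show --namespace Microsoft.Kubernetes --query registrationState
```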
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
In this tutorial, you will apply configurations using GitOps on an Azure Arc ena
>[!TIP] > If the `k8s-configuration` extension is already installed, you can update it to the latest version using the following command - `az extension update --name k8s-configuration`
+- If your Git repository is located outside the firewall and git protocol is being used with the configuration repository parameter, then TCP on port 9418 (`git://:9418`) needs to be enabled for egress access on firewall.
+ ## Create a configuration The [example repository](https://github.com/Azure/arc-k8s-demo) used in this article is structured around the persona of a cluster operator. The manifests in this repository provision a few namespaces, deploy workloads, and provide some team-specific configuration. Using this repository with GitOps creates the following resources on your cluster:
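As a hedged sketch, applying that example repository to an Arc enabled cluster with the `k8s-configuration` extension might look like the following; the cluster name, resource group, and operator names are illustrative:

```azurecli
# Create a cluster-scoped GitOps configuration that points the connected cluster at the example repository.
az k8s-configuration create \
  --name cluster-config \
  --cluster-name <arc-cluster-name> \
  --resource-group <resource-group> \
  --cluster-type connectedClusters \
  --repository-url https://github.com/Azure/arc-k8s-demo \
  --operator-instance-name cluster-config \
  --operator-namespace cluster-config \
  --scope cluster
```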
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/private-link-security.md
Title: Use Azure Private Link to securely connect networks to Azure Arc description: Learn how to use Azure Private Link to securely connect networks to Azure Arc. Previously updated : 06/22/2021 Last updated : 06/24/2021 # Use Azure Private Link to securely connect networks to Azure Arc
See the visual diagram under the section [How it works](#how-it-works) for the n
1. Sign in to the [Azure portal](https://portal.azure.com).
+1. To register your subscription for the Azure Arc enabled servers Private Link preview, you need to register the resource provider **Microsoft.HybridCompute**. You can do this from the Azure portal, with Azure PowerShell, or with the Azure CLI. Do not proceed with step 3 until you've confirmed the resource provider is registered; otherwise, you'll receive a deployment error.
+
+ * To register from the Azure portal, see [Register the resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) to enable the Arc enabled servers Private Link preview. For step 5, specify **Microsoft.HybridCompute**.
+
+ * To register using the Azure PowerShell, run the following command. See [registering a resource provider with Azure PowerShell](../../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell) to learn more.
+
+ ```azurepowershell
+ Register-AzProviderFeature -ProviderNamespace Microsoft.HybridCompute -FeatureName ArcServerPrivateLinkPreview
+ ```
+
+ This command returns a message that registration is in progress. To verify that the resource provider is successfully registered, use:
+
+ ```azurepowershell
+ Get-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
+ ```
+
+ * To register using the Azure CLI, run the following command. See [registering a resource provider with the Azure CLI](../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli) to learn more.
+
+ ```azurecli
+ az feature register --namespace Microsoft.HybridCompute --name ArcServerPrivateLinkPreview
+ ```
+
+ This command returns a message that registration is in progress. To verify that the resource provider is successfully registered, use:
+
+ ```azurecli-interactive
+ az provider show --namespace Microsoft.HybridCompute
+ ```
+ 1. Go to **Create a resource** in the Azure portal and search for **Azure Arc Private Link Scope**. Or you can use the following link to open the [Azure Arc Private Link Scope](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.HybridCompute%2FprivateLinkScopes) page in the portal. :::image type="content" source="./media/private-link-security/find-scope.png" alt-text="Find Private Link Scope" border="true":::
azure-cache-for-redis Cache Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices.md
Title: Best practices for Azure Cache for Redis description: Learn how to use your Azure Cache for Redis effectively by following these best practices. - Last updated 01/06/2020
By following these best practices, you can help maximize the performance and cos
* **Use TLS encryption** - Azure Cache for Redis requires TLS encrypted communications by default. TLS versions 1.0, 1.1 and 1.2 are currently supported. However, TLS 1.0 and 1.1 are on a path to deprecation industry-wide, so use TLS 1.2 if at all possible. If your client library or tool doesn't support TLS, then enabling unencrypted connections can be done [through the Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/redis/update). In such cases where encrypted connections aren't possible, placing your cache and client application into a virtual network would be recommended. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
-* **Idle Timeout** - Azure Cache for Redis currently has 10-minute idle timeout for connections, so your setting should be to less than 10 minutes. Most common client libraries have keep-alive configuration that pings Azure Redis automatically. However, in clients that don't have a keep-alive setting, customer applications are responsible for keeping the connection alive.
+* **Idle Timeout** - Azure Cache for Redis currently has a 10-minute idle timeout for connections, so your setting should be less than 10 minutes. Most common client libraries have a configuration setting that lets them send Redis PING commands to the server automatically and periodically. However, when using client libraries without this type of setting, customer applications themselves are responsible for keeping the connection alive.
## Memory management
azure-maps Geographic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/geographic-coverage.md
Title: Geographic coverage information | Microsoft Azure Maps
+ Title: Geographic coverage information in Microsoft Azure Maps
description: Details of where geographic data is available within Microsoft Azure Maps. Previously updated : 6/11/2020 Last updated : 6/23/2021
The following links provide detail coverage information for each of the services
* [Traffic coverage](traffic-coverage.md) * [Render coverage](render-coverage.md) * [Routing coverage](routing-coverage.md)
-* [Mobility coverage](mobility-coverage.md)
* [Weather coverage](weather-coverage.md) ## Next steps
azure-maps How To Request Real Time Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-real-time-data.md
Title: Request real-time public transit data with Microsoft Azure Maps Mobility
description: Learn how to request real-time public transit data, such as arrivals at a transit stop. See how to use the Azure Maps Mobility services (Preview) for this purpose. Previously updated : 06/22/2021 Last updated : 06/23/2021
# Request real-time public transit data using the Azure Maps Mobility services (Preview) > [!IMPORTANT]
-> Azure Maps Mobility services are currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> The Azure Maps Mobility Services Preview has been retired and will no longer be available and supported after October 5, 2021. All other Azure Maps APIs and Services are unaffected by this retirement announcement.
+> For details, see [Azure Maps Mobility Preview Retirement](https://azure.microsoft.com/updates/azure-maps-mobility-services-preview-retirement/).
This article shows you how to use Azure Maps [Mobility services](/rest/api/maps/mobility) to request real-time public transit data.
azure-maps How To Request Transit Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-transit-data.md
Title: Request transit data with Microsoft Azure Maps Mobility services (Preview
description: Learn how to use the Azure Maps Mobility services (Preview) to request public transit data, such as metro area IDs, transit stops, routes, and route itineraries. Previously updated : 12/07/2020 Last updated : 06/23/2021
# Request public transit data using the Azure Maps Mobility services (Preview) > [!IMPORTANT]
-> Azure Maps Mobility services are currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> The Azure Maps Mobility Services Preview has been retired and will no longer be available and supported after October 5, 2021. All other Azure Maps APIs and Services are unaffected by this retirement announcement.
+> For details, see [Azure Maps Mobility Preview Retirement](https://azure.microsoft.com/updates/azure-maps-mobility-services-preview-retirement/).
This article shows you how to use Azure Maps [Mobility services](/rest/api/maps/mobility) to request public transit data. Transit data includes transit stops, route information, and travel time estimations.
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-bing-maps-web-services.md
Title: 'Tutorial: Migrate web services from Bing Maps | Microsoft Azure Maps'
+ Title: 'Tutorial: Migrate web services from Bing Maps to Microsoft Azure Maps'
description: Tutorial on how to migrate web services from Bing Maps to Microsoft Azure Maps.
The Azure Maps routing service provides the following APIs for calculating route
- [Calculate route](/rest/api/maps/route/getroutedirections): Calculate a route and have the request processed immediately. This API supports both GET and POST requests. POST requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesn't become too long and cause issues. - [Batch route](/rest/api/maps/route/postroutedirectionsbatchpreview): Create a request containing up to 1,000 route requests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.-- [Mobility services (Preview) ](/rest/api/maps/mobility): Calculate routes and directions using public transit. The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-google-maps-web-services.md
Title: 'Tutorial - Migrate web services from Google Maps | Microsoft Azure Maps'
description: Tutorial on how to migrate web services from Google Maps to Microsoft Azure Maps Previously updated : 08/19/2020 Last updated : 06/23/2021
The following service APIs aren't currently available in Azure Maps:
- Nearest Roads - This is achievable using the Web SDK as shown [here](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Basic%20snap%20to%20road%20logic), but not available as a service currently. - Static street view
-Azure Maps has several additional REST web services that may be of interest:
+Azure Maps has several other REST web services that may be of interest:
- [Spatial operations](/rest/api/maps/spatial): Offload complex spatial calculations and operations, such as geofencing, to a service. - [Traffic](/rest/api/maps/traffic): Access real-time traffic flow and incident data.
This table cross-references the Google Maps API parameters with the comparable A
Review [best practices for search](how-to-use-best-practices-for-search.md).
-The Azure Maps reverse geocoding API has some additional features, which aren't available in Google Maps. These features might be useful to integrate with your application, as you migrate your app:
+The Azure Maps reverse geocoding API has some other features, which aren't available in Google Maps. These features might be useful to integrate with your application, as you migrate your app:
* Retrieve speed limit data * Retrieve road use information: local road, arterial, limited access, ramp, and so on
The Azure Maps routing service provides the following APIs for calculating route
- [**Calculate route**](/rest/api/maps/route/getroutedirections): Calculate a route and have the request processed immediately. This API supports both GET and POST requests. POST requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesn't become too long and cause issues. The POST Route Direction in Azure Maps has an option that can take in thousands of [supporting points](/rest/api/maps/route/postroutedirections#supportingpoints) and will use them to recreate a logical route path between them (snap to road). - [**Batch route**](/rest/api/maps/route/postroutedirectionsbatchpreview): Create a request containing up to 1,000 route requests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.-- [**Mobility services (Preview)**](/rest/api/maps/mobility): Calculate routes and directions using public transit. The table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
The table cross-references the Google Maps API parameters with the comparable AP
| `origin` | `query` | | `region` | *N/A* – This feature is geocoding related. Use the *countrySet* parameter when using the Azure Maps geocoding API. | | `traffic_model` | *N/A* – Can only specify if traffic data should be used with the *traffic* parameter. |
-| `transit_mode` | See [Mobility services (Preview) documentation](/rest/api/maps/mobility) |
-| `transit_routing_preference` | See [Mobility services (Preview) documentation](/rest/api/maps/mobility) |
| `units` | *N/A* – Azure Maps only uses the metric system. | | `waypoints` | `query` | > [!TIP] > By default, the Azure Maps route API only returns a summary. It returns the distance and times and the coordinates for the route path. Use the `instructionsType` parameter to retrieve turn-by-turn instructions. And, use the `routeRepresentation` parameter to filter out the summary and route path.
-Azure Maps routing API has additional features, that aren't available in Google Maps. When migrating your app, consider using these features, you might find them useful.
+Azure Maps routing API has other features that aren't available in Google Maps. When migrating your app, consider using these features; you might find them useful.
* Support for route type: shortest, fastest, thrilling, and most fuel efficient.
-* Support for additional travel modes: bus, motorcycle, taxi, truck, and van.
+* Support for other travel modes: bus, motorcycle, taxi, truck, and van.
* Support for 150 waypoints. * Compute multiple travel times in a single request; historic traffic, live traffic, no traffic.
-* Avoid additional road types: carpool roads, unpaved roads, already used roads.
+* Avoid other road types: carpool roads, unpaved roads, already used roads.
* Specify custom areas to avoid. * Limit the elevation, which the route may ascend. * Route based on engine specifications. Calculate routes for combustion or electric vehicles based on engine specifications, and the remaining fuel or charge. * Support commercial vehicle route parameters, such as vehicle dimensions, weight, number of axles, and cargo type. * Specify maximum vehicle speed.
-In addition to this, the route service in Azure Maps supports [calculating routable ranges](/rest/api/maps/route/getrouterange). Calculating routable ranges is also known as isochrones. It entails generating a polygon covering an area that can be traveled to in any direction from an origin point. All under a specified amount of time or amount of fuel or charge.
+In addition, the route service in Azure Maps supports [calculating routable ranges](/rest/api/maps/route/getrouterange). Calculating routable ranges is also known as isochrones. It entails generating a polygon covering an area that can be traveled to in any direction from an origin point. All under a specified amount of time or amount of fuel or charge.
Review the [best practices for routing](how-to-use-best-practices-for-routing.md) documentation.
Add markers using the `markers` parameter in the URL. The `markers` parameter ta
&markers=markerStyles|markerLocation1|markerLocation2|... ```
-To add additional styles, use the `markers` parameters
+To add other styles, use the `markers` parameters
to the URL with a different style and set of locations. Specify marker locations with the "latitude,longitude" format.
Add markers to a static map image by specifying the `pins` parameter in the URL.
&pins=iconType|pinStyles||pinLocation1|pinLocation2|... ```
-To use additional styles, add additional `pins` parameters to the URL with a different style and set of locations.
+To use other styles, add extra `pins` parameters to the URL with a different style and set of locations.
In Azure Maps, the pin location needs to be in the "longitude latitude" format. Google Maps uses "latitude,longitude" format. A space, not a comma, separates longitude and latitude in the Azure Maps format.
Add lines and polygon to a static map image using the `path` parameter in the UR
&path=pathStyles|pathLocation1|pathLocation2|... ```
-Use additional styles by adding additional `path` parameters to the URL with a different style and set of locations.
+Use other styles by adding extra `path` parameters to the URL with a different style and set of locations.
Path locations are specified with the `latitude1,longitude1|latitude2,longitude2|…` format. Paths can be encoded or contain addresses for points.
This table cross-references the Google Maps API parameters with the comparable A
| `location` | `query` | | `timestamp` | `timeStamp` |
-In addition to this API, Azure Maps provides a number of time zone APIs. These APIs convert the time based on the names or the IDs of the time zone:
+In addition to this API, Azure Maps provides many time zone APIs. These APIs convert the time based on the names or the IDs of the time zone:
- [**Time zone by ID**](/rest/api/maps/timezone/gettimezonebyid): Returns current, historical, and future time zone information for the specified IANA time zone ID. - [**Time zone Enum IANA**](/rest/api/maps/timezone/gettimezoneenumiana): Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-google-maps.md
This article provides insights on how to migrate web, mobile and server-based ap
## Azure Maps platform overview
-Azure Maps provides developers from all industries powerful geospatial capabilities. The capabilities are packed with regularly updated map data to provide geographic context for web, and mobile applications. Azure Maps has an Azure One API compliant set of REST APIs. The REST APIs offer Maps Rendering, Search, Routing, Traffic, Time Zones, Geolocation, Geofencing, Map Data, Weather, Mobility, and Spatial Operations. Operations are accompanied by both Web and Android SDKs to make development easy, flexible, and portable across multiple platforms.
+Azure Maps provides developers from all industries powerful geospatial capabilities. The capabilities are packed with regularly updated map data to provide geographic context for web, and mobile applications. Azure Maps has an Azure One API compliant set of REST APIs. The REST APIs offer Maps Rendering, Search, Routing, Traffic, Time Zones, Geolocation, Geofencing, Map Data, Weather, and Spatial Operations. Operations are accompanied by both Web and Android SDKs to make development easy, flexible, and portable across multiple platforms.
## High-level platform comparison
azure-maps Weather Services Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-services-faq.md
Title: Microsoft Azure Maps Weather services frequently asked questions (FAQ)
description: Find answer to common questions about Azure Maps Weather services data and features. Previously updated : 12/07/2020 Last updated : 06/23/2021
This article answers to common questions about Azure Maps [Weather services](/re
**How does Azure Maps source Weather data?**
-Azure Maps is built with the collaboration of world-class mobility and location technology partners, including AccuWeather, who provides the underlying weather data. To read the announcement of Azure MapsΓÇÖ collaboration with AccuWeather, see [Rain or shine: Azure Maps Weather Services will bring insights to your enterprise](https://azure.microsoft.com/blog/rain-or-shine-azure-maps-weather-services-will-bring-insights-to-your-enterprise/).
+Azure Maps is built with the collaboration of world-class mobility and location technology partners, including AccuWeather, who provides the underlying weather data. To read the announcement of Azure Maps' collaboration with AccuWeather, see [Rain or shine: Azure Maps Weather Services will bring insights to your enterprise](https://azure.microsoft.com/blog/rain-or-shine-azure-maps-weather-services-will-bring-insights-to-your-enterprise/).
-AccuWeather has real-time weather and environmental information available anywhere in the world, largely due to their partnerships with numerous national governmental weather agencies and other proprietary arrangements. A list of this foundational information is provided below.
+AccuWeather has real-time weather and environmental information available anywhere in the world, largely because of their partnerships with many national governmental weather agencies and other proprietary arrangements. A list of this foundational information is provided below.
* Publicly available global surface observations from government agencies * Proprietary surface observation datasets from governments and private companies
AccuWeather has real-time weather and environmental information available anywhe
* Air quality observations * Observations from departments of transportation
-Tens of thousands of surface observations, along with other data, are incorporated to create and influence the current conditions made available to users. This includes not only freely available standard datasets, but also unique observations obtained from national meteorological services in many countries/regions including India, Brazil, and Canada and other proprietary inputs. These unique datasets increase the spatial and temporal resolution of current condition data for our users.
+Tens of thousands of surface observations, along with other data, are incorporated to create and influence the current conditions made available to users. These surface observations include not only freely available standard datasets, but also unique observations obtained from national meteorological services in many countries/regions, such as India, Brazil, and Canada, as well as other proprietary inputs. These unique datasets increase the spatial and temporal resolution of current condition data for our users.
-These datasets are reviewed in real time for accuracy for the Digital Forecast System, which utilizes AccuWeatherΓÇÖs proprietary artificial intelligence algorithms to continuously modify the forecasts, ensuring they always incorporate the latest data and thereby maximizing their continual accuracy.
+These datasets are reviewed in real time for accuracy for the Digital Forecast System, which uses AccuWeather's proprietary artificial intelligence algorithms to continuously modify the forecasts, ensuring they always incorporate the latest data and, in that way, maximize their continual accuracy.
**What models create weather forecast data?**
-Numerous weather forecast guidance systems are utilized to formulate global forecasts. Over 150 numerical forecast models are used each day, both external and internal datasets. This includes government models such as the European Centre ECMWF and the U.S. Global Forecast System (GFS). Additionally, AccuWeather incorporates proprietary high-resolution models that downscale forecasts to specific locations and strategic regional domains to predict weather with further accuracy. AccuWeatherΓÇÖs unique blending and weighting algorithms have been developed over the last several decades. These algorithms optimally leverage the numerous forecast inputs to provide highly accurate forecasts.
+Many weather forecast guidance systems are used to formulate global forecasts. Over 150 numerical forecast models are used each day, drawing on both external and internal datasets. These models include government models such as the European Centre ECMWF and the U.S. Global Forecast System (GFS). Also, AccuWeather incorporates proprietary high-resolution models that downscale forecasts to specific locations and strategic regional domains to predict weather with further accuracy. AccuWeather's unique blending and weighting algorithms have been developed over the last several decades. These algorithms optimally apply the many forecast inputs to provide highly accurate forecasts.
## Weather services coverage and availability **What kind of coverage can I expect for different countries/regions?**
-Weather service coverage varies by country/region. All features are not available in every country/region. For more information, see [coverage documentation](./weather-coverage.md).
+Weather service coverage varies by country/region. Not all features are available in every country/region. For more information, see [coverage documentation](./weather-coverage.md).
## Data update frequency **How often is Current Conditions data updated?**
-Current Conditions data is approximately updated at least once an hour, but can be updated more frequently with rapidly changing conditions ΓÇô such as large temperature changes, sky conditions changes, precipitation changes, and so on. Most observation stations around the world report many times per hour as conditions change. However, a few areas will still only update once, twice, or four times an hour at scheduled intervals.
+Current Conditions data is updated at least once an hour, but can be updated more frequently with rapidly changing conditions – such as large temperature changes, sky conditions changes, precipitation changes, and so on. Most observation stations around the world report many times per hour as conditions change. However, a few areas will still only update once, twice, or four times an hour at scheduled intervals.
-Azure Maps caches the Current Conditions data for up to 10 minutes to help capture the near real-time update frequency of the data as it occurs. To see when the cached response expires and avoid displaying outdated data, you can leverage the Expires Header information in the HTTP header of the Azure Maps API response.
+Azure Maps caches the Current Conditions data for up to 10 minutes to help capture the near real-time update frequency of the data as it occurs. To see when the cached response expires and avoid displaying outdated data, you can use the Expires Header information in the HTTP header of the Azure Maps API response.
**How often is Daily and Hourly Forecast data updated?**
-Daily and Hourly Forecast data is updated multiple times per day, as updated observations are received. For example, if a forecasted high/low temperature is surpassed, our Forecast data will adjust at the next update cycle. This can happen at different intervals but typically happens within an hour. Many sudden weather conditions can cause a forecast data change. For example, on a hot summer afternoon, an isolated thunderstorm can suddenly emerge, bringing heavy cloud coverage and rain. The isolated storm can effectively drop temperature by as much as 10 degrees. This new temperature value will impact the Hourly and Daily Forecasts for the remainder of the day, and as such, will be updated in our datasets.
+Daily and Hourly Forecast data is updated multiple times per day, as updated observations are received. For example, if a forecasted high/low temperature is surpassed, our Forecast data will adjust at the next update cycle. Updates happen at different intervals but typically occur within an hour. Many sudden weather conditions may cause a forecast data change. For example, on a hot summer afternoon, an isolated thunderstorm might suddenly emerge, bringing heavy cloud coverage and rain. The isolated storm could effectively drop the temperature by as much as 10 degrees. This new temperature value will impact the Hourly and Daily Forecasts for the rest of the day, and as such, will be updated in our datasets.
-Azure Maps Forecast APIs are cached for up to 30 mins. To see when the cached response expires and avoid displaying outdated data, you can leverage the Expires Header information in the HTTP header of the Azure Maps API response. We recommend updating as necessary based on a specific product use case and UI (user interface).
+Azure Maps Forecast APIs are cached for up to 30 mins. To see when the cached response expires and avoid displaying outdated data, you can look at the Expires Header information in the HTTP header of the Azure Maps API response. We recommend updating as necessary based on a specific product use case and UI (user interface).
## Developing with Azure Maps SDKs
The Azure Maps [Weather concept article](./weather-services-concepts.md#radar-an
**Can I create radar and satellite tile animations?**
-Yes. In addition to real-time radar and satellite tiles, Azure Maps customers can request past and future tiles to enhance data visualizations with map overlays. This can be done by directly calling [Get Map Tile v2 API](/rest/api/maps/renderv2/getmaptilepreview) or by requesting tiles via Azure Maps web SDK. Radar tiles are provided for up to 1.5 hours in the past, and for up to 2 hours in the future. The tiles and are available in 5-minute intervals. Infrared tiles are provided for up to 3 hours in the past, and are available in 10-minute intervals. For more information, see the open-source Weather Tile Animation [code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Animated%20tile%20layer).
+Yes. In addition to real-time radar and satellite tiles, Azure Maps customers can request past and future tiles to enhance data visualizations with map overlays. Customers can call the [Get Map Tile v2 API](/rest/api/maps/renderv2/getmaptilepreview) or request tiles via Azure Maps web SDK. Radar tiles are available for up to 1.5 hours in the past, and for up to 2 hours in the future. The tiles are available in 5-minute intervals. Infrared tiles are provided for up to 3 hours in the past, and are available in 10-minute intervals. For more information, see the open-source Weather Tile Animation [code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Animated%20tile%20layer).
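As a rough illustration only, a single radar tile for a past or future time slice can be requested directly from the Render v2 endpoint. The tileset ID and the `timeStamp` parameter name below are assumptions based on the Render v2 reference, and the key, zoom level, and tile coordinates are placeholders:

```azurecli
# Fetch one weather radar tile for a specific time slice (placeholder key and tile coordinates).
curl "https://atlas.microsoft.com/map/tile?api-version=2.0&tilesetId=microsoft.weather.radar.main&zoom=6&x=10&y=22&timeStamp=2021-06-23T18:30:00Z&subscription-key=<azure-maps-key>"
```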
**Do you offer icons for different weather conditions?**
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-ad-authentication.md
Below are SDKs/scenarios not supported in the Public Preview:
- [Certificate/secret based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use Managed Identities instead. - On by default Codeless monitoring (for languages) for App Service, VM/Virtual machine scale sets, Azure Functions etc. - [Availability tests](availability-overview.md).
+- [Profiler](profiler-overview.md).
## Prerequisites to enable Azure AD authentication ingestion
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-addresses.md
Open ports 80 (http) and 443 (https) for incoming traffic from these addresses (
### IP Addresses
-If you're looking for the actual IP addresses so you can add them to the list of allowed IP's in your firewall, please download the JSON file describing Azure IP Ranges. These files contain the most up-to-date information.
+If you're looking for the actual IP addresses so you can add them to the list of allowed IPs in your firewall, please download the JSON file describing Azure IP Ranges. These files contain the most up-to-date information. For Azure public cloud, you may also look up the IP address ranges by location using the table below.
After downloading the appropriate file, open it using your favorite text editor and search for "ApplicationInsightsAvailability" to go straight to the section of the file describing the service tag for availability tests.
Download [Government Cloud IP addresses](https://www.microsoft.com/download/deta
#### Azure China Cloud Download [China Cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57062).
+#### Addresses grouped by location (Azure Public Cloud)
+
+```
+Australia East
+20.40.124.176/28
+20.40.124.240/28
+20.40.125.80/28
+
+Brazil South
+191.233.26.176/28
+191.233.26.128/28
+191.233.26.64/28
+
+France Central (Formerly France South)
+20.40.129.96/28
+20.40.129.112/28
+20.40.129.128/28
+20.40.129.144/28
+
+France Central
+20.40.129.32/28
+20.40.129.48/28
+20.40.129.64/28
+20.40.129.80/28
+
+East Asia
+52.229.216.48/28
+52.229.216.64/28
+52.229.216.80/28
+
+North Europe
+52.158.28.64/28
+52.158.28.80/28
+52.158.28.96/28
+52.158.28.112/28
+
+Japan East
+52.140.232.160/28
+52.140.232.176/28
+52.140.232.192/28
+
+West Europe
+51.144.56.96/28
+51.144.56.112/28
+51.144.56.128/28
+51.144.56.144/28
+51.144.56.160/28
+51.144.56.176/28
+
+UK South
+51.105.9.128/28
+51.105.9.144/28
+51.105.9.160/28
+
+UK West
+20.40.104.96/28
+20.40.104.112/28
+20.40.104.128/28
+20.40.104.144/28
+
+Southeast Asia
+52.139.250.96/28
+52.139.250.112/28
+52.139.250.128/28
+52.139.250.144/28
+
+West US
+40.91.82.48/28
+40.91.82.64/28
+40.91.82.80/28
+40.91.82.96/28
+40.91.82.112/28
+40.91.82.128/28
+
+Central US
+13.86.97.224/28
+13.86.97.240/28
+13.86.98.48/28
+13.86.98.0/28
+13.86.98.16/28
+13.86.98.64/28
+
+North Central US
+23.100.224.16/28
+23.100.224.32/28
+23.100.224.48/28
+23.100.224.64/28
+23.100.224.80/28
+23.100.224.96/28
+23.100.224.112/28
+23.100.225.0/28
+
+South Central US
+20.45.5.160/28
+20.45.5.176/28
+20.45.5.192/28
+20.45.5.208/28
+20.45.5.224/28
+20.45.5.240/28
+
+East US
+20.42.35.32/28
+20.42.35.64/28
+20.42.35.80/28
+20.42.35.96/28
+20.42.35.112/28
+20.42.35.128/28
+
+```
+ ### Discovery API You may also want to [programmatically retrieve](../../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api-public-preview) the current list of service tags together with IP address range details.
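For example, the same information is exposed through the Azure CLI; the region and JMESPath filter below are illustrative:

```azurecli
# List service tags for a region and filter to the tag used by Application Insights availability tests.
az network list-service-tags --location westus2 --query "values[?name=='ApplicationInsightsAvailability']"
```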
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/pricing.md
Previously updated : 5/05/2021 Last updated : 6/24/2021
The Application Insights option to [Enable alerting on custom metric dimensions]
### Workspace-based Application Insights
-For Application Insights resources which send their data to a Log Analytics workspace, called [workspace-based Application Insights resources](create-workspace-resource.md), the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. This enables customers to leverage all options of the Log Analytics [pricing model](../logs/manage-cost-storage.md#pricing-model) that includes Capacity Reservations in addition to Pay-As-You-Go. Log Analytics also has more options for data retention, including [retention by data type](../logs/manage-cost-storage.md#retention-by-data-type). Application Insights data types in the workspace receive 90 days of retention without charges. Usage of web tests and enabling alerting on custom metric dimensions is still reported through Application Insights. Learn how to track data ingestion and retention costs in Log Analytics using the [Usage and estimated costs](../logs/manage-cost-storage.md#understand-your-usage-and-estimate-costs), [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill) and [Log Analytics queries](#data-volume-for-workspace-based-application-insights-resources).
+For Application Insights resources which send their data to a Log Analytics workspace, called [workspace-based Application Insights resources](create-workspace-resource.md), the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. This enables you to leverage all options of the Log Analytics [pricing model](../logs/manage-cost-storage.md#pricing-model), including **Commitment Tiers** in addition to Pay-As-You-Go. Commitment Tiers offer pricing up to 30% lower than Pay-As-You-Go. Log Analytics also has more options for data retention, including [retention by data type](../logs/manage-cost-storage.md#retention-by-data-type). Application Insights data types in the workspace receive 90 days of retention without charges. Usage of web tests and enabling alerting on custom metric dimensions is still reported through Application Insights. Learn how to track data ingestion and retention costs in Log Analytics using the [Usage and estimated costs](../logs/manage-cost-storage.md#understand-your-usage-and-estimate-costs), [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill) and [Log Analytics queries](#data-volume-for-workspace-based-application-insights-resources).
## Estimating the costs to manage your application
You can write a script to set the pricing tier by using Azure Resource Managemen
[apiproperties]: app-insights-api-custom-events-metrics.md#properties [start]: ./app-insights-overview.md [pricing]: https://azure.microsoft.com/pricing/details/application-insights/
-[pricing]: https://azure.microsoft.com/pricing/details/application-insights/
+[pricing]: https://azure.microsoft.com/pricing/details/application-insights/
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
This article describes how to enable [SQL insights](sql-insights-overview.md) to
> [!NOTE] > To enable SQL insights by creating the monitoring profile and virtual machine using a resource manager template, see [Resource Manager template samples for SQL insights](resource-manager-sql-insights.md).
+To learn how to enable SQL Insights, you can also refer to this Data Exposed episode.
+> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/How-to-Set-up-Azure-Monitor-for-SQL-Insights/player?format=ny]
+ ## Create Log Analytics workspace SQL insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
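If you prefer to script the workspace creation rather than use the portal, a minimal Az PowerShell sketch follows; the resource group, workspace name, and region are hypothetical.

```powershell
# Requires the Az.OperationalInsights module and an authenticated session.
# Keep the workspace in the same Azure region as the monitoring profiles that will use it.
New-AzOperationalInsightsWorkspace `
    -ResourceGroupName 'rg-sql-monitoring' `
    -Name 'law-sql-insights' `
    -Location 'eastus' `
    -Sku 'PerGB2018'
```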
You need a user on the SQL deployments that you want to monitor. Follow the proc
The instructions below cover the process per type of SQL that you can monitor. To accomplish this with a script on several SQL resources at once, please refer to the following [README file](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Workbooks/Workloads/SQL/SQL%20Insights%20Onboarding%20Scripts/Permissions_LoginUser_Account_Creation-README.txt) and [example script](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Workbooks/Workloads/SQL/SQL%20Insights%20Onboarding%20Scripts/Permissions_LoginUser_Account_Creation.ps1).
-### Azure SQL database
+### Azure SQL Database
+
+> [!NOTE]
+> SQL insights does not support the following Azure SQL Database scenarios:
+> - **Elastic pools**: Metrics cannot be gathered for elastic pools or for the databases within them.
+> - **Low service tiers**: Metrics cannot be gathered for databases on the Basic, S0, S1, and S2 [service tiers](../../azure-sql/database/resource-limits-dtu-single-databases.md).
+>
+> SQL insights has limited support for the following Azure SQL Database scenarios:
+> - **Serverless tier**: Metrics can be gathered for databases using the [serverless compute tier](../../azure-sql/database/serverless-tier-overview.md). However, the process of gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.
+ Open Azure SQL Database with [SQL Server Management Studio](../../azure-sql/database/connect-query-ssms.md) or [Query Editor (preview)](../../azure-sql/database/connect-query-portal.md) in the Azure portal. Run the following script to create a user with the required permissions. Replace *user* with a username and *mystrongpassword* with a password.
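As a purely hypothetical alternative to running the script interactively, the same kind of user could also be created from PowerShell. The database name, admin credentials, and the `VIEW DATABASE STATE` grant below are assumptions for illustration, not the exact permissions script referenced above.

```powershell
# Requires the SqlServer module (Install-Module SqlServer).
# All names, credentials, and the granted permission are hypothetical placeholders.
$query = @"
CREATE USER [user] WITH PASSWORD = N'mystrongpassword';
GRANT VIEW DATABASE STATE TO [user];
"@

Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' `
    -Database 'mydatabase' `
    -Username 'serveradmin' `
    -Password 'adminpassword' `
    -Query $query
```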
Depending upon the network settings of your SQL resources, the virtual machines
## Configure network settings Each type of SQL offers methods for your monitoring virtual machine to securely access SQL. The sections below cover the options based upon the type of SQL.
-### Azure SQL Databases
+### Azure SQL Database
SQL insights supports accessing your Azure SQL Database via its public endpoint as well as from its virtual network.
For access via the public endpoint, you would add a rule under the **Firewall se
:::image type="content" source="media/sql-insights-enable/firewall-settings.png" alt-text="Firewall settings." lightbox="media/sql-insights-enable/firewall-settings.png":::
-### Azure SQL Managed Instances
+### Azure SQL Managed Instance
If your monitoring virtual machine will be in the same VNet as your SQL MI resources, then see [Connect inside the same VNet](../../azure-sql/managed-instance/connect-application-instance.md#connect-inside-the-same-vnet). If your monitoring virtual machine will be in the different VNet than your SQL MI resources, then see [Connect inside a different VNet](../../azure-sql/managed-instance/connect-application-instance.md#connect-inside-a-different-vnet).
-### Azure virtual machine and Azure SQL virtual machine
+### SQL Server
If your monitoring virtual machine is in the same VNet as your SQL virtual machine resources, then see [Connect to SQL Server within a virtual network](../../azure-sql/virtual-machines/windows/ways-to-connect-to-sql.md#connect-to-sql-server-within-a-virtual-network). If your monitoring virtual machine will be in the different VNet than your SQL virtual machine resources, then see [Connect to SQL Server over the internet](../../azure-sql/virtual-machines/windows/ways-to-connect-to-sql.md#connect-to-sql-server-over-the-internet). ## Store monitoring password in Key Vault
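As a minimal sketch of this step with Az PowerShell, assuming a hypothetical vault and secret name (access for the monitoring virtual machine's identity must be granted to the vault separately):

```powershell
# Requires the Az.KeyVault module; the vault name, secret name, and password are placeholders.
$secretValue = ConvertTo-SecureString 'mystrongpassword' -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName 'kv-sql-monitoring' -Name 'sql-monitoring-password' -SecretValue $secretValue
```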
Open SQL insights by selecting **SQL (preview)** from the **Insights** section o
The profile will store the information that you want to collect from your SQL systems. It has specific settings for: - Azure SQL Database -- Azure SQL Managed Instances
+- Azure SQL Managed Instance
- SQL Server running on virtual machines For example, you might create one profile named *SQL Production* and another named *SQL Staging* with different settings for frequency of data collection, what data to collect, and which workspace to send the data to.
The connection string specifies the username that SQL insights should use when l
The connections string will vary for each type of SQL resource:
-#### Azure SQL Databases
+#### Azure SQL Database
Enter the connection string in the form: ```
Get the details from the **Connection strings** menu item for the database.
To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string. SQL Insights supports monitoring a single secondary. The collected data will be tagged to reflect primary or secondary. -
-#### Azure virtual machines running SQL Server
-Enter the connection string in the form:
-
-```
-"sqlVmConnections":ΓÇ»[
- "Server=MyServerIPAddress;Port=1433;User Id=$username;Password=$password;"
-]
-```
-
-If your monitoring virtual machine is in the same VNET, use the private IP address of the Server. Otherwise, use the public IP address. If you're using Azure SQL virtual machine, you can see which port to use here on the **Security** page for the resource.
---
-### Azure SQL Managed Instances
+#### Azure SQL Managed Instance
Enter the connection string in the form: ```
Get the details from the **Connection strings** menu item for the managed instan
To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string. SQL Insights supports monitoring of a single secondary and the collected data will be tagged to reflect Primary or Secondary.
+#### SQL Server
+Enter the connection string in the form:
+
+```
+"sqlVmConnections":ΓÇ»[
+ "Server=MyServerIPAddress;Port=1433;User Id=$username;Password=$password;"
+]
+```
+
+If your monitoring virtual machine is in the same VNet, use the private IP address of the server. Otherwise, use the public IP address. If you're using an Azure SQL virtual machine, you can find the port to use on the **Security** page for the resource.
+ ## Monitoring profile created
If you do not see data, see [Troubleshooting SQL insights](sql-insights-troubles
## Next steps -- See [Troubleshooting SQL insights](sql-insights-troubleshoot.md) if SQL insights isn't working properly after being enabled.
+- See [Troubleshooting SQL insights](sql-insights-troubleshoot.md) if SQL insights isn't working properly after being enabled.
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers** w
> [!NOTE] > Starting June 2, 2021, **Capacity Reservations** are now called **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Additionally, three new larger commitment tiers have been added at 1000, 2000 and 5000 GB/day.
-In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the AzureActivity, Heartbeat and Usage types. To determine whether an event was excluded from billing for data ingestion, you can use the `_IsBillable` property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (1.0E9 bytes).
+In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](https://docs.microsoft.com/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](https://docs.microsoft.com/azure/azure-monitor/reference/tables/heartbeat), [Usage](https://docs.microsoft.com/azure/azure-monitor/reference/tables/usage) and [Operation](https://docs.microsoft.com/azure/azure-monitor/reference/tables/operation) types. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events).
Also, some solutions, such as [Azure Defender (Security Center)](https://azure.microsoft.com/pricing/details/azure-defender/), [Azure Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
Title: Lock resources to prevent changes description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles. Previously updated : 05/07/2021 Last updated : 06/24/2021
Applying locks can lead to unexpected results because some operations that don't
- A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue). -- A cannot-delete lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted, and doesn't protect blob, queue, table, or file data within that storage account.
+- A cannot-delete lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted. If a request uses [data plane operations](control-plane-and-data-plane.md#data-plane), the lock on the storage account doesn't protect blob, queue, table, or file data within that storage account. However, if the request uses [control plane operations](control-plane-and-data-plane.md#control-plane), the lock protects those resources.
+
+ For example, if a request uses [File Shares - Delete](/rest/api/storagerp/file-shares/delete), which is a control plane operation, the deletion is denied. If the request uses [Delete Share](/rest/api/storageservices/delete-share), which is a data plane operation, the deletion succeeds. We recommend that you use the control plane operations. A scripted sketch of this difference appears after this list.
- A read-only lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted or modified, and doesn't protect blob, queue, table, or file data within that storage account.
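As a hedged illustration of the control-plane versus data-plane distinction above, the following Az PowerShell sketch applies a delete lock and then attempts to delete a file share through both paths. All resource names and the account key are placeholders.

```powershell
# Requires the Az.Resources and Az.Storage modules; all names below are placeholders.
# Apply a CanNotDelete (delete) lock to the storage account.
New-AzResourceLock -LockName 'storage-delete-lock' -LockLevel CanNotDelete `
    -ResourceGroupName 'rg-demo' -ResourceName 'stdemo001' `
    -ResourceType 'Microsoft.Storage/storageAccounts'

# Control plane delete (Microsoft.Storage resource provider): denied by the lock.
Remove-AzRmStorageShare -ResourceGroupName 'rg-demo' -StorageAccountName 'stdemo001' -Name 'share1'

# Data plane delete (File service endpoint): not blocked by the lock.
$ctx = New-AzStorageContext -StorageAccountName 'stdemo001' -StorageAccountKey '<storage-account-key>'
Remove-AzStorageShare -Name 'share1' -Context $ctx
```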
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-support.md
Jump to a resource provider namespace:
> | - | -- | -- | > | netAppAccounts | Yes | No | > | netAppAccounts / accountBackups | No | No |
-> | netAppAccounts / capacityPools | Yes | No |
+> | netAppAccounts / capacityPools | Yes | Yes |
> | netAppAccounts / capacityPools / volumes | Yes | No | > | netAppAccounts / capacityPools / volumes / snapshots | No | No | > | netAppAccounts / volumeGroups | No | No |
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/data-types.md
description: Describes the data types that are available in Azure Resource Manag
Previously updated : 05/07/2021 Last updated : 06/24/2021 # Data types in ARM templates
Arrays start with a left bracket (`[`) and end with a right bracket (`]`). An ar
] } },+
+"outputs": {
+ "arrayOutput": {
+ "type": "array",
+ "value": "[variables('exampleArray')]"
+ },
+ "firstExampleArrayElement": {
+ "type": "int",
+ "value": "[parameters('exampleArray')[0]]"
+ }
+}
``` The elements of an array can be the same type or different types.
The elements of an array can be the same type or different types.
"example string" ] }+
+"outputs": {
+ "arrayOutput": {
+ "type": "array",
+ "value": "[variables('mixedArray')]"
+ },
+ "firstMixedArrayElement": {
+ "type": "string",
+ "value": "[variables('mixedArray')[0]]"
+ }
+}
``` ## Booleans
azure-resource-manager Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/variables.md
Title: Variables in templates description: Describes how to define variables in an Azure Resource Manager template (ARM template). Previously updated : 05/14/2021 Last updated : 06/24/2021 # Variables in ARM templates
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
Previously updated : 06/18/2021 Last updated : 06/24/2021 # Auditing for Azure SQL Database and Azure Synapse Analytics
You can manage Azure SQL Database auditing using [Azure Resource Manager](../../
> [!NOTE] > The linked samples are on an external public repository and are provided 'as is', without warranty, and are not supported under any Microsoft support program/service.+
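Beyond the linked samples, and purely as a hedged sketch, database auditing can also be switched on with Az PowerShell; the resource names, storage account ID, and retention value below are placeholders.

```powershell
# Requires the Az.Sql module; all names and IDs are placeholders.
Set-AzSqlDatabaseAudit `
    -ResourceGroupName 'rg-demo' `
    -ServerName 'sqlserver-demo' `
    -DatabaseName 'mydatabase' `
    -BlobStorageTargetState Enabled `
    -StorageAccountResourceId '/subscriptions/<subscription-id>/resourceGroups/rg-demo/providers/Microsoft.Storage/storageAccounts/stauditlogs' `
    -RetentionInDays 90
```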
+## See also
+
+- Data Exposed episode [What's New in Azure SQL Auditing](https://channel9.msdn.com/Shows/Data-Exposed/Whats-New-in-Azure-SQL-Auditing) on Channel 9.
+- [Auditing for SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/managed-instance/auditing-configure)
+- [Auditing for SQL Server](https://docs.microsoft.com/sql/relational-databases/security/auditing/sql-server-audit-database-engine)
azure-sql Dynamic Data Masking Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/dynamic-data-masking-overview.md
Previously updated : 01/25/2021 Last updated : 06/24/2021 tags: azure-synpase # Dynamic data masking
You can use the REST API to programmatically manage data masking policy and rule
Dynamic data masking can be configured by the Azure SQL Database admin, server admin, or the role-based access control (RBAC) [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) role.
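In addition to the portal and REST API, Az PowerShell can manage masking rules. A minimal sketch follows; the server, database, table, and column names are placeholders.

```powershell
# Requires the Az.Sql module; all names are placeholders.
# Mask the Email column of dbo.Customers with the built-in email masking function.
New-AzSqlDatabaseDataMaskingRule `
    -ResourceGroupName 'rg-demo' `
    -ServerName 'sqlserver-demo' `
    -DatabaseName 'mydatabase' `
    -SchemaName 'dbo' `
    -TableName 'Customers' `
    -ColumnName 'Email' `
    -MaskingFunction 'Email'
```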
-## Next steps
+## See also
-[Dynamic Data Masking](/sql/relational-databases/security/dynamic-data-masking)
+- [Dynamic Data Masking](/sql/relational-databases/security/dynamic-data-masking) for SQL Server.
+- Data Exposed episode about [Granular Permissions for Azure SQL Dynamic Data Masking](https://channel9.msdn.com/Shows/Data-Exposed/Granular-Permissions-for-Azure-SQL-Dynamic-Data-Masking) on Channel 9.
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-availability-sla.md
Whenever the database engine or the operating system is upgraded, or a failure i
## General Purpose service tier zone redundant availability (Preview)
-Zone redundant configuration for the general purpose service tier is offered for both serverless and provisioned compute. This configuration utilizes [Azure Availability Zones](../../availability-zones/az-overview.md)  to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your new and existing serverlesss and provisioned general purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes of the application logic.
+Zone redundant configuration for the general purpose service tier is offered for both serverless and provisioned compute. This configuration utilizes [Azure Availability Zones](../../availability-zones/az-overview.md) to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your new and existing serverless and provisioned general purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic.
Zone redundant configuration for the general purpose tier has two layers:
azure-sql Xevent Code Event File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/xevent-code-event-file.md
$policySasStartTime = '2017-10-01';
$storageAccountLocation = 'YOUR_STORAGE_ACCOUNT_LOCATION'; $storageAccountName = 'YOUR_STORAGE_ACCOUNT_NAME';
-$contextName = 'YOUR_CONTEXT_NAME';
$containerName = 'YOUR_CONTAINER_NAME'; $policySasToken = ' ? ';
azure-sql Availability Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-overview.md
vm-windows-sql-server Previously updated : "10/07/2020" Last updated : "06/01/2021"
azure-sql Availability Group Vnn Azure Load Balancer Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-vnn-azure-load-balancer-configure.md
ms.devlang: na
vm-windows-sql-server Previously updated : 06/02/2020 Last updated : 06/14/2021
Before you complete the steps in this article, you should already have:
## Create load balancer
+You can create either an internal load balancer or an external load balancer. An internal load balancer can be accessed only from private resources that are internal to the network. An external load balancer can route traffic from the public internet to internal resources. When you configure an internal load balancer, use the same IP address as the availability group listener resource for the frontend IP when configuring the load-balancing rules. When you configure an external load balancer, you cannot use the same IP address, because the listener IP address cannot be a public IP address. To use an external load balancer, logically allocate an IP address in the same subnet as the availability group that does not conflict with any other IP address, and use this address as the frontend IP address for the load-balancing rules.
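If you'd rather script an internal load balancer than use the portal steps that follow, a minimal Az PowerShell sketch could look like this; every name, the listener IP address, and the probe port are hypothetical.

```powershell
# Requires the Az.Network module; all names, the IP address, and the probe port are placeholders.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'rg-sql-ha' -Name 'vnet-sql'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'subnet-sql'

# For an internal load balancer, the frontend IP must match the availability group listener IP.
$feip   = New-AzLoadBalancerFrontendIpConfig -Name 'fe-aglistener' -PrivateIpAddress '10.0.0.20' -SubnetId $subnet.Id
$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'be-sqlnodes'
$probe  = New-AzLoadBalancerProbeConfig -Name 'probe-ag' -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
$rule   = New-AzLoadBalancerRuleConfig -Name 'rule-sql' -FrontendIpConfiguration $feip `
            -BackendAddressPool $bepool -Probe $probe -Protocol Tcp `
            -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP -IdleTimeoutInMinutes 4

# The cluster node VMs still need to be added to the backend pool (for example, via their NIC configurations).
New-AzLoadBalancer -ResourceGroupName 'rg-sql-ha' -Name 'lb-ag-internal' -Location 'eastus' `
    -Sku Standard -FrontendIpConfiguration $feip -BackendAddressPool $bepool `
    -Probe $probe -LoadBalancingRule $rule
```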
+ Use the [Azure portal](https://portal.azure.com) to create the load balancer: 1. In the Azure portal, go to the resource group that contains the virtual machines.
Use the [Azure portal](https://portal.azure.com) to create the load balancer:
1. Associate the backend pool with the availability set that contains the VMs.
-1. Under **Target network IP configurations**, select **VIRTUAL MACHINE** and choose the virtual machines that will participate as cluster nodes. Be sure to include all virtual machines that will host the FCI or availability group.
+1. Under **Target network IP configurations**, select **VIRTUAL MACHINE** and choose the virtual machines that will participate as cluster nodes. Be sure to include all virtual machines that will host the availability group.
1. Select **OK** to create the backend pool.
Use the [Azure portal](https://portal.azure.com) to create the load balancer:
## Set load-balancing rules
+Set the load-balancing rules for the load balancer.
+
+# [Private load balancer](#tab/ilb)
+ 1. On the load balancer pane, select **Load-balancing rules**. 1. Select **Add**.
Use the [Azure portal](https://portal.azure.com) to create the load balancer:
1. Set the load-balancing rule parameters: - **Name**: A name for the load-balancing rules.
- - **Frontend IP address**: The IP address for the SQL Server FCIs or the AG listener's clustered network resource.
+ - **Frontend IP address**: The IP address for the AG listener's clustered network resource.
- **Port**: The SQL Server TCP port. The default instance port is 1433. - **Backend port**: The same port as the **Port** value when you enable **Floating IP (direct server return)**. - **Backend pool**: The backend pool name that you configured earlier.
Use the [Azure portal](https://portal.azure.com) to create the load balancer:
1. Select **OK**.
+# [Public load balancer](#tab/elb)
+
+1. On the load balancer pane, select **Load-balancing rules**.
+
+1. Select **Add**.
+
+1. Set the load-balancing rule parameters:
+
+ - **Name**: A name for the load-balancing rules.
+ - **Frontend IP address**: The public IP address that clients use to connect to the public endpoint.
+ - **Port**: The SQL Server TCP port. The default instance port is 1433.
+ - **Backend port**: The same port used by the listener of the AG. The port is 1433 by default.
+ - **Backend pool**: The backend pool name that you configured earlier.
+ - **Health probe**: The health probe that you configured earlier.
+ - **Session persistence**: None.
+ - **Idle timeout (minutes)**: 4.
+ - **Floating IP (direct server return)**: Disabled.
+
+1. Select **OK**.
+++ ## Configure cluster probe Set the cluster probe port parameter in PowerShell.
+# [Private load balancer](#tab/ilb)
+ To set the cluster probe port parameter, update the variables in the following script with values from your environment. Remove the angle brackets (`<` and `>`) from the script. ```powershell
The following table describes the values that you need to update:
|**Value**|**Description**| ||| |`Cluster Network Name`| The Windows Server Failover Cluster name for the network. In **Failover Cluster Manager** > **Networks**, right-click the network and select **Properties**. The correct value is under **Name** on the **General** tab.|
-|`AG listener IP Address Resource Name`|The resource name for the SQL Server FCI's or AG listener's IP address. In **Failover Cluster Manager** > **Roles**, under the SQL Server FCI role, under **Server Name**, right-click the IP address resource and select **Properties**. The correct value is under **Name** on the **General** tab.|
-|`ILBIP`|The IP address of the internal load balancer (ILB). This address is configured in the Azure portal as the ILB's frontend address. This is also the SQL Server FCI's IP address. You can find it in **Failover Cluster Manager** on the same properties page where you located the `<AG listener IP Address Resource Name>`.|
-|`nnnnn`|The probe port that you configured in the load balancer's health probe. Any unused TCP port is valid.|
+|`AG listener IP Address Resource Name`|The resource name for the IP address of the AG listener. In **Failover Cluster Manager** > **Roles**, under the availability group role, under **Server Name**, right-click the IP address resource and select **Properties**. The correct value is under **Name** on the **General** tab.|
+|`ILBIP`|The IP address of the internal load balancer (ILB). This address is configured in the Azure portal as the frontend address of the ILB. This is the same IP address as the availability group listener. You can find it in **Failover Cluster Manager** on the same properties page where you located the `<AG listener IP Address Resource Name>`.|
+|`nnnnn`|The probe port that you configured in the health probe of the load balancer. Any unused TCP port is valid.|
+|"SubnetMask"| The subnet mask for the cluster parameter. It must be the TCP IP broadcast address: `255.255.255.255`.|
++
+After you set the cluster probe, you can see all the cluster parameters in PowerShell. Run this script:
+
+```powershell
+Get-ClusterResource $IPResourceName | Get-ClusterParameter
+```
+
+# [Public load balancer](#tab/elb)
+
+To set the cluster probe port parameter, update the variables in the following script with values from your environment. Remove the angle brackets (`<` and `>`) from the script.
+
+```powershell
+$ClusterNetworkName = "<Cluster Network Name>"
+$IPResourceName = "<Availability group Listener IP Address Resource Name>"
+$ELBIP = "<n.n.n.n>"
+[int]$ProbePort = <nnnnn>
+
+Import-Module FailoverClusters
+
+Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ELBIP";"ProbePort"=$ProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}
+```
+
+The following table describes the values that you need to update:
++
+|**Value**|**Description**|
+|||
+|`Cluster Network Name`| The Windows Server Failover Cluster name for the network. In **Failover Cluster Manager** > **Networks**, right-click the network and select **Properties**. The correct value is under **Name** on the **General** tab.|
+|`AG listener IP Address Resource Name`|The resource name for the IP address of the AG listener. In **Failover Cluster Manager** > **Roles**, under the availability group role, under **Server Name**, right-click the IP address resource and select **Properties**. The correct value is under **Name** on the **General** tab.|
+|`ELBIP`|The IP address of the external load balancer (ELB). This address is configured in the Azure portal as the frontend address of the ELB and is used to connect to the public load balancer from external resources.|
+|`nnnnn`|The probe port that you configured in the health probe of the load balancer. Any unused TCP port is valid.|
|"SubnetMask"| The subnet mask for the cluster parameter. It must be the TCP IP broadcast address: `255.255.255.255`.|
After you set the cluster probe, you can see all the cluster parameters in Power
Get-ClusterResource $IPResourceName | Get-ClusterParameter ```
+> [!NOTE]
+> Because there is no private IP address for the external load balancer, clients cannot use the VNN DNS name directly, as it resolves to an IP address within the subnet. Use either the public IP address of the public load balancer, or configure another DNS mapping on the DNS server.
++++ ## Modify connection string For clients that support it, add the `MultiSubnetFailover=True` to the connection string. While the MultiSubnetFailover connection option is not required, it does provide the benefit of a faster subnet failover. This is because the client driver will attempt to open up a TCP socket for each IP address in parallel. The client driver will wait for the first IP to respond with success and once it does, will then use it for the connection.
Get-ClusterResource yourListenerName|Set-ClusterParameter HostRecordTTL 300
To learn more, see the SQL Server [listener connection timeout](/troubleshoot/sql/availability-groups/listener-connection-times-out) documentation. + > [!TIP] > - Set the MultiSubnetFailover parameter = true in the connection string even for HADR solutions that span a single subnet to support future spanning of subnets without the need to update connection strings. > - By default, clients cache cluster DNS records for 20 minutes. By reducing HostRecordTTL you reduce the Time to Live (TTL) for the cached record, legacy clients may reconnect more quickly. As such, reducing the HostRecordTTL setting may result in increased traffic to the DNS servers.
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes.md
vm-windows-sql-server Previously updated : 04/25/2021 Last updated : 06/01/2021 # Documentation changes for SQL Server on Azure Virtual Machines [!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)]
azure-sql Failover Cluster Instance Vnn Azure Load Balancer Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-vnn-azure-load-balancer-configure.md
Before you complete the steps in this article, you should already have:
## Create load balancer
+You can create either an internal load balancer or an external load balancer. An internal load balancer can be accessed only from private resources that are internal to the network. An external load balancer can route traffic from the public internet to internal resources. When you configure an internal load balancer, use the same IP address as the FCI resource for the frontend IP when configuring the load-balancing rules. When you configure an external load balancer, you cannot use the same IP address, because the FCI IP address cannot be a public IP address. To use an external load balancer, logically allocate an IP address in the same subnet as the FCI that does not conflict with any other IP address, and use this address as the frontend IP address for the load-balancing rules.
++ Use the [Azure portal](https://portal.azure.com) to create the load balancer: 1. In the Azure portal, go to the resource group that contains the virtual machines.
Use the [Azure portal](https://portal.azure.com) to create the load balancer:
## Set load-balancing rules
-1. On the load balancer pane, select **Load-balancing rules**.
+Set the load-balancing rules for the load balancer.
-1. Select **Add**.
+# [Private load balancer](#tab/ilb)
+
+Set the load-balancing rules for the private load balancer by following these steps:
+
+1. On the load balancer pane, select **Load-balancing rules**.
+1. Select **Add**.
1. Set the load-balancing rule parameters: - **Name**: A name for the load-balancing rules.
- - **Frontend IP address**: The IP address for the SQL Server FCI's or the AG listener's clustered network resource.
+ - **Frontend IP address**: The IP address for the clustered network resource of the SQL Server FCI.
- **Port**: The SQL Server TCP port. The default instance port is 1433. - **Backend port**: The same port as the **Port** value when you enable **Floating IP (direct server return)**. - **Backend pool**: The backend pool name that you configured earlier.
Use the [Azure portal](https://portal.azure.com) to create the load balancer:
1. Select **OK**.
+# [Public load balancer](#tab/elb)
+
+Set the load-balancing rules for the public load balancer by following these steps:
+
+1. On the load balancer pane, select **Load-balancing rules**.
+1. Select **Add**.
+1. Set the load-balancing rule parameters:
+
+ - **Name**: A name for the load-balancing rules.
+ - **Frontend IP address**: The public IP address that clients use to connect to the public endpoint.
+ - **Port**: The SQL Server TCP port. The default instance port is 1433.
+ - **Backend port**: The port used by the FCI instance. The default is 1433.
+ - **Backend pool**: The backend pool name that you configured earlier.
+ - **Health probe**: The health probe that you configured earlier.
+ - **Session persistence**: None.
+ - **Idle timeout (minutes)**: 4.
+ - **Floating IP (direct server return)**: Disabled.
+
+1. Select **OK**.
+++++ ## Configure cluster probe Set the cluster probe port parameter in PowerShell.
+# [Private load balancer](#tab/ilb)
+ To set the cluster probe port parameter, update the variables in the following script with values from your environment. Remove the angle brackets (`<` and `>`) from the script. ```powershell
The following table describes the values that you need to update:
|**Value**|**Description**| ||| |`Cluster Network Name`| The Windows Server Failover Cluster name for the network. In **Failover Cluster Manager** > **Networks**, right-click the network and select **Properties**. The correct value is under **Name** on the **General** tab.|
-|`SQL Server FCI/AG listener IP Address Resource Name`|The resource name for the SQL Server FCI's or AG listener's IP address. In **Failover Cluster Manager** > **Roles**, under the SQL Server FCI role, under **Server Name**, right-click the IP address resource and select **Properties**. The correct value is under **Name** on the **General** tab.|
+|`SQL Server FCI IP Address Resource Name`|The resource name for the SQL Server FCI IP address. In **Failover Cluster Manager** > **Roles**, under the SQL Server FCI role, under **Server Name**, right-click the IP address resource and select **Properties**. The correct value is under **Name** on the **General** tab.|
|`ILBIP`|The IP address of the internal load balancer (ILB). This address is configured in the Azure portal as the ILB's frontend address. This is also the SQL Server FCI's IP address. You can find it in **Failover Cluster Manager** on the same properties page where you located the `<SQL Server FCI/AG listener IP Address Resource Name>`.| |`nnnnn`|The probe port that you configured in the load balancer's health probe. Any unused TCP port is valid.| |"SubnetMask"| The subnet mask for the cluster parameter. It must be the TCP IP broadcast address: `255.255.255.255`.|
After you set the cluster probe, you can see all the cluster parameters in Power
Get-ClusterResource $IPResourceName | Get-ClusterParameter ```
+# [Public load balancer](#tab/elb)
+
+To set the cluster probe port parameter, update the variables in the following script with values from your environment. Remove the angle brackets (`<` and `>`) from the script.
+
+```powershell
+$ClusterNetworkName = "<Cluster Network Name>"
+$IPResourceName = "<SQL Server FCI IP Address Resource Name>"
+$ELBIP = "<n.n.n.n>"
+[int]$ProbePort = <nnnnn>
+
+Import-Module FailoverClusters
+
+Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ELBIP";"ProbePort"=$ProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}
+```
+
+The following table describes the values that you need to update:
++
+|**Value**|**Description**|
+|||
+|`Cluster Network Name`| The Windows Server Failover Cluster name for the network. In **Failover Cluster Manager** > **Networks**, right-click the network and select **Properties**. The correct value is under **Name** on the **General** tab.|
+|`SQL Server FCI IP Address Resource Name`|The resource name for the IP address of the SQL Server FCI. In **Failover Cluster Manager** > **Roles**, under the SQL Server FCI role, under **Server Name**, right-click the IP address resource and select **Properties**. The correct value is under **Name** on the **General** tab.|
+|`ELBIP`|The IP address of the external load balancer (ELB). This address is configured in the Azure portal as the frontend address of the ELB and is used to connect to the public load balancer from external resources. |
+|`nnnnn`|The probe port that you configured in the health probe of the load balancer. Any unused TCP port is valid.|
+|"SubnetMask"| The subnet mask for the cluster parameter. It must be the TCP IP broadcast address: `255.255.255.255`.|
+
+After you set the cluster probe, you can see all the cluster parameters in PowerShell. Run this script:
+
+```powershell
+Get-ClusterResource $IPResourceName | Get-ClusterParameter
+```
+
+> [!NOTE]
+> Because there is no private IP address for the external load balancer, clients cannot use the VNN DNS name directly, as it resolves to an IP address within the subnet. Use either the public IP address of the public load balancer, or configure another DNS mapping on the DNS server.
+++ ## Modify connection string For clients that support it, add the `MultiSubnetFailover=True` to the connection string. While the MultiSubnetFailover connection option is not required, it does provide the benefit of a faster subnet failover. This is because the client driver will attempt to open up a TCP socket for each IP address in parallel. The client driver will wait for the first IP to respond with success and once it does, will then use it for the connection.
Take the following steps:
To test connectivity, sign in to another virtual machine in the same virtual network. Open **SQL Server Management Studio** and connect to the SQL Server FCI name.
->[!NOTE]
->If you need to, you can [download SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
+> [!NOTE]
+> If you need to, you can [download SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
azure-sql Hadr Cluster Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/hadr-cluster-best-practices.md
vm-windows-sql-server Previously updated : "04/25/2021" Last updated : "06/01/2021"
azure-sql Hadr Cluster Quorum Configure How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to.md
vm-windows-sql-server Previously updated : "04/30/2021" Last updated : "06/01/2021"
azure-sql Hadr Windows Server Failover Cluster Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/hadr-windows-server-failover-cluster-overview.md
vm-windows-sql-server Previously updated : "04/25/2021" Last updated : "06/01/2021"
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
ms.devlang: na
vm-windows-sql-server Previously updated : 05/06/2021 Last updated : 06/01/2021
azure-video-analyzer Player Widget https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/player-widget.md
Last updated 05/11/2021
# Using the Azure Video Analyzer player widget
-In this tutorial you will learn how to use Azure Video Analyzer Player widget within your application. This code is an easy to embed widget which will allow your end users to play video and navigate through the portions of a segmented video file. To do this you'll be generating a static HTML page with the widget embedded, and all the pieces to make it work.
+In this tutorial, you will learn how to use Azure Video Analyzer Player widget within your application. This code is an easy-to-embed widget that will allow your end users to play video and navigate through the portions of a segmented video file. To do this, you'll be generating a static HTML page with the widget embedded, and all the pieces to make it work.
In this tutorial you will:
In this tutorial you will:
## Prerequisites
-Prerequisites for this tutorial are:
-* An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have
-* [Visual Studio Code](https://code.visualstudio.com/) or other editor for the HTML file.
-one.
+Prerequisites for this tutorial:
+
+* An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
+* [Visual Studio Code](https://code.visualstudio.com/) or another editor for the HTML file.
* Either [Continuous video recording and playback](./use-continuous-video-recording.md) or [Detect motion and record video on edge devices](./detect-motion-record-video-clips-cloud.md) ## Create a token
-In this section we will create a JWT token that we will use later in the document. We will use a sample application that will generate the JWT token and provide you with all the fields required to create the access policy.
+In this section, we will create a JWT token that we will use later in the article. We will use a sample application that will generate the JWT token and provide you with all the fields required to create the access policy.
> [!NOTE]
-> If you are familiar with how to generate a JWT token based on either an RSA or ECC certificate then you can skip this section.
+> If you are familiar with how to generate a JWT token based on either an RSA or ECC certificate, you can skip this section.
-1. Download the JWTTokenIssuer application located [here](https://github.com/Azure-Samples/video-analyzer-iot-edge-csharp/tree/main/src/jwt-token-issuer/).
+1. Download the [JWTTokenIssuer application](https://github.com/Azure-Samples/video-analyzer-iot-edge-csharp/tree/main/src/jwt-token-issuer/).
- > [!NOTE]
- > For more information about configuring your audience values see this [article](./access-policies.md)
+ > [!NOTE]
+ > For more information about configuring your audience values, see [Access policies](./access-policies.md).
-2. Launch Visual Studio Code and open folder that contains the *.sln file.
-3. In the explorer pane navigate to the program.cs file
-4. Modify line 77 - change the audience to your Video Analyzer endpoint plus /videos/* so that it looks like:
+2. Open Visual Studio Code, and then go to the folder where you downloaded the JWTTokenIssuer application. This folder should contain the *\*.csproj* file.
+3. In the explorer pane, go to the *program.cs* file.
+4. On line 77, change the audience to your Video Analyzer endpoint, followed by /videos/\*, so it looks like:
``` https://{Azure Video Analyzer Account ID}.api.{Azure Long Region Code}.videoanalyzer.azure.net/videos/* ```
-5. Modify line 78 - change the issuer to the issuer value of your certificate. Example: https://contoso.com
> [!NOTE]
- > The Video Analyzer endpoint can be found in overview section of the Video Analyzer resource in Azure. You will need to click on the link "JSON View"
+ > The Video Analyzer endpoint can be found in the overview section of the Video Analyzer resource in the Azure portal. This value is referenced as `clientApiEndpointUrl` in [List Video Analyzer video resources](#list-video-analyzer-video-resources) later in this article.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/player-widget/endpoint.png" alt-text="Player widget - endpoint":::
+ :::image type="content" source="media/player-widget/client-api-url.png" alt-text="Screenshot that shows the player widget endpoint.":::
+
+5. On line 78, change the issuer to the issuer value of your certificate. Example: `https://contoso.com`
6. Save the file.
-7. Press `F5` to run the JWTTokenIssuer application.
+7. Select `F5` to run the JWTTokenIssuer application.
+
+ > [!NOTE]
+ > You might be prompted with the message `Required assets to build and debug are missing from 'jwt token issuer'. Add them?` Select `Yes`.
+
+ :::image type="content" source="media/player-widget/visual-studio-code-required-assets.png" alt-text="Screenshot that shows the required asset prompt in Visual Studio Code.":::
+
-This will build and execute the application. After the build it will run by creating a certificate via openssl. You can also run the JWTTokenIssuer.exe file located in the debug folder. The advantage of running the application is that you can specify input options as follows:
+The application builds and then executes. After it builds, it creates a self-signed certificate and generates the JWT token information from that certificate. You can also run the JWTTokenIssuer.exe file located in the debug folder of the directory where JWTTokenIssuer was built. The advantage of running the application is that you can specify input options as follows:
-- JwtTokenIssuer [--audience=<audience>] [--issuer=<issuer>] [--expiration=<expiration>] [--certificatePath=<filepath> --certificatePassword=<password>]
+- `JwtTokenIssuer [--audience=<audience>] [--issuer=<issuer>] [--expiration=<expiration>] [--certificatePath=<filepath> --certificatePassword=<password>]`
-JWTTokenIssuer will create the JWT token and the following needed components:
+JWTTokenIssuer creates the JWT token and the following needed components:
-- `kty`; `alg`; `kid`; `n`; `e`
+- `Issuer`, `Audience`, `Key Type`, `Algorithm`, `Key Id`, `RSA Key Modulus`, `RSA Key Exponent`, `Token`
-Ensure to copy these values out for later use.
+Be sure to copy these values for later use.
## Create an access policy
-Access policies define the permissions and duration of access to a given Video Analyzer video stream. For this tutorial we will configure an Access Policy for Video Analyzer in the Azure portal.
+Access policies define the permissions and duration of access to a given Video Analyzer video stream. For this tutorial, we will configure an Access Policy for Video Analyzer in the Azure portal.
-1. Log into the Azure portal and navigate to your Resource Group where your Video Analyzer account is located.
+1. Sign in to the Azure portal and go to your resource group where your Video Analyzer account is located.
1. Select the Video Analyzer resource.
-1. Under Video Analyzer select Access Policies
+1. Under **Video Analyzer**, select **Access Policies**.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/player-widget/portal-access-policies.png" alt-text="Player widget - portal access policies":::
-1. Click on new and enter the following:
+ :::image type="content" source="./media/player-widget/portal-access-policies.png" alt-text="Player widget - portal access policies.":::
+
+1. Select **New** and enter the following information:
> [!NOTE] > These values come from the JWTTokenIssuer application created in the previous step.
Access policies define the permissions and duration of access to a given Video A
- Issuer - must match the JWT Token Issuer
- - Audience - Audience for the JWT Token -- ${System.Runtime.BaseResourceUrlPattern} is the default. To learn more about Audience and ${System.Runtime.BaseResourceUrlPattern} see this [article](./access-policies.md)
+ - Audience - Audience for the JWT Token -- `${System.Runtime.BaseResourceUrlPattern}` is the default. To learn more about Audience and `${System.Runtime.BaseResourceUrlPattern}`, see [Access policies](./access-policies.md).
- - Key Type - kty -- RSA
+ - Key Type - RSA
- Algorithm - supported values are RS256, RS384, RS512
- - Key ID - kid -- generated from your certificate
+ - Key ID - generated from your certificate. For more information, see [Create a token](#create-a-token).
- - N value - for RSA the N value is the Modulus
+ - RSA Key Modulus - generated from your certificate. For more information, see [Create a token](#create-a-token).
- - E Value - for RSA the E value is the Public Exponent
+ - RSA Key Exponent - generated from your certificate. For more information, see [Create a token](#create-a-token).
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/player-widget/access-policies-portal.png" alt-text="Player widget - access policies portal":::
-1. click `save`.
+ :::image type="content" source="./media/player-widget/access-policies-portal.png" alt-text="Player widget - access policies portal":::
+
+1. Select **Save**.
## List Video Analyzer video resources
function getVideos()
document.getElementById("videoList").value = xhttp.responseText.toString(); } ```
+ > [!NOTE]
+ > The clientApiEndPoint and token values come from [Create a token](#create-a-token).
## Add the Video Analyzer Player Component
Video name: <input type="text" id="videoName" /><br><br>
## Host the page
-You can test this page locally, but you may want to test a hosted version. In case you do not have a quick way to host a page, here are instructions on how to do so using [static websites](../../storage/blobs/storage-blob-static-website.md) with Storage. The below is a condensed version of [these more complete instructions](../../storage/blobs/storage-blob-static-website-how-to.md) updated for the files we are using in this tutorial.
+You can test this page locally, but you may want to test a hosted version. In case you do not have a quick way to host a page, here are instructions on how to do so using [static websites](../../storage/blobs/storage-blob-static-website.md) with Storage. The following steps are a condensed version of [these more complete instructions](../../storage/blobs/storage-blob-static-website-how-to.md) updated for the files we are using in this tutorial.
1. Create a Storage account 1. Under `Data management` on the left, click on `Static website`
We did a simple configuration for the player above, but it supports a wider rang
### Alternate ways to load the code into your application
-The package used to get the code into your application is an NPM package [here](https://www.npmjs.com/package/@azure/video-analyzer-widgets). While in the above example the latest version was loaded at run time directly from the repository, you can also download and install the package locally using:
+The package used to get the code into your application is an [NPM package](https://www.npmjs.com/package/@azure/video-analyzer-widgets). While in the above example the latest version was loaded at run time directly from the repository, you can also download and install the package locally using:
```bash npm install @azure/video-analyzer-widgets ```
-Or you can import it within your application code using this for Typescript:
+Or you can import it within your application code using this for TypeScript:
```typescript import { Player } from '@video-analyzer/widgets'; ```
-Or this for Javascript if you want to create a player widget dynamically:
+Or this for JavaScript if you want to create a player widget dynamically:
```javascript <script async type="module" src="https://unpkg.com/@azure/video-analyzer-widgets@latest/dist/global.min.js"></script> ```
-If you use this method to import, you will need to programatically create the player object after the import is complete. In the above example you added the module to the page using the `ava-player` HTML tag. To create a player object through code, you can do the following in either JavaScript:
+If you use this method to import, you will need to programmatically create the player object after the import is complete. In the preceding example, you added the module to the page using the `ava-player` HTML tag. To create a player object through code, you can do the following in either JavaScript:
```javascript const avaPlayer = new ava.widgets.player(); ```
-Or in Typescript:
+Or in TypeScript:
```typescript const avaPlayer = new Player();
backup About Azure Vm Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/about-azure-vm-restore.md
This article describes how the [Azure Backup service](./backup-overview.md) rest
## Next steps -- [Frequently asked questions about VM restore](/azure/backup/backup-azure-vm-backup-faq.yml#restore)
+- [Frequently asked questions about VM restore](/azure/backup/backup-azure-vm-backup-faq#restore)
- [Supported restore methods](./backup-support-matrix-iaas.md#supported-restore-methods)-- [Troubleshoot restore issues](./backup-azure-vms-troubleshoot.md#restore)
+- [Troubleshoot restore issues](./backup-azure-vms-troubleshoot.md#restore)
backup About Restore Microsoft Azure Recovery Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/about-restore-microsoft-azure-recovery-services.md
This article describes the restore options available with the Microsoft Azure Re
- Ensure that the latest version of the [MARS agent](https://aka.ms/azurebackup_agent) is installed. - Ensure that [network throttling](backup-windows-with-mars-agent.md#enable-network-throttling) is disabled.-- Ensure that high-speed storage with sufficient space for the [agent cache folder](/azure/backup/backup-azure-file-folder-backup-faq.yml#manage-the-backup-cache-folder) is available.
+- Ensure that high-speed storage with sufficient space for the [agent cache folder](/azure/backup/backup-azure-file-folder-backup-faq#manage-the-backup-cache-folder) is available.
- Monitor memory and CPU resource, and ensure that sufficient resources are available for decompressing and decrypting data. - While using the **Instant Restore** feature to mount a recovery point as a disk, use **robocopy** with multi-threaded copy option (/MT switch) to copy files efficiently from the mounted recovery point.
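For example, assuming the recovery point is mounted as drive `X:` (the drive letters and thread count below are placeholders), a multi-threaded copy looks like this:

```powershell
# /MT:16 copies with 16 threads, /E includes subdirectories (even empty ones),
# /R:1 /W:1 keep retries short for files that are locked or unreadable.
robocopy X:\ D:\Restore /MT:16 /E /R:1 /W:1
```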
Using the MARS agent you can:
## Next steps - For more frequently asked questions, see [MARS agent FAQs](backup-azure-file-folder-backup-faq.yml).-- For information about supported scenarios and limitations, see [Support Matrix for the backup with the MARS agent](backup-support-matrix-mars-agent.md).
+- For information about supported scenarios and limitations, see [Support Matrix for the backup with the MARS agent](backup-support-matrix-mars-agent.md).
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-mars-troubleshoot.md
We recommend that you check the following before you start troubleshooting Micro
| Error code | Reasons | Recommendations | | - | | | | 0x80070570 | The file or directory is corrupted and unreadable. | Run **chkdsk** on the source volume. |
- | 0x80070002, 0x80070003 | The system cannot find the file specified. | [Ensure the scratch folder isn't full](/azure/backup/backup-azure-file-folder-backup-faq.yml#manage-the-backup-cache-folder) <br><br> Check if the volume where scratch space is configured exists (not deleted) <br><br> [Ensure the MARS agent is excluded from the antivirus installed on the machine](/azure/backup/backup-azure-troubleshoot-slow-backup-performance-issue.md#cause-another-process-or-antivirus-software-interfering-with-azure-backup) |
+ | 0x80070002, 0x80070003 | The system cannot find the file specified. | [Ensure the scratch folder isn't full](/azure/backup/backup-azure-file-folder-backup-faq#manage-the-backup-cache-folder) <br><br> Check if the volume where scratch space is configured exists (not deleted) <br><br> [Ensure the MARS agent is excluded from the antivirus installed on the machine](/azure/backup/backup-azure-troubleshoot-slow-backup-performance-issue#cause-another-process-or-antivirus-software-interfering-with-azure-backup) |
| 0x80070005 | Access Is Denied | [Check if antivirus or other third-party software is blocking access](./backup-azure-troubleshoot-slow-backup-performance-issue.md#cause-another-process-or-antivirus-software-interfering-with-azure-backup) | | 0x8007018b | Access to the cloud file is denied. | OneDrive files, Git Files, or any other files that can be in offline state on the machine |
We recommend that you check the following before you start troubleshooting Micro
| Error | Possible causes | Recommended actions | ||||
-|<br />The activation did not complete successfully. The current operation failed due to an internal service error [0x1FC07]. Retry the operation after some time. If the issue persists, please contact Microsoft support. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](/azure/backup/backup-azure-file-folder-backup-faq.yml#manage-the-backup-cache-folder).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. |
+|<br />The activation did not complete successfully. The current operation failed due to an internal service error [0x1FC07]. Retry the operation after some time. If the issue persists, please contact Microsoft support. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](/azure/backup/backup-azure-file-folder-backup-faq#manage-the-backup-cache-folder).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. |
## Encryption passphrase not correctly configured | Error | Possible causes | Recommended actions | ||||
-| <br />Error 34506. The encryption passphrase stored on this computer is not correctly configured. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS Agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](/azure/backup/backup-azure-file-folder-backup-faq.yml#manage-the-backup-cache-folder).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. |
+| <br />Error 34506. The encryption passphrase stored on this computer is not correctly configured. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS Agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](/azure/backup/backup-azure-file-folder-backup-faq#manage-the-backup-cache-folder).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. |
## Backups don't run according to schedule
Unable to find changes in a file. This could be due to various reasons. Please r
## Next steps - Get more details on [how to back up Windows Server with the Azure Backup agent](tutorial-backup-windows-server-to-azure.md).-- If you need to restore a backup, see [restore files to a Windows machine](backup-azure-restore-windows-server.md).
+- If you need to restore a backup, see [restore files to a Windows machine](backup-azure-restore-windows-server.md).
backup Backup Azure Vms Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-encryption.md
Title: Back up and restore encrypted Azure VMs description: Describes how to back up and restore encrypted Azure VMs with the Azure Backup service. Previously updated : 06/03/2021 Last updated : 06/24/2021 # Back up and restore encrypted Azure virtual machines
Azure Backup can back up and restore Azure VMs using ADE with and without the Az
### Limitations -- You can back up and restore ADE encrypted VMs within the same subscription and region.
+- You can back up and restore ADE encrypted VMs within the same subscription.
- Azure Backup supports VMs encrypted using standalone keys. Any key that's a part of a certificate used to encrypt a VM isn't currently supported. - You can back up and restore ADE encrypted VMs within the same subscription and region as the Recovery Services Backup vault. - ADE encrypted VMs can't be recovered at the file/folder level. You need to recover the entire VM to restore files and folders.
backup Backup Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-blobs-storage-account-cli.md
Last updated 06/18/2021
This article describes how to back up [Azure Blobs](/azure/backup/blob-backup-overview) using Azure CLI. > [!IMPORTANT]
-> Support for Azure Blobs backup and restore via CLI is in preview and available as an extension in Az 2.15.0 version and later. The extension isk automatically installed when you run the **az dataprotection** commands. [Learn more](/cli/azure/azure-cli-extensions-overview) about extensions.
+> Support for Azure Blobs backup and restore via CLI is in preview and available as an extension in Az 2.15.0 version and later. The extension is automatically installed when you run the **az dataprotection** commands. [Learn more](/cli/azure/azure-cli-extensions-overview) about extensions.
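+
+If you want to confirm your setup before you begin, the following is a minimal sketch; the resource group name is hypothetical, and running any **az dataprotection** command installs the extension automatically as described above:
+
+```bash
+# Check the installed Azure CLI version (2.15.0 or later is required).
+az --version
+
+# List Backup vaults in a resource group; the first az dataprotection call
+# installs the extension automatically if it isn't already present.
+az dataprotection backup-vault list --resource-group myResourceGroup --output table
+```
+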
In this article, you'll learn how to:
backup Backup Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-encryption.md
Azure Backup includes encryption on two levels:
## Next steps - [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md)-- [Azure Backup FAQ](/azure/backup/backup-azure-backup-faq.yml#encryption) for any questions you may have about encryption
+- [Azure Backup FAQ](/azure/backup/backup-azure-backup-faq#encryption) for any questions you may have about encryption
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
For more information, see [Azure Resource Manager templates for Azure Backup](ba
Azure Backup now supports incremental backups for SAP HANA databases hosted on Azure VMs. This allows for faster and more cost-efficient backups of your SAP HANA data.
-For more information, see [various options available during creation of a backup policy](/azure/backup/sap-hana-faq-backup-azure-vm.yml#policy) and [how to create a backup policy for SAP HANA databases](tutorial-backup-sap-hana-db.md#creating-a-backup-policy).
+For more information, see [various options available during creation of a backup policy](/azure/backup/sap-hana-faq-backup-azure-vm#policy) and [how to create a backup policy for SAP HANA databases](tutorial-backup-sap-hana-db.md#creating-a-backup-policy).
## Backup Center (in preview)
For more information, see [Encryption for Azure Backup using customer-managed ke
## Next steps -- [Azure Backup guidance and best practices](guidance-best-practices.md)
+- [Azure Backup guidance and best practices](guidance-best-practices.md)
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-faq.md
No, access to Windows Server VMs by Azure Bastion does not require an [RDS CAL](
Azure Bastion currently supports the en-us-qwerty keyboard layout inside the VM. Support for other keyboard layout locales is in progress.
+### <a name="timezone"></a>Does Azure Bastion support timezone configuration or timezone redirection for target VMs?
+
+Azure Bastion currently does not support timezone redirection and is not timezone configurable.
+ ### <a name="udr"></a>Is user-defined routing (UDR) supported on an Azure Bastion subnet? No. UDR is not supported on an Azure Bastion subnet.
Make sure the user has **read** access to both the VM, and the peered VNet. Addi
|Microsoft.Network/networkInterfaces/ipconfigurations/read|Gets a network interface IP configuration definition.|Action| |Microsoft.Network/virtualNetworks/read|Get the virtual network definition|Action| |Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet|Action|
-|Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|
+|Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/overview.md
This documentation contains the following article types:
* [**Quickstarts**](get-started-with-document-translation.md) are getting-started instructions to guide you through making requests to the service. * [**How-to guides**](create-sas-tokens.md) contain instructions for using the feature in more specific or customized ways.
-* [**Reference**](reference/rest-api-guide.md) provide REST API settings, values , keywords and configuration.
+* [**Reference**](reference/rest-api-guide.md) provides REST API settings, values, keywords, and configuration.
## Document Translation key features
The following document file types are supported by Document Translation:
| File type| File extension|Description| |||--| |Adobe PDF|.pdf|Adobe Acrobat portable document format|
-|Comma Separated Values |.csv| A comma-delimited raw-data file used by spreadsheet programs.|
+|Comma-Separated Values |.csv| A comma-delimited raw-data file used by spreadsheet programs.|
|HTML|.html, .htm|Hyper Text Markup Language.| |Localization Interchange File Format|.xlf, .xliff| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
+|Markdown| .markdown, .mdown, .mkdn, .md, .mkd, .mdwn, .mdtxt, .mdtext, .rmd| A lightweight markup language for creating formatted text.|
+|MHTML|.mhtml, .mht| A web page archive format used to combine HTML code and its companion resources.|
|Microsoft Excel|.xls, .xlsx|A spreadsheet file for data analysis and documentation.| |Microsoft Outlook|.msg|An email message created or saved within Microsoft Outlook.| |Microsoft PowerPoint|.ppt, .pptx| A presentation file used to display content in a slideshow format.| |Microsoft Word|.doc, .docx| A text document file.|
-|OpenDocument Text|.odt|An open source text document file.|
-|OpenDocument Presentation|.odp|An open source presentation file.|
-|OpenDocument Spreadsheet|.ods|An open source spreadsheet file.|
+|OpenDocument Text|.odt|An open-source text document file.|
+|OpenDocument Presentation|.odp|An open-source presentation file.|
+|OpenDocument Spreadsheet|.ods|An open-source spreadsheet file.|
|Rich Text Format|.rtf|A text document containing formatting.| |Tab Separated Values/TAB|.tsv/.tab| A tab-delimited raw-data file used by spreadsheet programs.| |Text|.txt| An unformatted text document.|
The following glossary file types are supported by Document Translation:
| File type| File extension|Description| |||--|
+|Comma-Separated Values| .csv |A comma-delimited raw-data file used by spreadsheet programs.|
|Localization Interchange File Format|.xlf, .xliff| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
-|Tab Separated Values/TAB|.tsv/.tab| A tab-delimited raw-data file used by spreadsheet programs.|
+|Tab-Separated Values/TAB|.tsv, .tab| A tab-delimited raw-data file used by spreadsheet programs.|
## Next steps
cognitive-services Get Supported Document Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-supported-document-formats.md
Status code: 200
"application/vnd.oasis.opendocument.spreadsheet" ], "versions": []
+ },
+ {
+ "format": "Markdown",
+ "fileExtensions": [
+ ".markdown",
+ ".mdown",
+ ".mkdn",
+ ".md",
+ ".mkd",
+ ".mdwn",
+ ".mdtxt",
+ ".mdtext",
+ ".rmd"
+ ],
+ "contentTypes": [
+ "text/markdown",
+ "text/x-markdown",
+ "text/plain"
+ ],
+ "versions": []
+ },
+ {
+ "format": "Mhtml",
+ "fileExtensions": [
+ ".mhtml",
+ ".mht"
+ ],
+ "contentTypes": [
+ "message/rfc822",
+ "application/x-mimearchive",
+ "multipart/related"
+ ],
+ "versions": []
} ] }+ ``` ### Example error response
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/language-support.md
Translator detects the following languages for translation and transliteration.
| Afrikaans | `af` | | Albanian | `sq` | | Arabic | `ar` |
+|Armenian| `hy` |
| Bulgarian | `bg` | | Catalan | `ca` | | Chinese Simplified | `zh-Hans` |
Translator detects the following languages for translation and transliteration.
| Irish | `ga` | | Italian | `it` | | Japanese | `ja` |
+|Khmer|`km` |
| Klingon | `tlh-Latn` | | Korean | `ko` | | Kurdish (Central) | `ku-Arab` |
+|Lao|`lo` |
| Latvian | `lv` | | Lithuanian | `lt` | | Malay | `ms` | | Maltese | `mt` |
+|Myanmar|`my` |
| Norwegian | `nb` | | Pashto | `ps` | | Persian | `fa` |
The following languages are available for customization to or from English using
## Speech Translation Speech Translation is available by using Translator with Cognitive Services Speech service. View [Speech Service documentation](../speech-service/index.yml) to learn more about using speech translation and to view all of the [available language options](../speech-service/language-support.md).
-### Speech-to-text
-Convert speech into text in order to translate to the text language of your choice. Speech-to-text is used for speech to text translation, or for speech-to-speech translation when used in conjunction with speech synthesis.
-
-| Language |
-|:-- |
-|Arabic|
-|Cantonese (Traditional)|
-|Catalan|
-|Chinese Simplified|
-|Chinese Traditional|
-|Danish|
-|Dutch|
-|English|
-|Finnish|
-|French|
-|French (Canada)|
-|German|
-|Gujarati|
-|Hindi|
-|Italian|
-|Japanese|
-|Korean|
-|Marathi|
-|Norwegian|
-|Polish|
-|Portuguese (Brazil)|
-|Portuguese (Portugal)|
-|Russian|
-|Spanish|
-|Swedish|
-|Tamil|
-|Telugu|
-|Thai|
-|Turkish|
-
-### Text-to-speech
-Convert text to speech. Text-to-speech is used to add audible output of translation results, or for speech-to-speech translation when used with Speech-to-text.
-
-| Language |
-|:-|
-| Arabic |
-| Bulgarian |
-| Cantonese (Traditional) |
-| Catalan |
-| Chinese Simplified |
-| Chinese Traditional |
-| Croatian |
-| Czech |
-| Danish |
-| Dutch |
-| English |
-| Finnish |
-| French |
-| French (Canada) |
-| German |
-| Greek |
-| Hebrew |
-| Hindi |
-| Hungarian |
-| Indonesian |
-| Italian |
-| Japanese |
-| Korean |
-| Malay |
-| Norwegian |
-| Polish |
-| Portuguese (Brazil) |
-| Portuguese (Portugal) |
-| Romanian |
-| Russian |
-| Slovak |
-| Slovenian |
-| Spanish |
-| Swedish |
-| Tamil |
-| Telugu |
-| Thai |
-| Turkish |
-| Vietnamese |
- ## View the language list on the Microsoft Translator website For a quick look at the languages, the Microsoft Translator website shows all the languages supported by Translator for text translation and Speech service for speech translation. This list doesn't include developer-specific information such as language codes.
cognitive-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/v3-0-reference.md
Microsoft Translator is served out of multiple datacenter locations. Currently t
* **Asia Pacific:** Korea South, Japan East, Southeast Asia, and Australia East * **Europe:** North Europe, West Europe
-Requests to the Microsoft Translator are in most cases handled by the datacenter that is closest to where the request originated. In case of a datacenter failure, the request may be routed outside of the Azure geography.
+Requests to the Microsoft Translator are in most cases handled by the datacenter that is closest to where the request originated. In case of a datacenter failure, the request may be routed outside of the geography.
-To force the request to be handled by a specific Azure geography, change the Global endpoint in the API request to the desired geographical endpoint:
+To force the request to be handled by a specific geography, change the Global endpoint in the API request to the desired geographical endpoint:
-|Description|Azure geography|Base URL (geographical endpoint)|
-|:--|:--|:--|
-|Azure|Global (non-regional)| api.cognitive.microsofttranslator.com|
-|Azure|United States| api-nam.cognitive.microsofttranslator.com|
-|Azure|Europe| api-eur.cognitive.microsofttranslator.com|
-|Azure|Asia Pacific| api-apc.cognitive.microsofttranslator.com|
+|Geography|Base URL (geographical endpoint)|
+|:--|:--|
+|Global (non-regional)| api.cognitive.microsofttranslator.com|
+|United States| api-nam.cognitive.microsofttranslator.com|
+|Europe| api-eur.cognitive.microsofttranslator.com|
+|Asia Pacific| api-apc.cognitive.microsofttranslator.com|
<sup>1</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource with the 'Resource region' set to 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in the Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-ch-n', then your custom endpoint is "https://my-ch-n.cognitiveservices.azure.com". And a sample request to translate is: ```curl
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-container-support.md
Azure Cognitive Services containers provide the following set of Docker containe
| [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Gated preview. [Request access][request-access]. | | [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-spatial-analysis)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Gated preview. [Request access][request-access]. | | [Face][fa-containers] | **Face** | Detects human faces in images, and identifies attributes, including face landmarks (such as noses and eyes), gender, age, and other machine-predicted facial features. In addition to detection, Face can check if two faces in the same image or different images are the same by using a confidence score, or compare faces against a database to see if a similar-looking or identical face already exists. It can also organize similar faces into groups, using shared visual traits. | Unavailable |
-| [Form Recognizer][fr-containers] | **Form Recognizer** | Form Understanding applies machine learning technology to identify and extract key-value pairs and tables from forms. | Unavailable |
+| [Form Recognizer][fr-containers] | **Form Recognizer** | Form Understanding applies machine learning technology to identify and extract key-value pairs and tables from forms. | Gated preview. [Request access][request-access]. |
<!--
cognitive-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/containers/form-recognizer-container-configuration.md
+
+ Title: How to configure a container for Form Recognizer
+
+description: Learn how to configure the Form Recognizer container to parse form and table data.
+++++ Last updated : 06/23/2021++
+# Configure Form Recognizer containers
+
+> [!IMPORTANT]
+>
+> Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. See [**Request approval to run container**](form-recognizer-container-install-run.md#request-approval-to-run-the-container) below for more information.
+
+With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, virtually isolated environment that can be easily deployed on-premises and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by seven Form Recognizer containers (**Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, **Custom API**, and **Custom Supervised**) plus the **Read** OCR container. These containers have several required settings and a few optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
+
+## Configuration settings
+
+Each container has the following configuration settings:
+
+|Required|Setting|Purpose|
+|--|--|--|
+|Yes|[ApiKey](#apikey-and-billing-configuration-setting)|Tracks billing information.|
+|Yes|[Billing](#apikey-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. For more information on obtaining this value, _see_ [Billing](form-recognizer-container-install-run.md#billing). For more information and a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../cognitive-services-custom-subdomains.md).|
+|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
+|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
+|No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.|
+|No|HTTP Proxy|Configures an HTTP proxy for making outbound requests.|
+|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
+
+> [!IMPORTANT]
+> The [`ApiKey`](#apikey-and-billing-configuration-setting), [`Billing`](#apikey-and-billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together. You must provide valid values for all three settings; otherwise, your containers won't start. For more information about using these configuration settings to instantiate a container, see [Billing](form-recognizer-container-install-run.md#billing).
+
+## ApiKey and Billing configuration setting
+
+The `ApiKey` setting specifies the Azure resource key that's used to track billing information for the container. The value for the ApiKey must be a valid key for the resource that's specified for `Billing` in the "Billing configuration setting" section.
+
+The `Billing` setting specifies the endpoint URI of the resource on Azure that's used to meter billing information for the container. The value for this configuration setting must be a valid endpoint URI for a resource on Azure. The container reports usage about every 10 to 15 minutes.
+
+ You can find these settings in the Azure portal on the **Keys and Endpoint** page.
+
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
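+
+If you run a single container with `docker run` rather than `docker compose`, the same values are passed as command-line arguments. This is a minimal sketch only; the endpoint and key placeholders match the rest of this article, and the resource limits shown are illustrative:
+
+```bash
+# Start the Layout container and pass the billing endpoint and key used for metering.
+docker run --rm -it -p 5000:5000 --cpus 4 --memory 8g \
+  mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout \
+  Eula=accept \
+  Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
+  ApiKey={FORM_RECOGNIZER_API_KEY}
+```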
+
+## Eula setting
++
+## ApplicationInsights setting
++
+## Fluentd settings
++
+## HTTP proxy credentials settings
++
+## Logging settings
++
+## Volume settings
+
+Use [**volumes**](https://docs.docker.com/storage/volumes/) to read and write data to and from the container. Volumes are the preferred mechanism for persisting data generated and used by Docker containers. You can specify an input mount or an output mount by including the `volumes` option and specifying `type` (bind), `source` (path to the folder), and `target` (file path parameter).
+
+The Form Recognizer container requires an input volume and an output volume. The input volume can be read-only (`ro`), and it's required for access to the data that's used for training and scoring. The output volume has to be writable, and you use it to store the models and temporary data.
+
+The exact syntax of the host volume location varies depending on the host operating system. Additionally, the volume location of the [host computer](form-recognizer-container-install-run.md#host-computer-requirements) might not be accessible because of a conflict between the Docker service account permissions and the host mount location permissions.
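+
+For a single container started with `docker run`, the `--mount` flag is the equivalent of the compose `volumes` option. The following sketch uses hypothetical host paths, and the container-side targets (`/shared` for input, `/logs` for output) are illustrative:
+
+```bash
+# Read-only input mount plus a writable output mount for models, logs, and temporary data.
+docker run --rm -it \
+  --mount type=bind,source=/path/to/shared,target=/shared,readonly \
+  --mount type=bind,source=/path/to/output,target=/logs \
+  mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout \
+  Eula=accept Billing={FORM_RECOGNIZER_ENDPOINT_URI} ApiKey={FORM_RECOGNIZER_API_KEY}
+```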
+
+## Example docker-compose.yml file
+
+The **docker compose** method consists of three steps:
+
+ 1. Create a Dockerfile.
+ 1. Define the services in a **docker-compose.yml** so they can be run together in an isolated environment.
+ 1. Run `docker-compose up` to start and run your services.
+
+### Single container example
+
+In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Layout container instance.
+
+#### **Layout container**
+
+```yml
+version: "3.9"
+
+azure-cognitive-service-layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apikey={FORM_RECOGNIZER_API_KEY}
+
+ ports:
+ - "5000"
+ networks:
+ - ocrvnet
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+### Multiple containers example
+
+#### **Receipt and OCR Read containers**
+
+In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Receipt container and {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_API_KEY} values for your Computer Vision Read container.
+
+```yml
+version: "3"
+
+ azure-cognitive-service-receipt:
+ container_name: azure-cognitive-service-receipt
+ image: cognitiveservicespreview.azurecr.io/microsoft/cognitive-services-form-recognizer-receipt:2.1
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apikey={FORM_RECOGNIZER_API_KEY}
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ ports:
+ - "5000:5050"
+ networks:
+ - ocrvnet
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
+ environment:
+ - EULA=accept
+ - billing={COMPUTER_VISION_ENDPOINT_URI}
+ - apikey={COMPUTER_VISION_API_KEY}
+ networks:
+ - ocrvnet
+
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about running multiple containers and the docker compose command](form-recognizer-container-install-run.md)
cognitive-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/containers/form-recognizer-container-install-run.md
+
+ Title: Install and run Docker containers for Form Recognizer v2.1
+
+description: Use the Docker containers for Form Recognizer on-premises to identify and extract key-value pairs, selection marks, tables, and structure from forms and documents.
+++++ Last updated : 06/23/2021+
+keywords: on-premises, Docker, container, identify
++
+# Install and run Form Recognizer v2.1-preview containers
+
+> [!IMPORTANT]
+>
+> Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. See [**Request approval to run container**](#request-approval-to-run-the-container) below for more information.
+
+Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine learning technology. Form Recognizer enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents and output structured data that includes the relationships in the original file.
+
+In this article, you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements. Form Recognizer features are supported by seven Form Recognizer containers (**Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, **Custom API**, and **Custom Supervised**) plus the **Read** OCR container. The **Read** container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](../../computer-vision/vision-api-how-to-topics/call-read-api.md).
+
+## Prerequisites
+
+To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+You'll also need the following to use Form Recognizer containers:
+
+| Required | Purpose |
+|-||
+| **Familiarity with Docker** | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology). |
+| **Docker Engine installed** | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
+|**Form Recognizer resource** | A [**single-service Azure Form Recognizer**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service Cognitive Services**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. To use the containers, you must have the associated API key and endpoint URI. Both values are available on the Azure portal Form Recognizer **Keys and Endpoint** page: <ul><li>**{FORM_RECOGNIZER_API_KEY}**: one of the two available resource keys.<li>**{FORM_RECOGNIZER_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></li></ul>|
+| **Computer Vision API resource** | **To process business cards, ID documents, or Receipts, you'll need a Computer Vision resource.** <ul><li>You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image-with-docker-pull). The usual [billing](#billing) fees apply.</li> <li>If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`). If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of *5000*. </li></ul></br>Pass in both the API key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:<ul><li>**{COMPUTER_VISION_API_KEY}**: one of the two available resource keys.</li><li> **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></ul> |
+
+|Optional|Purpose|
+||-|
+|**Azure CLI (command-line interface)** | The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It is available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell. |
+|||
+
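+If you prefer the Azure CLI to the portal for the prerequisite resource, the following is a sketch with hypothetical resource and group names:
+
+```bash
+# Create a single-service Form Recognizer resource (S0 tier) in an existing resource group.
+az cognitiveservices account create --name my-form-recognizer --resource-group myResourceGroup \
+  --kind FormRecognizer --sku S0 --location westus2 --yes
+
+# Retrieve the values used as {FORM_RECOGNIZER_API_KEY} and {FORM_RECOGNIZER_ENDPOINT_URI}.
+az cognitiveservices account keys list --name my-form-recognizer --resource-group myResourceGroup
+az cognitiveservices account show --name my-form-recognizer --resource-group myResourceGroup --query "properties.endpoint"
+```
+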
+## Request approval to run the container
+
+Complete and submit the [Application for Gated Services form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu) to request approval to run the container.
+
+The form requests information about you, your company, and the user scenario for which you'll use the container. After you submit the form, the Azure Cognitive Services team will review it and email you with a decision.
+
+On the form, you must use an email address associated with an Azure subscription ID. The Azure resource you use to run the container must have been created with the approved Azure subscription ID. Check your email (both inbox and junk folders) for updates on the status of your application from Microsoft. After you're approved, you will be able to run the container after downloading it from the Microsoft Container Registry (MCR), described later in the article.
+
+## Host computer requirements
+
+The host is an x64-based computer that runs the Docker container. It can be a computer on your premises or a Docker hosting service in Azure, such as:
+
+* [Azure Kubernetes Service](../../../aks/index.yml).
+* [Azure Container Instances](../../../container-instances/index.yml).
+* A [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
+
+### Container requirements and recommendations
+
+#### Required containers
+
+The following table lists the additional supporting container(s) for each Form Recognizer container you download. Please refer to the [Billing](#billing) section for more information.
+
+| Feature container | Supporting container(s) |
+||--|
+| **Layout** | None |
+| **Business Card** | **Computer Vision Read**|
+| **ID Document** | **Computer Vision Read** |
+| **Invoice** | **Layout** |
+| **Receipt** |**Computer Vision Read** |
+| **Custom** | **Custom API**, **Custom Supervised**, **Layout**|
+
+#### Recommended CPU cores and memory
+
+> [!Note]
+> The minimum and recommended values are based on Docker limits and *not* the host machine resources.
+
+##### Read, Layout, and Prebuilt containers
+
+| Container | Minimum | Recommended |
+|--||-|
+| Read 3.2 | 8 cores, 16-GB memory | 8 cores, 24-GB memory|
+| Layout 2.1-preview | 8 cores, 16-GB memory | 4 cores, 8-GB memory |
+| Business Card 2.1-preview | 2 cores, 4-GB memory | 4 cores, 4-GB memory |
+| ID Document 2.1-preview | 1 core, 2-GB memory |2 cores, 2-GB memory |
+| Invoice 2.1-preview | 4 cores, 8-GB memory | 8 cores, 8-GB memory |
+| Receipt 2.1-preview | 4 cores, 8-GB memory | 8 cores, 8-GB memory |
+
+##### Custom containers
+
+The following host machine requirements are applicable to **train and analyze** requests:
+
+| Container | Minimum | Recommended |
+|--||-|
+| Custom API| 0.3 cores, 0.5-GB memory| 0.6 cores, 1-GB memory |
+|Custom Supervised | 4 cores, 2-GB memory | 8 cores, 4-GB memory|
+
+If you are only making analyze calls, the host machine requirements are as follows:
+
+| Container | Minimum | Recommended |
+|--||-|
+|Custom Supervised (Analyze) | 1 core, 0.5-GB memory | 2 cores, 1-GB memory |
+
+* Each core must be at least 2.6 gigahertz (GHz).
+* Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker compose` or `docker run` command.
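+
+As a sketch of that mapping, the following command starts the Read 3.2 container at its recommended allocation from the table above (endpoint and key placeholders as defined elsewhere in this article):
+
+```bash
+# Allocate 8 cores and 24 GB of memory to the Read 3.2 container; the limits apply to Docker, not the host.
+docker run --rm -it -p 5000:5000 --cpus 8 --memory 24g \
+  mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 \
+  Eula=accept Billing={COMPUTER_VISION_ENDPOINT_URI} ApiKey={COMPUTER_VISION_API_KEY}
+```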
+
+> [!TIP]
+> You can use the [docker images](https://docs.docker.com/engine/reference/commandline/images/) command to list your downloaded container images. For example, the following command lists the ID, repository, and tag of each downloaded container image, formatted as a table:
+>
+> ```docker
+> docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
+>
+> IMAGE ID REPOSITORY TAG
+> <image-id> <repository-path/name> <tag-name>
+> ```
+
+## Run the container with the **docker-compose up** command
+
+* Replace the {ENDPOINT_URI} and {API_KEY} values with your resource Endpoint URI and the API Key from the Azure resource page.
+
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+* Ensure that the EULA value is set to "accept".
+
+* The `EULA`, `Billing`, and `ApiKey` values must be specified; otherwise the container won't start.
+
+> [!IMPORTANT]
+> The subscription keys are used to access your Form Recognizer resource. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
+
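+If a key is ever exposed, you can regenerate it without interrupting clients that use the other key. A sketch with hypothetical resource names:
+
+```bash
+# Regenerate Key1 for the Form Recognizer resource while clients continue to use Key2.
+az cognitiveservices account keys regenerate \
+  --name my-form-recognizer --resource-group myResourceGroup --key-name Key1
+```
+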
+### [Layout](#tab/layout)
+
+Below is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Layout container instance.
+
+```yml
+version: "3.9"
+
+azure-cognitive-service-layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apikey={FORM_RECOGNIZER_API_KEY}
+ ports:
+ - "5000"
+ networks:
+ - ocrvnet
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Business Card](#tab/business-card)
+
+Below is a self-contained `docker compose` example to run Form Recognizer Business Card and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Business Card container instance. Enter the {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_API_KEY} values for your Computer Vision Read container.
+
+```yml
+version: "3.9"
+
+ azure-cognitive-service-businesscard:
+ container_name: azure-cognitive-service-businesscard
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apikey={FORM_RECOGNIZER_API_KEY}
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ ports:
+ - "5000:5050"
+ networks:
+ - ocrvnet
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
+ environment:
+ - EULA=accept
+ - billing={COMPUTER_VISION_ENDPOINT_URI}
+ - apikey={COMPUTER_VISION_API_KEY}
+ networks:
+ - ocrvnet
+
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [ID Document](#tab/id-document)
+
+Below is a self-contained `docker compose` example to run Form Recognizer ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your ID document container. Enter the {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_API_KEY} values for your Computer Vision Read container.
+
+```yml
+version: "3.9"
+
+ azure-cognitive-service-id:
+ container_name: azure-cognitive-service-id
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apikey={FORM_RECOGNIZER_API_KEY}
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ ports:
+ - "5000:5050"
+ networks:
+ - ocrvnet
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
+ environment:
+ - EULA=accept
+ - billing={COMPUTER_VISION_ENDPOINT_URI}
+ - apikey={COMPUTER_VISION_API_KEY}
+ networks:
+ - ocrvnet
+
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Invoice](#tab/invoice)
+
+Below is a self-contained `docker compose` example to run Form Recognizer Invoice and Layout containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Invoice and Layout containers.
+
+```yml
+version: "3.9"
+
+ azure-cognitive-service-invoice:
+ container_name: azure-cognitive-service-invoice
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apikey={FORM_RECOGNIZER_API_KEY}
+ - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
+ ports:
+ - "5000:5050"
+ networks:
+ - ocrvnet
+ azure-cognitive-service-layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+ user: root
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apikey={FORM_RECOGNIZER_API_KEY}
+ networks:
+ - ocrvnet
+
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Receipt](#tab/receipt)
+
+Below is a self-contained `docker compose` example to run Form Recognizer Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Receipt container. Enter the {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_API_KEY} values for your Computer Vision Read container.
+
+```yml
+version: "3.9"
+
+ azure-cognitive-service-receipt:
+ container_name: azure-cognitive-service-receipt
+ image: cognitiveservicespreview.azurecr.io/microsoft/cognitive-services-form-recognizer-receipt:2.1
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apikey={FORM_RECOGNIZER_API_KEY}
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ ports:
+ - "5000:5050"
+ networks:
+ - ocrvnet
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
+ environment:
+ - EULA=accept
+ - billing={COMPUTER_VISION_ENDPOINT_URI}
+ - apikey={COMPUTER_VISION_API_KEY}
+ networks:
+ - ocrvnet
+
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Custom](#tab/custom)
+
+In addition to the [prerequisites](#prerequisites) mentioned above, you will need to do the following to process a custom document:
+
+#### &bullet; Create a folder to store the following files:
+
+ 1. [**.env**](#-create-an-environment-file)
+ 1. [**nginx.conf**](#-create-a-nginx-file)
+ 1. [**docker-compose.yml**](#-create-a-docker-compose-file)
+
+#### &bullet; Create a folder to store your input data
+
+ 1. Name this folder **shared**.
+ 1. We will reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
+ 1. Copy the file path to a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file, below.
+
+#### &bullet; Create a folder to store the logs written by the Form Recognizer service on your local machine.
+
+ 1. Name this folder **output**.
+ 1. We will reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
+ 1. Copy the file path to a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file, below.
+
+#### &bullet; Create an environment file
+
+ 1. Name this file **.env**.
+
+ 1. Declare the following environment variables:
+
+ ```text
+ SHARED_MOUNT_PATH="<file-path-to-shared-folder>"
+ OUTPUT_MOUNT_PATH="<file-path-to-output-folder>"
+ FORM_RECOGNIZER_ENDPOINT_URI="<your-form-recognizer-endpoint>"
+ FORM_RECOGNIZER_API_KEY="<your-form-recognizer-apiKey>"
+ RABBITMQ_HOSTNAME="rabbitmq"
+ RABBITMQ_PORT=5672
+ NGINX_CONF_FILE="<file-path>"
+ ```
+
+#### &bullet; Create a **nginx** file
+
+ 1. Name this file **nginx.conf**.
+
+ 1. Enter the following configuration:
+
+```text
+worker_processes 1;
+
+events {
+    worker_connections 1024;
+}
+
+http {
+
+    sendfile on;
+
+    upstream docker-api {
+        server azure-cognitive-service-custom-api:5000;
+    }
+
+    upstream docker-layout {
+        server azure-cognitive-service-layout:5000;
+    }
+
+    server {
+        listen 5000;
+
+        location /formrecognizer/v2.1/custom/ {
+            proxy_pass http://docker-api/formrecognizer/v2.1/custom/;
+        }
+
+        location /formrecognizer/v2.1/layout/ {
+            proxy_pass http://docker-layout/formrecognizer/v2.1/layout/;
+        }
+    }
+}
+```
+
+* Gather a set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*; a download sketch follows this list). Download the training files to the **shared** folder you created above.
+
+* If you want to label your data, download the [Form OCR Test Tool (FOTT) for Windows](https://github.com/microsoft/OCR-Form-Tools/releases/tag/v2.1-ga). The download will import the labeling tool .exe file that you'll use to label the data present on your local file system. You can ignore any warnings that occur during the download process.
+
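+A minimal sketch of the sample data download mentioned in the first bullet above; the target folder matches the **shared** folder created earlier:
+
+```bash
+# Download and extract the sample training forms into the shared input folder.
+curl -L -o sample_data.zip "https://go.microsoft.com/fwlink/?linkid=2090451"
+unzip sample_data.zip -d ./shared
+```
+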
+#### Create a new FOTT project
+
+* Open the labeling tool by double-clicking the FOTT .exe file.
+* On the left pane of the tool, select the connections tab.
+* Select to create a new project and give it a name and description.
+* For the provider, choose the local file system option. For the local folder, make sure you enter the path to the folder where you stored the sample data files.
+* Navigate back to the home tab and select the "Use custom to train a model with labels and key value pairs" option.
+* Select the train button on the left pane to train the labeled model.
+* Save this connection and use it to label your requests.
+* You can choose to analyze the file of your choice against the trained model.
+
+#### &bullet; Create a **docker compose** file
+
+1. Name this file **docker-compose.yml**
+
+2. Below is a self-contained `docker compose` example to run the Form Recognizer Layout, Custom API, and Custom Supervised containers together, along with an NGINX reverse proxy and RabbitMQ. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration.
+
+ ```yml
+ version: '3.3'
+
+ nginx:
+ image: nginx:alpine
+ container_name: reverseproxy
+ volumes:
+ - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
+ ports:
+ - "5000:5000"
+ rabbitmq:
+ container_name: ${RABBITMQ_HOSTNAME}
+ image: rabbitmq:3
+ expose:
+ - "5672"
+ layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+ depends_on:
+ - rabbitmq
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_API_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
+ Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ custom-api:
+ container_name: azure-cognitive-service-custom-api
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api
+ restart: always
+ depends_on:
+ - rabbitmq
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_API_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
+ Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ custom-supervised:
+ container_name: azure-cognitive-service-custom-supervised
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised
+ restart: always
+ depends_on:
+ - rabbitmq
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_API_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ CustomFormRecognizer:ContainerPhase: All
+ CustomFormRecognizer:LayoutAnalyzeUri: http://azure-cognitive-service-layout:5000/formrecognizer/v2.1/layout/analyze
+ Logging:Console:LogLevel:Default: Information
+ Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
+ Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ ```
+
+### Ensure the service is running
+
+To ensure that the service is up and running, run these commands in an Ubuntu shell.
+
+```bash
+cd <folder containing the docker-compose file>
+
+source .env
+
+docker-compose up
+```
+
+### Create a new connection
+
+* On the left pane of the tool, select the **connections** tab.
+* Select **create a new project** and give it a name and description.
+* For the provider, choose the **local file system** option. For the local folder, make sure you enter the path to the folder where you stored the **sample data** files.
+* Navigate back to the home tab and select **Use custom to train a model with labels and key value pairs**.
+* Select the **train button** on the left pane to train the labeled model.
+* **Save** this connection and use it to label your requests.
+* You can choose to analyze the file of your choice against the trained model.
+++
+## Validate that the service is running
+
+There are several ways to validate that the container is running:
+
+* The container provides a homepage at `/` as a visual validation that the container is running.
+
+* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the various request URLs below to validate the container is running. The example request URLs listed below are `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
+
+ Request URL | Purpose
+ -- | --
+ |**http://<span></span>localhost:5000/** | The container provides a home page.
+ |**http://<span></span>localhost:5000/ready** | Requested with GET, this provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes liveness and readiness probes.
+ |**http://<span></span>localhost:5000/status** | Requested with GET, this verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes liveness and readiness probes.
+ |**http://<span></span>localhost:5000/swagger** | The container provides a full set of documentation for the endpoints and a Try it out feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required.
+ |
++
+## Stop the containers
+
+To stop the containers, use the following command:
+
+```console
+docker-compose down
+```
+
+## Billing
+
+The Form Recognizer containers send billing information to Azure by using a Form Recognizer resource on your Azure account.
+
+Queries to the container are billed at the pricing tier of the Azure resource that's used for the `ApiKey`. You will be billed for each container instance used to process your documents and images. Thus, if you use the business card feature, you will be billed for the Form Recognizer `BusinessCard` and `Computer Vision Read` container instances. For the invoice feature, you will be billed for the Form Recognizer `Invoice` and `Layout` container instances. *See* [Form Recognizer](https://azure.microsoft.com/pricing/details/form-recognizer/) and Computer Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
+
+Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
+
+### Connect to Azure
+
+The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. See the [Cognitive Services container FAQ](../../containers/container-faq.yml#how-does-billing-work) for an example of the information sent to Microsoft for billing.
+
+### Billing arguments
+
+The [**docker-compose up**](https://docs.docker.com/engine/reference/commandline/compose_up/) command will start the container when all three of the following options are provided with valid values:
+
+| Option | Description |
+|--|-|
+| `ApiKey` | The API key of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to an API key for the provisioned resource that's specified in `Billing`. |
+| `Billing` | The endpoint of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to the endpoint URI of a provisioned Azure resource.|
+| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. |
+
+For more information about these options, see [Configure containers](form-recognizer-container-configuration.md).
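+As an illustration only, one common pattern is to supply these values to `docker-compose` through an `.env` file placed next to your `docker-compose.yml`. The variable names below are placeholders and must match whatever names your compose file references.
+
+```bash
+# Hypothetical .env file consumed by docker-compose; variable names are illustrative.
+cat > .env <<'EOF'
+FORM_RECOGNIZER_KEY=<your-form-recognizer-api-key>
+FORM_RECOGNIZER_ENDPOINT_URI=<your-form-recognizer-endpoint>
+EULA=accept
+EOF
+
+# Start the containers with the values above available for variable substitution.
+docker-compose up
+```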
+
+## Summary
+
+That's it! In this article, you learned concepts and workflows for downloading, installing, and running Form Recognizer containers. In summary:
+
+* Form Recognizer provides seven Linux containers for Docker.
+* Container images are downloaded from the private container registry in Azure.
+* Container images run in Docker.
+* You must specify the billing information when you instantiate a container.
+
+> [!IMPORTANT]
+> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* Review [Configure containers](form-recognizer-container-configuration.md) for configuration settings.
+* Use more [Cognitive Services Containers](../../cognitive-services-container-support.md).
cognitive-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/form-recognizer-container-configuration.md
- Title: How to configure a container for Form Recognizer-
-description: Learn how to configure the Form Recognizer container to parse form and table data.
----- Previously updated : 07/14/2020--
-# Configure Form Recognizer containers
--
-By using Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality.
-
-You configure the Form Recognizer container run-time environment by using the `docker run` command arguments. This container has several required settings and a few optional settings. For a few examples, see the ["Example docker run commands"](#example-docker-run-commands) section. The container-specific settings are the billing settings.
-
-## Configuration settings
--
-> [!IMPORTANT]
-> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together. You must provide valid values for all three settings; otherwise, your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](form-recognizer-container-howto.md#billing).
-
-## ApiKey configuration setting
-
-The `ApiKey` setting specifies the Azure resource key that's used to track billing information for the container. The value for the ApiKey must be a valid key for the _Form Recognizer_ resource that's specified for `Billing` in the "Billing configuration setting" section.
-
-You can find this setting in the Azure portal, in **Form Recognizer Resource Management**, under **Keys**.
-
-## ApplicationInsights setting
--
-## Billing configuration setting
-
-The `Billing` setting specifies the endpoint URI of the _Form Recognizer_ resource on Azure that's used to meter billing information for the container. The value for this configuration setting must be a valid endpoint URI for a _Form Recognizer_ resource on Azure. The container reports usage about every 10 to 15 minutes.
-
-You can find this setting in the Azure portal, in **Form Recognizer Overview**, under **Endpoint**.
-
-|Required| Name | Data type | Description |
-|--||--|-|
-|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gathering required parameters](form-recognizer-container-howto.md#gathering-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
-
-## Eula setting
--
-## Fluentd settings
--
-## HTTP proxy credentials settings
--
-## Logging settings
---
-## Mount settings
-
-Use bind mounts to read and write data to and from the container. You can specify an input mount or an output mount by specifying the `--mount` option in the [`docker run` command](https://docs.docker.com/engine/reference/commandline/run/).
-
-The Form Recognizer container requires an input mount and an output mount. The input mount can be read-only, and it's required for access to the data that's used for training and scoring. The output mount has to be writable, and you use it to store the models and temporary data.
-
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the mount location of the [host computer](form-recognizer-container-howto.md#the-host-computer) might not be accessible because of a conflict between the Docker service account permissions and the host mount location permissions.
-
-|Optional| Name | Data type | Description |
-|-||--|-|
-|Required| `Input` | String | The target of the input mount. The default value is `/input`. <br><br>Example:<br>`--mount type=bind,src=c:\input,target=/input`|
-|Required| `Output` | String | The target of the output mount. The default value is `/output`. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
-
-## Example docker run commands
-
-The following examples use the configuration settings to illustrate how to write and use `docker run` commands. When it's running, the container continues to run until you [stop it](form-recognizer-container-howto.md#stop-the-container).
-
-* **Line-continuation character**: The Docker commands in the following sections use a backslash (\\) as a line continuation character. Replace or remove this character, depending on your host operating system's requirements.
-* **Argument order**: Don't change the order of the arguments unless you're familiar with Docker containers.
-
-Replace {_argument_name_} in the following table with your own values:
-
-| Placeholder | Value |
-|-|-|
-| **{FORM_RECOGNIZER_API_KEY}** | The key that's used to start the container. It's available on the Azure portal Form Recognizer Keys page. |
-| **{FORM_RECOGNIZER_ENDPOINT_URI}** | The billing endpoint URI value is available on the Azure portal Form Recognizer Overview page.|
-| **{COMPUTER_VISION_API_KEY}** | The key is available on the Azure portal Computer Vision API Keys page.|
-| **{COMPUTER_VISION_ENDPOINT_URI}** | The billing endpoint. If you're using a cloud-based Computer Vision resource, the URI value is available on the Azure portal Computer Vision API Overview page. If you're using a *cognitive-services-recognize-text* container, use the billing endpoint URL that's passed to the container in the `docker run` command. |
-
-See [gathering required parameters](form-recognizer-container-howto.md#gathering-required-parameters) for details on how to obtain these values.
--
-> [!IMPORTANT]
-> To run the container, specify the `Eula`, `Billing`, and `ApiKey` options; otherwise, the container won't start. For more information, see [Billing](#billing-configuration-setting).
-
-## Form Recognizer container Docker examples
-
-The following Docker examples are for the Form Recognizer container.
-
-### Basic example for Form Recognizer
-
-```Docker
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 2 \
---mount type=bind,source=c:\input,target=/input \
---mount type=bind,source=c:\output,target=/output \
-containerpreview.azurecr.io/microsoft/cognitive-services-form-recognizer \
-Eula=accept \
-Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
-ApiKey={FORM_RECOGNIZER_API_KEY} \
-FormRecognizer:ComputerVisionApiKey={COMPUTER_VISION_API_KEY} \
-FormRecognizer:ComputerVisionEndpointUri={COMPUTER_VISION_ENDPOINT_URI}
-```
-
-### Logging example for Form Recognizer
-
-```Docker
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 2 \
---mount type=bind,source=c:\input,target=/input \
---mount type=bind,source=c:\output,target=/output \
-containerpreview.azurecr.io/microsoft/cognitive-services-form-recognizer \
-Eula=accept \
-Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
-ApiKey={FORM_RECOGNIZER_API_KEY} \
-FormRecognizer:ComputerVisionApiKey={COMPUTER_VISION_API_KEY} \
-FormRecognizer:ComputerVisionEndpointUri={COMPUTER_VISION_ENDPOINT_URI} \
-Logging:Console:LogLevel:Default=Information
-```
-
-## Next steps
-
-* Review [Install and run containers](form-recognizer-container-howto.md).
cognitive-services Form Recognizer Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/form-recognizer-container-howto.md
- Title: How to install and run container for Form Recognizer-
-description: This article will explain how to use the Azure Form Recognizer container to parse form and table data.
----- Previously updated : 02/04/2021----
-# Install and run Form Recognizer containers (Retiring)
--
-Azure Form Recognizer applies machine learning technology to identify and extract key-value pairs and tables from forms. It associates values and table entries with the key-value pairs and then outputs structured data that includes the relationships in the original file.
-
-To reduce complexity and easily integrate a custom Form Recognizer model into your workflow automation process or other application, you can call the model by using a simple REST API. Only five form documents are needed, so you can get results quickly, accurately, and tailored to your specific content. No heavy manual intervention or extensive data science expertise is necessary. And it doesn't require data labeling or data annotation.
-
-| Function | Features |
-|-|-|
-| Form Recognizer | <li>Processes PDF, PNG, and JPG files<li>Trains custom models with a minimum of five forms of the same layout <li>Extracts key-value pairs and table information <li>Uses the Azure Cognitive Services Computer Vision API Recognize Text feature to detect and extract printed text from images inside forms<li>Doesn't require annotation or labeling |
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-Before you use Form Recognizer containers, you must meet the following prerequisites:
-
-| Required | Purpose |
-|-||
-| Docker Engine | You need the Docker Engine installed on a [host computer](#the-host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> On Windows, Docker must also be configured to support Linux containers.<br><br> |
-| Familiarity with Docker | You should have a basic understanding of Docker concepts, such as registries, repositories, containers, and container images, and knowledge of basic `docker` commands. |
-| The Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli) on your host. |
-| Computer Vision API resource | To process scanned documents and images, you need a Computer Vision resource. You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a *cognitive-services-recognize-text* [container](../Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image-with-docker-pull). The usual billing fees apply. <br><br>Pass in both the API key and endpoints for your Computer Vision resource (Azure cloud or Cognitive Services container). Use this API key and the endpoint as **{COMPUTER_VISION_API_KEY}** and **{COMPUTER_VISION_ENDPOINT_URI}**.<br><br> If you use the *cognitive-services-recognize-text* container, make sure that:<br><br>Your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` command for the *cognitive-services-recognize-text* container.<br>Your billing endpoint is the container's endpoint (for example, `http://localhost:5000`). If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of *5000*. |
-| Form Recognizer resource | To use these containers, you must have:<br><br>An Azure **Form Recognizer** resource to get the associated API key and endpoint URI. Both values are available on the Azure portal **Form Recognizer** Overview and Keys pages, and both values are required to start the container.<br><br>**{FORM_RECOGNIZER_API_KEY}**: One of the two available resource keys on the Keys page<br><br>**{FORM_RECOGNIZER_ENDPOINT_URI}**: The endpoint as provided on the Overview page |
-
-> [!NOTE]
-> The Computer Vision resource name should be a single word, without a hyphen `-` or any other special characters. This restriction is in place to ensure Form Recognizer and Recognize Text container compatibility.
-
-## Gathering required parameters
-
-All Cognitive Services containers require three primary parameters. The end-user license agreement (EULA) must be present with a value of `accept`. Additionally, both an Endpoint URI and an API key are needed.
-
-### Endpoint URI `{COMPUTER_VISION_ENDPOINT_URI}` and `{FORM_RECOGNIZER_ENDPOINT_URI}`
-
-The **Endpoint** URI value is available on the Azure portal *Overview* page of the corresponding Cognitive Service resource. Navigate to the *Overview* page, hover over the Endpoint, and a `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon will appear. Copy and use where needed.
-
-![Gather the endpoint uri for later use](../containers/media/overview-endpoint-uri.png)
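-If you prefer the Azure CLI over the portal, a sketch like the following retrieves the endpoint; the resource and resource group names are placeholders.
-
-```bash
-# Show the endpoint URI of an existing Cognitive Services resource (names are placeholders).
-az cognitiveservices account show \
-  --name <your-form-recognizer-resource> \
-  --resource-group <your-resource-group> \
-  --query "properties.endpoint" \
-  --output tsv
-```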
-
-### Keys `{COMPUTER_VISION_API_KEY}` and `{FORM_RECOGNIZER_API_KEY}`
-
-This key is used to start the container, and is available on the Azure portal's Keys page of the corresponding Cognitive Service resource. Navigate to the *Keys* page, and click on the `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon.
-
-![Get one of the two keys for later use](../containers/media/keys-copy-api-key.png)
-
-> [!IMPORTANT]
-> These subscription keys are used to access your Cognitive Service API. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
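-If you manage keys from the command line, the following Azure CLI sketch lists and regenerates keys; the resource and resource group names are placeholders.
-
-```bash
-# List the two keys for the resource (names are placeholders).
-az cognitiveservices account keys list \
-  --name <your-form-recognizer-resource> \
-  --resource-group <your-resource-group>
-
-# Regenerate key1 after switching your workloads to key2.
-az cognitiveservices account keys regenerate \
-  --name <your-form-recognizer-resource> \
-  --resource-group <your-resource-group> \
-  --key-name key1
-```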
-
-## The host computer
--
-### Container requirements and recommendations
-
-The minimum and recommended CPU cores and memory to allocate for each Form Recognizer container are described in the following table:
-
-| Container | Minimum | Recommended |
-|--||-|
-| Form Recognizer | 2 core, 4-GB memory | 4 core, 8-GB memory |
-| Recognize Text | 1 core, 8-GB memory | 2 cores, 8-GB memory |
-
-* Each core must be at least 2.6 gigahertz (GHz) or faster.
-* Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-> [!Note]
-> The minimum and recommended values are based on Docker limits and *not* the host machine resources.
-
-You will need both the Form Recognizer and Recognize Text containers. Note that the **Recognize Text** container is [detailed outside of this article](../Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image-with-docker-pull).
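-For example, you can pull both images ahead of time with `docker pull`; access to the private `containerpreview.azurecr.io` registry may require a `docker login` with the credentials provided to you.
-
-```bash
-# Pull the Form Recognizer container image.
-docker pull containerpreview.azurecr.io/microsoft/cognitive-services-form-recognizer
-
-# Pull the Recognize Text container image that's used alongside it.
-docker pull containerpreview.azurecr.io/microsoft/cognitive-services-recognize-text
-```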
--
-## How to use the container
-
-After the container is on the [host computer](#the-host-computer), use the following process to work with the container.
-
-1. [Run the container](#run-the-container-by-using-the-docker-run-command), with the required billing settings. More [examples](form-recognizer-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
-1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
-
-## Run the container by using the docker run command
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gathering required parameters](#gathering-required-parameters) for details on how to get the `{COMPUTER_VISION_ENDPOINT_URI}`, `{COMPUTER_VISION_API_KEY}`, `{FORM_RECOGNIZER_ENDPOINT_URI}` and `{FORM_RECOGNIZER_API_KEY}` values.
-
-[Examples](form-recognizer-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
-
-### Form Recognizer
-
-> [!NOTE]
-> The directories used for `--mount` in these examples are Windows directory paths. If you're using Linux or macOS, change the parameter for your environment.
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 2 \
---mount type=bind,source=c:\input,target=/input \
---mount type=bind,source=c:\output,target=/output \
-containerpreview.azurecr.io/microsoft/cognitive-services-form-recognizer \
-Eula=accept \
-Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
-ApiKey={FORM_RECOGNIZER_API_KEY} \
-FormRecognizer:ComputerVisionApiKey={COMPUTER_VISION_API_KEY} \
-FormRecognizer:ComputerVisionEndpointUri={COMPUTER_VISION_ENDPOINT_URI}
-```
-
-This command:
-
-* Runs a Form Recognizer container from the container image.
-* Allocates 2 CPU cores and 8 gigabytes (GB) of memory.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-* Mounts an /input and an /output volume to the container.
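-As a quick sanity check, you can list the running container and its port mapping; the filter below assumes the preview image name shown above.
-
-```bash
-# Confirm the Form Recognizer container is running and port 5000 is mapped.
-docker ps --filter "ancestor=containerpreview.azurecr.io/microsoft/cognitive-services-form-recognizer"
-```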
--
-### Run separate containers as separate docker run commands
-
-For the Form Recognizer and Text Recognizer combination that's hosted locally on the same host, use the following two example Docker CLI commands:
-
-Run the first container on port 5000.
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
---mount type=bind,source=c:\input,target=/input \
---mount type=bind,source=c:\output,target=/output \
-containerpreview.azurecr.io/microsoft/cognitive-services-form-recognizer \
-Eula=accept \
-Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
-ApiKey={FORM_RECOGNIZER_API_KEY} \
-FormRecognizer:ComputerVisionApiKey={COMPUTER_VISION_API_KEY} \
-FormRecognizer:ComputerVisionEndpointUri={COMPUTER_VISION_ENDPOINT_URI}
-```
-
-Run the second container on port 5001.
-
-```bash
-docker run --rm -it -p 5001:5000 --memory 4g --cpus 1 \
-containerpreview.azurecr.io/microsoft/cognitive-services-recognize-text \
-Eula=accept \
-Billing={COMPUTER_VISION_ENDPOINT_URI} \
-ApiKey={COMPUTER_VISION_API_KEY}
-```
-Each subsequent container should be on a different port.
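-With both containers running, you can verify each one on its own port; the sketch below assumes the port mappings used above and that both containers expose the common `/status` route.
-
-```bash
-# Form Recognizer container (mapped to port 5000).
-curl -i http://localhost:5000/status
-
-# Recognize Text container (mapped to port 5001).
-curl -i http://localhost:5001/status
-```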
-
-### Run separate containers with Docker Compose
-
-For the Form Recognizer and Text Recognizer combination that's hosted locally on the same host, see the following example Docker Compose YAML file. The Text Recognizer `{COMPUTER_VISION_API_KEY}` must be the same for both the `formrecognizer` and `ocr` containers. The `{COMPUTER_VISION_ENDPOINT_URI}` is used only in the `ocr` container, because the `formrecognizer` container uses the `ocr` name and port.
-
-```docker
-version: '3.3'
-
-services:
- ocr:
- image: "containerpreview.azurecr.io/microsoft/cognitive-services-recognize-text"
- deploy:
- resources:
- limits:
- cpus: '2'
- memory: 8g
- reservations:
- cpus: '1'
- memory: 4g
- environment:
- eula: accept
- billing: "{COMPUTER_VISION_ENDPOINT_URI}"
- apikey: "{COMPUTER_VISION_API_KEY}"
-
- formrecognizer:
- image: "containerpreview.azurecr.io/microsoft/cognitive-services-form-recognizer"
- deploy:
- resources:
- limits:
- cpus: '2'
- memory: 8g
- reservations:
- cpus: '1'
- memory: 4g
- environment:
- eula: accept
- billing: "{FORM_RECOGNIZER_ENDPOINT_URI}"
- apikey: "{FORM_RECOGNIZER_API_KEY}"
- FormRecognizer__ComputerVisionApiKey: {COMPUTER_VISION_API_KEY}
- FormRecognizer__ComputerVisionEndpointUri: "http://ocr:5000"
- FormRecognizer__SyncProcessTaskCancelLimitInSecs: 75
- links:
- - ocr
- volumes:
- - type: bind
- source: c:\output
- target: /output
- - type: bind
- source: c:\input
- target: /input
- ports:
- - "5000:5000"
-```
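-To start both services from the directory that contains this file, a sketch like the following is typically all that's needed; `-d` runs the services detached.
-
-```bash
-# Start the ocr and formrecognizer services defined in docker-compose.yml.
-docker-compose up -d
-
-# Follow the Form Recognizer service logs to confirm it started correctly.
-docker-compose logs -f formrecognizer
-```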
-
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey`, as well as the `FormRecognizer:ComputerVisionApiKey` and `FormRecognizer:ComputerVisionEndpointUri` options, must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-
-## Query the container's prediction endpoint
-
-|Container|Endpoint|
-|--|--|
-|form-recognizer|http://localhost:5000
-
-### Form Recognizer
-
-The container provides websocket-based query endpoint APIs, which you access through [Form Recognizer services SDK documentation](./index.yml).
-
-By default, the Form Recognizer SDK uses the online services. To use the container, you need to change the initialization method. See the examples below.
-
-#### For C#
-
-Change from using this Azure-cloud initialization call:
-
-```csharp
-var config =
- FormRecognizerConfig.FromSubscription(
- "YourSubscriptionKey",
- "YourServiceRegion");
-```
-to this call, which uses the container endpoint:
-
-```csharp
-var config =
- FormRecognizerConfig.FromEndpoint(
- "ws://localhost:5000/formrecognizer/v1.0-preview/custom",
- "YourSubscriptionKey");
-```
-
-#### For Python
-
-Change from using this Azure-cloud initialization call:
-
-```python
-formrecognizer_config = formrecognizersdk.FormRecognizerConfig(
-    subscription=formrecognizer_key, region=service_region)
-```
-
-to this call, which uses the container endpoint:
-
-```python
-formrecognizer_config = formrecognizersdk.FormRecognizerConfig(
-    subscription=formrecognizer_key,
-    endpoint="ws://localhost:5000/formrecognizer/v1.0-preview/custom")
-```
-
-### Form Recognizer
-
-The container provides REST endpoint APIs, which you can find on the [Form Recognizer API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) reference page.
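-As a purely illustrative sketch, a REST call targets the container's host and port rather than the Azure endpoint. The route, API version, and model ID below are placeholders; check the API reference above for the exact paths supported by your container.
-
-```bash
-# Hypothetical example: analyze a form with a custom model hosted by the local container.
-curl -i -X POST "http://localhost:5000/formrecognizer/<api-version>/custom/models/<model-id>/analyze" \
-  -H "Content-Type: application/json" \
-  -d '{"source": "<URL-or-path-of-your-form>"}'
-```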
----
-## Stop the container
--
-## Troubleshooting
-
-If you run the container with an output [mount](form-recognizer-container-configuration.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
--
-## Billing
-
-The Form Recognizer containers send billing information to Azure by using a _Form Recognizer_ resource on your Azure account.
--
-For more information about these options, see [Configure containers](form-recognizer-container-configuration.md).
-
-## Summary
-
-In this article, you learned concepts and workflow for downloading, installing, and running Form Recognizer containers. In summary:
-
-* Form Recognizer provides one Linux container for Docker.
-* Container images are downloaded from the private container registry in Azure.
-* Container images run in Docker.
-* You can use either the REST API or the REST SDK to call operations in Form Recognizer container by specifying the host URI of the container.
-* You must specify the billing information when you instantiate a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
-
-## Next steps
-
-* Review [Configure containers](form-recognizer-container-configuration.md) for configuration settings.
-* Use more [Cognitive Services Containers](../cognitive-services-container-support.md).
container-registry Container Registry Get Started Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-azure-cli.md
# Quickstart: Create a private container registry using the Azure CLI
-Azure Container Registry is a managed Docker container registry service used for storing private Docker container images. This guide details creating an Azure Container Registry instance using the Azure CLI. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry.
+Azure Container Registry is a private registry service for building, storing, and managing container images and related artifacts. In this quickstart, you create an Azure container registry instance with the Azure CLI. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry.
This quickstart requires that you are running the Azure CLI (version 2.0.55 or later recommended). Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli].
When the registry is created, the output is similar to the following:
Take note of `loginServer` in the output, which is the fully qualified registry name (all lowercase). Throughout the rest of this quickstart `<registry-name>` is a placeholder for the container registry name, and `<login-server>` is a placeholder for the registry's login server name. + ## Log in to registry
-Before pushing and pulling container images, you must log in to the registry. To do so, use the [az acr login][az-acr-login] command. Specify only the registry name when logging in with the Azure CLI. Don't use the login server name, which includes a domain suffix like `azurecr.io`.
+Before pushing and pulling container images, you must log in to the registry. To do so, use the [az acr login][az-acr-login] command. Specify only the registry resource name when logging in with the Azure CLI. Don't use the fully qualified login server name.
```azurecli
az acr login --name <registry-name>
```
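After logging in, a typical push-and-pull flow against the registry looks like the following sketch; the sample image and tag are placeholders, and `<login-server>` is the fully qualified registry name noted earlier.

```bash
# Pull a public sample image, tag it for your registry, and push it.
docker pull mcr.microsoft.com/hello-world
docker tag mcr.microsoft.com/hello-world <login-server>/hello-world:v1
docker push <login-server>/hello-world:v1

# Pull the image back from the registry and run it.
docker pull <login-server>/hello-world:v1
docker run <login-server>/hello-world:v1
```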
container-registry Container Registry Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-portal.md
Title: Quickstart - Create registry in portal description: Quickly learn to create a private Azure container registry using the Azure portal. Previously updated : 08/04/2020 Last updated : 06/23/2021 - mvc - mode-portal
+ - contperf-fy21q4
# Quickstart: Create an Azure container registry using the Azure portal
-An Azure container registry is a private Docker registry in Azure where you can store and manage private Docker container images and related artifacts. In this quickstart, you create a container registry with the Azure portal. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry.
+Azure Container Registry is a private registry service for building, storing, and managing container images and related artifacts. In this quickstart, you create an Azure container registry instance with the Azure portal. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry.
To log in to the registry to work with container images, this quickstart requires that you are running the Azure CLI (version 2.0.55 or later recommended). Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli].
In the **Basics** tab, enter values for **Resource group** and **Registry name**
Accept default values for the remaining settings. Then select **Review + create**. After reviewing the settings, select **Create**.
-In this quickstart you create a *Basic* registry, which is a cost-optimized option for developers learning about Azure Container Registry. For details on available service tiers (SKUs), see [Container registry service tiers][container-registry-skus].
When the **Deployment succeeded** message appears, select the container registry in the portal. :::image type="content" source="media/container-registry-get-started-portal/qs-portal-05.png" alt-text="Container registry Overview in the portal":::
-Take note of the registry name and the value of the **Login server**. You use these values in the following steps when you push and pull images with Docker.
+Take note of the registry name and the value of the **Login server**, which is a fully qualified name ending with `azurecr.io` in the Azure cloud. You use these values in the following steps when you push and pull images with Docker.
## Log in to registry
-Before pushing and pulling container images, you must log in to the registry instance. [Sign into the Azure CLI][get-started-with-azure-cli] on your local machine, then run the [az acr login][az-acr-login] command. Specify only the registry name when logging in with the Azure CLI. Don't use the login server name, which includes a domain suffix like `azurecr.io`.
+Before pushing and pulling container images, you must log in to the registry instance. [Sign into the Azure CLI][get-started-with-azure-cli] on your local machine, then run the [az acr login][az-acr-login] command. Specify only the registry resource name when logging in with the Azure CLI. Don't use the fully qualified login server name.
```azurecli
az acr login --name <registry-name>
```
container-registry Container Registry Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-powershell.md
# Quickstart: Create a private container registry using Azure PowerShell
-Azure Container Registry is a managed, private Docker container registry service for building, storing, and serving Docker container images. In this quickstart, you learn how to create an Azure container registry using PowerShell. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry.
+Azure Container Registry is a private registry service for building, storing, and managing container images and related artifacts. In this quickstart, you create an Azure container registry instance with Azure PowerShell. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry.
## Prerequisites
The registry name must be unique within Azure, and contain 5-50 alphanumeric cha
$registry = New-AzContainerRegistry -ResourceGroupName "myResourceGroup" -Name "myContainerRegistry007" -EnableAdminUser -Sku Basic ```
-In this quickstart you create a *Basic* registry, which is a cost-optimized option for developers learning about Azure Container Registry. For details on available service tiers, see [Container registry service tiers][container-registry-skus].
## Log in to registry
cosmos-db Mongodb Pre Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-pre-migration.md
Finally, now that you have a view of your existing data estate and a design for
|Offline|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)|&bull; Makes use of the Azure Cosmos DB bulk executor library <br/>&bull; Suitable for large datasets and takes care of replicating live changes <br/>&bull; Works only with other MongoDB sources| |Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)|&bull; Easy to set up and supports multiple sources <br/>&bull; Makes use of the Azure Cosmos DB bulk executor library <br/>&bull; Suitable for large datasets <br/>&bull; Lack of checkpointing means that any issue during the course of migration would require a restart of the whole migration process<br/>&bull; Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process <br/>&bull; Needs custom code to increase read throughput for certain data sources| |Offline|[Existing Mongo Tools (mongodump, mongorestore, Studio3T)](https://azure.microsoft.com/resources/videos/using-mongodb-tools-with-azure-cosmos-db/)|&bull; Easy to set up and integration <br/>&bull; Needs custom handling for throttles|-
+
* If your resource can tolerate an offline migration, use the diagram below to choose the appropriate migration tool: ![Offline migration tools.](./media/mongodb-pre-migration/offline-tools.png)
Finally, now that you have a view of your existing data estate and a design for
* If your resource requires an online migration, use the diagram below to choose the appropriate migration tool: ![Online migration tools.](./media/mongodb-pre-migration/online-tools.png)
+
+ Watch this video for an [overview and demo of the migration solutions](https://www.youtube.com/watch?v=WN9h80P4QJM) mentioned above.
* Once you have chosen migration tools for each resource, the next step is to prioritize the resources you will migrate. Good prioritization can help keep your migration on schedule. A good practice is to prioritize migrating those resources that need the most time to be moved; migrating these resources first will bring the greatest progress toward completion. Furthermore, since these time-consuming migrations typically involve more data, they are usually more resource-intensive for the migration tool and therefore are more likely to expose any problems with your migration pipeline early on. This minimizes the chance that your schedule will slip due to any difficulties with your migration pipeline. * Plan how you will monitor the progress of migration once it has started. If you are coordinating your data migration effort among a team, plan a regular cadence of team syncs so that you have a comprehensive view of how the high-priority migrations are going.
+
### Supported migration scenarios
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
If you have questions or issues in using source control or DevOps techniques, he
- Refer to [Source Control in ADF](source-control.md) to learn how source control is practiced in ADF. - Refer to [CI-CD in ADF](continuous-integration-deployment.md) to learn more about how DevOps CI-CD is practiced in ADF.
-
+ ## Common errors and messages ### Connect to Git repository failed due to different tenant #### Issue
-
+ Sometimes you encounter Authentication issues like HTTP status 401. Especially when you have multiple tenants with guest account, things could become more complicated. #### Cause
The error occurs because we often delete a trigger, which is parameterized, ther
CI/CD release pipeline failing with the following error:
-`
+```output
2020-07-06T09:50:50.8716614Z There were errors in your deployment. Error code: DeploymentFailed. 2020-07-06T09:50:50.8760242Z ##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details. 2020-07-06T09:50:50.8771655Z ##[error]Details:
CI/CD release pipeline failing with the following error:
2020-07-06T09:50:50.8774148Z ##[error]DataFactoryPropertyUpdateNotSupported: Updating property type is not supported. 2020-07-06T09:50:50.8775530Z ##[error]Check out the troubleshooting guide to see if your issue is addressed: https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment#troubleshooting 2020-07-06T09:50:50.8776801Z ##[error]Task failed while creating or updating the template deployment.
-`
+```
#### Cause
This error is due to an integration runtime with the same name in the target fac
#### Recommendation -- Refer to this Best Practices for CI/CD below:
+- Refer to the [Best Practices for CI/CD](continuous-integration-deployment.md#best-practices-for-cicd)
- https://docs.microsoft.com/azure/data-factory/continuous-integration-deployment#best-practices-for-cicd
- Integration runtimes don't change often and are similar across all stages in your CI/CD, so Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If the name and types & properties are different, make sure to match the source and target integration runtime configuration and then deploy the release pipeline.+ - If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type. ### Document creation or update failed because of invalid reference
data-factory Compute Optimized Retire https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-optimized-retire.md
Last updated 06/09/2021
Azure Data Factory and Azure Synapse Analytics data flows provide a low-code mechanism to transform data in ETL jobs at scale using a graphical design paradigm. Data flows execute on the Azure Data Factory and Azure Synapse Analytics serverless Integration Runtime facility. The scalable nature of Azure Data Factory and Azure Synapse Analytics Integration Runtimes enabled three different compute options for the Azure Databricks Spark environment that is utilized to execute data flows at scale: Memory Optimized, General Purpose, and Compute Optimized. Memory Optimized and General Purpose are the recommended classes of data flow compute to use with your Integration Runtime for production workloads. Because Compute Optimized will often not suffice for common use cases with data flows, we recommend using General Purpose or Memory Optimized data flows in production workloads.
+## Migration steps
+
+1. Create a new Azure Integration Runtime with "General Purpose" or "Memory Optimized" as the compute type.
+2. Set your data flow activity using either of those compute types.
+
+ ![Compute types](media/data-flow/compute-types.png)
+ ## Comparison between different compute options | Compute Option | Performance |
Azure Data Factory and Azure Synapse Analytics data flows provide a low-code mec
| Memory Optimized Data Flows | Best performing runtime for data flows when working with large datasets and many calculations | | Compute Optimized Data Flows | Not recommended for production workloads |
-## Migration steps
-
-From now through 31 August 2024, your Compute Optimized data flows will continue to work in your existing pipelines. To avoid service disruption, please remove your existing Compute Optimized data flows before 31 August 2024 and follow the steps below to create a new Azure Integration Runtime and data flow activity. When creating a new data flow activity:
-
-1. Create a new Azure Integration Runtime with "General Purpose" or "Memory Optimized" as the compute type.
-2. Set your data flow activity using either of those compute types.
-
- ![Compute types](media/data-flow/compute-types.png)
- [Find more detailed information at the data flows FAQ here](https://aka.ms/dataflowsqa) [Post questions and find answers on data flows on Microsoft Q&A](https://aka.ms/datafactoryqa)
data-factory Concepts Pipelines Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipelines-activities.md
See the following tutorials for step-by-step instructions for creating pipelines
- [Build a pipeline with a data transformation activity](tutorial-transform-data-spark-powershell.md) How to achieve CI/CD (continuous integration and delivery) using Azure Data Factory-- [Continuous integration and delivery in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/continuous-integration-deployment)
+- [Continuous integration and delivery in Azure Data Factory](continuous-integration-deployment.md)
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
description: Learn how to troubleshoot connector issues in Azure Data Factory.
Previously updated : 06/07/2021 Last updated : 06/24/2021
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check the port of the target server. FTP uses port 21.
+### Error code: FtpFailedToReadFtpData
+
+- **Message**: `Failed to read data from ftp: The remote server returned an error: 227 Entering Passive Mode (*,*,*,*,*,*).`
+
+- **Cause**: The port range between 1024 and 65535 is not open for data transfer under the passive mode that ADF supports.
+
+- **Recommendation**: Check the port configuration of the target server. Open ports 1024-65535, or a port range within 1024-65535, to the self-hosted IR or Azure IR IP address.
+ ## HTTP
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
* If you're using Self-hosted IR, add the Self-hosted IR machine's IP to the allowlist. * If you're using Azure IR, add [Azure Integration Runtime IP addresses](./azure-integration-runtime-ip-addresses.md). If you don't want to add a range of IPs to the SFTP server allowlist, use Self-hosted IR instead. +
+#### Error code: SftpPermissionDenied
+
+- **Message**: `Permission denied to access '%path;'`
+
+- **Cause**: The specified user does not have read or write permission to the folder or file for the requested operation.
+
+- **Recommendation**: Grant the user permission to read or write to the folder or files on the SFTP server, as shown in the sketch below.
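+
+As an illustration only, on a Linux-hosted SFTP server the grant might look like the following; the user name and path are placeholders, and the exact commands depend on how your SFTP server is configured.
+
+```bash
+# Hypothetical example for a Linux-hosted SFTP server; user and path are placeholders.
+sudo chown -R sftpuser:sftpuser /data/upload
+sudo chmod -R u+rwX /data/upload
+```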
+
+
## SharePoint Online list ### Error code: SharePointOnlineAuthFailed
data-factory Continuous Integration Deployment Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment-improvements.md
Two commands are currently available in the package:
Run `npm run start export <rootFolder> <factoryId> [outputFolder]` to export the ARM template by using the resources of a given folder. This command also runs a validation check prior to generating the ARM template. Here's an example:
-```
+```dos
npm run start export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput ``` - `RootFolder` is a mandatory field that represents where the Data Factory resources are located. - `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`. - `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template.
-
+ > [!NOTE] > The ARM template generated isn't published to the live version of the factory. Deployment should be done by using a CI/CD pipeline.
-
+ ### Validate Run `npm run start validate <rootFolder> <factoryId>` to validate all the resources of a given folder. Here's an example:
-```
+```dos
npm run start validate C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ```
npm run start validate C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-x
## Create an Azure pipeline
-While npm packages can be consumed in various ways, one of the primary benefits is being consumed via [Azure Pipeline](https://nam06.safelinks.protection.outlook.com/?url=https:%2F%2Fdocs.microsoft.com%2F%2Fazure%2Fdevops%2Fpipelines%2Fget-started%2Fwhat-is-azure-pipelines%3Fview%3Dazure-devops%23:~:text%3DAzure%2520Pipelines%2520is%2520a%2520cloud%2Cit%2520available%2520to%2520other%2520users.%26text%3DAzure%2520Pipelines%2520combines%2520continuous%2520integration%2Cship%2520it%2520to%2520any%2520target.&data=04%7C01%7Cabnarain%40microsoft.com%7C5f064c3d5b7049db540708d89564b0bc%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C1%7C637423607000268277%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=jo%2BkIvSBiz6f%2B7kmgqDN27TUWc6YoDanOxL9oraAbmA%3D&reserved=0). On each merge into your collaboration branch, a pipeline can be triggered that first validates all of the code and then exports the ARM template into a [build artifact](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2F%2Fazure%2Fdevops%2Fpipelines%2Fartifacts%2Fbuild-artifacts%3Fview%3Dazure-devops%26tabs%3Dyaml%23how-do-i-consume-artifacts&data=04%7C01%7Cabnarain%40microsoft.com%7C5f064c3d5b7049db540708d89564b0bc%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C1%7C637423607000278113%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=dN3t%2BF%2Fzbec4F28hJqigGANvvedQoQ6npzegTAwTp1A%3D&reserved=0) that can be consumed by a release pipeline. How it differs from the current CI/CD process is that you will *point your release pipeline at this artifact instead of the existing `adf_publish` branch*.
+While npm packages can be consumed in various ways, one of the primary benefits is being consumed via [Azure Pipeline](/azure/devops/pipelines/get-started/). On each merge into your collaboration branch, a pipeline can be triggered that first validates all of the code and then exports the ARM template into a [build artifact](/azure/devops/pipelines/artifacts/build-artifacts) that can be consumed by a release pipeline. How it differs from the current CI/CD process is that you will *point your release pipeline at this artifact instead of the existing `adf_publish` branch*.
Follow these steps to get started:
-1. Open an Azure DevOps project, and go to **Pipelines**. Select **New Pipeline**.
-
- ![Screenshot that shows the New pipeline button.](media/continuous-integration-deployment-improvements/new-pipeline.png)
-
-1. Select the repository where you want to save your pipeline YAML script. We recommend saving it in a build folder in the same repository of your Data Factory resources. Ensure there's a *package.json* file in the repository that contains the package name, as shown in the following example:
-
- ```json
- {
- "scripts":{
- "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index"
- },
- "dependencies":{
- "@microsoft/azure-data-factory-utilities":"^0.1.5"
- }
- }
- ```
-
-1. Select **Starter pipeline**. If you've uploaded or merged the YAML file, as shown in the following example, you can also point directly at that and edit it.
-
- ![Screenshot that shows Starter pipeline.](media/continuous-integration-deployment-improvements/starter-pipeline.png)
-
- ```yaml
- # Sample YAML file to validate and export an ARM template into a build artifact
- # Requires a package.json file located in the target repository
-
- trigger:
- - main #collaboration branch
-
- pool:
- vmImage: 'ubuntu-latest'
-
- steps:
-
- # Installs Node and the npm packages saved in your package.json file in the build
-
- - task: NodeTool@0
- inputs:
- versionSpec: '10.x'
- displayName: 'Install Node.js'
-
- - task: Npm@1
- inputs:
- command: 'install'
- workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
- verbose: true
- displayName: 'Install npm package'
-
- # Validates all of the Data Factory resources in the repository. You'll get the same validation errors as when "Validate All" is selected.
- # Enter the appropriate subscription and name for the source factory.
-
- - task: Npm@1
- inputs:
- command: 'custom'
- workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
- customCommand: 'run build validate $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'
- displayName: 'Validate'
-
- # Validate and then generate the ARM template into the destination folder, which is the same as selecting "Publish" from the UX.
- # The ARM template generated isn't published to the live version of the factory. Deployment should be done by using a CI/CD pipeline.
-
- - task: Npm@1
- inputs:
- command: 'custom'
- workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
- customCommand: 'run build export $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName "ArmTemplate"'
- displayName: 'Validate and Generate ARM template'
-
- # Publish the artifact to be used as a source for a release pipeline.
-
- - task: PublishPipelineArtifact@1
- inputs:
- targetPath: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>/ArmTemplate' #replace with the package.json folder
- artifact: 'ArmTemplates'
- publishLocation: 'pipeline'
- ```
-
-1. Enter your YAML code. We recommend that you use the YAML file as a starting point.
-1. Save and run. If you used the YAML, it gets triggered every time the main branch is updated.
+1. Open an Azure DevOps project, and go to **Pipelines**. Select **New Pipeline**.
+
+ ![Screenshot that shows the New pipeline button.](media/continuous-integration-deployment-improvements/new-pipeline.png)
+
+2. Select the repository where you want to save your pipeline YAML script. We recommend saving it in a build folder in the same repository of your Data Factory resources. Ensure there's a *package.json* file in the repository that contains the package name, as shown in the following example:
+
+ ```json
+ {
+ "scripts":{
+ "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index"
+ },
+ "dependencies":{
+ "@microsoft/azure-data-factory-utilities":"^0.1.5"
+ }
+ }
+ ```
+
+3. Select **Starter pipeline**. If you've uploaded or merged the YAML file, as shown in the following example, you can also point directly at that and edit it.
+
+ ![Screenshot that shows Starter pipeline.](media/continuous-integration-deployment-improvements/starter-pipeline.png)
+
+ ```yaml
+ # Sample YAML file to validate and export an ARM template into a build artifact
+ # Requires a package.json file located in the target repository
+
+ trigger:
+ - main #collaboration branch
+
+ pool:
+ vmImage: 'ubuntu-latest'
+
+ steps:
+
+ # Installs Node and the npm packages saved in your package.json file in the build
+
+ - task: NodeTool@0
+ inputs:
+ versionSpec: '10.x'
+ displayName: 'Install Node.js'
+
+ - task: Npm@1
+ inputs:
+ command: 'install'
+ workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
+ verbose: true
+ displayName: 'Install npm package'
+
+ # Validates all of the Data Factory resources in the repository. You'll get the same validation errors as when "Validate All" is selected.
+ # Enter the appropriate subscription and name for the source factory.
+
+ - task: Npm@1
+ inputs:
+ command: 'custom'
+ workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
+ customCommand: 'run build validate $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'
+ displayName: 'Validate'
+
+ # Validate and then generate the ARM template into the destination folder, which is the same as selecting "Publish" from the UX.
+ # The ARM template generated isn't published to the live version of the factory. Deployment should be done by using a CI/CD pipeline.
+
+ - task: Npm@1
+ inputs:
+ command: 'custom'
+ workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
+ customCommand: 'run build export $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName "ArmTemplate"'
+ displayName: 'Validate and Generate ARM template'
+
+ # Publish the artifact to be used as a source for a release pipeline.
+
+ - task: PublishPipelineArtifact@1
+ inputs:
+ targetPath: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>/ArmTemplate' #replace with the package.json folder
+ artifact: 'ArmTemplates'
+ publishLocation: 'pipeline'
+ ```
+
+4. Enter your YAML code. We recommend that you use the YAML file as a starting point.
+
+5. Save and run. If you used the YAML, it gets triggered every time the main branch is updated.
## Next steps Learn more information about continuous integration and delivery in Data Factory:--- [Continuous integration and delivery in Azure Data Factory](continuous-integration-deployment.md).
+[Continuous integration and delivery in Azure Data Factory](continuous-integration-deployment.md).
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
Installation of the self-hosted integration runtime on a domain controller isn't
- Java Runtime (JRE) version 8 from a JRE provider such as [Adopt OpenJDK](https://adoptopenjdk.net/). Ensure that the `JAVA_HOME` environment variable is set to the JRE folder (and not just the JDK folder). >[!NOTE]
->If you are running in government cloud, please review [Connect to government cloud.](https://docs.microsoft.com/azure/azure-government/documentation-government-get-started-connect-with-ps)
+>If you are running in government cloud, please review [Connect to government cloud.](../azure-government/documentation-government-get-started-connect-with-ps.md)
## Setting up a self-hosted integration runtime
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-troubleshoot-guide.md
The following table applies to Azure Batch.
- **Recommendation**: This error occurs when ADF doesn't receive a response from HDInsight cluster when attempting to request the status of the running job. This issue might be on the cluster itself, or HDInsight service might have an outage.
- Refer to HDInsight troubleshooting documentation at https://docs.microsoft.com/azure/hdinsight/hdinsight-troubleshoot-guide, or contact their support for further assistance.
+ Refer to [HDInsight troubleshooting documentation](../hdinsight/hdinsight-troubleshoot-guide.md), or contact Microsoft support for further assistance.
### Error code: 2302
The following table applies to Azure Batch.
- **Recommendation**: 1. Verify that the credentials are correct by opening the HDInsight cluster's Ambari UI in a browser.
- 1. If the cluster is in Virtual Network (VNet) and a self-hosted IR is being used, the HDI URL must be the private URL in VNets, and should have '-int' listed after the cluster name.
+ 1. If the cluster is in Virtual Network (VNet) and a self-hosted IR is being used, the HDI URL must be the private URL in VNets, and should have `-int` listed after the cluster name.
For example, change `https://mycluster.azurehdinsight.net/` to `https://mycluster-int.azurehdinsight.net/`. Note the `-int` after `mycluster`, but before `.azurehdinsight.net` 1. If the cluster is in VNet, the self-hosted IR is being used, and the private URL was used, and yet the connection still failed, then the VM where the IR is installed had problems connecting to the HDI.
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-connector-format.md
Previously updated : 06/21/2021 Last updated : 06/24/2021
The RWX permission or the dataset property is not set correctly.
- If the target folder has the correct permission and you use the file name property in the data flow to target to the right folder and file name, but the file path property of the dataset is not set to the target file path (usually leave not set), as the example shown in the following pictures, you will encounter this failure because the backend system tries to create files based on the file path of the dataset, and the file path of the dataset doesn't have the correct permission.
- :::image type="content" source="./media/data-flow-troubleshoot-connector-format/file-path-property.png" alt-text="Screenshot that shows the file path property":::
+ :::image type="content" source="./media/data-flow-troubleshoot-connector-format/file-path-property.png" alt-text="Screenshot that shows the file path property.":::
- :::image type="content" source="./media/data-flow-troubleshoot-connector-format/file-name-property.png" alt-text="Screenshot that shows the file name property":::
+ :::image type="content" source="./media/data-flow-troubleshoot-connector-format/file-name-property.png" alt-text="Screenshot that shows the file name property.":::
There are two methods to solve this issue:
Create an Azure Data Lake Gen2 linked service for the storage, and select the Ge
## Common Data Model format
-### Model.Json files with special characters
+### Model.json files with special characters
#### Symptoms You may encounter an issue where the final name of the model.json file contains special characters.
data-factory Ssis Integration Runtime Diagnose Connectivity Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ssis-integration-runtime-diagnose-connectivity-faq.md
Use the following sections to learn about the most common errors that occur when
## Next steps -- [Migrate SSIS jobs with SSMS](https://docs.microsoft.com/azure/data-factory/how-to-migrate-ssis-job-ssms)-- [Run SSIS packages in Azure with SSDT](https://docs.microsoft.com/azure/data-factory/how-to-invoke-ssis-package-ssdt)-- [Schedule SSIS packages in Azure](https://docs.microsoft.com/azure/data-factory/how-to-schedule-azure-ssis-integration-runtime)
+- [Migrate SSIS jobs with SSMS](how-to-migrate-ssis-job-ssms.md)
+- [Run SSIS packages in Azure with SSDT](how-to-invoke-ssis-package-ssdt.md)
+- [Schedule SSIS packages in Azure](how-to-schedule-azure-ssis-integration-runtime.md)
data-factory Transform Data Using Stored Procedure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-using-stored-procedure.md
The following table describes these JSON properties:
| storedProcedureParameters | Specify the values for stored procedure parameters. Use `"param1": { "value": "param1Value","type":"param1Type" }` to pass parameter values and their type supported by the data source. If you need to pass null for a parameter, use `"param1": { "value": null }` (all lower case). | No | ## Parameter data type mapping
-The data type you specify for the parameter is the Azure Data Factory type that maps to the data type in the data source you are using. You can find the data type mappings for your data source in the connectors area. Some examples are
-
-| Data Source | Data Type Mapping |
-| |-|
-| Azure Synapse Analytics | https://docs.microsoft.com/azure/data-factory/connector-azure-sql-data-warehouse#data-type-mapping-for-azure-sql-data-warehouse |
-| Azure SQL Database | https://docs.microsoft.com/azure/data-factory/connector-azure-sql-database#data-type-mapping-for-azure-sql-database |
-| Oracle | https://docs.microsoft.com/azure/data-factory/connector-oracle#data-type-mapping-for-oracle |
-| SQL Server | https://docs.microsoft.com/azure/data-factory/connector-sql-server#data-type-mapping-for-sql-server |
--
+The data type you specify for the parameter is the Azure Data Factory type that maps to the data type in the data source you are using. You can find the data type mappings for your data source described in the connectors documentation. For example:
+- [Azure Synapse Analytics data type mapping](connector-azure-sql-data-warehouse.md#data-type-mapping-for-azure-synapse-analytics)
+- [Azure SQL Database data type mapping](connector-azure-sql-database.md#data-type-mapping-for-azure-sql-database)
+- [Oracle data type mapping](connector-oracle.md#data-type-mapping-for-oracle)
+- [SQL Server data type mapping](connector-sql-server.md#data-type-mapping-for-sql-server)
## Next steps See the following articles that explain how to transform data in other ways:
data-factory Data Factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-compute-linked-services.md
Microsoft updates the list of supported HDInsight versions with the latest Hado
After December 15, 2017: - You can no longer create Linux-based HDInsight version 3.3 (or earlier versions) clusters by using an on-demand HDInsight linked service in Data Factory version 1. -- If the [**osType** and **Version** properties](#azure-hdinsight-on-demand-linked-service) are not explicitly specified in the JSON definition for an existing Data Factory version 1 on-demand HDInsight linked service, the default value is changed from **Version=3.1, osType=Windows** to **Version=\<latest HDI default version\>(https://docs.microsoft.com/azure/hdinsight/hdinsight-component-versioning), osType=Linux**.
+- If the [**osType** and **Version** properties](#azure-hdinsight-on-demand-linked-service) are not explicitly specified in the JSON definition for an existing Data Factory version 1 on-demand HDInsight linked service, the default value is changed from **Version=3.1, osType=Windows** to **Version=\<latest HDI default version\>, osType=Linux**.
After July 31, 2018:
data-share Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/security.md
Access controls to Azure Data Share can be set on the Data Share resource level
Once a share is created or received, users with proper permission to the Data Share resource can make changes. When a user who creates or receives a share leaves the organization, it does not terminate the share or stop flow of data. Other users with proper permission to the Data Share resource can continue to manage the share. ## Share data from or to data stores with firewall enabled
-To share data from or to storage accounts with firewall turned on, you need to enable **Allow trusted Microsoft services** in your storage account. See [Configure Azure Storage firewalls and virtual networks](
-https://docs.microsoft.com/azure/storage/common/storage-network-security#trusted-microsoft-services) for details.
+To share data from or to storage accounts with firewall turned on, you need to enable **Allow trusted Microsoft services** in your storage account. See [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md#trusted-microsoft-services) for details.
## Next steps
databox-online Azure Stack Edge Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-security.md
Last updated 08/21/2019
-# Azure Stack Edge Pro FPGA security and data protection
+# Azure Stack Edge security and data protection
-Security is a major concern when you're adopting a new technology, especially if the technology is used with confidential or proprietary data. Azure Stack Edge Pro FPGA helps you ensure that only authorized entities can view, modify, or delete your data.
+Security is a major concern when you're adopting a new technology, especially if the technology is used with confidential or proprietary data. Azure Stack Edge helps you ensure that only authorized entities can view, modify, or delete your data.
-This article describes the Azure Stack Edge Pro FPGA security features that help protect each of the solution components and the data stored in them.
+This article describes the Azure Stack Edge security features that help protect each of the solution components and the data stored in them.
-Azure Stack Edge Pro FPGA consists of four main components that interact with each other:
+Azure Stack Edge consists of four main components that interact with each other:
- **Azure Stack Edge service, hosted in Azure**. The management resource that you use to create the device order, configure the device, and then track the order to completion. - **Azure Stack Edge Pro FPGA device**. The transfer device that's shipped to you so you can import your on-premises data into Azure.
The Azure Stack Edge Pro FPGA device is an on-premises device that helps transfo
### Protect the device via activation key
-Only an authorized Azure Stack Edge Pro FPGA device is allowed to join the Azure Stack Edge service that you create in your Azure subscription. To authorize a device, you need to use an activation key to activate the device with the Azure Stack Edge service.
+Only an authorized Azure Stack Edge device is allowed to join the Azure Stack Edge service that you create in your Azure subscription. To authorize a device, you need to use an activation key to activate the device with the Azure Stack Edge service.
[!INCLUDE [data-box-edge-gateway-data-rest](../../includes/data-box-edge-gateway-activation-key.md)]
For more information, see [Get an activation key](azure-stack-edge-deploy-prep.m
### Protect the device via password
-Passwords ensure that only authorized users can access your data. Azure Stack Edge Pro FPGA devices boot up in a locked state.
+Passwords ensure that only authorized users can access your data. Azure Stack Edge devices boot up in a locked state.
You can:
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
Models are not necessarily returned in exactly the document form they were uploa
## Update models
-Once a model is uploaded to your Azure Digital Twins instance, the entire model interface is immutable. This means there is no traditional "editing" of models. Azure Digital Twins also does not allow re-upload of the same model.
+Once a model is uploaded to your Azure Digital Twins instance, the model interface is immutable. This means there's no traditional "editing" of models. Azure Digital Twins also does not allow re-upload of the same exact model while a matching model is already present in the instance.
-Instead, if you want to make changes to a modelΓÇösuch as updating `displayName` or `description`ΓÇöthe way to do this is to upload a **newer version** of the model.
+Instead, if you want to make changes to a model (such as updating `displayName` or `description`, or adding and removing properties), you'll need to replace the original model.
-### Model versioning
+There are two strategies to choose from when replacing a model:
+* [Option 1: Upload new model version](#option-1-upload-new-model-version): Upload the model, with a new version number, and update your twins to use that new model. Both the new and old versions of the model will exist in your instance until you delete one.
+ - **Use this strategy when** you want to make sure twins stay valid at all times through the model transition, or you want to keep a record of what versions a model has gone through. This is also a good choice if you have many models that depend on the model you want to update.
+* [Option 2: Delete old model and re-upload](#option-2-delete-old-model-and-re-upload): Delete the original model and upload the new model with the same name and ID (DTMI value) in its place. Completely replaces the old model with the new one.
+ - **Use this strategy when** you want to remove all record of the older model. Twins will be invalid for a short time while you're transitioning them from the old model to the new one.
+
+### Option 1: Upload new model version
+
+This option involves creating a new version of the model and uploading it to your instance.
+
+This **does not** overwrite earlier versions of the model, so multiple versions of the model will coexist in your instance until you [remove them](#remove-models). Since the new model version and the old model version coexist, twins can use either the new version of the model or the older version. This also means that uploading a new version of a model does not automatically affect existing twins. The existing twins will remain as instances of the old model version, and you can update these twins to the new model version by patching them.
+
+To use this strategy, follow the steps below.
+
+#### 1. Create and upload new model version
To create a new version of an existing model, start with the DTDL of the original model. Update, add, or remove the fields you want to change.
For example, if your previous model ID looked like this:
"@id": "dtmi:com:contoso:PatientRoom;1", ```
-version 2 of this model might look like this:
+Version 2 of this model might look like this:
```json "@id": "dtmi:com:contoso:PatientRoom;2", ```
-Then, upload the new version of the model to your instance.
+Then, [upload](#upload-models) the new version of the model to your instance.
+
+This version of the model will then be available in your instance to use for digital twins. It **does not** overwrite earlier versions of the model, so multiple versions of the model now coexist in your instance.
+
+#### 2. Update graph elements as needed
+
+Next, update the **twins and relationships** in your instance to use the new model version instead of the old. You can use the following instructions to [update twins](how-to-manage-twin.md#update-a-digital-twins-model) and [update relationships](how-to-manage-graph.md#update-relationships). The patch operation to update a twin's model will look something like this:
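A minimal sketch of such a patch, assuming the version-2 model ID from the earlier example and a hypothetical `roomTemperature` property added in the new version, might look like this:

```json
[
  {
    "op": "replace",
    "path": "/$metadata/$model",
    "value": "dtmi:com:contoso:PatientRoom;2"
  },
  {
    "op": "add",
    "path": "/roomTemperature",
    "value": 72
  }
]
```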
+>[!IMPORTANT]
+>When updating twins, use the **same patch** to update both the model ID (to the new model version) and any fields that must be altered on the twin to make it conform to the new model.
+
+You may also need to update **relationships** and other **models** in your instance that reference this model, to make them refer to the new model version. This will be another model update operation, so return to the beginning of this section and repeat the process for any additional models that need updating.
+
+#### 3. (Optional) Decommission or delete old model version
+
+If you won't be using the old model version anymore, you can [decommission](#decommissioning) the older model. This allows the model to remain in the instance, but it can't be used to create new digital twins.
+
+You can also [delete](#deletion) the old model completely if you don't want it in the instance anymore at all.
-This version of the model will then be available in your instance to use for digital twins. It **does not** overwrite earlier versions of the model, so multiple versions of the model will coexist in your instance until you [remove them](#remove-models).
+The sections linked above contain example code and considerations for decommissioning and deleting models.
-### Impact on twins
+### Option 2: Delete old model and re-upload
-When you create a new twin, since the new model version and the old model version coexist, the new twin can use either the new version of the model or the older version.
+Instead of incrementing the version of a model, you can delete a model completely and re-upload an edited model to the instance.
-This also means that uploading a new version of a model does not automatically affect existing twins. The existing twins will simply remain instances of the old model version.
+Azure Digital Twins doesn't remember the old model was ever uploaded, so this will be like uploading a completely new model. Twins in the graph that use the model will automatically switch over to the new definition once it's available. Depending on how the new definition differs from the old one, these twins may have properties and relationships that match the deleted definition and are not valid with the new one, so you may need to patch them to make sure they remain valid.
-You can update these existing twins to the new model version by patching them, as described in the [Update a digital twin's model](how-to-manage-twin.md#update-a-digital-twins-model) section of *How-to: Manage digital twins*. Within the same patch, you must update both the **model ID** (to the new version) and **any fields that must be altered on the twin to make it conform to the new model**.
+To use this strategy, follow the steps below.
+
+#### 1. Delete old model
+
+Since Azure Digital Twins does not allow two models with the same ID, start by deleting the original model from your instance.
+
+>[!NOTE]
+> If you have other models that depend on this model (through inheritance or components), you'll need to remove those references before you can delete the model. You can update those dependent models first to temporarily remove the references, or delete the dependent models and reupload them in a later step.
+
+Use the following instructions to [delete your original model](#deletion). This will leave your twins that were using that model temporarily "orphaned," as they're now using a model that no longer exists. This state will be repaired in the next step when you reupload the updated model.
+
+#### 2. Create and upload new model
+
+Start with the DTDL of the original model. Update, add, or remove the fields you want to change.
+
+Then, [upload the model](#upload-models) to the instance, as though it were a new model being uploaded for the first time.
+
+#### 3. Update graph elements as needed
+
+Now that your new model has been uploaded in place of the old one, the twins in your graph will automatically begin to use the new model definition once the caching in your instance expires and resets. **This process may take 10-15 minutes or longer**, depending on the size of your graph. After that, new and changed properties on your model should be accessible, and removed properties won't be accessible anymore.
+
+>[!NOTE]
+> If you removed other dependent models earlier in order to delete the original model, reupload them now after the cache has reset. If you updated the dependent models to temporarily remove references to the original model, you can update them again to put the reference back.
+
+Next, update the **twins and relationships** in your instance so their properties match the properties defined by the new model. There are two ways to do this:
+* Patch the twins and relationships as needed so they fit the new model. You can use the following instructions to [update twins](how-to-manage-twin.md#update-a-digital-twin) and [update relationships](how-to-manage-graph.md#update-relationships).
+ - **If you've added properties**: Updating twins and relationships to have the new values isn't required, since twins missing the new values will still be valid twins. You can patch them as desired to add values for the new properties.
+ - **If you've removed properties**: You must patch twins to remove the properties that are now invalid with the new model (see the sketch after this list).
+ - **If you've updated properties**: You must patch twins to update the values of changed properties to be valid with the new model.
+* Delete twins and relationships that use the model, and recreate them. You can use the following instructions to [delete twins](how-to-manage-twin.md#delete-a-digital-twin) and [recreate twins](how-to-manage-twin.md#create-a-digital-twin), and [delete relationships](how-to-manage-graph.md#delete-relationships) and [recreate relationships](how-to-manage-graph.md#create-relationships).
+ - You might want to do this if you're making a lot of changes to the model, and it will be difficult to update the existing twins to match it. However, recreation can be complicated if you have a lot of twins that are interconnected by many relationships.
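For example, a minimal JSON Patch sketch that removes a property no longer defined by the new model (the property name `obsoleteProperty` is purely illustrative):

```json
[
  {
    "op": "remove",
    "path": "/obsoleteProperty"
  }
]
```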
## Remove models
-Models can also be removed from the service, in one of two ways:
+Models can be removed from the service in one of two ways:
* **Decommissioning** : Once a model is decommissioned, you can no longer use it to create new digital twins. Existing digital twins that already use this model aren't affected, so you can still update them with things like property changes and adding or deleting relationships. * **Deletion** : This will completely remove the model from the solution. Any twins that were using this model are no longer associated with any valid model, so they're treated as though they don't have a model at all. You can still read these twins, but won't be able to make any updates on them until they're reassigned to a different model.
These are separate features and they do not impact each other, although they may
### Decommissioning
-Here is the code to decommission a model:
+To decommission a model, you can use the [DecommissionModel](/dotnet/api/azure.digitaltwins.core.digitaltwinsclient.decommissionmodel?view=azure-dotnet&preserve-view=true) method from the SDK:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="DecommissionModel":::
+You can also decommission a model using the REST API call [DigitalTwinModels Update](/rest/api/digital-twins/dataplane/models/digitaltwinmodels_update). The `decommissioned` property is the only property that can be replaced with this API call. The JSON Patch document will look something like this:
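A minimal sketch of that JSON Patch document, which sets `decommissioned` to true:

```json
[
  {
    "op": "replace",
    "path": "/decommissioned",
    "value": true
  }
]
```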
A model's decommissioning status is included in the `ModelData` records returned by the model retrieval APIs. ### Deletion You can delete all models in your instance at once, or you can do it on an individual basis.
-For an example of how to delete all models, download the sample app used in the [Tutorial: Explore the basics with a sample client app](tutorial-command-line-app.md). The *CommandLoop.cs* file does this in a `CommandDeleteAllModels` function.
+For an example of how to delete all models at the same time, see the [End-to-end samples for Azure Digital Twins](https://github.com/Azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/CommandLoop.cs) repository in GitHub. The *CommandLoop.cs* file contains a `CommandDeleteAllModels` function with code to delete all of the models in the instance.
-The rest of this section breaks down model deletion into closer detail, and shows how to do it for an individual model.
+To delete an individual model, follow the instructions and considerations from the rest of this section.
#### Before deletion: Deletion requirements
Even if a model meets the requirements to delete it immediately, you may want to
5. Wait for another few minutes to make sure the changes have percolated through 6. Delete the model
-To delete a model, use this call:
+To delete a model, you can use the [DeleteModel](/dotnet/api/azure.digitaltwins.core.digitaltwinsclient.deletemodel?view=azure-dotnet&preserve-view=true) SDK call:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="DeleteModel":::
+You can also delete a model with the [DigitalTwinModels Delete](/rest/api/digital-twins/dataplane/models/digitaltwinmodels_delete) REST API call.
+ #### After deletion: Twins without models Once a model is deleted, any digital twins that were using the model are now considered to be without a model. Note that there is no query that can give you a list of all the twins in this state, although you *can* still query the twins by the deleted model to know what twins are affected.
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online.md
To complete this tutorial, you need to:
![Cloud Shell button in the Azure portal](media/tutorial-postgresql-to-azure-postgresql-online/cloud-shell-button.png)
- * Install and run the CLI locally. CLI 2.0 is the command-line tool for managing Azure resources.
+ * Install and run the CLI locally. Azure CLI version 2.18 or above is required for managing the Azure resources needed for this migration.
- To download the CLI, follow the instructions in the article [Install Azure CLI 2.0](/cli/azure/install-azure-cli). The article also lists the platforms that support CLI 2.0.
+ To download the CLI, follow the instructions in the article [Install Azure CLI](/cli/azure/install-azure-cli). The article also lists the platforms that support Azure CLI.
To set up Windows Subsystem for Linux (WSL), follow the instructions in the [Windows 10 Installation Guide](/windows/wsl/install-win10)
-* Enable logical replication in the postgresql.config file, and set the following parameters:
+* Enable logical replication on the source server by editing the postgresql.conf file and setting the following parameters:
* wal_level = **logical** * max_replication_slots = [number of slots], recommend setting to **five slots**
To complete all the database objects like table schemas, indexes and stored proc
* When prompted, open a web browser and enter a code to authenticate your device. Follow the instructions as listed. * Add the dms extension:
- * To list the available extensions, run the following command:
-
- ```azurecli
- az extension list-available ΓÇôotable
- ```
+ * To list the available extensions, run the following command:
+
+ ```azurecli
+ az extension list-available -o table
+ ```
+ * To verify whether you already have the dms extension installed, run the following command:
+
+ ```azurecli
+ az extension list -o table
+ ```
- * To install the extension, run the following command:
+ * If an older version of the dms extension (earlier than 0.14.0) is installed, uninstall it by running the following command:
+
+ ```azurecli
+ az extension remove --name dms-preview
+ ```
- ```azurecli
- az extension add ΓÇôn dms-preview
- ```
+ * To install the latest version of the extension, run the following command:
+
+ ```azurecli
+ az extension add --name dms-preview
+ ```
* To verify that you have the dms extension installed correctly, run the following command: ```azurecli
- az extension list -otable
+ az extension list -o table
```
- You should see the following output:
+ You should see output similar to the following:
```output
- ExtensionType Name
-
- whl dms
+ ExtensionType    Name           Path                        Preview    Version
+ ---------------  -------------  --------------------------  ---------  ---------
+ whl              dms-preview    C:\....\dms-preview         True       0.14.0
``` > [!IMPORTANT]
- > Make sure that your extension version is above 0.11.0.
+ > Make sure that your extension version is 0.14.0 or above.
* At any time, view all commands supported in DMS by running:
To complete all the database objects like table schemas, indexes and stored proc
2. Provision an instance of DMS by running the following command: ```azurecli
- az dms create -l [location] -n <newServiceName> -g <yourResourceGroupName> --sku-name Premium_4vCores --subnet/subscriptions/{vnet subscription id}/resourceGroups/{vnet resource group}/providers/Microsoft.Network/virtualNetworks/{vnet name}/subnets/{subnet name} ΓÇôtags tagName1=tagValue1 tagWithNoValue
+ az dms create -l <location> -n <newServiceName> -g <yourResourceGroupName> --sku-name Premium_4vCores --subnet /subscriptions/{vnet subscription id}/resourceGroups/{vnet resource group}/providers/Microsoft.Network/virtualNetworks/{vnet name}/subnets/{subnet name} --tags tagName1=tagValue1 tagWithNoValue
``` For example the following command will create a service in:
To complete all the database objects like table schemas, indexes and stored proc
For example, the following command creates a project using these parameters:
- * Location: West Central US
- * Resource Group Name: PostgresDemo
- * Service Name: PostgresCLI
- * Project name: PGMigration
- * Source platform: PostgreSQL
- * Target platform: AzureDbForPostgreSql
+ * Location: West Central US
+ * Resource Group Name: PostgresDemo
+ * Service Name: PostgresCLI
+ * Project name: PGMigration
+ * Source platform: PostgreSQL
+ * Target platform: AzureDbForPostgreSql
```azurecli az dms project create -l westcentralus -n PGMigration -g PostgresDemo --service-name PostgresCLI --source-platform PostgreSQL --target-platform AzureDbForPostgreSql
To complete all the database objects like table schemas, indexes and stored proc
This step includes using the source IP, UserID and password, destination IP, UserID, password, and task type to establish connectivity.
- * To see a full list of options, run the command:
+ * To see a full list of options, run the command:
- ```azurecli
- az dms project task create -h
- ```
+ ```azurecli
+ az dms project task create -h
+ ```
- For both source and target connection, the input parameter is referring to a json file that has the object list.
+ For both the source and target connections, the input parameter refers to a JSON file that has the object list.
- The format of the connection JSON object for PostgreSQL connections.
+ The format of the connection JSON object for PostgreSQL connections is shown below:
- ```json
- {
- "userName": "user name", // if this is missing or null, you will be prompted
- "password": null, // if this is missing or null (highly recommended) you will
- be prompted
- "serverName": "server name",
- "databaseName": "database name", // if this is missing, it will default to the 'postgres'
- server
- "port": 5432 // if this is missing, it will default to 5432
- }
- ```
+ ```json
+ {
+ // if this is missing or null, you will be prompted
+ "userName": "user name",
+ // if this is missing or null (highly recommended) you will be prompted
+ "password": null,
+ "serverName": "server name",
+ // if this is missing, it will default to the 'postgres' database
+ "databaseName": "database name",
+ // if this is missing, it will default to 5432
+ "port": 5432
+ }
+ ```
- * There's also a database option json file that lists the json objects. For PostgreSQL, the format of the database options JSON object is shown below:
+ There's also a database options JSON file that lists the JSON objects. For PostgreSQL, the format of the database options JSON object is shown below:
- ```json
- [
- {
- "name": "source database",
- "target_database_name": "target database",
- },
- ...n
- ]
- ```
+ ```json
+ [
+ {
+ "name": "source database",
+ "target_database_name": "target database",
+ "selectedTables": [
+ "schemaName1.tableName1",
+ ...n
+ ]
+ },
+ ...n
+ ]
+ ```
- * Create a json file with Notepad, copy the following commands and paste them into the file, and then save the file in C:\DMS\source.json.
+ * To create the source connection JSON, open Notepad, copy the following JSON, and paste it into the file. Save the file as C:\DMS\source.json after modifying it according to your source server.
```json
- {
- "userName": "postgres",
- "password": null,
- be prompted
- "serverName": "13.51.14.222",
- "databaseName": "dvdrental",
- "port": 5432
- }
+ {
+ "userName": "postgres",
+ "password": null,
+ "serverName": "13.51.14.222",
+ "databaseName": "dvdrental",
+ "port": 5432
+ }
```
- * Create another file named target.json and save as C:\DMS\target.json. Include the following commands:
+ * To create the target connection JSON, open Notepad, copy the following JSON, and paste it into the file. Save the file as C:\DMS\target.json after modifying it according to your target server.
- ```json
- {
- "userName": " dms@builddemotarget",
- "password": null,
- "serverName": " builddemotarget.postgres.database.azure.com",
- "databaseName": "inventory",
- "port": 5432
- }
- ```
+ ```json
+ {
+ "userName": " dms@builddemotarget",
+ "password": null,
+ "serverName": " builddemotarget.postgres.database.azure.com",
+ "databaseName": "inventory",
+ "port": 5432
+ }
+ ```
- * Create a database options json file that lists inventory as the database to migrate:
+ * Create a database options JSON file that lists the databases to migrate and their mappings:
- ```json
- [
- {
- "name": "dvdrental",
- "target_database_name": "dvdrental",
- }
- ]
- ```
+ * Create a list of tables to be migrated, or use a SQL query to generate the list from the source database. A sample query to generate the list of tables is given below as an example. If you use this query, remember to remove the trailing comma after the last table name to make it a valid JSON array.
+
+ ```sql
+ SELECT
+ FORMAT('%s,', REPLACE(FORMAT('%I.%I', schemaname, tablename), '"', '\"')) AS SelectedTables
+ FROM
+ pg_tables
+ WHERE
+ schemaname NOT IN ('pg_catalog', 'information_schema');
+ ```
- * Run the following command, which takes in the source, destination, and the DB option json files.
+ * Create the database options JSON file with one entry for each database, containing the source and target database names and the list of selected tables to be migrated. You can use the output of the SQL query above to populate the *"selectedTables"* array. **Note that if the selected tables list is empty, then no tables will be migrated**.
+
+ ```json
+ [
+ {
+ "name": "dvdrental",
+ "target_database_name": "dvdrental",
+ "selectedTables": [
+ "schemaName1.tableName1",
+ "schemaName1.tableName2",
+ ...
+ "schemaNameN.tableNameM"
+ ]
+ },
+ ... n
+ ]
+ ```
- ```azurecli
- az dms project task create -g PostgresDemo --project-name PGMigration --source-platform postgresql --target-platform azuredbforpostgresql --source-connection-json c:\DMS\source.json --database-options-json C:\DMS\option.json --service-name PostgresCLI --target-connection-json c:\DMS\target.json ΓÇôtask-type OnlineMigration -n runnowtask
- ```
+ * Run the following command, which takes in the source connection, target connection, and the database options json files.
- At this point, you've successfully submitted a migration task.
+ ```azurecli
+ az dms project task create -g PostgresDemo --project-name PGMigration --source-connection-json c:\DMS\source.json --database-options-json C:\DMS\option.json --service-name PostgresCLI --target-connection-json c:\DMS\target.json --task-type OnlineMigration -n runnowtask
+ ```
+
+ At this point, you've successfully submitted a migration task.
7. To show progress of the task, run the following command:
- ```azurecli
- az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask
- ```
+ * To see a short summary of the general task status:
+ ```azurecli
+ az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name runnowtask
+ ```
- OR
+ * To see the detailed task status, including the migration progress information:
- ```azurecli
- az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask --expand output
- ```
+ ```azurecli
+ az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name runnowtask --expand output
+ ```
-8. You can also query for the migrationState from the expand output:
+8. You can also use the [JMESPath](https://jmespath.org/) query format to extract only the migrationState from the expanded output:
```azurecli
- az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask --expand output --query 'properties.output[].migrationState | [0]' "READY_TO_COMPLETE"
+ az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name runnowtask --expand output --query 'properties.output[].migrationState'
```
-In the output file, there are several parameters that indicate progress of migration. For example, see the output file below:
-
- ```output
- "output": [ // Database Level
- {
- "appliedChanges": 0, // Total incremental sync applied after full load
- "cdcDeleteCounter": 0 // Total delete operation applied after full load
- "cdcInsertCounter": 0, // Total insert operation applied after full load
- "cdcUpdateCounter": 0, // Total update operation applied after full load
+In the output, there are several parameters that indicate progress of different migration steps. For example, see the output below:
+
+```json
+{
+ "output": [
+ // Database Level
+ {
+ "appliedChanges": 0, // Total incremental sync applied after full load
+ "cdcDeleteCounter": 0, // Total delete operation applied after full load
+ "cdcInsertCounter": 0, // Total insert operation applied after full load
+ "cdcUpdateCounter": 0, // Total update operation applied after full load
"databaseName": "inventory", "endedOn": null,
- "fullLoadCompletedTables": 2, //Number of tables completed full load
- "fullLoadErroredTables": 0, //Number of tables that contain migration error
- "fullLoadLoadingTables": 0, //Number of tables that are in loading status
- "fullLoadQueuedTables": 0, //Number of tables that are in queued status
+ "fullLoadCompletedTables": 2, //Number of tables completed full load
+ "fullLoadErroredTables": 0, //Number of tables that contain migration error
+ "fullLoadLoadingTables": 0, //Number of tables that are in loading status
+ "fullLoadQueuedTables": 0, //Number of tables that are in queued status
"id": "db|inventory",
- "incomingChanges": 0, //Number of changes after full load
+ "incomingChanges": 0, //Number of changes after full load
"initializationCompleted": true, "latency": 0,
- "migrationState": "READY_TO_COMPLETE", //Status of migration task. READY_TO_COMPLETE means the database is ready for cutover
+ //Status of migration task
+ "migrationState": "READY_TO_COMPLETE", //READY_TO_COMPLETE => the database is ready for cutover
"resultType": "DatabaseLevelOutput", "startedOn": "2018-07-05T23:36:02.27839+00:00"
- },
- {
+ }, {
"databaseCount": 1, "endedOn": null, "id": "dd27aa3a-ed71-4bff-ab34-77db4261101c",
In the output file, there are several parameters that indicate progress of migra
"state": "PENDING", "targetServer": "builddemotarget.postgres.database.azure.com", "targetVersion": "Azure Database for PostgreSQL"
- },
- { // Table 1
+ },
+ // Table 1
+ {
"cdcDeleteCounter": 0, "cdcInsertCounter": 0, "cdcUpdateCounter": 0, "dataErrorsCount": 0, "databaseName": "inventory",
- "fullLoadEndedOn": "2018-07-05T23:36:20.740701+00:00", //Full load completed time
+ "fullLoadEndedOn": "2018-07-05T23:36:20.740701+00:00", //Full load completed time
"fullLoadEstFinishTime": "1970-01-01T00:00:00+00:00",
- "fullLoadStartedOn": "2018-07-05T23:36:15.864552+00:00", //Full load started time
- "fullLoadTotalRows": 10, //Number of rows loaded in full load
- "fullLoadTotalVolumeBytes": 7056, //Volume in Bytes in full load
+ "fullLoadStartedOn": "2018-07-05T23:36:15.864552+00:00", //Full load started time
+ "fullLoadTotalRows": 10, //Number of rows loaded in full load
+ "fullLoadTotalVolumeBytes": 7056, //Volume in Bytes in full load
"id": "or|inventory|public|actor", "lastModifiedTime": "2018-07-05T23:36:16.880174+00:00", "resultType": "TableLevelOutput",
- "state": "COMPLETED", //State of migration for this table
- "tableName": "public.catalog", //Table name
- "totalChangesApplied": 0 //Total sync changes that applied after full load
- },
- { //Table 2
+ "state": "COMPLETED", //State of migration for this table
+ "tableName": "public.catalog", //Table name
+ "totalChangesApplied": 0 //Total sync changes that applied after full load
+ },
+ //Table 2
+ {
"cdcDeleteCounter": 0, "cdcInsertCounter": 50, "cdcUpdateCounter": 0,
In the output file, there are several parameters that indicate progress of migra
"state": "COMPLETED", "tableName": "public.orders", "totalChangesApplied": 0
- }
- ], // DMS migration task state
- "state": "Running", //Migration task state ΓÇô Running means it is still listening to any changes that might come in
- "taskType": null
- },
- "resourceGroup": "PostgresDemo",
- "type": "Microsoft.DataMigration/services/projects/tasks"
- ```
+ }
+ ],
+ // DMS migration task state
+ "state": "Running", //Running => service is still listening to any changes that might come in
+ "taskType": null
+}
+```
## Cutover migration task The database is ready for cutover when full load is complete. Depending on how busy the source server is with incoming transactions, the DMS task might still be applying changes after the full load is complete.
-To ensure all data is caught up, validate row counts between the source and target databases. For example, you can use the following command:
+To ensure all data is caught up, validate row counts between the source and target databases. For example, you can validate the following details from the status output:
```
-"migrationState": "READY_TO_COMPLETE", //Status of migration task. READY_TO_COMPLETE means database is ready for cutover
- "incomingChanges": 0, //continue to check for a period of 5-10 minutes to make sure no new incoming changes that need to be applied to the target server
- "fullLoadTotalRows": 10, //full load for table 1
- "cdcDeleteCounter": 0, //delete, insert and update counter on incremental sync after full load
- "cdcInsertCounter": 50,
- "cdcUpdateCounter": 0,
- "fullLoadTotalRows": 112, //full load for table 2
+Database Level
+"migrationState": "READY_TO_COMPLETE" => Status of migration task. READY_TO_COMPLETE means database is ready for cutover
+"incomingChanges": 0 => Check for a period of 5-10 minutes to ensure no new incoming changes need to be applied to the target server
+
+Table Level (for each table)
+"fullLoadTotalRows": 10 => The row count matches the initial row count of the table
+"cdcDeleteCounter": 0 => Number of deletes after the full load
+"cdcInsertCounter": 50 => Number of inserts after the full load
+"cdcUpdateCounter": 0 => Number of updates after the full load
``` 1. Perform the cutover database migration task by using the following command:
To ensure all data is caught up, validate row counts between the source and targ
az dms project task cutover -h ```
- For example:
+ For example, the following command will initiate the cut-over for the 'Inventory' database:
```azurecli
- az dms project task cutover --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask --object-name Inventory
+ az dms project task cutover --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name runnowtask --object-name Inventory
``` 2. To monitor the cutover progress, run the following command: ```azurecli
- az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask
+ az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name runnowtask
``` 3. When the database migration status shows **Completed**, [recreate sequences](https://wiki.postgresql.org/wiki/Fixing_Sequences) (if applicable), and connect your applications to the new target instance of Azure Database for PostgreSQL.
If you need to cancel or delete any DMS task, project, or service, perform the c
1. To cancel a running task, use the following command: ```azurecli
- az dms project task cancel --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask
+ az dms project task cancel --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name runnowtask
``` 2. To delete a running task, use the following command: ```azurecli
- az dms project task delete --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask
+ az dms project task delete --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name runnowtask
``` 3. To cancel a running project, use the following command:
event-hubs Configure Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/configure-customer-managed-key.md
In this step, you will update the Event Hubs namespace with key vault informatio
"clusterArmId":"[resourceId('Microsoft.EventHub/clusters', parameters('clusterName'))]", "encryption":{ "keySource":"Microsoft.KeyVault",
+ "requireInfrastructureEncryption":"boolean",
"keyVaultProperties":[ { "keyName":"[parameters('keyName')]",
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-scalability.md
The throughput capacity of Event Hubs is controlled by *throughput units*. Throu
Beyond the capacity of the purchased throughput units, ingress is throttled and a [ServerBusyException](/dotnet/api/microsoft.azure.eventhubs.serverbusyexception) is returned. Egress does not produce throttling exceptions, but is still limited to the capacity of the purchased throughput units. If you receive publishing rate exceptions or are expecting to see higher egress, be sure to check how many throughput units you have purchased for the namespace. You can manage throughput units on the **Scale** blade of the namespaces in the [Azure portal](https://portal.azure.com). You can also manage throughput units programmatically using the [Event Hubs APIs](./event-hubs-samples.md).
-Throughput units are pre-purchased and are billed per hour. Once purchased, throughput units are billed for a minimum of one hour. Up to 20 throughput units can be purchased for an Event Hubs namespace and are shared across all event hubs in that namespace.
+Throughput units are pre-purchased and are billed per hour. Once purchased, throughput units are billed for a minimum of one hour. Up to 40 throughput units can be purchased for an Event Hubs namespace and are shared across all event hubs in that namespace.
The **Auto-inflate** feature of Event Hubs automatically scales up by increasing the number of throughput units, to meet usage needs. Increasing throughput units prevents throttling scenarios, in which:
expressroute Monitor Expressroute Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/monitor-expressroute-reference.md
This section lists all the automatically collected platform metrics for ExpressR
> Using *GlobalGlobalReachBitsInPerSecond* and *GlobalGlobalReachBitsOutPerSecond* will only be visible if at least one Global Reach connection is established. >
-## Metric Dimensions
+## Metric dimensions
For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
When reviewing any metrics through Log Analytics, the output will contain the fo
|Average|real|Equal to (Minimum + Maximum)/2| |Total|real|Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried)|
-## See Also
+## See also
- See [Monitoring Azure ExpressRoute](monitor-expressroute.md) for a description of monitoring Azure ExpressRoute. - See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
expressroute Monitor Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/monitor-expressroute.md
For reference, you can see a list of [all resource metrics supported in Azure Mo
Once a metric is selected, the default aggregation will be applied. Optionally, you can apply splitting, which will show the metric with different dimensions.
-### Aggregation Types:
+### Aggregation types
Metrics explorer supports SUM, MAX, MIN, AVG, and COUNT as [aggregation types](../azure-monitor/essentials/metrics-charts.md#aggregation). Use the recommended Aggregation type when reviewing the insights for each ExpressRoute metric.
You can view metrics across all peerings on a given ExpressRoute circuit.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/ermetricspeering.jpg" alt-text="circuit metrics":::
-#### Bits In and Out - Metrics per peering
+#### Bits in and out - metrics per peering
Aggregation type: *Avg*
You can view metrics for private, public, and Microsoft peering in bits/second.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/erpeeringmetrics.jpg" alt-text="metrics per peering":::
-#### BGP Availability - Split by Peer
+#### BGP availability - split by peer
Aggregation type: *Avg*
You can view near to real-time availability of BGP (Layer-3 connectivity) across
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/erBgpAvailabilityMetrics.jpg" alt-text="BGP availability per peer":::
-### ARP Availability - Split by Peering
+### ARP availability - split by peering
Aggregation type: *Avg*
You can view near to real-time availability of [ARP](./expressroute-troubleshoot
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/erArpAvailabilityMetrics.jpg" alt-text="ARP availability per peer":::
-### ExpressRoute Direct Metrics
+### ExpressRoute Direct metrics
#### Admin State - Split by link
You can view the Admin state for each link of the ExpressRoute Direct port pair.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/adminstate-per-link.jpg" alt-text="ER Direct admin state":::
-#### Bits In Per Second - Split by link
+#### Bits in per second - split by link
Aggregation type: *Avg*
You can view the bits in per second across both links of the ExpressRoute Direct
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/bits-in-per-second-per-link.jpg" alt-text="ER Direct bits in per second":::
-#### Bits Out Per Second - Split by link
+#### Bits out per second - split by link
Aggregation type: *Avg*
You can also view the bits out per second across both links of the ExpressRoute
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/bits-out-per-second-per-link.jpg" alt-text="ER Direct bits out per second":::
-#### Line Protocol - Split by link
+#### Line protocol - split by link
Aggregation type: *Avg*
You can view the line protocol across each link of the ExpressRoute Direct port
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/line-protocol-per-link.jpg" alt-text="ER Direct line protocol":::
-#### Rx Light Level - Split by link
+#### Rx light level - split by link
Aggregation type: *Avg*
You can view the Rx light level (the light level that the ExpressRoute Direct po
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/rxlight-level-per-link.jpg" alt-text="ER Direct line Rx Light Level":::
-#### Tx Light Level - Split by link
+#### Tx light level - split by link
Aggregation type: *Avg*
You can view the Tx light level (the light level that the ExpressRoute Direct po
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/txlight-level-per-link.jpg" alt-text="ER Direct line Tx Light Level":::
-### ExpressRoute Virtual Network Gateway Metrics
+### ExpressRoute virtual network gateway metrics
Aggregation type: *Avg*
When you deploy an ExpressRoute gateway, Azure manages the compute and functions
It's highly recommended you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues.
-#### CPU Utilization - Split Instance
+#### CPU utilization - split instance
Aggregation type: *Avg*
You can view the CPU utilization of each gateway instance. The CPU utilization m
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/cpu-split.jpg" alt-text="Screenshot of CPU utilization - split metrics.":::
-#### Packets Per Second - Split by Instance
+#### Packets per second - split by instance
Aggregation type: *Avg*
This metric captures the number of inbound packets traversing the ExpressRoute g
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/pps-split.jpg" alt-text="Screenshot of packets per second - split metrics.":::
-#### Count of Routes Advertised to Peer - Split by Instance
+#### Count of routes advertised to peer - split by instance
Aggregation type: *Count*
This metric is the count for the number of routes the ExpressRoute gateway is ad
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-advertised-to-peer.png" alt-text="Screenshot of count of routes advertised to peer.":::
-#### Count of Routes Learned from Peer - Split by Instance
+#### Count of routes learned from peer - split by instance
Aggregation type: *Max*
This metric shows the number of routes the ExpressRoute gateway is learning from
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-learned-from-peer.png" alt-text="Screenshot of count of routes learned from peer.":::
-#### Frequency of Routes change - Split by Instance
+#### Frequency of routes change - split by instance
Aggregation type: *Sum*
This metric shows the frequency of routes being learned from or advertised to re
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/frequency-of-routes-changed.png" alt-text="Screenshot of frequency of routes changed metric.":::
-#### Number of VMs in the Virtual Network
+#### Number of VMs in the virtual network
Aggregation type: *Max*
hdinsight Apache Hadoop Deep Dive Advanced Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-deep-dive-advanced-analytics.md
Follow [this tutorial](../spark/apache-spark-microsoft-cognitive-toolkit.md) to
Apache Hive and Azure Machine Learning
-* [Apache Hive and Azure Machine Learning end-to-end](../../machine-learning/team-data-science-process/hive-walkthrough.md)
-* [Using an Azure HDInsight Hadoop Cluster on a 1-TB dataset](../../machine-learning/team-data-science-process/hive-criteo-walkthrough.md)
+* [Apache Hive and Azure Machine Learning end-to-end](/azure/architecture/data-science-process/hive-walkthrough)
+* [Using an Azure HDInsight Hadoop Cluster on a 1-TB dataset](/azure/architecture/data-science-process/hive-criteo-walkthrough)
Apache Spark and MLLib
-* [Machine learning with Apache Spark on HDInsight](../../machine-learning/team-data-science-process/spark-overview.md)
+* [Machine learning with Apache Spark on HDInsight](/azure/architecture/data-science-process/spark-overview)
* [Apache Spark with Machine Learning: Use Apache Spark in HDInsight for analyzing building temperature using HVAC data](../spark/apache-spark-ipython-notebook-machine-learning.md) * [Apache Spark with Machine Learning: Use Apache Spark in HDInsight to predict food inspection results](../spark/apache-spark-machine-learning-mllib-ipython.md)
hdinsight Hdinsight Machine Learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-machine-learning-overview.md
Azure Machine Learning provides tools to model predictive analytics, and a fully
:::image type="content" source="./media/hdinsight-machine-learning-overview/azure-machine-learning.png" alt-text="Microsoft Azure machine learning overview" border="false":::
-Create features for data in an HDInsight Hadoop cluster using [Hive queries](../machine-learning/team-data-science-process/create-features-hive.md). *Feature engineering* attempts to increase the predictive power of learning algorithms by creating features from raw data that facilitate the learning process. You can run HiveQL queries from Azure Machine Learning Studio (classic), and access data processed in Hive and stored in blob storage, by using the [Import Data module](../machine-learning/classic/import-data.md).
+Create features for data in an HDInsight Hadoop cluster using [Hive queries](/azure/architecture/data-science-process/create-features-hive). *Feature engineering* attempts to increase the predictive power of learning algorithms by creating features from raw data that facilitate the learning process. You can run HiveQL queries from Azure Machine Learning Studio (classic), and access data processed in Hive and stored in blob storage, by using the [Import Data module](../machine-learning/classic/import-data.md).
## Microsoft Cognitive Toolkit
To help advance its own work in deep learning, Microsoft developed the free, ea
* [Apache Spark with Machine Learning: Use Spark in HDInsight for analyzing building temperature using HVAC data](spark/apache-spark-ipython-notebook-machine-learning.md) * [Apache Spark with Machine Learning: Use Spark in HDInsight to predict food inspection results](spark/apache-spark-machine-learning-mllib-ipython.md) * [Generate movie recommendations with Apache Mahout](hadoop/apache-hadoop-mahout-linux-mac.md)
-* [Apache Hive and Azure Machine Learning](../machine-learning/team-data-science-process/create-features-hive.md)
-* [Apache Hive and Azure Machine Learning end-to-end](../machine-learning/team-data-science-process/hive-walkthrough.md)
-* [Machine learning with Apache Spark on HDInsight](../machine-learning/team-data-science-process/spark-overview.md)
+* [Apache Hive and Azure Machine Learning](/azure/architecture/data-science-process/create-features-hive)
+* [Apache Hive and Azure Machine Learning end-to-end](/azure/architecture/data-science-process/hive-walkthrough)
+* [Machine learning with Apache Spark on HDInsight](/azure/architecture/data-science-process/spark-overview)
### Deep learning resources
hdinsight Apache Spark Creating Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-creating-ml-pipelines.md
The `model` object can now be used to make predictions. For the full sample of
## See also
-* [Data Science using Scala and Apache Spark on Azure](../../machine-learning/team-data-science-process/scala-walkthrough.md)
+* [Data Science using Scala and Apache Spark on Azure](/azure/architecture/data-science-process/scala-walkthrough)
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-set-up-template.md
In an IoT Central application, a device template uses a device model to describe
> IoT Central requires the full model with all the referenced interfaces in the same file. When you import a model from the model repository, use the keyword "expanded" to get the full version. For example, https://devicemodels.azure.com/dtmi/com/example/thermostat-1.expanded.json -- Author a device model using the [Digital Twins Definition Language (DTDL) - version 2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Visual Studio code has an extension that supports authoring DTDL models. To learn more, see [Install and use the DTDL authoring tools](../../iot-pnp/howto-use-dtdl-authoring-tools.md). Then publish the model to the public model repository. To learn more, see [Device model repository](../../iot-pnp/concepts-model-repository.md). Implement your device code from the model, and connect your real device to your IoT Central application. IoT Central finds and imports the device model from the public repository for you and generates a device template. You can then add any cloud properties, customizations, and views your IoT Central application needs to the device template.
+- Author a device model using the [Digital Twins Definition Language (DTDL) - version 2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Visual Studio code has an extension that supports authoring DTDL models. To learn more, see [Lifecycle and tools](../../iot-pnp/concepts-modeling-guide.md#lifecycle-and-tools). Then publish the model to the public model repository. To learn more, see [Device model repository](../../iot-pnp/concepts-model-repository.md). Implement your device code from the model, and connect your real device to your IoT Central application. IoT Central finds and imports the device model from the public repository for you and generates a device template. You can then add any cloud properties, customizations, and views your IoT Central application needs to the device template.
- Author a device model using the DTDL. Implement your device code from the model. Manually import the device model into your IoT Central application, and then add any cloud properties, customizations, and views your IoT Central application needs. > [!TIP]
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
As well as enabling downstream devices to communicate with your IoT Central appl
* Respond to writable property updates made by an operator. For example, an operator could change the telemetry send interval. * Respond to commands, such as rebooting the device.
+In this tutorial, you learn how to:
+ > [!div class="checklist"]
-> Create downstream device templates
-> Create a gateway device template
-> Publish the device template
-> Create the simulated devices
+>
+> * Create downstream device templates
+> * Create a gateway device template
+> * Publish the device template
+> * Create the simulated devices
## Prerequisites
To add a new gateway device template to your application:
1. Select **Save**. - ### Add relationships Next you add relationships to the templates for the downstream device templates:
Both your simulated downstream devices are now connected to your simulated gatew
![Downstream devices view](./media/tutorial-define-gateway-device-type/downstream-device-view.png)
+## Connect real downstream devices
+
+In the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial, the sample code shows how to include the model ID from the device template in the provisioning payload the device sends. The model ID lets IoT Central associate the device with the correct device template. For example:
+
+```python
+async def provision_device(provisioning_host, id_scope, registration_id, symmetric_key, model_id):
+ provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
+ provisioning_host=provisioning_host,
+ registration_id=registration_id,
+ id_scope=id_scope,
+ symmetric_key=symmetric_key,
+ )
+
+ provisioning_device_client.provisioning_payload = {"modelId": model_id}
+ return await provisioning_device_client.register()
+```
+
+When you connect a downstream device, you can modify the provisioning payload to include the ID of the gateway device. The model ID lets IoT Central associate the device with the correct downstream device template. The gateway ID lets IoT Central establish the relationship between the downstream device and its gateway. In this case, the provisioning payload the device sends looks like the following JSON:
+
+```json
+{
+ "iotcModelId": "dtmi:rigado:S1Sensor;2",
+ "iotcGateway":{
+ "iotcGatewayId": "gateway-device-001"
+ }
+}
+```
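+
+As a minimal sketch (assuming the same Python sample shown earlier and a hypothetical helper name `provision_downstream_device`), the provisioning call could include both values in the payload:
+
+```python
+from azure.iot.device.aio import ProvisioningDeviceClient
+
+
+async def provision_downstream_device(provisioning_host, id_scope, registration_id,
+                                       symmetric_key, model_id, gateway_id):
+    provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
+        provisioning_host=provisioning_host,
+        registration_id=registration_id,
+        id_scope=id_scope,
+        symmetric_key=symmetric_key,
+    )
+
+    # Include both the model ID and the ID of the gateway device in the payload.
+    provisioning_device_client.provisioning_payload = {
+        "iotcModelId": model_id,
+        "iotcGateway": {"iotcGatewayId": gateway_id},
+    }
+    return await provisioning_device_client.register()
+```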
## Clean up resources
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
Create the application using the following steps:
:::image type="content" source="media/tutorial-iot-central-connected-logistics/iotc-retail-homepage.png" alt-text="Connected logistics template":::
-1. Select **Create app** under **Connected Logistics Application**.
+1. Select **Create app** under **Connected Logistics**.
1. **Create app** opens the **New application** form. Enter the following details:
Create the application using the following steps:
## Walk through the application
-Below is the screenshot showing how to select the connected logistics application template.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing how to select the connected logistics application template](./media/tutorial-iot-central-connected-logistics/iotc-retail-homepage.png)
- The following sections walk you through the key features of the application. ### Dashboard
The dashboard enables two different gateway device management operations:
* View the logistics routes for truck shipments and the location details of ocean shipments. * View the gateway status and other relevant information. * You can track the total number of gateways, active, and unknown tags. * You can do device management operations such as: update firmware, disable and enable sensors, update a sensor threshold, update telemetry intervals, and update device service contracts. * View device battery consumption. #### Device Template
Select **Device templates** to see the gateway capability model. A capability mo
**Gateway Telemetry & Property** - This interface defines all the telemetry related to sensors, location, and device information. The interface also defines device twin property capabilities such as sensor thresholds and update intervals. **Gateway Commands** - This interface organizes all the gateway command capabilities: ### Rules
Select the **Rules** tab to see the rules in this application template. These rules
**Gateway theft alert**: This rule triggers when there's unexpected light detection by the sensors during the journey. Operators must be notified immediately to investigate potential theft.
-**Unresponsive Gateway**: This rule triggers if the gateway doesn't report to the cloud for a prolonged period. The gateway could be unresponsive because of low battery, loss of connectivity, or device damage.
+**Lost gateway alert**: This rule triggers if the gateway doesn't report to the cloud for a prolonged period. The gateway could be unresponsive because of low battery, loss of connectivity, or device damage.
:::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-rules.png" alt-text="Rule definitions"::: ### Jobs
-Select the **Jobs** tab to see the jobs in this application:
+Select the **Jobs** tab to create jobs in this application. The following screenshot shows an example of the jobs created.
:::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-jobs.png" alt-text="Jobs to run":::
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
Your signing certificates are now trusted on the Windows-based device and the fu
| **Attestation Type** | Select **Certificate** | | **IoT Edge device** | Select **False** | | **Certificate Type** | Select **Intermediate Certificate** |
- | **Primary certificate .pem or .cer file** | Navigate to the intermediate you created earlier (*./certs/azure-iot-test-only.intermediate.cert.pem*) |
+ | **Primary certificate .pem or .cer file** | Navigate to the intermediate certificate you created earlier (*./certs/azure-iot-test-only.intermediate.cert.pem*). This intermediate certificate is signed by the root certificate that you already uploaded and verified, so DPS trusts it. Because DPS can verify that the intermediate certificate provided with this enrollment group is signed by the trusted root, it can also verify and trust the leaf certificates signed by that intermediate. |
## Configure the provisioning device code
iot-edge How To Auto Provision Simulated Device Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-simulated-device-linux.md
Once the runtime is installed on your device, configure the device with the info
1. Know your DPS **ID Scope** and device **Registration ID** that were gathered in the previous sections.
+1. Create a configuration file for your device based on a template file that is provided as part of the IoT Edge installation.
+
+ ```bash
+ sudo cp /etc/aziot/config.toml.edge.template /etc/aziot/config.toml
+ ```
+ 1. Open the configuration file on the IoT Edge device. ```bash
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
Make sure that the user **iotedge** has read permissions for the directory holdi
``` >[!TIP]
- >If the config file doesn't exist on your device yet, use `/etc/aziot/config.toml.edge.template` as a template to create one.
+ >If the config file doesn't exist on your device yet, use the following command to create it based on the template file:
+ >
+ >```bash
+ >sudo cp /etc/aziot/config.toml.edge.template /etc/aziot/config.toml
+ >```
1. Find the **Hostname** section in the config file. Uncomment the line that contains the `hostname` parameter, and update the value to be the fully qualified domain name (FQDN) or the IP address of the IoT Edge device.
iot-edge How To Continuous Integration Continuous Deployment Classic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-continuous-integration-continuous-deployment-classic.md
In this article, you learn how to use the built-in [Azure IoT Edge tasks](/azure
| Generate deployment manifest | Takes a deployment.template.json file and the variables, then generates the final IoT Edge deployment manifest file. | | Deploy to IoT Edge devices | Creates IoT Edge deployments to one or more IoT Edge devices. |
-Unless otherwise specified, the procedures in this article do not explore all the functionality available through task parameters. For more information, see the following:
+Unless otherwise specified, the procedures in this article do not explore all the functionality available through task parameters. For more information, see the following resources:
* [Task version](/azure/devops/pipelines/process/tasks?tabs=classic#task-versions) * **Advanced** - If applicable, specify modules that you do not want built.
Unless otherwise specified, the procedures in this article do not explore all th
## Create a build pipeline for continuous integration
-In this section, you create a new build pipeline. You configure the pipeline to run automatically when you check in any changes to the sample IoT Edge solution and to publish build logs.
+In this section, you create a new build pipeline. You configure the pipeline to run automatically and publish build logs whenever you check in changes to the IoT Edge solution.
1. Sign in to your Azure DevOps organization (`https://dev.azure.com/{your organization}`) and open the project that contains your IoT Edge solution repository.
In this section, you create a new build pipeline. You configure the pipeline to
7. Select the first **Azure IoT Edge** task to edit it. This task builds all modules in the solution with the target platform that you specify. Edit the task with the following values:
- | Parameter | Description |
- | | |
- | Display name | The display name is automatically updated when the Action field changes. |
- | Action | Select **Build module images**. |
- | .template.json file | Select the ellipsis (**...**) and navigate to the **deployment.template.json** file in the repository that contains your IoT Edge solution. |
- | Default platform | Select the appropriate operating system for your modules based on your targeted IoT Edge device. |
- | Output variables | Provide a reference name to associate with the file path where your deployment.json file generates, such as **edge**. |
+ | Parameter | Description |
+ | | |
+ | Display name | The display name is automatically updated when the Action field changes. |
+ | Action | Select **Build module images**. |
+ | .template.json file | Select the ellipsis (**...**) and navigate to the **deployment.template.json** file in the repository that contains your IoT Edge solution. |
+ | Default platform | Select the appropriate operating system for your modules based on your targeted IoT Edge device. |
+ | Output variables | Provide a reference name to associate with the file path where your deployment.json file generates, such as **edge**. |
+
+ For more information about this task and its parameters, see [Azure IoT Edge task](/azure/devops/pipelines/tasks/build/azure-iot-edge).
These configurations use the image repository and tag that are defined in the `module.json` file to name and tag the module image. **Build module images** also replaces the variables with the exact values you define in the `module.json` file. In Visual Studio or Visual Studio Code, you specify the actual value in a `.env` file. In Azure Pipelines, you set the value on the **Pipeline Variables** tab. Select the **Variables** tab on the pipeline editor menu and configure the name and value as follows:
- * **ACR_ADDRESS**: Your Azure Container Registry **Login server** value. You can retrieve the Login server from the Overview page of your container registry in the Azure portal.
+ * **ACR_ADDRESS**: Your Azure Container Registry **Login server** value. You can retrieve the login server value from the overview page of your container registry in the Azure portal.
- If you have other variables in your project, you can specify the name and value on this tab. **Build module images** recognizes only variables that are in `${VARIABLE}` format. Make sure you use this format in your `**/module.json` files.
+ If you have other variables in your project, you can specify the name and value on this tab. **Build module images** recognizes only variables that are in `${VARIABLE}` format. Make sure you use this format in your `**/module.json` files.
8. Select the second **Azure IoT Edge** task to edit it. This task pushes all module images to the container registry that you select.
- | Parameter | Description |
- | | |
- | Display name | The display name is automatically updated when the Action field changes. |
- | Action | Select **Push module images**. |
- | Container registry type | Use the default type: `Azure Container Registry`. |
- | Azure subscription | Choose your subscription. |
- | Azure Container Registry | Select the type of container registry that you use to store your module images. Depending on which registry type you choose, the form changes. If you choose **Azure Container Registry**, use the dropdown lists to select the Azure subscription and the name of your container registry. If you choose **Generic Container Registry**, select **New** to create a registry service connection. |
- | .template.json file | Select the ellipsis (**...**) and navigate to the **deployment.template.json** file in the repository that contains your IoT Edge solution. |
- | Default platform | Select the appropriate operating system for your modules based on your targeted IoT Edge device. |
- | Add registry credential to deployment manifest | Specify true to add the registry credential for pushing docker images to deployment manifest. |
+ | Parameter | Description |
+ | | |
+ | Display name | The display name is automatically updated when the Action field changes. |
+ | Action | Select **Push module images**. |
+ | Container registry type | Use the default type: `Azure Container Registry`. |
+ | Azure subscription | Choose your subscription. |
+ | Azure Container Registry | Select the type of container registry that you use to store your module images. Depending on which registry type you choose, the form changes. If you choose **Azure Container Registry**, use the dropdown lists to select the Azure subscription and the name of your container registry. If you choose **Generic Container Registry**, select **New** to create a registry service connection. |
+ | .template.json file | Select the ellipsis (**...**) and navigate to the **deployment.template.json** file in the repository that contains your IoT Edge solution. |
+ | Default platform | Select the appropriate operating system for your modules based on your targeted IoT Edge device. |
+ | Add registry credential to deployment manifest | Specify `true` to add the registry credential for pushing Docker images to the deployment manifest. |
+
+ For more information about this task and its parameters, see [Azure IoT Edge task](/azure/devops/pipelines/tasks/build/azure-iot-edge).
If you have multiple container registries to host your module images, you need to duplicate this task, select different container registry, and use **Bypass module(s)** in the **Advanced** settings to bypass the images that are not for this specific registry. 9. Select the **Copy Files** task to edit it. Use this task to copy files to the artifact staging directory.
- | Parameter | Description |
- | | |
- | Display name | Use the default name or customize |
- | Source folder | The folder with the files to be copied. |
- | Contents | Add two lines: `deployment.template.json` and `**/module.json`. These two files serve as inputs to generate the IoT Edge deployment manifest. |
- | Target Folder | Specify the variable `$(Build.ArtifactStagingDirectory)`. See [Build variables](/azure/devops/pipelines/build/variables#build-variables) to learn about the description. |
+ | Parameter | Description |
+ | | |
+ | Display name | Use the default name or customize |
+ | Source folder | The folder with the files to be copied. |
+ | Contents | Add two lines: `deployment.template.json` and `**/module.json`. These two files serve as inputs to generate the IoT Edge deployment manifest. |
+ | Target Folder | Specify the variable `$(Build.ArtifactStagingDirectory)`. See [Build variables](/azure/devops/pipelines/build/variables#build-variables) for a description of this variable. |
+
+ For more information about this task and its parameters, see [Copy files task](/azure/devops/pipelines/tasks/utility/copy-files?tabs=classic).
10. Select the **Publish Build Artifacts** task to edit it. Provide the artifact staging directory path to the task so that the path can be published to the release pipeline.
In this section, you create a new build pipeline. You configure the pipeline to
| Artifact name | Use the default name: **drop** | | Artifact publish location | Use the default location: **Azure Pipelines** |
+ For more information about this task and its parameters, see [Publish build artifacts task](/azure/devops/pipelines/tasks/utility/publish-build-artifacts).
+ 11. Open the **Triggers** tab and check the box to **Enable continuous integration**. Make sure the branch containing your code is included. ![Turn on continuous integration trigger](./media/how-to-continuous-integration-continuous-deployment-classic/configure-trigger.png)
This pipeline is now configured to run automatically when you push new code to y
[!INCLUDE [iot-edge-create-release-pipeline-for-continuous-deployment](../../includes/iot-edge-create-release-pipeline-for-continuous-deployment.md)] >[!NOTE]
->If you wish to use **layered deployments** in your pipeline, layered deployments are not yet supported in Azure IoT Edge tasks in Azure DevOps.
+>Layered deployments are not yet supported in Azure IoT Edge tasks in Azure DevOps.
> >However, you can use an [Azure CLI task in Azure DevOps](/azure/devops/pipelines/tasks/deploy/azure-cli) to create your deployment as a layered deployment. For the **Inline Script** value, you can use the [az iot edge deployment create command](/cli/azure/iot/edge/deployment): >
-> ```azurecli-interactive
-> az iot edge deployment create -d {deployment_name} -n {hub_name} --content modules_content.json --layered true
-> ```
+>```azurecli-interactive
+>az iot edge deployment create -d {deployment_name} -n {hub_name} --content modules_content.json --layered true
+>```
[!INCLUDE [iot-edge-verify-iot-edge-continuous-integration-continuous-deployment](../../includes/iot-edge-verify-iot-edge-continuous-integration-continuous-deployment.md)] ## Next steps
-* IoT Edge DevOps best practices sample in [Azure DevOps Starter for IoT Edge](how-to-devops-starter.md)
+* IoT Edge DevOps sample in [Azure DevOps Starter for IoT Edge](how-to-devops-starter.md)
* Understand the IoT Edge deployment in [Understand IoT Edge deployments for single devices or at scale](module-deployment-monitoring.md) * Walk through the steps to create, update, or delete a deployment in [Deploy and monitor IoT Edge modules at scale](how-to-deploy-at-scale.md).
iot-edge How To Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-continuous-integration-continuous-deployment.md
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-You can easily adopt DevOps with your Azure IoT Edge applications with the built-in Azure IoT Edge tasks in Azure Pipelines. This article demonstrates how you can use the continuous integration and continuous deployment features of Azure Pipelines to build, test, and deploy applications quickly and efficiently to your Azure IoT Edge using YAML. Alternatively, you can [use the classic editor](how-to-continuous-integration-continuous-deployment-classic.md).
+You can easily adopt DevOps with your Azure IoT Edge applications with the built-in Azure IoT Edge tasks in Azure Pipelines. This article demonstrates how you can use Azure Pipelines to build, test, and deploy Azure IoT Edge modules using YAML. Alternatively, you can [use the classic editor](how-to-continuous-integration-continuous-deployment-classic.md).
![Diagram - CI and CD branches for development and production](./media/how-to-continuous-integration-continuous-deployment/model.png)
In this article, you learn how to use the built-in [Azure IoT Edge tasks](/azure
| Generate deployment manifest | Takes a deployment.template.json file and the variables, then generates the final IoT Edge deployment manifest file. | | Deploy to IoT Edge devices | Creates IoT Edge deployments to one or more IoT Edge devices. |
-Unless otherwise specified, the procedures in this article do not explore all the functionality available through task parameters. For more information, see the following:
+Unless otherwise specified, the procedures in this article do not explore all the functionality available through task parameters. For more information, see the following resources:
* [Task version](/azure/devops/pipelines/process/tasks?tabs=yaml#task-versions) * **Advanced** - If applicable, specify modules that you do not want built.
In this section, you create a new build pipeline. You configure the pipeline to
![Select Starter pipeline or Existing Azure Pipelines YAML file to begin your build pipeline](./media/how-to-continuous-integration-continuous-deployment/configure-pipeline.png)
-6. On the **Review your pipeline YAML** page, you can click the default name `azure-pipelines.yml` to rename your pipeline's configuration file.
+6. On the **Review your pipeline YAML** page, you can select the default name `azure-pipelines.yml` to rename your pipeline's configuration file.
Select **Show assistant** to open the **Tasks** palette.
In this section, you create a new build pipeline. You configure the pipeline to
| .template.json file | Provide the path to the **deployment.template.json** file in the repository that contains your IoT Edge solution. | | Default platform | Select the appropriate operating system for your modules based on your targeted IoT Edge device. |
- ![Use Tasks palette to add tasks to your pipeline](./media/how-to-continuous-integration-continuous-deployment/add-build-task.png)
+ For more information about this task and its parameters, see [Azure IoT Edge task](/azure/devops/pipelines/tasks/build/azure-iot-edge).
+
+ ![Use Tasks palette to add tasks to your pipeline](./media/how-to-continuous-integration-continuous-deployment/add-build-task.png)
>[!TIP] > After each task is added, the editor will automatically highlight the added lines. To prevent accidental overwriting, deselect the lines and provide a new space for your next task before adding additional tasks.
In this section, you create a new build pipeline. You configure the pipeline to
* Task: **Azure IoT Edge**
- | Parameter | Description |
- | | |
- | Action | Select **Push module images**. |
- | Container registry type | Use the default type: **Azure Container Registry**. |
- | Azure subscription | Select your subscription. |
- | Azure Container Registry | Choose the registry that you want to use for the pipeline. |
- | .template.json file | Provide the path to the **deployment.template.json** file in the repository that contains your IoT Edge solution. |
- | Default platform | Select the appropriate operating system for your modules based on your targeted IoT Edge device. |
+ | Parameter | Description |
+ | | |
+ | Action | Select **Push module images**. |
+ | Container registry type | Use the default type: **Azure Container Registry**. |
+ | Azure subscription | Select your subscription. |
+ | Azure Container Registry | Choose the registry that you want to use for the pipeline. |
+ | .template.json file | Provide the path to the **deployment.template.json** file in the repository that contains your IoT Edge solution. |
+ | Default platform | Select the appropriate operating system for your modules based on your targeted IoT Edge device. |
+
+ For more information about this task and its parameters, see [Azure IoT Edge task](/azure/devops/pipelines/tasks/build/azure-iot-edge).
* Task: **Copy Files**
- | Parameter | Description |
- | | |
- | Source Folder | The source folder to copy from. Empty is the root of the repo. Use variables if files are not in the repo. Example: `$(agent.builddirectory)`.
- | Contents | Add two lines: `deployment.template.json` and `**/module.json`. |
- | Target Folder | Specify the variable `$(Build.ArtifactStagingDirectory)`. See [Build variables](/azure/devops/pipelines/build/variables?tabs=yaml#build-variables) to learn about the description. |
+ | Parameter | Description |
+ | | |
+ | Source Folder | The source folder to copy from. Empty is the root of the repo. Use variables if files are not in the repo. Example: `$(agent.builddirectory)`. |
+ | Contents | Add two lines: `deployment.template.json` and `**/module.json`. |
+ | Target Folder | Specify the variable `$(Build.ArtifactStagingDirectory)`. See [Build variables](/azure/devops/pipelines/build/variables?tabs=yaml#build-variables) for a description of this variable. |
+
+ For more information about this task and its parameters, see [Copy files task](/azure/devops/pipelines/tasks/utility/copy-files).
* Task: **Publish Build Artifacts**
- | Parameter | Description |
- | | |
- | Path to publish | Specify the variable `$(Build.ArtifactStagingDirectory)`. See [Build variables](/azure/devops/pipelines/build/variables?tabs=yaml#build-variables) to learn about the description. |
- | Artifact name | Specify the default name: `drop` |
- | Artifact publish location | Use the default location: `Azure Pipelines` |
+ | Parameter | Description |
+ | | |
+ | Path to publish | Specify the variable `$(Build.ArtifactStagingDirectory)`. See [Build variables](/azure/devops/pipelines/build/variables?tabs=yaml#build-variables) for a description of this variable. |
+ | Artifact name | Specify the default name: `drop` |
+ | Artifact publish location | Use the default location: `Azure Pipelines` |
+
+ For more information about this task and its parameters, see [Publish build artifacts task](/azure/devops/pipelines/tasks/utility/publish-build-artifacts).
9. Select **Save** from the **Save and run** dropdown in the top right.
Continue to the next section to build the release pipeline.
## Next steps
-* IoT Edge DevOps best practices sample in [Azure DevOps Starter for IoT Edge](how-to-devops-starter.md)
+* IoT Edge DevOps sample in [Azure DevOps Starter for IoT Edge](how-to-devops-starter.md)
* Understand the IoT Edge deployment in [Understand IoT Edge deployments for single devices or at scale](module-deployment-monitoring.md) * Walk through the steps to create, update, or delete a deployment in [Deploy and monitor IoT Edge modules at scale](how-to-deploy-at-scale.md).
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
Get-EflowVM | Select -ExpandProperty VmConfiguration | Format-List
For more information, use the command `Get-Help Get-EflowVm -full`.
+## Get-EflowVmAddr
+
+The **Get-EflowVmAddr** command queries the virtual machine's current IP and MAC addresses. This command exists to account for the fact that the IP and MAC addresses can change over time.
+
+For more information, use the command `Get-Help Get-EflowVmAddr -full`.
++ ## Get-EflowVmFeature The **Get-EflowVmFeature** command returns the status of the enablement of IoT Edge for Linux on Windows features.
The **Get-EflowVmFeature** command returns the status of the enablement of IoT E
For more information, use the command `Get-Help Get-EflowVmFeature -full`. + ## Get-EflowVmName
-The **Get-EflowVmName** command returns the virtual machine's current hostname. This command exists to account for the fact that the Windows hostname can change over time. It takes only common parameters.
+The **Get-EflowVmName** command returns the virtual machine's current hostname. This command exists to account for the fact that the Windows hostname can change over time.
For more information, use the command `Get-Help Get-EflowVmName -full`.
iot-hub Iot Hub Dev Guide Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-dev-guide-azure-ad-rbac.md
Previously updated : 06/23/2021 Last updated : 06/24/2021
The following tables describe the permissions available for IoT Hub service API
> - [Get Digital Twin](/rest/api/iothub/service/digitaltwin/getdigitaltwin) requires `Microsoft.Devices/IotHubs/twins/read` while [Update Digital Twin](/rest/api/iothub/service/digitaltwin/updatedigitaltwin) requires `Microsoft.Devices/IotHubs/twins/write` > - Both [Invoke Component Command](/rest/api/iothub/service/digitaltwin/invokecomponentcommand) and [Invoke Root Level Command](/rest/api/iothub/service/digitaltwin/invokerootlevelcommand) require `Microsoft.Devices/IotHubs/directMethods/invoke/action`.
+> [!NOTE]
+> To get data from IoT Hub using Azure AD, [set up routing to a separate Event Hub](iot-hub-devguide-messages-d2c.md#event-hubs-as-a-routing-endpoint). To access [the built-in Event Hub compatible endpoint](iot-hub-devguide-messages-read-builtin.md), use the connection string (shared access key) method as before.
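+
+As a minimal sketch (assuming the `azure-eventhub` Python package and placeholder connection values copied from the hub's **Built-in endpoints** page), reading from the built-in endpoint with its connection string might look like this:
+
+```python
+from azure.eventhub import EventHubConsumerClient
+
+# Placeholders: copy these values from the IoT hub's Built-in endpoints page in the Azure portal.
+CONNECTION_STR = "<event-hub-compatible-connection-string>"
+EVENTHUB_NAME = "<event-hub-compatible-name>"
+
+
+def on_event(partition_context, event):
+    # Print each device-to-cloud message and record the position in the partition.
+    print("Telemetry received:", event.body_as_str())
+    partition_context.update_checkpoint(event)
+
+
+client = EventHubConsumerClient.from_connection_string(
+    CONNECTION_STR, consumer_group="$Default", eventhub_name=EVENTHUB_NAME
+)
+with client:
+    # starting_position "-1" reads from the beginning of the event stream.
+    client.receive(on_event=on_event, starting_position="-1")
+```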
+ ## Azure AD access from Azure portal When you try to access IoT Hub, the Azure portal first checks whether you've been assigned an Azure role with **Microsoft.Devices/iotHubs/listkeys/action**. If so, then Azure portal uses the keys from shared access policies for accessing IoT Hub. If not, Azure portal tries to access data using your Azure AD account.
To ensure an account doesn't have access outside of assigned permissions, *don't
Then, make sure the account doesn't have any other roles that have the **Microsoft.Devices/iotHubs/listkeys/action** permission, such as [Owner](../role-based-access-control/built-in-roles.md#owner) or [Contributor](../role-based-access-control/built-in-roles.md#contributor). To let the account access resources and navigate the portal, assign the [Reader](../role-based-access-control/built-in-roles.md#reader) role.
-## Built-in Event Hub compatible endpoint doesn't support Azure AD authentication
+## Azure IoT extension for Azure CLI
+
+Most commands against IoT Hub support Azure AD authentication. The type of authentication used to execute commands can be controlled with the `--auth-type` parameter, which accepts the values `key` or `login`. The value `key` is set by default.
+
+- When `--auth-type` has the value `key`, the CLI automatically discovers a suitable policy when interacting with IoT Hub, as before.
+
+- When `--auth-type` has the value `login`, an access token from the Azure CLI signed-in principal is used for the operation.
-The [the built-in endpoint](iot-hub-devguide-messages-read-builtin.md) doesn't support Azure AD integration. Accessing it with a security principal or managed identity isn't possible. To access the built-in endpoint, use the connection string (shared access key) method as before.
+To learn more, see the [Azure IoT extension for Azure CLI release page](https://github.com/Azure/azure-iot-cli-extension/releases/tag/v0.10.12).
## SDK samples
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-dev-guide-sas.md
A protocol gateway could use the same token for all devices simply setting the r
You can use the IoT Hub [identity registry](iot-hub-devguide-identity-registry.md) to configure per-device/module security credentials and access control using [tokens](iot-hub-dev-guide-sas.md#security-tokens). If an IoT solution already has a custom identity registry and/or authentication scheme, consider creating a *token service* to integrate this infrastructure with IoT Hub. In this way, you can use other IoT features in your solution.
-A token service is a custom cloud service. It uses an IoT Hub *shared access policy* with **DeviceConnect** or **ModuleConnect** permissions to create *device-scoped* or *module-scoped* tokens. These tokens enable a device and module to connect to your IoT hub.
+A token service is a custom cloud service. It uses an IoT Hub *shared access policy* with the **DeviceConnect** permission to create *device-scoped* or *module-scoped* tokens. These tokens enable a device and module to connect to your IoT hub.
![Steps of the token service pattern](./media/iot-hub-devguide-security/tokenservice.png) Here are the main steps of the token service pattern:
-1. Create an IoT Hub shared access policy with **DeviceConnect** or **ModuleConnect** permissions for your IoT hub. You can create this policy in the [Azure portal](https://portal.azure.com) or programmatically. The token service uses this policy to sign the tokens it creates.
+1. Create an IoT Hub shared access policy with the **DeviceConnect** permission for your IoT hub. You can create this policy in the [Azure portal](https://portal.azure.com) or programmatically. The token service uses this policy to sign the tokens it creates.
2. When a device/module needs to access your IoT hub, it requests a signed token from your token service. The device can authenticate with your custom identity registry/authentication scheme to determine the device/module identity that the token service uses to create the token.
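
As a minimal, illustrative sketch (hypothetical helper name and placeholder values, not the article's own sample), a token service might build a device-scoped SAS token like this, assuming the standard IoT Hub token fields `sr`, `sig`, `se`, and `skn`:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_device_token(hub_hostname, device_id, policy_name, policy_key, ttl_seconds=3600):
    # Device-scoped resource URI, for example: myhub.azure-devices.net/devices/device1
    resource_uri = f"{hub_hostname}/devices/{device_id}"
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    expiry = int(time.time()) + ttl_seconds

    # Sign "{URL-encoded resource URI}\n{expiry}" with the shared access policy key.
    string_to_sign = f"{encoded_uri}\n{expiry}"
    signature = base64.b64encode(
        hmac.new(base64.b64decode(policy_key),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()
    ).decode("utf-8")

    return (f"SharedAccessSignature sr={encoded_uri}"
            f"&sig={urllib.parse.quote_plus(signature)}"
            f"&se={expiry}&skn={policy_name}")


# Example: a token scoped to device1, signed with a policy that has the DeviceConnect permission.
# token = generate_device_token("myhub.azure-devices.net", "device1", "device", "<policy key>")
```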
iot-hub Iot Hub Devguide Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-module-twins.md
The module app operates on the module twin using the following atomic operations
* **Observe desired properties**. The currently connected module can choose to be notified of updates to the desired properties when they happen. The module receives the same form of update (partial or full replacement) executed by the solution back end.
-All the preceding operations require the **ModuleConnect** permission, as defined in the [Control Access to IoT Hub](iot-hub-devguide-security.md) article.
+All the preceding operations require the **DeviceConnect** permission, as defined in the [Control Access to IoT Hub](iot-hub-devguide-security.md) article.
The [Azure IoT device SDKs](iot-hub-devguide-sdks.md) make it easy to use the preceding operations from many languages and platforms.
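
As a minimal sketch (assuming the Python device SDK, `azure-iot-device` version 2, and a placeholder module connection string), a module app can observe desired properties and report a property like this:

```python
from azure.iot.device import IoTHubModuleClient

# Placeholder: the module identity's connection string.
client = IoTHubModuleClient.create_from_connection_string("<module connection string>")


def handle_desired_properties_patch(patch):
    # 'patch' is the partial desired-properties update sent by the solution back end.
    print("Desired properties update:", patch)


# Observe desired properties (cloud to module).
client.on_twin_desired_properties_patch_received = handle_desired_properties_patch
client.connect()

# Report a property to the module twin (module to cloud).
client.patch_twin_reported_properties({"status": "running"})

# Keep the client connected for as long as the module should observe updates.
```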
iot-pnp Concepts Developer Guide Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-developer-guide-device.md
Now that you've learned about IoT Plug and Play device development, here are som
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/) - [IoT REST API](/rest/api/iothub/device) - [Understand components in IoT Plug and Play models](concepts-modeling-guide.md)-- [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md) - [IoT Plug and Play service developer guide](concepts-developer-guide-service.md)
iot-pnp Concepts Developer Guide Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-developer-guide-service.md
Now that you've learned about device modeling, here are some additional resource
- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) - [C device SDK](/azure/iot-hub/iot-c-sdk-ref/) - [IoT REST API](/rest/api/iothub/device)-- [IoT Plug and Play modeling guide](concepts-modeling-guide.md)-- [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md)
+- [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
iot-pnp Concepts Modeling Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-modeling-guide.md
Adding localized strings is optional. The following example has only a single, d
## Lifecycle and tools
-The four lifecycle stages for a device model are authoring, publication, use, and versioning:
+The four lifecycle stages for a device model are *author*, *publish*, *use*, and *version*:
### Author
DTML device models are JSON documents that you can create in a text editor. Howe
To learn more, see [Define a new IoT device type in your Azure IoT Central application](../iot-central/core/howto-set-up-template.md).
-The [DTDL editor for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) gives you a text-based editing environment with syntax validation and autocomplete for finer control over the model authoring experience.
+There are DTDL authoring extensions for both VS Code and Visual Studio 2019.
+
+To install the DTDL extension for VS Code, go to [DTDL editor for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl). You can also search for **DTDL** in the **Extensions** view in VS Code.
+
+When you've installed the extension, use it to help you author DTDL model files in VS Code:
+
+- The extension provides syntax validation in DTDL model files, highlighting errors as shown in the following screenshot:
+
+ :::image type="content" source="media/concepts-modeling-guide/model-validation.png" alt-text="Model validation in VS Code":::
+
+- Use intellisense and autocomplete when you're editing DTDL models:
+
+ :::image type="content" source="media/concepts-modeling-guide/model-intellisense.png" alt-text="Use intellisense for DTDL models in VS Code":::
+
+- Create a new DTDL interface. The **DTDL: Create Interface** command creates a JSON file with a new interface. The interface includes example telemetry, property, and command definitions.
+
+To install the DTDL extension for Visual Studio 2019, go to [DTDL Language Support for VS 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16dtdllanguagesupport). You can also search for **DTDL** in **Manage Extensions** in Visual Studio.
+
+When you've installed the extension, use it to help you author DTDL model files in Visual Studio:
+
+- The extension provides syntax validation in DTDL model files, highlighting errors as shown in the following screenshot:
+
+ :::image type="content" source="media/concepts-modeling-guide/model-validation-2.png" alt-text="Model validation in Visual Studio":::
+
+- Use intellisense and autocomplete when you're editing DTDL models:
+
+ :::image type="content" source="media/concepts-modeling-guide/model-intellisense-2.png" alt-text="Use intellisense for DTDL models in Visual Studio":::
### Publish
The following list summarizes some key constraints and limits on models:
Now that you've learned about device modeling, here are some additional resources: -- [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md) - [Digital Twins Definition Language v2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) - [Model repositories](./concepts-model-repository.md)
iot-pnp Howto Author Pnp Bridge Adapter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-author-pnp-bridge-adapter.md
Title: How to build an adapter for the IoT Plug and Play bridge | Microsoft Docs
description: Identify the IoT Plug and Play bridge adapter components. Learn how to extend the bridge by writing your own adapter. Previously updated : 1/20/2021 Last updated : 06/24/2021
#Customer intent: As a device builder, I want to understand the IoT Plug and Play bridge and learn how to build an IoT Plug and Play bridge adapter. # Extend the IoT Plug and Play bridge
-The [IoT Plug and Play bridge](concepts-iot-pnp-bridge.md#iot-plug-and-play-bridge-architecture) lets you connect the existing devices attached to a gateway to your IoT hub. You use the bridge to map IoT Plug and Play interfaces to the attached devices. An IoT Plug and Play interface defines the telemetry that a device sends, the properties synchronized between the device and the cloud, and the commands that the device responds to. You can install and configure the open-source bridge application on Windows or Linux gateways. Additionally, the bridge can be run as an Azure IoT Edge runtime module.
+
+The [IoT Plug and Play bridge](concepts-iot-pnp-bridge.md#iot-plug-and-play-bridge-architecture) lets you connect the existing devices attached to a gateway to your IoT hub. You use the bridge to map IoT Plug and Play interfaces to the attached devices. An IoT Plug and Play interface defines the telemetry that a device sends, the properties it synchronizes with the cloud, and the commands that it responds to. You can install and configure the open-source bridge application on Windows or Linux gateways. Additionally, the bridge can be run as an Azure IoT Edge runtime module.
This article explains in detail how to: - Extend the IoT Plug and Play bridge with an adapter. - Implement common callbacks for a bridge adapter.
-For a simple example that shows how to use the bridge, see [How to connect the IoT Plug and Play bridge sample that runs on Linux or Windows to IoT Hub](howto-use-iot-pnp-bridge.md).
+For an example that shows how to get started with the bridge, see [How to connect the IoT Plug and Play bridge sample that runs on Linux or Windows to IoT Hub](howto-use-iot-pnp-bridge.md).
The guidance and samples in this article assume basic familiarity with [Azure Digital Twins](../digital-twins/overview.md) and [IoT Plug and Play](overview-iot-plug-and-play.md). Additionally, this article assumes familiarity with how to [Build, and deploy the IoT Plug and Play bridge](howto-build-deploy-extend-pnp-bridge.md).
-## Design Guide to extend the IoT Plug and Play bridge with an adapter
-
-To extend the capabilities of the bridge, you can author your own bridge adapters.
+## Overview
-The bridge uses adapters to:
+To extend the capabilities of the bridge, you can author your own bridge adapters. The bridge uses adapters to:
- Establish a connection between a device and the cloud. - Enable data flow between a device and the cloud.
Every bridge adapter must:
- Use the interface to bind device-side functionality to cloud-based capabilities such as telemetry, properties, and commands. - Establish control and data communication with the device hardware or firmware.
-Each bridge adapter interacts with a specific type of device based on how the adapter connects to and interacts with the device. Even if communication with a device uses a handshaking protocol, a bridge adapter may have multiple ways to interpret the data from the device. In this scenario, the bridge adapter uses information for the adapter in the configuration file to determine the *interface configuration* the adapter should use to parse the data.
+Each bridge adapter interacts with a specific type of device based on how the adapter connects to and interacts with the device. Even if communication with a device uses a handshaking protocol, a bridge adapter may have several ways to interpret the data from the device. In this scenario, the bridge adapter uses adapter information in the configuration file to determine the *interface configuration* it should use to parse the data.
To interact with the device, a bridge adapter uses a communication protocol supported by the device and APIs provided either by the underlying operating system, or the device vendor.
-To interact with the cloud, a bridge adapter uses APIs provided by the Azure IoT Device C SDK to send telemetry, create digital twin interfaces, send property updates, and create callback functions for property updates and commands.
+To interact with the cloud, a bridge adapter uses APIs provided by the Azure IoT Device C SDK. The adapter uses these APIs to send telemetry, create digital twin interfaces, send property updates, and create callback functions for property updates and commands.
-### Create a bridge adapter
+## Create an adapter
The bridge expects a bridge adapter to implement the APIs defined in the [_PNP_ADAPTER](https://github.com/Azure/iot-plug-and-play-bridge/blob/9964f7f9f77ecbf4db3b60960b69af57fd83a871/pnpbridge/src/pnpbridge/inc/pnpadapter_api.h#L296) interface:
In this interface:
- `PNPBRIDGE_COMPONENT_CREATE` creates the digital twin client interfaces and binds the callback functions. The adapter initiates the communication channel to the device. The adapter may set up the resources to enable the telemetry flow but doesn't start reporting telemetry until `PNPBRIDGE_COMPONENT_START` is called. This function is called once for each interface component in the configuration file. - `PNPBRIDGE_COMPONENT_START` is called to let the bridge adapter start forwarding telemetry from the device to the digital twin client. This function is called once for each interface component in the configuration file. - `PNPBRIDGE_COMPONENT_STOP` stops the telemetry flow.-- `PNPBRIDGE_COMPONENT_DESTROY` destroys the digital twin client and associated interface resources. This function is called once for each interface component in the configuration file when the bridge is torn down or when a fatal error occurs.
+- `PNPBRIDGE_COMPONENT_DESTROY` destroys the digital twin client and associated interface resources. When the bridge is torn down or when a fatal error occurs, the bridge calls this function once for each interface component in the configuration file.
- `PNPBRIDGE_ADAPTER_DESTROY` cleans up the bridge adapter resources.
-### Bridge core interaction with bridge adapters
+## Bridge core interaction with adapters
The following list outlines what happens when the bridge starts:
The following list outlines what happens when the bridge starts:
1. After the bridge adapter manager creates all the interface components specified in the configuration file, it registers all the interfaces with Azure IoT Hub. Registration is a blocking, asynchronous call. When the call completes, it triggers a callback in the bridge adapter that can then start handling property and command callbacks from the cloud. 1. The bridge adapter manager then calls `PNPBRIDGE_INTERFACE_START` on each component and the bridge adapter starts reporting telemetry to the digital twin client.
-### Design guidelines
+## Design guidelines
Follow these guidelines when you develop a new bridge adapter:
Follow these guidelines when you develop a new bridge adapter:
- Implement the bridge adapter interface described previously. - Add the new adapter to the adapter manifest and build the bridge.
-### Enable a new bridge adapter
+## Enable a new bridge adapter
You enable adapters in the bridge by adding a reference in [adapter_manifest.c](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/src/adapters/src/shared/adapter_manifest.c): ```c
- extern PNP_ADAPTER MyPnpAdapter;
- PPNP_ADAPTER PNP_ADAPTER_MANIFEST[] = {
- .
- .
- &MyPnpAdapter
- }
+extern PNP_ADAPTER MyPnpAdapter;
+PPNP_ADAPTER PNP_ADAPTER_MANIFEST[] = {
+ .
+ .
+ &MyPnpAdapter
+}
``` > [!IMPORTANT]
You enable adapters in the bridge by adding a reference in [adapter_manifest.c](
The [Camera adapter readme](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/src/adapters/src/Camer) describes a sample camera adapter that you can enable.
-## Code examples for common adapter scenarios/callbacks
+## Code samples for common adapter scenarios
+
+The following section details how an adapter for the bridge implements callbacks for some common scenarios and usages. This section covers the following callbacks:
-The following section will provide details on how an adapter for the bridge would implement callbacks for a number of common scenarios and usages This section covers the following callbacks:
- [Receive property update (cloud to device)](#receive-property-update-cloud-to-device) - [Report a property update (device to cloud)](#report-a-property-update-device-to-cloud) - [Send telemetry (device to cloud)](#send-telemetry-device-to-cloud)-- [Receive command update callback from the cloud and process it on the device side (cloud to device)](#receive-command-update-callback-from-the-cloud-and-process-it-on-the-device-side-cloud-to-device)-- [Respond to command update on the device side (device to cloud)](#respond-to-command-update-on-the-device-side-device-to-cloud)
+- [Receive command update callback from the cloud and process it on the device (cloud to device)](#receive-a-command-update-callback-from-the-cloud-and-process-it-on-the-device-cloud-to-device)
+- [Respond to command update on the device (device to cloud)](#respond-to-command-update-on-the-device-device-to-cloud)
-The examples below are based on the [environmental sensor sample adapter](https://github.com/Azure/iot-plug-and-play-bridge/tree/master/pnpbridge/src/adapters/samples/environmental_sensor).
+The following examples are based on the [environmental sensor sample adapter](https://github.com/Azure/iot-plug-and-play-bridge/tree/master/pnpbridge/src/adapters/samples/environmental_sensor).
### Receive property update (cloud to device)+ The first step is to register a callback function: ```c PnpComponentHandleSetPropertyUpdateCallback(BridgeComponentHandle, EnvironmentSensor_ProcessPropertyUpdate); ```+ The next step is to implement the callback function to read the property update on the device: ```c
static void SampleEnvironmentalSensor_BrightnessCallback(
} } }- ``` ### Report a property update (device to cloud)
-At any point after your component is created, your device can report properties to the cloud with status:
+
+At any point after your component is created, your device can report properties to the cloud with status:
+ ```c
-// Environmental sensor's read-only property, device state indiciating whether its online or not
+// Environmental sensor's read-only property, device state indicating whether it's online or not
// static const char sampleDeviceStateProperty[] = "state"; static const unsigned char sampleDeviceStateData[] = "true";
IOTHUB_CLIENT_RESULT SampleEnvironmentalSensor_RouteReportedState(
exit: return iothubClientResult; }- ``` ### Send telemetry (device to cloud)+ ```c // // SampleEnvironmentalSensor_SendTelemetryMessagesAsync is periodically invoked by the caller to
exit:
} ```
-### Receive command update callback from the cloud and process it on the device side (cloud to device)
+
+### Receive a command update callback from the cloud and process it on the device (cloud to device)
+ ```c // SampleEnvironmentalSensor_ProcessCommandUpdate receives commands from the server. This implementation acts as a simple dispatcher // to the functions to perform the actual processing.
static int SampleEnvironmentalSensor_BlinkCallback(
} ```
-### Respond to command update on the device side (device to cloud)
+
+### Respond to command update on the device (device to cloud)
```c
- static int SampleEnvironmentalSensor_BlinkCallback(
- PENVIRONMENT_SENSOR EnvironmentalSensor,
- JSON_Value* CommandValue,
- unsigned char** CommandResponse,
- size_t* CommandResponseSize)
- {
- int result = PNP_STATUS_SUCCESS;
- int BlinkInterval = 0;
-
- LogInfo("Environmental Sensor Adapter:: Blink command invoked. It has been invoked %d times previously", EnvironmentalSensor->SensorState->numTimesBlinkCommandCalled);
-
- if (json_value_get_type(CommandValue) != JSONNumber)
- {
- LogError("Cannot retrieve blink interval for blink command");
- result = PNP_STATUS_BAD_FORMAT;
- }
- else
- {
- BlinkInterval = (int)json_value_get_number(CommandValue);
- LogInfo("Environmental Sensor Adapter:: Blinking with interval=%d second(s)", BlinkInterval);
- EnvironmentalSensor->SensorState->numTimesBlinkCommandCalled++;
- EnvironmentalSensor->SensorState->blinkInterval = BlinkInterval;
-
- result = SampleEnvironmentalSensor_SetCommandResponse(CommandResponse, CommandResponseSize, sampleEnviromentalSensor_BlinkResponse);
- }
-
- return result;
- }
-
- // SampleEnvironmentalSensor_SetCommandResponse is a helper that fills out a command response
- static int SampleEnvironmentalSensor_SetCommandResponse(
- unsigned char** CommandResponse,
- size_t* CommandResponseSize,
- const unsigned char* ResponseData)
- {
- int result = PNP_STATUS_SUCCESS;
- if (ResponseData == NULL)
- {
- LogError("Environmental Sensor Adapter:: Response Data is empty");
- *CommandResponseSize = 0;
- return PNP_STATUS_INTERNAL_ERROR;
- }
-
- *CommandResponseSize = strlen((char*)ResponseData);
- memset(CommandResponse, 0, sizeof(*CommandResponse));
-
- // Allocate a copy of the response data to return to the invoker. Caller will free this.
- if (mallocAndStrcpy_s((char**)CommandResponse, (char*)ResponseData) != 0)
- {
- LogError("Environmental Sensor Adapter:: Unable to allocate response data");
- result = PNP_STATUS_INTERNAL_ERROR;
- }
-
- return result;
+static int SampleEnvironmentalSensor_BlinkCallback(
+ PENVIRONMENT_SENSOR EnvironmentalSensor,
+ JSON_Value* CommandValue,
+ unsigned char** CommandResponse,
+ size_t* CommandResponseSize)
+{
+ int result = PNP_STATUS_SUCCESS;
+ int BlinkInterval = 0;
+
+ LogInfo("Environmental Sensor Adapter:: Blink command invoked. It has been invoked %d times previously", EnvironmentalSensor->SensorState->numTimesBlinkCommandCalled);
+
+ if (json_value_get_type(CommandValue) != JSONNumber)
+ {
+ LogError("Cannot retrieve blink interval for blink command");
+ result = PNP_STATUS_BAD_FORMAT;
+ }
+ else
+ {
+ BlinkInterval = (int)json_value_get_number(CommandValue);
+ LogInfo("Environmental Sensor Adapter:: Blinking with interval=%d second(s)", BlinkInterval);
+ EnvironmentalSensor->SensorState->numTimesBlinkCommandCalled++;
+ EnvironmentalSensor->SensorState->blinkInterval = BlinkInterval;
+
+ result = SampleEnvironmentalSensor_SetCommandResponse(CommandResponse, CommandResponseSize, sampleEnviromentalSensor_BlinkResponse);
+ }
+
+ return result;
+}
+
+// SampleEnvironmentalSensor_SetCommandResponse is a helper that fills out a command response
+static int SampleEnvironmentalSensor_SetCommandResponse(
+ unsigned char** CommandResponse,
+ size_t* CommandResponseSize,
+ const unsigned char* ResponseData)
+{
+ int result = PNP_STATUS_SUCCESS;
+ if (ResponseData == NULL)
+ {
+ LogError("Environmental Sensor Adapter:: Response Data is empty");
+ *CommandResponseSize = 0;
+ return PNP_STATUS_INTERNAL_ERROR;
+ }
+
+ *CommandResponseSize = strlen((char*)ResponseData);
+ memset(CommandResponse, 0, sizeof(*CommandResponse));
+
+ // Allocate a copy of the response data to return to the invoker. Caller will free this.
+ if (mallocAndStrcpy_s((char**)CommandResponse, (char*)ResponseData) != 0)
+ {
+ LogError("Environmental Sensor Adapter:: Unable to allocate response data");
+ result = PNP_STATUS_INTERNAL_ERROR;
+ }
+
+ return result;
} ```
iot-pnp Howto Build Deploy Extend Pnp Bridge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-build-deploy-extend-pnp-bridge.md
az group delete -n bridge-edge-resources
## Next steps
-To learn more about the IoT Plug and Play bridge, visit the [IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge) GitHub repository.
+To learn how to extend the IoT Plug and Play bridge to support additional device protocols, see [Extend the IoT Plug and Play bridge](howto-author-pnp-bridge-adapter.md).
iot-pnp Howto Convert To Pnp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-convert-to-pnp.md
rc = mosquitto_connect(mosq, HOST, PORT, 10);
## Next steps
-Now that you know how to convert an existing device to be an IoT Plug and Play device, a suggested next step is to [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md) to help you build a DTDL model.
+Now that you know how to convert an existing device to be an IoT Plug and Play device, a suggested next step is to read the [IoT Plug and Play modeling guide](concepts-modeling-guide.md).
iot-pnp Howto Use Dtdl Authoring Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-use-dtdl-authoring-tools.md
- Title: Use a tool to author and validate DTDL models | Microsoft Docs
-description: Install the DTDL editor for Visual Studio Code or Visual Studio 2019 and use it to author IoT Plug and Play models.
-- Previously updated : 09/14/2020----
-#Customer intent: As a solution builder, I want to use a DTDL editor to author and validate DTDL model files to use in my IoT Plug and Play solution.
--
-# Install and use the DTDL authoring tools
-
-[Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) models are JSON files. You can use an extension for Visual Studio code or Visual Studio 2019 to author and validate these model files.
-
-## Install and use the VS Code extension
-
-The DTDL extension for VS Code adds the following DTDL authoring features:
--- DTDL v2 syntax validation.-- Intellisense, including autocomplete, to help you with the language syntax.-- The ability to create interfaces from the command palette.-
-To install the DTDL extension, go to [DTDL editor for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl). You can also search for **DTDL** in the Extensions view in VS Code.
-
-When you've installed the extension, use it to help you author DTDL model files in VS code:
--- The extension provides syntax validation in DTDL model files, highlighting errors as shown on the following screenshot:-
- :::image type="content" source="media/howto-use-dtdl-authoring-tools/model-validation.png" alt-text="Model validation in VS Code":::
--- Use intellisense and autocomplete when you're editing DTDL models:-
- :::image type="content" source="media/howto-use-dtdl-authoring-tools/model-intellisense.png" alt-text="Use intellisense for DTDL models in VS Code":::
--- Create a new DTDL interface. The **DTDL: Create Interface** command creates a JSON file with a new interface. The interface includes example telemetry, property, and command definitions.-
-## Install and use the Visual Studio extension
-
-The DTDL extension for Visual Studio 2019 adds the following DTDL authoring features:
--- DTDL v2 syntax validation.-- Intellisense, including autocomplete, to help you with the language syntax.-
-To install the DTDL extension, go to [DTDL Language Support for VS 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16dtdllanguagesupport). You can also search for **DTDL** in **Manage Extensions** in Visual Studio.
-
-When you've installed the extension, use it to help you author DTDL model files in Visual Studio:
--- The extension provides syntax validation in DTDL model files, highlighting errors as shown on the following screenshot:-
- :::image type="content" source="media/howto-use-dtdl-authoring-tools/model-validation-2.png" alt-text="Model validation in Visual Studio":::
--- Use intellisense and autocomplete when you're editing DTDL models:-
- :::image type="content" source="media/howto-use-dtdl-authoring-tools/model-intellisense-2.png" alt-text="Use intellisense for DTDL models in Visual Studio":::
-
-## Next steps
-
-In this how-to article, you've learned how to use the DTDL extensions for Visual Studio Code and Visual Studio 2019 to author and validate DTDL model files. A suggested next step is to learn how to use the [Azure IoT explorer with your models and devices](./howto-use-iot-explorer.md).
iot-pnp Howto Use Iot Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-use-iot-explorer.md
For a list of the IoT features supported by the latest version of the tool, see
## Next steps
-In this how-to article, you learned how to install and use Azure IoT explorer to interact with your IoT Plug and Play devices. A suggested next step is to learn how to [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md).
+In this how-to article, you learned how to install and use Azure IoT explorer to interact with your IoT Plug and Play devices. A suggested next step is to learn how to [Manage IoT Plug and Play digital twins](howto-manage-digital-twin.md).
iot-pnp Overview Iot Plug And Play Current Release https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/overview-iot-plug-and-play-current-release.md
To learn more about how IoT Plug and Play devices work with DTDL, see [IoT Plug
- VS Code extension 1.0.0.
- To learn more, see [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md).
+ To learn more, see [Lifecycle and tools](concepts-modeling-guide.md#lifecycle-and-tools).
- Visual Studio 2019 extension 1.0.0.
- To learn more, see [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md).
+ To learn more, see [Lifecycle and tools](concepts-modeling-guide.md#lifecycle-and-tools).
- Azure CLI IoT extension 0.10.0.
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/azure-policy.md
Example Usage Scenarios:
## Available "Built-In" Policy Definitions
-Key Vault has created a set of policies, which can be used to manage key, certificate, and secret objects. These policies are 'Built-In', which means they don't require you to write any custom JSON to enable them and they are available in the Azure portal for you to assign. You can still customize certain parameters to fit your organization's needs.
+Key Vault has created a set of policies, which can be used to manage key vaults and their key, certificate, and secret objects. These policies are 'Built-In', which means they don't require you to write any custom JSON to enable them and they are available in the Azure portal for you to assign. You can still customize certain parameters to fit your organization's needs.
# [Certificate Policies](#tab/certificates)
If a secret is too close to expiration, an organizational delay to rotate the se
Manage your organizational compliance requirements by specifying the maximum amount of time in days that a secret can be valid within your key vault. Secrets that are valid longer than the threshold you set will be marked as non-compliant. You can also use this policy to block the creation of new secrets that have an expiration date set longer than the maximum validity period you specify.
+# [Key Vault Policies](#tab/keyvault)
+
+### Key Vault should use a virtual network service endpoint
+
+This policy audits any Key Vault not configured to use a virtual network service endpoint.
+
+### Resource logs in Key Vault should be enabled
+
+Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised.
+
+### Key vaults should have purge protection enabled
+
+Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period.
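As a hedged illustration of assigning one of these built-in definitions outside the portal, the Azure CLI sketch below looks up the purge-protection definition by its display name and assigns it at resource group scope. The assignment name and scope are placeholders, and the display name should be verified against the definitions available in your tenant.

```bash
# Sketch only: find the built-in definition by display name (verify the
# exact display name in your tenant before relying on this lookup).
definition_name=$(az policy definition list \
  --query "[?displayName=='Key vaults should have purge protection enabled'].name | [0]" \
  --output tsv)

# Assign the definition at resource group scope (placeholder scope below).
# Some Key Vault definitions also expose an 'effect' parameter you can set
# with --params if you want Deny rather than the default Audit behavior.
az policy assignment create \
  --name require-kv-purge-protection \
  --policy "$definition_name" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-resource-group"
```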
+ ## Example Scenario
If the compliance results show up as "Not Started" it may be due to the followin
- Learn more about the [Azure Policy service](../../governance/policy/overview.md) - See Key Vault samples: [Key Vault built-in policy definitions](../../governance/policy/samples/built-in-policies.md#key-vault)
+- Learn about [Azure Security Benchmark guidance on Key vault](https://docs.microsoft.com/security/benchmark/azure/baselines/key-vault-security-baseline?source=docs#network-security)
load-balancer Troubleshoot Outbound Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/troubleshoot-outbound-connection.md
Title: Troubleshoot outbound connections in Azure Load Balancer description: Resolutions for common problems with outbound connectivity through the Azure Load Balancer. -+ Last updated 05/7/2020-+ # <a name="obconnecttsg"></a> Troubleshooting outbound connections failures
-This article is intended to provide resolutions for common problems that can occur with outbound connections from an Azure Load Balancer. Most problems with outbound connectivity that customers experience are due to SNAT port exhaustion and connection timeouts leading to dropped packets. This article provides steps for mitigating each of these issues.
+This article is intended to provide resolutions for common problems that can occur with outbound connections from an Azure Load Balancer. Most problems with outbound connectivity that customers experience are due to source network address translation (SNAT) port exhaustion and connection timeouts leading to dropped packets. This article provides steps for mitigating each of these issues.
+
+## Avoid SNAT
+
+The best way to avoid SNAT port exhaustion is to eliminate the need for SNAT in the first place. That isn't always possible, for example when you connect to public endpoints, but it often is when you can connect to resources privately. If you connect to Azure services such as Storage, SQL, Cosmos DB, or any of the other [Azure services listed here](../private-link/availability.md), using Azure Private Link eliminates the need for SNAT. As a result, you won't risk connectivity issues caused by SNAT port exhaustion.
+
+Private Link is also supported by partner services such as Snowflake, MongoDB, Confluent, and Elastic.
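As a concrete sketch of the private connectivity approach described above, the Azure CLI commands below create a private endpoint for an existing storage account so that traffic to it uses a private IP instead of consuming SNAT ports. All resource names are placeholders, and in practice you would also configure a private DNS zone so the account's hostname resolves to the private endpoint.

```bash
# Placeholder names throughout -- substitute your own resources.
storage_id=$(az storage account show \
  --name mystorageaccount \
  --resource-group my-resource-group \
  --query id --output tsv)

# Create a private endpoint in your subnet that maps to the storage
# account's blob sub-resource; connections to it then bypass SNAT.
az network private-endpoint create \
  --name my-storage-private-endpoint \
  --resource-group my-resource-group \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id "$storage_id" \
  --group-id blob \
  --connection-name my-storage-connection
```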
## <a name="snatexhaust"></a> Managing SNAT (PAT) port exhaustion [Ephemeral ports](load-balancer-outbound-connections.md) used for [PAT](load-balancer-outbound-connections.md) are an exhaustible resource, as described in [Standalone VM without a Public IP address](load-balancer-outbound-connections.md) and [Load-balanced VM without a Public IP address](load-balancer-outbound-connections.md). You can monitor your usage of ephemeral ports and compare with your current allocation to determine the risk of or to confirm SNAT exhaustion using [this](./load-balancer-standard-diagnostics.md#how-do-i-check-my-snat-port-usage-and-allocation) guide.
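To make the monitoring step above concrete, here is a sketch of pulling SNAT metrics for a Standard Load Balancer from Azure Monitor with the CLI. The load balancer and resource group names are placeholders, and the metric names shown (`UsedSnatPorts`, `AllocatedSnatPorts`) are assumptions to check against the metrics listed for your load balancer.

```bash
# Placeholder names -- substitute your own load balancer and resource group.
lb_id=$(az network lb show \
  --name my-standard-lb \
  --resource-group my-resource-group \
  --query id --output tsv)

# Compare used vs. allocated SNAT ports over five-minute intervals. If the
# used value approaches the allocated value, SNAT exhaustion is near.
az monitor metrics list \
  --resource "$lb_id" \
  --metric UsedSnatPorts AllocatedSnatPorts \
  --interval PT5M \
  --aggregation Average \
  --output table
```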
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 06/23/2021 Last updated : 06/24/2021 # Limits and configuration reference for Azure Logic Apps
Before you set up your firewall with IP addresses, review these considerations:
* For [Azure China 21Vianet](/azure/chin), such as Azure Storage, SQL Server, Office 365 Outlook, and so on.
-* If your logic app workflows run in single-tenant Azure Logic Apps, you need to find the fully qualified domain names (FQDNs) for your connections For more information, review the corresponding sections in these topics:
+* If your logic app workflows run in single-tenant Azure Logic Apps, you need to find the fully qualified domain names (FQDNs) for your connections. For more information, review the corresponding sections in these topics:
* [Firewall permissions for single tenant logic apps - Azure portal](create-single-tenant-workflows-azure-portal.md#firewall-setup) * [Firewall permissions for single tenant logic apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#firewall-setup)
This section lists the outbound IP addresses for the Azure Logic Apps service an
| Canada Central | 52.233.29.92, 52.228.39.244, 40.85.250.135, 40.85.250.212, 13.71.186.1, 40.85.252.47, 13.71.184.150 | 52.237.32.212, 52.237.24.126, 13.71.170.208 - 13.71.170.223, 13.71.175.160 - 13.71.175.191 | | Canada East | 52.232.128.155, 52.229.120.45, 52.229.126.25, 40.86.203.228, 40.86.228.93, 40.86.216.241, 40.86.226.149, 40.86.217.241 | 52.242.30.112, 52.242.35.152, 40.69.106.240 - 40.69.106.255, 40.69.111.0 - 40.69.111.31 | | Central India | 52.172.154.168, 52.172.186.159, 52.172.185.79, 104.211.101.108, 104.211.102.62, 104.211.90.169, 104.211.90.162, 104.211.74.145 | 52.172.212.129, 52.172.211.12, 20.43.123.0 - 20.43.123.31, 104.211.81.192 - 104.211.81.207 |
-| Central US | 13.67.236.125, 104.208.25.27, 40.122.170.198, 40.113.218.230, 23.100.86.139, 23.100.87.24, 23.100.87.56, 23.100.82.16 | 52.173.241.27, 52.173.245.164, 13.89.171.80 - 13.89.171.95, 13.89.178.64 - 13.89.178.95 |
+| Central US | 13.67.236.125, 104.208.25.27, 40.122.170.198, 40.113.218.230, 23.100.86.139, 23.100.87.24, 23.100.87.56, 23.100.82.16 | 52.173.241.27, 52.173.245.164, 13.89.171.80 - 13.89.171.95, 13.89.178.64 - 13.89.178.95, 40.77.68.110 |
| East Asia | 13.75.94.173, 40.83.127.19, 52.175.33.254, 40.83.73.39, 65.52.175.34, 40.83.77.208, 40.83.100.69, 40.83.75.165 | 13.75.110.131, 52.175.23.169, 13.75.36.64 - 13.75.36.79, 104.214.164.0 - 104.214.164.31 |
-| East US | 13.92.98.111, 40.121.91.41, 40.114.82.191, 23.101.139.153, 23.100.29.190, 23.101.136.201, 104.45.153.81, 23.101.132.208 | 40.71.249.139, 40.71.249.205, 40.114.40.132, 40.71.11.80 - 40.71.11.95, 40.71.15.160 - 40.71.15.191 |
-| East US 2 | 40.84.30.147, 104.208.155.200, 104.208.158.174, 104.208.140.40, 40.70.131.151, 40.70.29.214, 40.70.26.154, 40.70.27.236 | 52.225.129.144, 52.232.188.154, 104.209.247.23, 40.70.146.208 - 40.70.146.223, 40.70.151.96 - 40.70.151.127 |
+| East US | 13.92.98.111, 40.121.91.41, 40.114.82.191, 23.101.139.153, 23.100.29.190, 23.101.136.201, 104.45.153.81, 23.101.132.208 | 40.71.249.139, 40.71.249.205, 40.114.40.132, 40.71.11.80 - 40.71.11.95, 40.71.15.160 - 40.71.15.191, 52.188.157.160 |
+| East US 2 | 40.84.30.147, 104.208.155.200, 104.208.158.174, 104.208.140.40, 40.70.131.151, 40.70.29.214, 40.70.26.154, 40.70.27.236 | 52.225.129.144, 52.232.188.154, 104.209.247.23, 40.70.146.208 - 40.70.146.223, 40.70.151.96 - 40.70.151.127, 40.65.220.25 |
| France Central | 52.143.164.80, 52.143.164.15, 40.89.186.30, 20.188.39.105, 40.89.191.161, 40.89.188.169, 40.89.186.28, 40.89.190.104 | 40.89.186.239, 40.89.135.2, 40.79.130.208 - 40.79.130.223, 40.79.148.96 - 40.79.148.127 | | France South | 52.136.132.40, 52.136.129.89, 52.136.131.155, 52.136.133.62, 52.136.139.225, 52.136.130.144, 52.136.140.226, 52.136.129.51 | 52.136.142.154, 52.136.133.184, 40.79.178.240 - 40.79.178.255, 40.79.180.224 - 40.79.180.255 | | Germany North | 51.116.211.168, 51.116.208.165, 51.116.208.175, 51.116.208.192, 51.116.208.200, 51.116.208.222, 51.116.208.217, 51.116.208.51 | 51.116.60.192, 51.116.211.212, 51.116.59.16 - 51.116.59.31, 51.116.60.192 - 51.116.60.223 |
This section lists the outbound IP addresses for the Azure Logic Apps service an
| Korea Central | 52.231.14.11, 52.231.14.219, 52.231.15.6, 52.231.10.111, 52.231.14.223, 52.231.77.107, 52.231.8.175, 52.231.9.39 | 52.141.1.104, 52.141.36.214, 20.44.29.64 - 20.44.29.95, 52.231.18.208 - 52.231.18.223 | | Korea South | 52.231.204.74, 52.231.188.115, 52.231.189.221, 52.231.203.118, 52.231.166.28, 52.231.153.89, 52.231.155.206, 52.231.164.23 | 52.231.201.173, 52.231.163.10, 52.231.147.0 - 52.231.147.15, 52.231.148.224 - 52.231.148.255 | | North Central US | 168.62.248.37, 157.55.210.61, 157.55.212.238, 52.162.208.216, 52.162.213.231, 65.52.10.183, 65.52.9.96, 65.52.8.225 | 52.162.126.4, 52.162.242.161, 52.162.107.160 - 52.162.107.175, 52.162.111.192 - 52.162.111.223 |
-| North Europe | 40.113.12.95, 52.178.165.215, 52.178.166.21, 40.112.92.104, 40.112.95.216, 40.113.4.18, 40.113.3.202, 40.113.1.181 | 52.169.28.181, 52.178.150.68, 94.245.91.93, 13.69.227.208 - 13.69.227.223, 13.69.231.192 - 13.69.231.223 |
+| North Europe | 40.113.12.95, 52.178.165.215, 52.178.166.21, 40.112.92.104, 40.112.95.216, 40.113.4.18, 40.113.3.202, 40.113.1.181 | 52.169.28.181, 52.178.150.68, 94.245.91.93, 13.69.227.208 - 13.69.227.223, 13.69.231.192 - 13.69.231.223, 40.115.108.29 |
| Norway East | 51.120.88.52, 51.120.88.51, 51.13.65.206, 51.13.66.248, 51.13.65.90, 51.13.65.63, 51.13.68.140, 51.120.91.248 | 51.120.100.192, 51.120.92.27, 51.120.98.224 - 51.120.98.239, 51.120.100.192 - 51.120.100.223 | | South Africa North | 102.133.231.188, 102.133.231.117, 102.133.230.4, 102.133.227.103, 102.133.228.6, 102.133.230.82, 102.133.231.9, 102.133.231.51 | 102.133.168.167, 40.127.2.94, 102.133.155.0 - 102.133.155.15, 102.133.253.0 - 102.133.253.31 | | South Africa West | 102.133.72.98, 102.133.72.113, 102.133.75.169, 102.133.72.179, 102.133.72.37, 102.133.72.183, 102.133.72.132, 102.133.75.191 | 102.133.72.85, 102.133.75.194, 102.37.64.0 - 102.37.64.31, 102.133.27.0 - 102.133.27.15 |
This section lists the outbound IP addresses for the Azure Logic Apps service an
| UK South | 51.140.74.14, 51.140.73.85, 51.140.78.44, 51.140.137.190, 51.140.153.135, 51.140.28.225, 51.140.142.28, 51.140.158.24 | 51.140.74.150, 51.140.80.51, 51.140.61.124, 51.105.77.96 - 51.105.77.127, 51.140.148.0 - 51.140.148.15 | | UK West | 51.141.54.185, 51.141.45.238, 51.141.47.136, 51.141.114.77, 51.141.112.112, 51.141.113.36, 51.141.118.119, 51.141.119.63 | 51.141.52.185, 51.141.47.105, 51.141.124.13, 51.140.211.0 - 51.140.211.15, 51.140.212.224 - 51.140.212.255 | | West Central US | 52.161.27.190, 52.161.18.218, 52.161.9.108, 13.78.151.161, 13.78.137.179, 13.78.148.140, 13.78.129.20, 13.78.141.75 | 52.161.101.204, 52.161.102.22, 13.78.132.82, 13.71.195.32 - 13.71.195.47, 13.71.199.192 - 13.71.199.223 |
-| West Europe | 40.68.222.65, 40.68.209.23, 13.95.147.65, 23.97.218.130, 51.144.182.201, 23.97.211.179, 104.45.9.52, 23.97.210.126, 13.69.71.160, 13.69.71.161, 13.69.71.162, 13.69.71.163, 13.69.71.164, 13.69.71.165, 13.69.71.166, 13.69.71.167 | 52.166.78.89, 52.174.88.118, 40.91.208.65, 13.69.64.208 - 13.69.64.223, 13.69.71.192 - 13.69.71.223 |
+| West Europe | 40.68.222.65, 40.68.209.23, 13.95.147.65, 23.97.218.130, 51.144.182.201, 23.97.211.179, 104.45.9.52, 23.97.210.126, 13.69.71.160, 13.69.71.161, 13.69.71.162, 13.69.71.163, 13.69.71.164, 13.69.71.165, 13.69.71.166, 13.69.71.167 | 52.166.78.89, 52.174.88.118, 40.91.208.65, 13.69.64.208 - 13.69.64.223, 13.69.71.192 - 13.69.71.223, 13.93.36.78 |
| West India | 104.211.164.80, 104.211.162.205, 104.211.164.136, 104.211.158.127, 104.211.156.153, 104.211.158.123, 104.211.154.59, 104.211.154.7 | 104.211.189.124, 104.211.189.218, 20.38.128.224 - 20.38.128.255, 104.211.146.224 - 104.211.146.239 | | West US | 52.160.92.112, 40.118.244.241, 40.118.241.243, 157.56.162.53, 157.56.167.147, 104.42.49.145, 40.83.164.80, 104.42.38.32, 13.86.223.0, 13.86.223.1, 13.86.223.2, 13.86.223.3, 13.86.223.4, 13.86.223.5 | 13.93.148.62, 104.42.122.49, 40.112.195.87, 13.86.223.32 - 13.86.223.63, 40.112.243.160 - 40.112.243.175 |
-| West US 2 | 13.66.210.167, 52.183.30.169, 52.183.29.132, 13.66.210.167, 13.66.201.169, 13.77.149.159, 52.175.198.132, 13.66.246.219 | 52.191.164.250, 52.183.78.157, 13.66.140.128 - 13.66.140.143, 13.66.145.96 - 13.66.145.127 |
+| West US 2 | 13.66.210.167, 52.183.30.169, 52.183.29.132, 13.66.210.167, 13.66.201.169, 13.77.149.159, 52.175.198.132, 13.66.246.219 | 52.191.164.250, 52.183.78.157, 13.66.140.128 - 13.66.140.143, 13.66.145.96 - 13.66.145.127, 13.66.164.219 |
|||| <a name="azure-government-outbound"></a>
machine-learning Designer Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/designer-error-codes.md
The error message from Hive is normally reported back in the Error Log so that y
See the following articles for help with Hive queries for machine learning:
-+ [Create Hive tables and load data from Azure Blob Storage](../team-data-science-process/move-hive-tables.md)
-+ [Explore data in tables with Hive queries](../team-data-science-process/explore-data-hive-tables.md)
-+ [Create features for data in an Hadoop cluster using Hive queries](../team-data-science-process/create-features-hive.md)
++ [Create Hive tables and load data from Azure Blob Storage](/azure/architecture/data-science-process/move-hive-tables)
++ [Explore data in tables with Hive queries](/azure/architecture/data-science-process/explore-data-hive-tables)
++ [Create features for data in an Hadoop cluster using Hive queries](/azure/architecture/data-science-process/create-features-hive)
+ [Hive for SQL Users Cheat Sheet (PDF)](http://hortonworks.com/wp-content/uploads/2013/05/hql_cheat_sheet.pdf)
machine-learning Algorithm Parameters Optimize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/algorithm-parameters-optimize.md
Title: 'ML Studio (classic): Optimize algorithms - Azure'
-description: Explains how to choose the optimal parameter set for an algorithm in Azure Machine Learning Studio (classic).
+description: Explains how to choose the optimal parameter set for an algorithm in Machine Learning Studio (classic).
Last updated 11/29/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-This topic describes how to choose the right hyperparameter set for an algorithm in Azure Machine Learning Studio (classic). Most machine learning algorithms have parameters to set. When you train a model, you need to provide values for those parameters. The efficacy of the trained model depends on the model parameters that you choose. The process of finding the optimal set of parameters is known as *model selection*.
+This topic describes how to choose the right hyperparameter set for an algorithm in Machine Learning Studio (classic). Most machine learning algorithms have parameters to set. When you train a model, you need to provide values for those parameters. The efficacy of the trained model depends on the model parameters that you choose. The process of finding the optimal set of parameters is known as *model selection*.
-There are various ways to do model selection. In machine learning, cross-validation is one of the most widely used methods for model selection, and it is the default model selection mechanism in Azure Machine Learning Studio (classic). Because Azure Machine Learning Studio (classic) supports both R and Python, you can always implement their own model selection mechanisms by using either R or Python.
+There are various ways to do model selection. In machine learning, cross-validation is one of the most widely used methods for model selection, and it is the default model selection mechanism in Machine Learning Studio (classic). Because Machine Learning Studio (classic) supports both R and Python, you can always implement your own model selection mechanisms by using either R or Python.
There are four steps in the process of finding the best parameter set:
There are four steps in the process of finding the best parameter set:
3. **Define the metric**: Decide what metric to use for determining the best set of parameters, such as accuracy, root mean squared error, precision, recall, or f-score. 4. **Train, evaluate, and compare**: For each unique combination of the parameter values, cross-validation is carried out by and based on the error metric you define. After evaluation and comparison, you can choose the best-performing model.
-The following image illustrates how this can be achieved in Azure Machine Learning Studio (classic).
+The following image illustrates how this can be achieved in Machine Learning Studio (classic).
![Find the best parameter set](./media/algorithm-parameters-optimize/fig1.png)
machine-learning Azure Ml Netsharp Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/azure-ml-netsharp-reference-guide.md
Last updated 03/01/2018
Net# is a language developed by Microsoft that is used to define complex neural network architectures such as deep neural networks or convolutions of arbitrary dimensions. You can use complex structures to improve learning on data such as image, video, or audio.
-You can use a Net# architecture specification in these contexts:
-
-+ All neural network modules in Microsoft Azure Machine Learning Studio (classic): [Multiclass Neural Network](/azure/machine-learning/studio-module-reference/multiclass-neural-network), [Two-Class Neural Network](/azure/machine-learning/studio-module-reference/two-class-neural-network), and [Neural Network Regression](/azure/machine-learning/studio-module-reference/neural-network-regression)
-+ Neural network functions in Microsoft ML Server: [NeuralNet](/machine-learning-server/r-reference/microsoftml/neuralnet) and [rxNeuralNet](/machine-learning-server/r-reference/microsoftml/rxneuralnet)for the R language, and [rx_neural_network](/machine-learning-server/python-reference/microsoftml/rx-neural-network) for Python.
+You can use a Net# architecture specification in all neural network modules in Machine Learning Studio (classic):
+* [Multiclass Neural Network](/azure/machine-learning/studio-module-reference/multiclass-neural-network)
+* [Two-Class Neural Network](/azure/machine-learning/studio-module-reference/two-class-neural-network)
+* [Neural Network Regression](/azure/machine-learning/studio-module-reference/neural-network-regression)
This article describes the basic concepts and syntax needed to develop a custom neural network using Net#:
This article describes the basic concepts and syntax needed to develop a custom
+ Examples of custom neural networks created using Net# - ## Neural network basics A neural network structure consists of nodes that are organized in layers, and weighted connections (or edges) between the nodes. The connections are directional, and each connection has a source node and a destination node.
Additionally, Net# supports the following four kinds of advanced connection bund
## Supported customizations
-The architecture of neural network models that you create in Azure Machine Learning Studio (classic) can be extensively customized by using Net#. You can:
+The architecture of neural network models that you create in Machine Learning Studio (classic) can be extensively customized by using Net#. You can:
+ Create hidden layers and control the number of nodes in each layer. + Specify how layers are to be connected to each other.
from P1 response norm {
} ```
-+ The source layer includes five maps, each with aof dimension of 12x12, totaling in 1440 nodes.
++ The source layer includes five maps, each with a dimension of 12x12, totaling 1440 nodes.
+ The value of **KernelShape** indicates that this is a same map normalization layer, where the neighborhood is a 3x3 rectangle.
+ The default value of **Padding** is False, thus the destination layer has only 10 nodes in each dimension. To include one node in the destination layer that corresponds to every node in the source layer, add Padding = [true, true, true]; and change the size of RN1 to [5, 12, 12].
output Digit [10] from Hid3 all;
+ The total number of nodes can be calculated by using the declared dimensionality of the layer, [50, 5, 5], as follows: `MapCount * NodeCount\[0] * NodeCount\[1] * NodeCount\[2] = 10 * 5 * 5 * 5` + Because `Sharing[d]` is False only for `d == 0`, the number of kernels is `MapCount * NodeCount\[0] = 10 * 5 = 50`.
-## Acknowledgements
+## Acknowledgments
-The Net# language for customizing the architecture of neural networks was developed at Microsoft by Shon Katzenberger (Architect, Machine Learning) and Alexey Kamenev (Software Engineer, Microsoft Research). It is used internally for machine learning projects and applications ranging from image detection to text analytics. For more information, see [Neural Nets in Azure Machine Learning studio - Introduction to Net#](/archive/blogs/machinelearning/neural-nets-in-azure-ml-introduction-to-net)
+The Net# language for customizing the architecture of neural networks was developed at Microsoft by Shon Katzenberger (Architect, Machine Learning) and Alexey Kamenev (Software Engineer, Microsoft Research). It is used internally for machine learning projects and applications ranging from image detection to text analytics. For more information, see [Neural Nets in Machine Learning studio - Introduction to Net#](/archive/blogs/machinelearning/neural-nets-in-azure-ml-introduction-to-net)
machine-learning Consume Web Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/consume-web-services.md
Last updated 05/29/2020
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-Once you deploy an Azure Machine Learning Studio (classic) predictive model as a Web service, you can use a REST API to send it data and get predictions. You can send the data in real-time or in batch mode.
+Once you deploy a Machine Learning Studio (classic) predictive model as a Web service, you can use a REST API to send it data and get predictions. You can send the data in real-time or in batch mode.
You can find more information about how to create and deploy a Machine Learning Web service using Machine Learning Studio (classic) here:
You can find more information about how to create and deploy a Machine Learning
## Overview
-With the Azure Machine Learning Web service, an external application communicates with a Machine Learning workflow scoring model in real time. A Machine Learning Web service call returns prediction results to an external application. To make a Machine Learning Web service call, you pass an API key that is created when you deploy a prediction. The Machine Learning Web service is based on REST, a popular architecture choice for web programming projects.
+With the Machine Learning Web service, an external application communicates with a Machine Learning workflow scoring model in real time. A Machine Learning Web service call returns prediction results to an external application. To make a Machine Learning Web service call, you pass an API key that is created when you deploy a prediction. The Machine Learning Web service is based on REST, a popular architecture choice for web programming projects.
-Azure Machine Learning Studio (classic) has two types of
+Machine Learning Studio (classic) has two types of services:
* Request-Response Service (RRS) - A low-latency, highly scalable service that provides an interface to the stateless models created and deployed from Machine Learning Studio (classic).
* Batch Execution Service (BES) - An asynchronous service that scores a batch of data records.
For more information about Machine Learning Web services, see [Deploy a Machine
## Get an authorization key When you deploy your experiment, API keys are generated for the Web service. You can retrieve the keys from several locations.
-### From the Microsoft Azure Machine Learning Web Services portal
-Sign in to the [Microsoft Azure Machine Learning Web Services](https://services.azureml.net) portal.
+### From the Machine Learning Web Services portal
+Sign in to the [Machine Learning Web Services](https://services.azureml.net) portal.
To retrieve the API key for a New Machine Learning Web service:
-1. In the Azure Machine Learning Web Services portal, click **Web Services** the top menu.
+1. In the Machine Learning Web Services portal, click **Web Services** on the top menu.
2. Click the Web service for which you want to retrieve the key. 3. On the top menu, click **Consume**. 4. Copy and save the **Primary Key**.
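To show where the primary key is used, here is a minimal sketch of a Request-Response call made with curl. The request URI and the exact JSON body schema come from your service's API help page (described in the next section); the URI, key, and `input1` column names below are placeholders.

```bash
# Placeholders: copy the request URI and body schema from the API help page
# for your web service, and use the primary key saved in the steps above.
API_KEY="<your-primary-key>"
REQUEST_URI="https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0&details=true"

curl --silent --request POST "$REQUEST_URI" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $API_KEY" \
  --data '{
    "Inputs": {
      "input1": {
        "ColumnNames": ["feature1", "feature2"],
        "Values": [["value1", "value2"]]
      }
    },
    "GlobalParameters": {}
  }'
```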
The Machine Learning API help contains details about a prediction Web service.
**To view Machine Learning API help for a New Web service**
-In the [Azure Machine Learning Web Services Portal](https://services.azureml.net/):
+In the [Machine Learning Web Services Portal](https://services.azureml.net/):
1. Click **WEB SERVICES** on the top menu. 2. Click the Web service for which you want to retrieve the key.
machine-learning Consuming From Excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/consuming-from-excel.md
Title: 'ML Studio (classic): Consume web service in Excel - Azure'
-description: Azure Machine Learning Studio (classic) makes it easy to call web services directly from Excel without the need to write any code.
+description: Machine Learning Studio (classic) makes it easy to call web services directly from Excel without the need to write any code.
Last updated 02/01/2018
-# Consuming an Azure Machine Learning Studio (classic) Web Service from Excel
+# Consuming a Machine Learning Studio (classic) Web Service from Excel
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-Azure Machine Learning Studio (classic) makes it easy to call web services directly from Excel without the need to write any code.
+Machine Learning Studio (classic) makes it easy to call web services directly from Excel without the need to write any code.
If you are using Excel 2013 (or later) or Excel Online, then we recommend that you use the Excel [Excel add-in](excel-add-in-for-web-services.md).
Once you have a web service, click on the **WEB SERVICES** section on the left o
**New Web Service**
-1. In the Azure Machine Learning Web Service portal, select **Consume**.
+1. In the Machine Learning Web Service portal, select **Consume**.
2. On the Consume page, in the **Web service consumption options** section, click the Excel icon. **Using the workbook**
machine-learning Create Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-endpoint.md
Title: 'ML Studio (classic): Create web service endpoints - Azure'
-description: Create web service endpoints in Azure Machine Learning Studio (classic). Each endpoint in the web service is independently addressed, throttled, and managed.
+description: Create web service endpoints in Machine Learning Studio (classic). Each endpoint in the web service is independently addressed, throttled, and managed.
Last updated 02/15/2019
-# Create endpoints for deployed Azure Machine Learning Studio (classic) web services
+# Create endpoints for deployed Machine Learning Studio (classic) web services
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
Each endpoint in the web service is independently addressed, throttled, and mana
## Add endpoints to a web service
-You can add an endpoint to a web service using the Azure Machine Learning Web Services portal. Once the endpoint is created, you can consume it through synchronous APIs, batch APIs, and excel worksheets.
+You can add an endpoint to a web service using the Machine Learning Web Services portal. Once the endpoint is created, you can consume it through synchronous APIs, batch APIs, and Excel worksheets.
> [!NOTE] > If you have added additional endpoints to the web service, you cannot delete the default endpoint. 1. In Machine Learning Studio (classic), on the left navigation column, click Web Services.
-2. At the bottom of the web service dashboard, click **Manage endpoints**. The Azure Machine Learning Web Services portal opens to the endpoints page for the web service.
+2. At the bottom of the web service dashboard, click **Manage endpoints**. The Machine Learning Web Services portal opens to the endpoints page for the web service.
3. Click **New**. 4. Type a name and description for the new endpoint. Endpoint names must be 24 characters or fewer in length, and must consist of lowercase letters or numbers. Select the logging level and whether sample data is enabled. For more information on logging, see [Enable logging for Machine Learning web services](web-services-logging.md). ## <a id="scaling"></a> Scale a web service by adding additional endpoints
-By default, each published web service is configured to support 20 concurrent requests and can be as high as 200 concurrent requests. Azure Machine Learning Studio (classic) automatically optimizes the setting to provide the best performance for your web service and the portal value is ignored.
+By default, each published web service is configured to support 20 concurrent requests, and this can be increased to as many as 200 concurrent requests. Machine Learning Studio (classic) automatically optimizes the setting to provide the best performance for your web service, and the portal value is ignored.
If you plan to call the API with a higher load than a Max Concurrent Calls value of 200 will support, you should create multiple endpoints on the same web service. You can then randomly distribute your load across all of them.
-The scaling of a web service is a common task. Some reasons to scale are to support more than 200 concurrent requests, increase availability through multiple endpoints, or provide separate endpoints for the web service. You can increase the scale by adding additional endpoints for the same web service through the [Azure Machine Learning Web Service](https://services.azureml.net/) portal.
+The scaling of a web service is a common task. Some reasons to scale are to support more than 200 concurrent requests, increase availability through multiple endpoints, or provide separate endpoints for the web service. You can increase the scale by adding additional endpoints for the same web service through the [Machine Learning Web Service](https://services.azureml.net/) portal.
Keep in mind that using a high concurrency count can be detrimental if you're not calling the API with a correspondingly high rate. You might see sporadic timeouts and/or spikes in the latency if you put a relatively low load on an API configured for high load.
The synchronous APIs are typically used in situations where a low latency is des
## Next steps
-[How to consume an Azure Machine Learning web service](consume-web-services.md).
+[How to consume a Machine Learning web service](consume-web-services.md).
machine-learning Create Experiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-experiment.md
Last updated 02/06/2019
# Quickstart: Create your first data science experiment in Machine Learning Studio (classic)
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
[!INCLUDE [Designer notice](../../../includes/designer-notice.md)]
-In this quickstart, you create a machine learning experiment in [Azure Machine Learning Studio (classic)](../overview-what-is-machine-learning-studio.md#ml-studio-classic-vs-azure-machine-learning-studio) that predicts the price of a car based on different variables such as make and technical specifications.
+In this quickstart, you create a machine learning experiment in [Machine Learning Studio (classic)](../overview-what-is-machine-learning-studio.md#ml-studio-classic-vs-azure-machine-learning-studio) that predicts the price of a car based on different variables such as make and technical specifications.
If you're brand new to machine learning, the video series [Data Science for Beginners](data-science-for-beginners-the-5-questions-data-science-answers.md) is a great introduction to machine learning using everyday language and concepts.
machine-learning Create Models And Endpoints With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-models-and-endpoints-with-powershell.md
Last updated 04/04/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-Here's a common machine learning problem: You want to create many models that have the same training workflow and use the same algorithm. But you want them to have different training datasets as input. This article shows you how to do this at scale in Azure Machine Learning Studio (classic) using just a single experiment.
+Here's a common machine learning problem: You want to create many models that have the same training workflow and use the same algorithm. But you want them to have different training datasets as input. This article shows you how to do this at scale in Machine Learning Studio (classic) using just a single experiment.
For example, let's say you own a global bike rental franchise business. You want to build a regression model to predict the rental demand based on historic data. You have 1,000 rental locations across the world and you've collected a dataset for each location. They include important features such as date, time, weather, and traffic that are specific to each location. You could train your model once using a merged version of all the datasets across all locations. But, each of your locations has a unique environment. So a better approach would be to train your regression model separately using the dataset for each location. That way, each trained model could take into account the different store sizes, volume, geography, population, bike-friendly traffic environment, and more.
-That may be the best approach, but you don't want to create 1,000 training experiments in Azure Machine Learning Studio (classic) with each one representing a unique location. Besides being an overwhelming task, it also seems inefficient since each experiment would have all the same components except for the training dataset.
+That may be the best approach, but you don't want to create 1,000 training experiments in Machine Learning Studio (classic) with each one representing a unique location. Besides being an overwhelming task, it also seems inefficient since each experiment would have all the same components except for the training dataset.
-Fortunately, you can accomplish this by using the [Azure Machine Learning Studio (classic) retraining API](./retrain-machine-learning-model.md) and automating the task with [Azure Machine Learning Studio (classic) PowerShell](powershell-module.md).
+Fortunately, you can accomplish this by using the [Machine Learning Studio (classic) retraining API](./retrain-machine-learning-model.md) and automating the task with [Machine Learning Studio (classic) PowerShell](powershell-module.md).
> [!NOTE] > To make your sample run faster, reduce the number of locations from 1,000 to 10. But the same principles and procedures apply to 1,000 locations. However, if you do want to train from 1,000 datasets you might want to run the following PowerShell scripts in parallel. How to do that is beyond the scope of this article, but you can find examples of PowerShell multi-threading on the Internet.
Fortunately, you can accomplish this by using the [Azure Machine Learning Studio
> ## Set up the training experiment
-Use the example [training experiment](https://gallery.azure.ai/Experiment/Bike-Rental-Training-Experiment-1) that's in the [Cortana Intelligence Gallery](https://gallery.azure.ai). Open this experiment in your [Azure Machine Learning Studio (classic)](https://studio.azureml.net) workspace.
+Use the example [training experiment](https://gallery.azure.ai/Experiment/Bike-Rental-Training-Experiment-1) that's in the [Cortana Intelligence Gallery](https://gallery.azure.ai). Open this experiment in your [Machine Learning Studio (classic)](https://studio.azureml.net) workspace.
> [!NOTE] > In order to follow along with this example, you may want to use a standard workspace rather than a free workspace. You create one endpoint for each customer - for a total of 10 endpoints - and that requires a standard workspace since a free workspace is limited to 3 endpoints.
To deploy the training web service, click the **Set Up Web Service** button belo
Now you need to deploy the scoring web service. To do this, click **Set Up Web Service** below the canvas and select **Predictive Web Service**. This creates a scoring experiment.
-You need to make a few minor adjustments to make it work as a web service. Remove the label column "cnt" from the input data and limit the output to only the instance id and the corresponding predicted value.
+You need to make a few minor adjustments to make it work as a web service. Remove the label column "cnt" from the input data and limit the output to only the instance ID and the corresponding predicted value.
To save yourself that work, you can open the [predictive experiment](https://gallery.azure.ai/Experiment/Bike-Rental-Predicative-Experiment-1) in the Gallery that has already been prepared.
machine-learning Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-workspace.md
Title: 'ML Studio (classic): Create a workspace - Azure'
-description: To use Azure Machine Learning Studio (classic), you need to have a Machine Learning Studio (classic) workspace. This workspace contains the tools you need to create, manage, and publish experiments.
+description: To use Machine Learning Studio (classic), you need to have a Machine Learning Studio (classic) workspace. This workspace contains the tools you need to create, manage, and publish experiments.
Last updated 12/07/2017
# Create and share a Machine Learning Studio (classic) workspace
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-To use Azure Machine Learning Studio (classic), you need to have a Machine Learning Studio (classic) workspace. This workspace contains the tools you need to create, manage, and publish experiments.
+To use Machine Learning Studio (classic), you need to have a Machine Learning Studio (classic) workspace. This workspace contains the tools you need to create, manage, and publish experiments.
## Create a Studio (classic) workspace
Once the workspace is deployed, you can open it in Machine Learning Studio (clas
![Open experiments](./media/create-workspace/my-experiments.png)
-For information about managing your Studio (classic) workspace, see [Manage an Azure Machine Learning Studio (classic) workspace](manage-workspace.md).
+For information about managing your Studio (classic) workspace, see [Manage a Machine Learning Studio (classic) workspace](manage-workspace.md).
If you encounter a problem creating your workspace, see [Troubleshooting guide: Create and connect to a Machine Learning Studio (classic) workspace](index.yml).
-## Share an Azure Machine Learning Studio (classic) workspace
+## Share a Machine Learning Studio (classic) workspace
Once a Machine Learning Studio (classic) workspace is created, you can invite users to the workspace to share access to it and all its experiments, datasets, and other assets. You can add users in one of two roles:
After the new Machine Learning Studio (classic) workspace is created, you can si
![Delete cookies](media/troubleshooting-creating-ml-workspace/screen6.png)
-After the cookies are deleted, restart the browser and then go to the [Microsoft Azure Machine Learning Studio (classic)](https://studio.azureml.net) page. When you are prompted for a user name and password, enter the same Microsoft account you used to create the workspace.
+After the cookies are deleted, restart the browser and then go to the [Machine Learning Studio (classic)](https://studio.azureml.net) page. When you are prompted for a user name and password, enter the same Microsoft account you used to create the workspace.
## Next steps
-For more information on managing a workspace, see [Manage an Azure Machine Learning Studio (classic) workspace](manage-workspace.md).
+For more information on managing a workspace, see [Manage a Machine Learning Studio (classic) workspace](manage-workspace.md).
machine-learning Custom R Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/custom-r-modules.md
Last updated 11/29/2017
This topic describes how to author and deploy a custom R module in Machine Learning Studio (classic). It explains what custom R modules are and what files are used to define them. It illustrates how to construct the files that define a module and how to register the module for deployment in a Machine Learning workspace. The elements and attributes used in the definition of the custom module are then described in more detail. The use of auxiliary functionality, auxiliary files, and multiple outputs is also discussed.
-A **custom module** is a user-defined module that can be uploaded to your workspace and executed as part of Azure Machine Learning Studio (classic) experiment. A **custom R module** is a custom module that executes a user-defined R function. **R** is a programming language for statistical computing and graphics that is widely used by statisticians and data scientists for implementing algorithms. Currently, R is the only language supported in custom modules, but support for additional languages is scheduled for future releases.
+A **custom module** is a user-defined module that can be uploaded to your workspace and executed as part of a Machine Learning Studio (classic) experiment. A **custom R module** is a custom module that executes a user-defined R function. **R** is a programming language for statistical computing and graphics that is widely used by statisticians and data scientists for implementing algorithms. Currently, R is the only language supported in custom modules, but support for additional languages is scheduled for future releases.
-Custom modules have **first-class status** in Azure Machine Learning Studio (classic) in the sense that they can be used just like any other module. They can be executed with other modules, included in published experiments or in visualizations. You have control over the algorithm implemented by the module, the input and output ports to be used, the modeling parameters, and other various runtime behaviors. An experiment that contains custom modules can also be published into the Azure AI Gallery for easy sharing.
+Custom modules have **first-class status** in Machine Learning Studio (classic) in the sense that they can be used just like any other module. They can be executed with other modules, included in published experiments or in visualizations. You have control over the algorithm implemented by the module, the input and output ports to be used, the modeling parameters, and other various runtime behaviors. An experiment that contains custom modules can also be published into the Azure AI Gallery for easy sharing.
## Files in a custom R module A custom R module is defined by a .zip file that contains, at a minimum, two files:
CustomAddRows <- function(dataset1, dataset2, swap=FALSE)
``` ### The XML definition file
-To expose this `CustomAddRows` function as the Azure Machine Learning Studio (classic) module, an XML definition file must be created to specify how the **Custom Add Rows** module should look and behave.
+To expose this `CustomAddRows` function as a Machine Learning Studio (classic) module, an XML definition file must be created to specify how the **Custom Add Rows** module should look and behave.
```xml <!-- Defined a module using an R Script -->
In contrast, the **id** attribute for the **Output** element does not correspond
### Package and register the module Save these two files as *CustomAddRows.R* and *CustomAddRows.xml* and then zip the two files together into a *CustomAddRows.zip* file.
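For example, from a shell the packaging step is just a zip of the two files described above (auxiliary files and folders can be added to the same archive; they are extracted under `src/` at execution time, as noted later in this article):

```bash
# Bundle the R script and its XML definition into the package that
# Machine Learning Studio (classic) accepts as a custom module upload.
zip CustomAddRows.zip CustomAddRows.R CustomAddRows.xml
```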
-To register them in your Machine Learning workspace, go to your workspace in Azure Machine Learning Studio (classic), click the **+NEW** button on the bottom and choose **MODULE -> FROM ZIP PACKAGE** to upload the new **Custom Add Rows** module.
+To register them in your Machine Learning workspace, go to your workspace in Machine Learning Studio (classic), click the **+NEW** button on the bottom and choose **MODULE -> FROM ZIP PACKAGE** to upload the new **Custom Add Rows** module.
![Upload Zip](./media/custom-r-modules/upload-from-zip-package.png)
The **Custom Add Rows** module is now ready to be accessed by your Machine Learn
## Elements in the XML definition file ### Module elements
-The **Module** element is used to define a custom module in the XML file. Multiple modules can be defined in one XML file using multiple **module** elements. Each module in your workspace must have a unique name. Register a custom module with the same name as an existing custom module and it replaces the existing module with the new one. Custom modules can, however, be registered with the same name as an existing Azure Machine Learning Studio (classic) module. If so, they appear in the **Custom** category of the module palette.
+The **Module** element is used to define a custom module in the XML file. Multiple modules can be defined in one XML file using multiple **module** elements. Each module in your workspace must have a unique name. Register a custom module with the same name as an existing custom module and it replaces the existing module with the new one. Custom modules can, however, be registered with the same name as an existing Machine Learning Studio (classic) module. If so, they appear in the **Custom** category of the module palette.
```xml <Module name="Custom Add Rows" isDeterministic="false">
Rules for characters limits in the Module elements:
* The content of the **Description** element must not exceed 128 characters in length. * The content of the **Owner** element must not exceed 32 characters in length.
-A module's results can be deterministic or nondeterministic. By default, all modules are considered to be deterministic. That is, given an unchanging set of input parameters and data, the module should return the same results each time it is run. Given this behavior, Azure Machine Learning Studio (classic) only reruns modules marked as deterministic if a parameter or the input data has changed. Returning the cached results also provides much faster execution of experiments.
+A module's results can be deterministic or nondeterministic. By default, all modules are considered to be deterministic. That is, given an unchanging set of input parameters and data, the module should return the same results each time it is run. Given this behavior, Machine Learning Studio (classic) only reruns modules marked as deterministic if a parameter or the input data has changed. Returning the cached results also provides much faster execution of experiments.
There are functions that are nondeterministic, such as RAND or a function that returns the current date or time. If your module uses a nondeterministic function, you can specify that the module is non-deterministic by setting the optional **isDeterministic** attribute to **FALSE**. This ensures that the module is rerun whenever the experiment is run, even if the module input and parameters have not changed.
A module parameter is defined using the **Arg** child element of the **Arguments
* **default** - The value for the default property must correspond with an ID value from one of the **Item** elements. ### Auxiliary Files
-Any file that is placed in your custom module ZIP file is going to be available for use during execution time. Any directory structures present are preserved. This means that file sourcing works the same locally and in the Azure Machine Learning Studio (classic) execution.
+Any file that is placed in your custom module ZIP file is going to be available for use during execution time. Any directory structures present are preserved. This means that file sourcing works the same locally and in the Machine Learning Studio (classic) execution.
> [!NOTE] > Notice that all files are extracted to 'src' directory so all paths should have 'src/' prefix.
machine-learning Data Science For Beginners Ask A Question You Can Answer With Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-ask-a-question-you-can-answer-with-data.md
But, most important, ask that sharp question - the question that you can answer
We've talked about some basic principles for asking a question you can answer with data.
-Be sure to check out the other videos in "Data Science for Beginners" from Microsoft Azure Machine Learning Studio (classic).
+Be sure to check out the other videos in "Data Science for Beginners" from Machine Learning Studio (classic).
## Next steps
* [Try a first data science experiment with Machine Learning Studio (classic)](create-experiment.md)
machine-learning Data Science For Beginners Copy Other Peoples Work To Do Data Science https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-copy-other-peoples-work-to-do-data-science.md
One of the trade secrets of data science is getting other people to do your work
## Find examples in the Azure AI Gallery
-Microsoft has a cloud-based service called [Azure Machine Learning Studio (classic)](https://azure.microsoft.com/services/machine-learning-studio/). It provides you with a workspace where you can experiment with different machine learning algorithms, and, when you've got your solution worked out, you can launch it as a web service.
+Microsoft has a cloud-based service called [Machine Learning Studio (classic)](https://azure.microsoft.com/services/machine-learning-studio/). It provides you with a workspace where you can experiment with different machine learning algorithms, and, when you've got your solution worked out, you can launch it as a web service.
-Part of this service is something called the **[Azure AI Gallery](https://gallery.azure.ai/)**. It contains resources, including a collection of Azure Machine Learning Studio (classic) experiments, or models, that people have built and contributed for others to use. These experiments are a great way to leverage the thought and hard work of others to get you started on your own solutions. Everyone is welcome to browse through it.
+Part of this service is something called the **[Azure AI Gallery](https://gallery.azure.ai/)**. It contains resources, including a collection of Machine Learning Studio (classic) experiments, or models, that people have built and contributed for others to use. These experiments are a great way to leverage the thought and hard work of others to get you started on your own solutions. Everyone is welcome to browse through it.
![Azure AI Gallery](./media/data-science-for-beginners-copy-other-peoples-work-to-do-data-science/azure-ai-gallery.png)
Notice the link that says **Open in Studio (classic)**.
![Open in Studio (classic) button](./media/data-science-for-beginners-copy-other-peoples-work-to-do-data-science/open-in-studio.png)
-I can click on that and it takes me right to **Azure Machine Learning Studio (classic)**. It creates a copy of the experiment and puts it in my own workspace. This includes the contributor's dataset, all the processing that they did, all of the algorithms that they used, and how they saved out the results.
+I can click on that and it takes me right to **Machine Learning Studio (classic)**. It creates a copy of the experiment and puts it in my own workspace. This includes the contributor's dataset, all the processing that they did, all of the algorithms that they used, and how they saved out the results.
![Open a Gallery experiment in Machine Learning Studio (classic) - clustering algorithm example](./media/data-science-for-beginners-copy-other-peoples-work-to-do-data-science/cluster-experiment-open-in-studio.png)
There are other experiments in the [Azure AI Gallery](https://gallery.azure.ai)
[Azure AI Gallery](https://gallery.azure.ai) is a place to find working experiments that you can use as a starting point for your own solutions.
-Be sure to check out the other videos in "Data Science for Beginners" from Microsoft Azure Machine Learning Studio (classic).
+Be sure to check out the other videos in "Data Science for Beginners" from Machine Learning Studio (classic).
## Next steps
-* [Try your first data science experiment with Azure Machine Learning Studio (classic)](create-experiment.md)
+* [Try your first data science experiment with Machine Learning Studio (classic)](create-experiment.md)
* [Get an introduction to Machine Learning on Microsoft Azure](../overview-what-is-azure-ml.md)
machine-learning Data Science For Beginners Is Your Data Ready For Data Science https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-is-your-data-ready-for-data-science.md
As you add more data, the picture becomes clearer and you can make more detailed
With data that's relevant, connected, accurate, and enough, you have all the ingredients needed to do some high-quality data science.
-Be sure to check out the other four videos in *Data Science for Beginners* from Microsoft Azure Machine Learning Studio (classic).
+Be sure to check out the other four videos in *Data Science for Beginners* from Machine Learning Studio (classic).
## Next steps
* [Try a first data science experiment with Machine Learning Studio (classic)](create-experiment.md)
machine-learning Data Science For Beginners Predict An Answer With A Simple Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-predict-an-answer-with-a-simple-model.md
Also, if instead of just a handful of diamonds, we had two thousand or two milli
Today, we've talked about how to do linear regression, and we made a prediction using data.
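To make the idea concrete outside the video, here is a minimal sketch of the same kind of simple linear model: fit a straight line to a few invented (size, price) pairs and use it to predict the price of a new diamond. The numbers and the NumPy-based approach are illustrative assumptions, not material from the video.

```python
import numpy as np

# Made-up training data: diamond size (carats) and price (dollars).
sizes = np.array([0.3, 0.5, 0.8, 1.0, 1.2])
prices = np.array([400, 900, 1800, 2500, 3200])

# Fit a straight line (degree-1 polynomial): price ≈ slope * size + intercept.
slope, intercept = np.polyfit(sizes, prices, 1)

# Predict the price of a new 0.9-carat diamond.
new_size = 0.9
predicted_price = slope * new_size + intercept
print(f"Predicted price for a {new_size}-carat diamond: ${predicted_price:.0f}")
```

With many more data points, the fitted slope and intercept become more reliable, which is the point made above about having thousands or millions of diamonds instead of a handful.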
-Be sure to check out the other videos in "Data Science for Beginners" from Microsoft Azure Machine Learning Studio (classic).
+Be sure to check out the other videos in "Data Science for Beginners" from Machine Learning Studio (classic).
## Next steps
* [Try a first data science experiment with Machine Learning Studio (classic)](create-experiment.md)
machine-learning Data Science For Beginners The 5 Questions Data Science Answers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-the-5-questions-data-science-answers.md
Title: 'ML Studio (classic): Data Science for Beginners - Azure'
-description: Data Science for Beginners is teaches basic concepts in 5 short videos, starting with The 5 Questions Data Science Answers. From Azure Machine Learning.
+description: Data Science for Beginners teaches basic concepts in 5 short videos, starting with The 5 Questions Data Science Answers.
machine-learning Deploy A Machine Learning Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-a-machine-learning-web-service.md
Title: 'ML Studio (classic): Deploy a web service - Azure'
-description: How to convert a training experiment to a predictive experiment, prepare it for deployment, then deploy it as an Azure Machine Learning Studio (classic) web service.
+description: How to convert a training experiment to a predictive experiment, prepare it for deployment, then deploy it as a Machine Learning Studio (classic) web service.
Last updated 01/06/2017
# Deploy an Azure Machine Learning Studio (classic) web service
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-Azure Machine Learning Studio (classic) enables you to build and test a predictive analytic solution. Then you can deploy the solution as a web service.
+Machine Learning Studio (classic) enables you to build and test a predictive analytic solution. Then you can deploy the solution as a web service.
Machine Learning Studio (classic) web services provide an interface between an application and a Machine Learning Studio (classic) workflow scoring model. An external application can communicate with a Machine Learning Studio (classic) workflow scoring model in real time. A call to a Machine Learning Studio (classic) web service returns prediction results to an external application. To make a call to a web service, you pass an API key that was created when you deployed the web service. A Machine Learning Studio (classic) web service is based on REST, a popular architecture choice for web programming projects.
-Azure Machine Learning Studio (classic) has two types of web
+Machine Learning Studio (classic) has two types of web
* Request-Response Service (RRS): A low latency, highly scalable service that scores a single data record.
* Batch Execution Service (BES): An asynchronous service that scores a batch of data records.
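To illustrate the request/response pattern described above, here is a minimal client sketch for calling an RRS endpoint. It is not the portal's generated sample code: the URL, API key, and input schema are placeholders you would replace with the values shown on your own service's **Consume** page or dashboard.

```python
import json
import requests

# Placeholder values: copy the real Request-Response URL and API key
# from your web service's Consume page or dashboard.
URL = "https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0&format=swagger"
API_KEY = "<your-api-key>"

# The input name ("input1") and column names below are placeholders;
# the schema depends on your scoring experiment.
payload = {
    "Inputs": {
        "input1": [
            {"age": 39, "education": "Bachelors", "hours-per-week": 40}
        ]
    },
    "GlobalParameters": {}
}

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + API_KEY,  # the API key is passed as a bearer token
}

response = requests.post(URL, data=json.dumps(payload), headers=headers)
response.raise_for_status()
print(response.json())  # scored results returned by the RRS endpoint
```

A Batch Execution Service call uses the same authentication pattern, but submits a job over a batch of records that is processed asynchronously.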
To train a predictive analytics model, you use Azure Machine Learning Studio (cl
The process of creating and managing training experiments is covered more thoroughly elsewhere. For more information, see these articles:
-* [Create a simple experiment in Azure Machine Learning Studio (classic)](create-experiment.md)
-* [Develop a predictive solution with Azure Machine Learning Studio (classic)](tutorial-part1-credit-risk.md)
-* [Import your training data into Azure Machine Learning Studio (classic)](import-data.md)
-* [Manage experiment iterations in Azure Machine Learning Studio (classic)](manage-experiment-iterations.md)
+* [Create a simple experiment in Machine Learning Studio (classic)](create-experiment.md)
+* [Develop a predictive solution with Machine Learning Studio (classic)](tutorial-part1-credit-risk.md)
+* [Import your training data into Machine Learning Studio (classic)](import-data.md)
+* [Manage experiment iterations in Machine Learning Studio (classic)](manage-experiment-iterations.md)
## Convert the training experiment to a predictive experiment
A common example is setting up an [Import Data][import-data] module so the user
You can define Web Service Parameters and associate them with one or more module parameters, and you can specify whether they are required or optional. The user of the web service provides values for these parameters when the service is accessed, and the module actions are modified accordingly.
-For more information about what Web Service Parameters are and how to use them, see [Using Azure Machine Learning Web Service Parameters][webserviceparameters].
+For more information about what Web Service Parameters are and how to use them, see [Using Machine Learning Web Service Parameters][webserviceparameters].
The following steps describe deploying a predictive experiment as a New web service. You can also deploy the experiment as a Classic web service.
Now that the predictive experiment has been prepared, you can deploy it as a new
To deploy your predictive experiment, click **Run** at the bottom of the experiment canvas. Once the experiment has finished running, click **Deploy Web Service** and select **Deploy Web Service [New]**. The deployment page of the Machine Learning Studio (classic) Web Service portal opens.
> [!NOTE]
-> To deploy a New web service you must have sufficient permissions in the subscription to which you deploying the web service. For more information see, [Manage a Web service using the Azure Machine Learning Web Services portal](manage-new-webservice.md).
+> To deploy a New web service, you must have sufficient permissions in the subscription to which you are deploying the web service. For more information, see [Manage a Web service using the Machine Learning Web Services portal](manage-new-webservice.md).
### Web Service portal Deploy Experiment Page
Once you deploy your web service from Machine Learning Studio (classic), you can
The **Consume** page provides all the information you need to access your web service. For example, the API key is provided to allow authorized access to the service.
-For more information about accessing a Machine Learning Studio (classic) web service, see [How to consume an Azure Machine Learning Studio (classic) Web service](consume-web-services.md).
+For more information about accessing a Machine Learning Studio (classic) web service, see [How to consume a Machine Learning Studio (classic) Web service](consume-web-services.md).
### Manage your New web service
Pricing is region specific, so you need to define a billing plan for each region
#### Create a plan in another region
-1. Sign in to [Microsoft Azure Machine Learning Web Services](https://services.azureml.net/).
+1. Sign in to [Machine Learning Web Services](https://services.azureml.net/).
2. Click the **Plans** menu option.
3. On the Plans overview page, click **New**.
4. From the **Subscription** dropdown, select the subscription in which the new plan will reside.
Pricing is region specific, so you need to define a billing plan for each region
#### Deploy the web service to another region
-1. On the Microsoft Azure Machine Learning Web Services page, click the **Web Services** menu option.
+1. On the Machine Learning Web Services page, click the **Web Services** menu option.
2. Select the Web Service you are deploying to a new region.
3. Click **Copy**.
4. In **Web Service Name**, type a new name for the web service.
You can test the web service in either the Machine Learning Studio (classic) Web
To test the Request Response web service, click the **Test** button in the web service dashboard. A dialog pops up to ask you for the input data for the service. These are the columns expected by the scoring experiment. Enter a set of data and then click **OK**. The results generated by the web service are displayed at the bottom of the dashboard.
-You can click the **Test** preview link to test your service in the Azure Machine Learning Studio (classic) Web Services portal as shown previously in the New web service section.
+You can click the **Test** preview link to test your service in the Machine Learning Studio (classic) Web Services portal as shown previously in the New web service section.
To test the Batch Execution Service, click the **Test** preview link. On the Batch test page, click Browse under your input and select a CSV file containing appropriate sample values. If you don't have a CSV file, and you created your predictive experiment using Machine Learning Studio (classic), you can download the data set for your predictive experiment and use it.
You can enable logging to diagnose any failures that you're seeing when your web
![Enable logging in the web services portal](./media/publish-a-machine-learning-web-service/figure-4.png)
-You can also configure the endpoints for the web service in the Azure Machine Learning Web Services portal similar to the procedure shown previously in the New web service section. The options are different, you can add or change the service description, enable logging, and enable sample data for testing.
+You can also configure the endpoints for the web service in the Machine Learning Web Services portal, similar to the procedure shown previously in the New web service section. The options are different: you can add or change the service description, enable logging, and enable sample data for testing.
### Access your Classic web service
-Once you deploy your web service from Azure Machine Learning Studio (classic), you can send data to the service and receive responses programmatically.
+Once you deploy your web service from Machine Learning Studio (classic), you can send data to the service and receive responses programmatically.
The dashboard provides all the information you need to access your web service. For example, the API key is provided to allow authorized access to the service, and API help pages are provided to help you get started writing your code.
-For more information about accessing a Machine Learning Studio (classic) web service, see [How to consume an Azure Machine Learning Studio (classic) Web service](consume-web-services.md).
+For more information about accessing a Machine Learning Studio (classic) web service, see [How to consume a Machine Learning Studio (classic) Web service](consume-web-services.md).
### Manage your Classic web service
There are various actions you can perform to monitor a web service. You can update it or delete it. You can also add additional endpoints to a Classic web service in addition to the default endpoint that is created when you deploy it.
-For more information, see [Manage an Azure Machine Learning Studio (classic) workspace](manage-workspace.md) and [Manage a web service using the Azure Machine Learning Studio (classic) Web Services portal](manage-new-webservice.md).
+For more information, see [Manage a Machine Learning Studio (classic) workspace](manage-workspace.md) and [Manage a web service using the Machine Learning Studio (classic) Web Services portal](manage-new-webservice.md).
## Update the web service
You can make changes to your web service, such as updating the model with additional training data, and deploy it again, overwriting the original web service.
One option for updating your web service is to retrain the model programmaticall
* For more technical details on how deployment works, see [How a Machine Learning Studio (classic) model progresses from an experiment to an operationalized Web service](model-progression-experiment-to-web-service.md).
-* For details on how to get your model ready to deploy, see [How to prepare your model for deployment in Azure Machine Learning Studio (classic)](deploy-a-machine-learning-web-service.md).
+* For details on how to get your model ready to deploy, see [How to prepare your model for deployment in Machine Learning Studio (classic)](deploy-a-machine-learning-web-service.md).
-* There are several ways to consume the REST API and access the web service. See [How to consume an Azure Machine Learning Studio (classic) web service](consume-web-services.md).
+* There are several ways to consume the REST API and access the web service. See [How to consume a Machine Learning Studio (classic) web service](consume-web-services.md).
<!-- internal links -->
[Create a training experiment]: #create-a-training-experiment
machine-learning Deploy Consume Web Service Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-consume-web-service-guide.md
Title: 'ML Studio (classic): Deployment and consumption - Azure'
-description: You can use Azure Machine Learning Studio (classic) to deploy machine learning workflows and models as web services. These web services can then be used to call the machine learning models from applications over the internet to do predictions in real time or in batch mode.
+description: You can use Machine Learning Studio (classic) to deploy machine learning workflows and models as web services. These web services can then be used to call the machine learning models from applications over the internet to do predictions in real time or in batch mode.
Last updated 04/19/2017
-# Azure Machine Learning Studio (classic) Web
+# Machine Learning Studio (classic) Web
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-You can use Azure Machine Learning Studio (classic) to deploy machine learning workflows and models as web services. These web services can then be used to call the machine learning models from applications over the Internet to do predictions in real time or in batch mode. Because the web services are RESTful, you can call them from various programming languages and platforms, such as .NET and Java, and from applications, such as Excel.
+You can use Machine Learning Studio (classic) to deploy machine learning workflows and models as web services. These web services can then be used to call the machine learning models from applications over the Internet to do predictions in real time or in batch mode. Because the web services are RESTful, you can call them from various programming languages and platforms, such as .NET and Java, and from applications, such as Excel.
The next sections provide links to walkthroughs, code, and documentation to help get you started.
## Deploy a web service
-### With Azure Machine Learning Studio (classic)
+### With Machine Learning Studio (classic)
-The Studio (classic) portal and the Microsoft Azure Machine Learning Web Services portal help you deploy and manage a web service without writing code.
+The Studio (classic) portal and the Machine Learning Web Services portal help you deploy and manage a web service without writing code.
The following links provide general information about how to deploy a new web service:
* For an overview about how to deploy a new web service that's based on Azure Resource Manager, see [Deploy a new web service](deploy-a-machine-learning-web-service.md).
-* For a walkthrough about how to deploy a web service, see [Deploy an Azure Machine Learning web service](deploy-a-machine-learning-web-service.md).
+* For a walkthrough about how to deploy a web service, see [Deploy a Machine Learning web service](deploy-a-machine-learning-web-service.md).
* For a full walkthrough about how to create and deploy a web service, start with [Tutorial 1: Predict credit risk](tutorial-part1-credit-risk.md).
* For specific examples that deploy a web service, see:
The following links provide general Information about how to deploy a new web se
### With web services resource provider APIs (Azure Resource Manager APIs)
-The Azure Machine Learning Studio (classic) resource provider for web services enables deployment and management of web services by using REST API calls. For more information, see the
+The Machine Learning Studio (classic) resource provider for web services enables deployment and management of web services by using REST API calls. For more information, see the
[Machine Learning Web Service (REST)](/rest/api/machinelearning/index) reference. <!-- [Machine Learning Web Service (REST)](/rest/api/machinelearning/webservices) reference. -->
### With PowerShell cmdlets
-The Azure Machine Learning Studio (classic) resource provider for web services enables deployment and management of web services by using PowerShell cmdlets.
+The Machine Learning Studio (classic) resource provider for web services enables deployment and management of web services by using PowerShell cmdlets.
To use the cmdlets, you must first sign in to your Azure account from within the PowerShell environment by using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. If you are unfamiliar with how to call PowerShell commands that are based on Resource Manager, see [Using Azure PowerShell with Azure Resource Manager](../../azure-resource-manager/management/manage-resources-powershell.md).
Running the application creates a web service JSON template. To use the template
You can get the storage account name and key from the [Azure portal](https://portal.azure.com/).
* Commitment plan ID
- You can get the plan ID from the [Azure Machine Learning Web Services](https://services.azureml.net) portal by signing in and clicking a plan name.
+ You can get the plan ID from the [Machine Learning Web Services](https://services.azureml.net) portal by signing in and clicking a plan name.
Add them to the JSON template as children of the *Properties* node at the same level as the *MachineLearningWorkspace* node.
Here's an example:
See the following articles and sample code for additional details:
-* [Azure Machine Learning Studio (classic) Cmdlets](/powershell/module/az.machinelearning) reference on MSDN
+* [Machine Learning Studio (classic) Cmdlets](/powershell/module/az.machinelearning) reference on MSDN
## Consume the web services
-### From the Azure Machine Learning Web Services UI (Testing)
+### From the Machine Learning Web Services UI (Testing)
-You can test your web service from the Azure Machine Learning Web Services portal. This includes testing the Request-Response service (RRS) and Batch Execution service (BES) interfaces.
+You can test your web service from the Machine Learning Web Services portal. This includes testing the Request-Response service (RRS) and Batch Execution service (BES) interfaces.
* [Deploy a new web service](deploy-a-machine-learning-web-service.md)
-* [Deploy an Azure Machine Learning web service](deploy-a-machine-learning-web-service.md)
+* [Deploy a Machine Learning web service](deploy-a-machine-learning-web-service.md)
* [Tutorial 3: Deploy credit risk model](tutorial-part3-credit-risk-deploy.md)
### From Excel
You can download an Excel template that consumes the web service:
-* [Consuming an Azure Machine Learning web service from Excel](consuming-from-excel.md)
-* [Excel add-in for Azure Machine Learning Web Services](excel-add-in-for-web-services.md)
+* [Consuming a Machine Learning web service from Excel](consuming-from-excel.md)
+* [Excel add-in for Machine Learning Web Services](excel-add-in-for-web-services.md)
### From a REST-based client
-Azure Machine Learning Web Services are RESTful APIs. You can consume these APIs from various platforms, such as .NET, Python, R, Java, etc. The **Consume** page for your web service on the [Microsoft Azure Machine Learning Web Services portal](https://services.azureml.net) has sample code that can help you get started. For more information, see [How to consume an Azure Machine Learning Web service](consume-web-services.md).
+Machine Learning Web Services are RESTful APIs. You can consume these APIs from various platforms, such as .NET, Python, R, Java, etc. The **Consume** page for your web service on the [Machine Learning Web Services portal](https://services.azureml.net) has sample code that can help you get started. For more information, see [How to consume a Machine Learning Web service](consume-web-services.md).
machine-learning Deploy With Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-with-resource-manager-template.md
Title: 'ML Studio (classic): Deploy workspaces with Azure Resource Manager - Azure'
-description: How to deploy a workspace for Azure Machine Learning Studio (classic) using Azure Resource Manager template
+description: How to deploy a workspace for Machine Learning Studio (classic) using Azure Resource Manager template
Last updated 02/05/2018
-# Deploy Azure Machine Learning Studio (classic) Workspace Using Azure Resource Manager
+# Deploy Machine Learning Studio (classic) Workspace Using Azure Resource Manager
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-Using an Azure Resource Manager deployment template saves you time by giving you a scalable way to deploy interconnected components with a validation and retry mechanism. To set up Azure Machine Learning Studio (classic) Workspaces, for example, you need to first configure an Azure storage account and then deploy your workspace. Imagine doing this manually for hundreds of workspaces. An easier alternative is to use an Azure Resource Manager template to deploy an Studio (classic) Workspace and all its dependencies. This article takes you through this process step-by-step. For a great overview of Azure Resource Manager, see [Azure Resource Manager overview](../../azure-resource-manager/management/overview.md).
+Using an Azure Resource Manager deployment template saves you time by giving you a scalable way to deploy interconnected components with a validation and retry mechanism. To set up Machine Learning Studio (classic) Workspaces, for example, you need to first configure an Azure storage account and then deploy your workspace. Imagine doing this manually for hundreds of workspaces. An easier alternative is to use an Azure Resource Manager template to deploy a Studio (classic) Workspace and all its dependencies. This article takes you through this process step-by-step. For a great overview of Azure Resource Manager, see [Azure Resource Manager overview](../../azure-resource-manager/management/overview.md).
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
## Step-by-step: create a Machine Learning Workspace
-We will create an Azure resource group, then deploy a new Azure storage account and a new Azure Machine Learning Studio (classic) Workspace using a Resource Manager template. Once the deployment is complete, we will print out important information about the workspaces that were created (the primary key, the workspaceID, and the URL to the workspace).
+We will create an Azure resource group, then deploy a new Azure storage account and a new Machine Learning Studio (classic) Workspace using a Resource Manager template. Once the deployment is complete, we will print out important information about the workspaces that were created (the primary key, the workspaceID, and the URL to the workspace).
### Create an Azure Resource Manager template
$rgd = New-AzResourceGroupDeployment -Name "demo" -TemplateFile "C:\temp\mlworks
Once the deployment is completed, it is straightforward to access properties of the workspace you deployed. For example, you can access the Primary Key Token.
```powershell
-# Access Azure Machine Learning Studio Workspace Token after its deployment.
+# Access Machine Learning Studio (classic) Workspace Token after its deployment.
$rgd.Outputs.mlWorkspaceToken.Value
```
Another way to retrieve tokens of existing workspace is to use the Invoke-AzReso
# List the primary and secondary tokens of all workspaces
Get-AzResource |? { $_.ResourceType -Like "*MachineLearning/workspaces*"} |ForEach-Object { Invoke-AzResourceAction -ResourceId $_.ResourceId -Action listworkspacekeys -Force}
```
-After the workspace is provisioned, you can also automate many Azure Machine Learning Studio (classic) tasks using the [PowerShell Module for Azure Machine Learning Studio (classic)](https://aka.ms/amlps).
+After the workspace is provisioned, you can also automate many Machine Learning Studio (classic) tasks using the [PowerShell Module for Machine Learning Studio (classic)](https://aka.ms/amlps).
## Next steps
machine-learning Evaluate Model Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/evaluate-model-performance.md
Title: 'ML Studio (classic): Evaluate & cross-validate models - Azure'
-description: Learn about the metrics you can use to monitor model performance in Azure Machine Learning Studio (classic).
+description: Learn about the metrics you can use to monitor model performance in Machine Learning Studio (classic).
Last updated 03/20/2017
-# Evaluate model performance in Azure Machine Learning Studio (classic)
+# Evaluate model performance in Machine Learning Studio (classic)
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-In this article, you can learn about the metrics you can use to monitor model performance in Azure Machine Learning Studio (classic). Evaluating the performance of a model is one of the core stages in the data science process. It indicates how successful the scoring (predictions) of a dataset has been by a trained model. Azure Machine Learning Studio (classic) supports model evaluation through two of its main machine learning modules:
+In this article, you can learn about the metrics you can use to monitor model performance in Machine Learning Studio (classic). Evaluating the performance of a model is one of the core stages in the data science process. It indicates how successfully a trained model has scored (predicted) a dataset. Machine Learning Studio (classic) supports model evaluation through two of its main machine learning modules:
+ [Evaluate Model][evaluate-model]
+ [Cross-Validate Model][cross-validate-model]
In the following sections, we will build simple regression and classification mo
Assume we want to predict a car's price using features such as dimensions, horsepower, engine specs, and so on. This is a typical regression problem, where the target variable (*price*) is a continuous numeric value. We can fit a linear regression model that, given the feature values of a certain car, can predict the price of that car. This regression model can be used to score the same dataset we trained on. Once we have the predicted car prices, we can evaluate the model performance by looking at how much the predictions deviate from the actual prices on average. To illustrate this, we use the *Automobile price data (Raw) dataset* available in the **Saved Datasets** section in Machine Learning Studio (classic).
### Creating the Experiment
-Add the following modules to your workspace in Azure Machine Learning Studio (classic):
+Add the following modules to your workspace in Machine Learning Studio (classic):
* Automobile price data (Raw)
* [Linear Regression][linear-regression]
After running the experiment, you can inspect the evaluation results by clicking
Figure 4. Cross-Validation Results of a Regression Model.
## Evaluating a Binary Classification Model
-In a binary classification scenario, the target variable has only two possible outcomes, for example: {0, 1} or {false, true}, {negative, positive}. Assume you are given a dataset of adult employees with some demographic and employment variables, and that you are asked to predict the income level, a binary variable with the values {"<=50 K", ">50 K"}. In other words, the negative class represents the employees who make less than or equal to 50 K per year, and the positive class represents all other employees. As in the regression scenario, we would train a model, score some data, and evaluate the results. The main difference here is the choice of metrics Azure Machine Learning Studio (classic) computes and outputs. To illustrate the income level prediction scenario, we will use the [Adult](https://archive.ics.uci.edu/ml/datasets/Adult) dataset to create a Studio (classic) experiment and evaluate the performance of a two-class logistic regression model, a commonly used binary classifier.
+In a binary classification scenario, the target variable has only two possible outcomes, for example: {0, 1} or {false, true}, {negative, positive}. Assume you are given a dataset of adult employees with some demographic and employment variables, and that you are asked to predict the income level, a binary variable with the values {"<=50 K", ">50 K"}. In other words, the negative class represents the employees who make less than or equal to 50 K per year, and the positive class represents all other employees. As in the regression scenario, we would train a model, score some data, and evaluate the results. The main difference here is the choice of metrics Machine Learning Studio (classic) computes and outputs. To illustrate the income level prediction scenario, we will use the [Adult](https://archive.ics.uci.edu/ml/datasets/Adult) dataset to create a Studio (classic) experiment and evaluate the performance of a two-class logistic regression model, a commonly used binary classifier.
### Creating the Experiment
-Add the following modules to your workspace in Azure Machine Learning Studio (classic):
+Add the following modules to your workspace in Machine Learning Studio (classic):
* Adult Census Income Binary Classification dataset
* [Two-Class Logistic Regression][two-class-logistic-regression]
After running the experiment, you can click on the output port of the [Evaluate
Accuracy is simply the proportion of correctly classified instances. It is usually the first metric you look at when evaluating a classifier. However, when the test data is unbalanced (where most of the instances belong to one of the classes), or you are more interested in the performance on either one of the classes, accuracy doesn't really capture the effectiveness of a classifier. In the income level classification scenario, assume you are testing on some data where 99% of the instances represent people who earn less than or equal to 50K per year. It is possible to achieve a 0.99 accuracy by predicting the class "<=50K" for all instances. The classifier in this case appears to be doing a good job overall, but in reality, it fails to classify any of the high-income individuals (the 1%) correctly.
-For that reason, it is helpful to compute additional metrics that capture more specific aspects of the evaluation. Before going into the details of such metrics, it is important to understand the confusion matrix of a binary classification evaluation. The class labels in the training set can take on only two possible values, which we usually refer to as positive or negative. The positive and negative instances that a classifier predicts correctly are called true positives (TP) and true negatives (TN), respectively. Similarly, the incorrectly classified instances are called false positives (FP) and false negatives (FN). The confusion matrix is simply a table showing the number of instances that fall under each of these four categories. Azure Machine Learning Studio (classic) automatically decides which of the two classes in the dataset is the positive class. If the class labels are Boolean or integers, then the 'true' or '1' labeled instances are assigned the positive class. If the labels are strings, such as with the income dataset, the labels are sorted alphabetically and the first level is chosen to be the negative class while the second level is the positive class.
+For that reason, it is helpful to compute additional metrics that capture more specific aspects of the evaluation. Before going into the details of such metrics, it is important to understand the confusion matrix of a binary classification evaluation. The class labels in the training set can take on only two possible values, which we usually refer to as positive or negative. The positive and negative instances that a classifier predicts correctly are called true positives (TP) and true negatives (TN), respectively. Similarly, the incorrectly classified instances are called false positives (FP) and false negatives (FN). The confusion matrix is simply a table showing the number of instances that fall under each of these four categories. Machine Learning Studio (classic) automatically decides which of the two classes in the dataset is the positive class. If the class labels are Boolean or integers, then the 'true' or '1' labeled instances are assigned the positive class. If the labels are strings, such as with the income dataset, the labels are sorted alphabetically and the first level is chosen to be the negative class while the second level is the positive class.
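As a small illustration of how the four confusion-matrix counts turn into metrics, the sketch below computes accuracy, precision, recall, and F1 from made-up TP/TN/FP/FN values using the standard definitions; it is not output from the Evaluate Model module.

```python
# Hypothetical confusion-matrix counts for a binary classifier.
tp, tn, fp, fn = 30, 950, 10, 10

accuracy = (tp + tn) / (tp + tn + fp + fn)   # fraction of all instances classified correctly
precision = tp / (tp + fp)                   # of the predicted positives, how many are correct
recall = tp / (tp + fn)                      # of the actual positives, how many are found
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```

Notice how accuracy alone can look high on imbalanced data even when recall for the positive class is poor, which is exactly the income-level scenario described above.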
![Binary Classification Confusion Matrix](./media/evaluate-model-performance/6a.png)
In this experiment, we will use the popular [Iris](https://archive.ics.uci.edu/m
The Iris dataset is publicly available on the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.html), and can be downloaded using an [Import Data][import-data] module.
### Creating the Experiment
-Add the following modules to your workspace in Azure Machine Learning Studio (classic):
+Add the following modules to your workspace in Machine Learning Studio (classic):
* [Import Data][import-data]
* [Multiclass Decision Forest][multiclass-decision-forest]
machine-learning Excel Add In For Web Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/excel-add-in-for-web-services.md
Title: 'ML Studio (classic): Excel add-in for web services - Azure'
-description: How to use Azure Machine Learning Web services directly in Excel without writing any code.
+description: How to use Machine Learning Web services directly in Excel without writing any code.
Last updated 02/01/2018
-# Excel Add-in for Azure Machine Learning Studio (classic) web services
+# Excel Add-in for Machine Learning Studio (classic) web services
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
Excel makes it easy to call web services directly without the need to write any
> [!NOTE]
> - You will see the list of the Web Services related to the file and, at the bottom, a checkbox for "Auto-predict". If you enable auto-predict, the predictions of **all** your services will be updated every time there is a change to the inputs. If unchecked, you will have to click "Predict All" to refresh. To enable auto-predict at a service level, go to step 6.
- > - The Azure Machine Learning Excel add-in will call Office add-ins store to load. If your organization ban access to Office add-ins store, you will see error when loading the add-in. For this case, please deploy the Azure Machine Learning Excel add-in from Microsoft 365 admin center. Then invoke the add-in and add the web service manually by paste the URL and API key.
+ > - The Machine Learning Excel add-in calls the Office add-ins store to load. If your organization bans access to the Office add-ins store, you will see an error when loading the add-in. In this case, deploy the Machine Learning Excel add-in from the Microsoft 365 admin center. Then invoke the add-in and add the web service manually by pasting the URL and API key.
Get the API key for your web service. Where you perform this action depends on w
4. Look for the **Request URI** section. Copy and save the URL.
> [!NOTE]
-> It is now possible to sign into the [Azure Machine Learning Web Services](https://services.azureml.net) portal to obtain the API key for a Classic Machine Learning web service.
+> It is now possible to sign into the [Machine Learning Web Services](https://services.azureml.net) portal to obtain the API key for a Classic Machine Learning web service.
> > **Use a New web service**
-1. In the [Azure Machine Learning Web Services](https://services.azureml.net) portal, click **Web Services**, then select your web service.
+1. In the [Machine Learning Web Services](https://services.azureml.net) portal, click **Web Services**, then select your web service.
2. Click **Consume**.
3. Look for the **Basic consumption info** section. Copy and save the **Primary Key** and the **Request-Response** URL.
machine-learning Execute Python Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/execute-python-scripts.md
Last updated 03/12/2019
-# Execute Python machine learning scripts in Azure Machine Learning Studio (classic)
+# Execute Python machine learning scripts in Machine Learning Studio (classic)
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
Python is a valuable tool in the tool chest of many data scientists. It's used in every stage of typical machine learning workflows including data exploration, feature extraction, model training and validation, and deployment.
-This article describes how you can use the Execute Python Script module to use Python code in your Azure Machine Learning Studio (classic) experiments and web services.
+This article describes how you can use the Execute Python Script module to use Python code in your Machine Learning Studio (classic) experiments and web services.
## Using the Execute Python Script module
Here is the Python function used to compute the importance scores and order the
![Function to rank features by scores](./media/execute-python-scripts/figure8.png)
-The following experiment then computes and returns the importance scores of features in the "Pima Indian Diabetes" dataset in Azure Machine Learning Studio (classic):
+The following experiment then computes and returns the importance scores of features in the "Pima Indian Diabetes" dataset in Machine Learning Studio (classic):
![Experiment to rank features in the Pima Indian Diabetes dataset using Python](./media/execute-python-scripts/figure9a.png)
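For reference, the Execute Python Script module's entry point is a function named `azureml_main` that receives up to two pandas DataFrames and returns a tuple whose first element is a DataFrame. The sketch below ranks features by the absolute value of their correlation with a label column; the column name `label` and the correlation-based scoring are illustrative assumptions, not the exact function shown in the figures above.

```python
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # Illustrative ranking, assuming numeric feature columns and a 'label' column:
    # score each feature by the absolute value of its correlation with the label,
    # then sort in descending order of importance.
    features = dataframe1.drop(columns=["label"])
    scores = features.corrwith(dataframe1["label"]).abs().sort_values(ascending=False)
    ranked = pd.DataFrame({"feature": scores.index, "score": scores.values})
    # Execute Python Script expects a tuple whose first element is a DataFrame.
    return ranked,
```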
machine-learning Export Delete Personal Data Dsr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/export-delete-personal-data-dsr.md
Title: 'ML Studio (classic): Export & delete your data - Azure'
-description: In-product data stored by Azure Machine Learning Studio (classic) is available for export and deletion through the Azure portal and also through authenticated REST APIs. Telemetry data can be accessed through the Azure Privacy Portal. This article shows you how.
+description: In-product data stored by Machine Learning Studio (classic) is available for export and deletion through the Azure portal and also through authenticated REST APIs. Telemetry data can be accessed through the Azure Privacy Portal. This article shows you how.
Last updated 05/25/2018
-# Export and delete in-product user data from Azure Machine Learning Studio (classic)
+# Export and delete in-product user data from Machine Learning Studio (classic)
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-You can delete or export in-product data stored by Azure Machine Learning Studio (classic) by using the Azure portal, the Studio (classic) interface, PowerShell, and authenticated REST APIs. This article tells you how.
+You can delete or export in-product data stored by Machine Learning Studio (classic) by using the Azure portal, the Studio (classic) interface, PowerShell, and authenticated REST APIs. This article tells you how.
Telemetry data can be accessed through the Azure Privacy portal.
Users can also delete their entire workspace:
![Delete a free workspace in Machine Learning Studio (classic)](./media/export-delete-personal-data-dsr/delete-studio-data-workspace.png)
## Export Studio (classic) data with PowerShell
-Use PowerShell to export all your information to a portable format from Azure Machine Learning Studio (classic) using commands. For information, see the [PowerShell module for Azure Machine Learning Studio (classic)](powershell-module.md) article.
+Use PowerShell to export all your information to a portable format from Machine Learning Studio (classic) using commands. For information, see the [PowerShell module for Machine Learning Studio (classic)](powershell-module.md) article.
## Next steps
-For documentation covering web services and commitment plan billing, see [Azure Machine Learning Studio (classic) REST API reference](/rest/api/machinelearning/).
+For documentation covering web services and commitment plan billing, see [Machine Learning Studio (classic) REST API reference](/rest/api/machinelearning/).
machine-learning Gallery How To Use Contribute Publish https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/gallery-how-to-use-contribute-publish.md
Last updated 01/11/2019
# Share and discover resources in the Azure AI Gallery
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
The Gallery has a variety of resources that you can use to develop your own anal
The Azure AI Gallery contains a number of different resources that have been contributed by Microsoft and members of the data science community. These include:
-* **Experiments** - The Gallery contains a wide variety of experiments that have been developed in Azure Machine Learning Studio (classic). These range from quick proof-of-concept experiments that demonstrate a specific machine learning technique, to fully-developed solutions for complex machine learning problems.
+* **Experiments** - The Gallery contains a wide variety of experiments that have been developed in Machine Learning Studio (classic). These range from quick proof-of-concept experiments that demonstrate a specific machine learning technique, to fully-developed solutions for complex machine learning problems.
* **Tutorials** - A number of tutorials are available to walk you through machine learning technologies and concepts, or to describe advanced methods for solving various machine learning problems.
* **Collections** - A collection allows you to group together experiments, APIs, and other Gallery resources that address a specific solution or concept.
* **Custom Modules** - You can download custom modules into your Studio (classic) workspace to use in your own experiments.
machine-learning Import Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/import-data.md
Title: 'ML Studio (classic): Import training data - Azure'
-description: How to import your data into Azure Machine Learning Studio (classic) from various data sources. Learn what data types and data formats are supported.
+description: How to import your data into Machine Learning Studio (classic) from various data sources. Learn what data types and data formats are supported.
Last updated 02/01/2019
-# Import your training data into Azure Machine Learning Studio (classic) from various data sources
+# Import your training data into Machine Learning Studio (classic) from various data sources
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
To use your own data in Machine Learning Studio (classic) to develop and train a
* [**SQL Server database**](use-data-from-an-on-premises-sql-server.md) - Use data from a SQL Server database without having to copy data manually
> [!NOTE]
-> There are a number of sample datasets available in Machine Learning Studio (classic) that you can use for training data. For information on these, see [Use the sample datasets in Azure Machine Learning Studio (classic)](use-sample-datasets.md).
+> There are a number of sample datasets available in Machine Learning Studio (classic) that you can use for training data. For information on these, see [Use the sample datasets in Machine Learning Studio (classic)](use-sample-datasets.md).
## Prepare data
The online data sources that are supported are itemized in the table below. This
> For more information, see [Azure Blob Storage: Hot and Cool Storage Tiers](../../storage/blobs/storage-blob-storage-tiers.md).
### Supported online data sources
-The Azure Machine Learning Studio (classic) **Import Data** module supports the following data sources:
+The Machine Learning Studio (classic) **Import Data** module supports the following data sources:
| Data Source | Description | Parameters | | | | | | Web URL via HTTP |Reads data in comma-separated values (CSV), tab-separated values (TSV), attribute-relation file format (ARFF), and Support Vector Machines (SVM-light) formats, from any web URL that uses HTTP |<b>URL</b>: Specifies the full name of the file, including the site URL and the file name, with any extension. <br/><br/><b>Data format</b>: Specifies one of the supported data formats: CSV, TSV, ARFF, or SVM-light. If the data has a header row, it is used to assign column names. | | Hadoop/HDFS |Reads data from distributed storage in Hadoop. You specify the data you want by using HiveQL, a SQL-like query language. HiveQL can also be used to aggregate data and perform data filtering before you add the data to Studio (classic). |<b>Hive database query</b>: Specifies the Hive query used to generate the data.<br/><br/><b>HCatalog server URI </b> : Specified the name of your cluster using the format *&lt;your cluster name&gt;.azurehdinsight.net.*<br/><br/><b>Hadoop user account name</b>: Specifies the Hadoop user account name used to provision the cluster.<br/><br/><b>Hadoop user account password</b> : Specifies the credentials used when provisioning the cluster. For more information, see [Create Hadoop clusters in HDInsight](../../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).<br/><br/><b>Location of output data</b>: Specifies whether the data is stored in a Hadoop distributed file system (HDFS) or in Azure. <br/><ul>If you store output data in HDFS, specify the HDFS server URI. (Be sure to use the HDInsight cluster name without the HTTPS:// prefix). <br/><br/>If you store your output data in Azure, you must specify the Azure storage account name, Storage access key and Storage container name.</ul> | | SQL database |Reads data that is stored in Azure SQL Database, SQL Managed Instance, or in a SQL Server database running on an Azure virtual machine. |<b>Database server name</b>: Specifies the name of the server on which the database is running.<br/><ul>In case of Azure SQL Database enter the server name that is generated. Typically it has the form *&lt;generated_identifier&gt;.database.windows.net.* <br/><br/>In case of a SQL server hosted on an Azure Virtual machine enter *tcp:&lt;Virtual Machine DNS Name&gt;, 1433*</ul><br/><b>Database name </b>: Specifies the name of the database on the server. <br/><br/><b>Server user account name</b>: Specifies a user name for an account that has access permissions for the database. <br/><br/><b>Server user account password</b>: Specifies the password for the user account.<br/><br/><b>Database query</b>:Enter a SQL statement that describes the data you want to read. |
-| On-premises SQL database |Reads data that is stored in a SQL database. |<b>Data gateway</b>: Specifies the name of the Data Management Gateway installed on a computer where it can access your SQL Server database. For information about setting up the gateway, see [Perform advanced analytics with Azure Machine Learning Studio (classic) using data from a SQL server](use-data-from-an-on-premises-sql-server.md).<br/><br/><b>Database server name</b>: Specifies the name of the server on which the database is running.<br/><br/><b>Database name </b>: Specifies the name of the database on the server. <br/><br/><b>Server user account name</b>: Specifies a user name for an account that has access permissions for the database. <br/><br/><b>User name and password</b>: Click <b>Enter values</b> to enter your database credentials. You can use Windows Integrated Authentication or SQL Server Authentication depending upon how your SQL Server is configured.<br/><br/><b>Database query</b>:Enter a SQL statement that describes the data you want to read. |
+| On-premises SQL database |Reads data that is stored in a SQL database. |<b>Data gateway</b>: Specifies the name of the Data Management Gateway installed on a computer where it can access your SQL Server database. For information about setting up the gateway, see [Perform advanced analytics with Machine Learning Studio (classic) using data from a SQL server](use-data-from-an-on-premises-sql-server.md).<br/><br/><b>Database server name</b>: Specifies the name of the server on which the database is running.<br/><br/><b>Database name </b>: Specifies the name of the database on the server. <br/><br/><b>Server user account name</b>: Specifies a user name for an account that has access permissions for the database. <br/><br/><b>User name and password</b>: Click <b>Enter values</b> to enter your database credentials. You can use Windows Integrated Authentication or SQL Server Authentication depending upon how your SQL Server is configured.<br/><br/><b>Database query</b>:Enter a SQL statement that describes the data you want to read. |
| Azure Table |Reads data from the Table service in Azure Storage.<br/><br/>If you read large amounts of data infrequently, use the Azure Table Service. It provides a flexible, non-relational (NoSQL), massively scalable, inexpensive, and highly available storage solution. |The options in the **Import Data** change depending on whether you are accessing public information or a private storage account that requires login credentials. This is determined by the <b>Authentication Type</b> which can have value of "PublicOrSAS" or "Account", each of which has its own set of parameters. <br/><br/><b>Public or Shared Access Signature (SAS) URI</b>: The parameters are:<br/><br/><ul><b>Table URI</b>: Specifies the Public or SAS URL for the table.<br/><br/><b>Specifies the rows to scan for property names</b>: The values are <i>TopN</i> to scan the specified number of rows, or <i>ScanAll</i> to get all rows in the table. <br/><br/>If the data is homogeneous and predictable, it is recommended that you select *TopN* and enter a number for N. For large tables, this can result in quicker reading times.<br/><br/>If the data is structured with sets of properties that vary based on the depth and position of the table, choose the *ScanAll* option to scan all rows. This ensures the integrity of your resulting property and metadata conversion.<br/><br/></ul><b>Private Storage Account</b>: The parameters are: <br/><br/><ul><b>Account name</b>: Specifies the name of the account that contains the table to read.<br/><br/><b>Account key</b>: Specifies the storage key associated with the account.<br/><br/><b>Table name</b> : Specifies the name of the table that contains the data to read.<br/><br/><b>Rows to scan for property names</b>: The values are <i>TopN</i> to scan the specified number of rows, or <i>ScanAll</i> to get all rows in the table.<br/><br/>If the data is homogeneous and predictable, we recommend that you select *TopN* and enter a number for N. For large tables, this can result in quicker reading times.<br/><br/>If the data is structured with sets of properties that vary based on the depth and position of the table, choose the *ScanAll* option to scan all rows. This ensures the integrity of your resulting property and metadata conversion.<br/><br/> | | Azure Blob Storage |Reads data stored in the Blob service in Azure Storage, including images, unstructured text, or binary data.<br/><br/>You can use the Blob service to publicly expose data, or to privately store application data. You can access your data from anywhere by using HTTP or HTTPS connections. |The options in the **Import Data** module change depending on whether you are accessing public information or a private storage account that requires login credentials. This is determined by the <b>Authentication Type</b> which can have a value either of "PublicOrSAS" or of "Account".<br/><br/><b>Public or Shared Access Signature (SAS) URI</b>: The parameters are:<br/><br/><ul><b>URI</b>: Specifies the Public or SAS URL for the storage blob.<br/><br/><b>File Format</b>: Specifies the format of the data in the Blob service. 
The supported formats are CSV, TSV, and ARFF.<br/><br/></ul><b>Private Storage Account</b>: The parameters are: <br/><br/><ul><b>Account name</b>: Specifies the name of the account that contains the blob you want to read.<br/><br/><b>Account key</b>: Specifies the storage key associated with the account.<br/><br/><b>Path to container, directory, or blob </b> : Specifies the name of the blob that contains the data to read.<br/><br/><b>Blob file format</b>: Specifies the format of the data in the blob service. The supported data formats are CSV, TSV, ARFF, CSV with a specified encoding, and Excel. <br/><br/><ul>If the format is CSV or TSV, be sure to indicate whether the file contains a header row.<br/><br/>You can use the Excel option to read data from Excel workbooks. In the <i>Excel data format</i> option, indicate whether the data is in an Excel worksheet range, or in an Excel table. In the <i>Excel sheet or embedded table </i>option, specify the name of the sheet or table that you want to read from.</ul><br/> | | Data Feed Provider |Reads data from a supported feed provider. Currently only the Open Data Protocol (OData) format is supported. |<b>Data content type</b>: Specifies the OData format.<br/><br/><b>Source URL</b>: Specifies the full URL for the data feed. <br/>For example, the following URL reads from the Northwind sample database: https://services.odata.org/northwind/northwind.svc/ |
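The **Database query** box described in the table above expects a plain SQL statement. As a hedged, minimal sketch (not part of the module itself), you might validate such a statement locally with `pyodbc` before pasting it into **Import Data**; the driver, server, database, table, and column names below are placeholders:

```python
# Hypothetical local check of the SQL statement destined for the "Database query" box.
# Connection details and table/column names are placeholders, not values from the article.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
query = "SELECT TOP 100 CustomerID, Age, Income FROM dbo.Customers"
cursor = conn.cursor()
for row in cursor.execute(query).fetchmany(5):
    print(row)   # spot-check a few rows before using the query in Studio (classic)
conn.close()
```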
When the save finishes, the dataset will be available for use within any experim
## Next steps
-[Deploying Azure Machine Learning Studio web services that use Data Import and Data Export modules](web-services-that-use-import-export-modules.md)
+[Deploying Machine Learning Studio (classic) web services that use Data Import and Data Export modules](web-services-that-use-import-export-modules.md)
<!-- Module References -->
machine-learning Interpret Model Results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/interpret-model-results.md
Last updated 11/29/2017
-# Interpret model results in Azure Machine Learning Studio (classic)
+# Interpret model results in Machine Learning Studio (classic)
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-This topic explains how to visualize and interpret prediction results in Azure Machine Learning Studio (classic). After you have trained a model and done predictions on top of it ("scored the model"), you need to understand and interpret the prediction result.
+This topic explains how to visualize and interpret prediction results in Machine Learning Studio (classic). After you have trained a model and done predictions on top of it ("scored the model"), you need to understand and interpret the prediction result.
-There are four major kinds of machine learning models in Azure Machine Learning Studio (classic):
+There are four major kinds of machine learning models in Machine Learning Studio (classic):
* Classification * Clustering
There are two subcategories of classification problems:
* Problems with only two classes (two-class or binary classification) * Problems with more than two classes (multi-class classification)
-Azure Machine Learning Studio (classic) has different modules to deal with each of these types of classification, but the methods for interpreting their prediction results are similar.
+Machine Learning Studio (classic) has different modules to deal with each of these types of classification, but the methods for interpreting their prediction results are similar.
### Two-class classification **Example experiment**
-An example of a two-class classification problem is the classification of iris flowers. The task is to classify iris flowers based on their features. The Iris data set provided in Azure Machine Learning Studio (classic) is a subset of the popular [Iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) containing instances of only two flower species (classes 0 and 1). There are four features for each flower (sepal length, sepal width, petal length, and petal width).
+An example of a two-class classification problem is the classification of iris flowers. The task is to classify iris flowers based on their features. The Iris data set provided in Machine Learning Studio (classic) is a subset of the popular [Iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) containing instances of only two flower species (classes 0 and 1). There are four features for each flower (sepal length, sepal width, petal length, and petal width).
![Screenshot of iris experiment](./media/interpret-model-results/1.png)
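Outside Studio (classic), the same two-class iris problem can be reproduced with scikit-learn to see what scored labels and scored probabilities look like. This is a hedged analogy in Python, not the Studio (classic) modules themselves, and the train/test split settings are arbitrary:

```python
# Two-class iris classification sketch (scikit-learn), analogous to the Studio (classic) example:
# keep only classes 0 and 1 and use the four flower measurements as features.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X, y = X[y < 2], y[y < 2]                      # restrict to the two-class subset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(model.predict(X_test)[:5])               # "scored labels"
print(model.predict_proba(X_test)[:5, 1])      # "scored probabilities" for class 1
```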
For recommender systems, you can use the restaurant recommendation problem as an
* Customer feature data * Restaurant feature data
-There are several things we can do with the [Train Matchbox Recommender][train-matchbox-recommender] module in Azure Machine Learning Studio (classic):
+There are several things we can do with the [Train Matchbox Recommender][train-matchbox-recommender] module in Machine Learning Studio (classic):
* Predict ratings for a given user and item * Recommend items to a given user
You can choose what you want to do by selecting from the four options in the **R
![Matchbox recommender](./media/interpret-model-results/19_1.png)
-A typical Azure Machine Learning Studio (classic) experiment for a recommender system looks like Figure 20. For information about how to use those recommender system modules, see [Train matchbox recommender][train-matchbox-recommender] and [Score matchbox recommender][score-matchbox-recommender].
+A typical Machine Learning Studio (classic) experiment for a recommender system looks like Figure 20. For information about how to use those recommender system modules, see [Train matchbox recommender][train-matchbox-recommender] and [Score matchbox recommender][score-matchbox-recommender].
![Recommender system experiment](./media/interpret-model-results/20.png)
machine-learning Manage Experiment Iterations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-experiment-iterations.md
Title: 'ML Studio (classic): View & rerun experiments - Azure'
-description: Manage experiment runs in Azure Machine Learning Studio (classic). You can review previous runs of your experiments at any time in order to challenge, revisit, and ultimately either confirm or refine previous assumptions.
+description: Manage experiment runs in Machine Learning Studio (classic). You can review previous runs of your experiments at any time in order to challenge, revisit, and ultimately either confirm or refine previous assumptions.
Last updated 03/20/2017
-# Manage experiment runs in Azure Machine Learning Studio (classic)
+# Manage experiment runs in Machine Learning Studio (classic)
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
machine-learning Manage New Webservice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-new-webservice.md
Title: 'ML Studio (classic): Manage web services - Azure'
-description: Manage your Machine Learning New and Classic Web services using the Microsoft Azure Machine Learning Web Services portal. Since Classic Web services and New Web services are based on different underlying technologies, you have slightly different management capabilities for each of them.
+description: Manage your Machine Learning Studio (classic) Web services using the Machine Learning Web Services portal.
Last updated 02/28/2017
-# Manage a web service using the Azure Machine Learning Studio (classic) Web Services portal
+# Manage a web service using the Machine Learning Studio (classic) Web Services portal
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-You can manage your Machine Learning New and Classic Web services using the Microsoft Azure Machine Learning Web Services portal. Since Classic Web services and New Web services are based on different underlying technologies, you have slightly different management capabilities for each of them.
+You can manage your Machine Learning Studio (classic) web services using the Machine Learning Web Services portal.
In the Machine Learning Web Services portal you can: * Monitor how the web service is being used. * Configure the description, update the keys for the web service (New only), update your storage account key (New only), enable logging, and enable or disable sample data. * Delete the web service.
-* Create, delete, or update billing plans (New only).
-* Add and delete endpoints (Classic only)
+* Create, delete, or update billing plans: [Azure Machine Learning only](../index.yml).
+* Add and delete endpoints: ML Studio (classic) only
>[!NOTE] >You also can manage Classic web services in [Machine Learning Studio (classic)](https://studio.azureml.net) on the **Web services** tab.
In the Machine Learning Web Services portal you can:
New web services are deployed as Azure resources. As such, you must have the correct permissions to deploy and manage New web services. To deploy or manage New web services you must be assigned a contributor or administrator role on the subscription to which the web service is deployed. If you invite another user to a machine learning workspace, you must assign them to a contributor or administrator role on the subscription before they can deploy or manage web services.
-If the user does not have the correct permissions to access resources in the Azure Machine Learning Web Services portal, they will receive the following error when trying to deploy a web service:
+If the user does not have the correct permissions to access resources in the Machine Learning Web Services portal, they will receive the following error when trying to deploy a web service:
*Web Service deployment failed. This account does not have sufficient access to the Azure subscription that contains the Workspace. In order to deploy a Web Service to Azure, the same account must be invited to the Workspace and be given access to the Azure subscription that contains the Workspace.*
-For more information on creating a workspace, see [Create and share an Azure Machine Learning Studio (classic) workspace](create-workspace.md).
+For more information on creating a workspace, see [Create and share a Machine Learning Studio (classic) workspace](create-workspace.md).
For more information on setting access permissions, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
For more information on setting access permissions, see [Assign Azure roles usin
## Manage New Web services To manage your New Web
-1. Sign in to the [Microsoft Azure Machine Learning Web Services](https://services.azureml.net/quickstart) portal using your Microsoft Azure account - use the account that's associated with the Azure subscription.
+1. Sign in to the [Machine Learning Web Services](https://services.azureml.net/quickstart) portal using your Microsoft Azure account - use the account that's associated with the Azure subscription.
2. On the menu, click **Web Services**. This displays a list of deployed Web services for your subscription.
The plan dashboard provides the following information:
## Manage Classic Web Services > [!NOTE]
-> The procedures in this section are relevant to managing Classic web services through the Azure Machine Learning Web Services portal. For information on managing Classic Web services through the Machine Learning Studio (classic) and the Azure portal, see [Manage an Azure Machine Learning Studio (classic) workspace](manage-workspace.md).
+> The procedures in this section are relevant to managing Classic web services through the Machine Learning Web Services portal. For information on managing Classic Web services through the Machine Learning Studio (classic) and the Azure portal, see [Manage a Machine Learning Studio (classic) workspace](manage-workspace.md).
> > To manage your Classic Web
-1. Sign in to the [Microsoft Azure Machine Learning Web Services](https://services.azureml.net/quickstart) portal using your Microsoft Azure account - use the account that's associated with the Azure subscription.
+1. Sign in to the [Machine Learning Web Services](https://services.azureml.net/quickstart) portal using your Microsoft Azure account - use the account that's associated with the Azure subscription.
2. On the menu, click **Classic Web Services**. To manage a Classic Web Service, click **Classic Web Services**. From the Classic Web Services page you can:
machine-learning Manage Web Service Endpoints Using Api Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-web-service-endpoints-using-api-management.md
Last updated 11/03/2017
-# Manage Azure Machine Learning Studio (classic) web services using API Management
+# Manage Machine Learning Studio (classic) web services using API Management
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) ## Overview
-This guide shows you how to quickly get started using API Management to manage your Azure Machine Learning Studio (classic) web services.
+This guide shows you how to quickly get started using API Management to manage your Machine Learning Studio (classic) web services.
## What is Azure API Management? Azure API Management is an Azure service that lets you manage your REST API endpoints by defining user access, usage throttling, and dashboard monitoring. See the [Azure API management site](https://azure.microsoft.com/services/api-management/) for more details. To get started with Azure API Management, see [the import and publish guide](../../api-management/import-and-publish.md). This other guide, which this guide is based on, covers more topics, including notification configurations, tier pricing, response handling, user authentication, creating products, developer subscriptions, and usage dashboarding.
To complete this guide, you need:
## Create an API Management instance
-You can manage your Azure Machine Learning web service with an API Management instance.
+You can manage your Machine Learning web service with an API Management instance.
1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **+ Create a resource**.
machine-learning Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-workspace.md
Title: 'ML Studio (classic): Manage workspaces - Azure'
-description: Manage access to Azure Machine Learning Studio (classic) workspaces, and deploy and manage Machine Learning API web services
+description: Manage access to Machine Learning Studio (classic) workspaces, and deploy and manage Machine Learning API web services
Last updated 02/27/2017
-# Manage an Azure Machine Learning Studio (classic) workspace
+# Manage a Machine Learning Studio (classic) workspace
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) > [!NOTE]
-> For information on managing Web services in the Machine Learning Web Services portal, see [Manage a Web service using the Azure Machine Learning Web Services portal](manage-new-webservice.md).
+> For information on managing Web services in the Machine Learning Web Services portal, see [Manage a Web service using the Machine Learning Web Services portal](manage-new-webservice.md).
> >
In addition to the standard resource management information and options availabl
- View **Properties** - This page displays the workspace and resource information, and you can change the subscription and resource group that this workspace is connected with. - **Resync Storage Keys** - The workspace maintains keys to the storage account. If the storage account changes keys, then you can click **Resync keys** to synchronize the keys with the workspace.
-To manage the web services associated with this Studio (classic) workspace, use the Machine Learning Web Services portal. See [Manage a Web service using the Azure Machine Learning Web Services portal](manage-new-webservice.md) for complete information.
+To manage the web services associated with this Studio (classic) workspace, use the Machine Learning Web Services portal. See [Manage a Web service using the Machine Learning Web Services portal](manage-new-webservice.md) for complete information.
> [!NOTE] > To deploy or manage New web services you must be assigned a contributor or administrator role on the subscription to which the web service is deployed. If you invite another user to a machine learning Studio (classic) workspace, you must assign them to a contributor or administrator role on the subscription before they can deploy or manage web services.
machine-learning Migrate Execute R Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-execute-r-script.md
Last updated 03/08/2021
# Migrate Execute R Script modules in Studio (classic)
-In this article, you learn how to rebuild a Studio (classic) **Execute R Script** module in Azure Machine Learning.
+In this article, you learn how to rebuild a Studio (classic) **Execute R Script** module in [Azure Machine Learning](../index.yml).
For more information on migrating from Studio (classic), see the [migration overview article](migrate-overview.md).
See the other articles in the Studio (classic) migration series:
1. [Migrate dataset](migrate-register-dataset.md). 1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md). 1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
-1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. [Integrate a Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
1. **Migrate Execute R Script modules**.
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-overview.md
Last updated 03/08/2021
-# Migrate to Azure Machine Learning
+# Migrate to Azure Machine Learning
-In this article, you learn how to migrate Studio (classic) assets to Azure Machine Learning. At this time, to migrate resources, you must manually rebuild your experiments.
+In this article, you learn how to migrate Machine Learning Studio (classic) assets to [Azure Machine Learning](../index.yml). At this time, to migrate resources, you must manually rebuild your experiments.
Azure Machine Learning provides a modernized data science platform that combines no-code and code-first approaches. To learn more about the differences between Studio (classic) and Azure Machine Learning, see the [Assess Azure Machine Learning](#step-1-assess-azure-machine-learning) section.
To migrate to Azure Machine Learning, we recommend the following approach:
> * Step 2: Create a migration plan > * Step 3: Rebuild experiments and web services > * Step 4: Integrate client apps
-> * Step 5: Clean up Studio (classic) assets
+> * Step 5: Clean up ML Studio (classic) assets
## Step 1: Assess Azure Machine Learning
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-rebuild-experiment.md
Last updated 03/08/2021
# Rebuild a Studio (classic) experiment in Azure Machine Learning
-In this article, you learn how to rebuild a Studio (classic) experiment in Azure Machine Learning. For more information on migrating from Studio (classic), see [the migration overview article](migrate-overview.md).
+In this article, you learn how to rebuild a Studio (classic) experiment in [Azure Machine Learning](../index.yml). For more information on migrating from Studio (classic), see [the migration overview article](migrate-overview.md).
Studio (classic) **experiments** are similar to **pipelines** in Azure Machine Learning. However, in Azure Machine Learning pipelines are built on the same back-end that powers the SDK. This means that you have two options for machine learning development: the drag-and-drop designer or code-first SDKs.
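For the code-first option, a pipeline can be defined and submitted with the Azure Machine Learning SDK for Python (v1). The sketch below is a hedged illustration only; the workspace config, compute target name, source directory, and script name are placeholders:

```python
# Minimal code-first pipeline sketch (Azure ML SDK v1); names below are placeholders.
from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()                      # uses a config.json downloaded from the portal
train_step = PythonScriptStep(
    name="train-model",
    script_name="train.py",                       # your rebuilt training logic
    source_directory="./src",
    compute_target="cpu-cluster",                 # an existing compute cluster in the workspace
)
pipeline = Pipeline(workspace=ws, steps=[train_step])
run = Experiment(ws, "studio-classic-rebuild").submit(pipeline)
run.wait_for_completion(show_output=True)
```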
machine-learning Migrate Rebuild Integrate With Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-rebuild-integrate-with-client-app.md
Last updated 03/08/2021
# Consume pipeline endpoints from client applications
-In this article, you learn how to integrate client applications with Azure Machine Learning endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](../how-to-consume-web-service.md).
+In this article, you learn how to integrate client applications with [Azure Machine Learning](../index.yml) endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](../how-to-consume-web-service.md).
This article is part of the Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see [the migration overview article](migrate-overview.md).
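As a hedged sketch of what client-side integration can look like, the snippet below posts to a published pipeline endpoint with the `requests` library. The endpoint URL, bearer token, and request body are placeholders, and the exact payload depends on how the endpoint was published:

```python
# Hypothetical client-side call to an Azure Machine Learning pipeline REST endpoint.
# endpoint_url and token are placeholders; obtain a real AAD token for your tenant and
# copy the endpoint URL from the studio UI or from published_pipeline.endpoint.
import requests

endpoint_url = "<published pipeline REST endpoint URL>"
token = "<AAD bearer token>"

response = requests.post(
    endpoint_url,
    headers={"Authorization": f"Bearer {token}"},
    json={"ExperimentName": "client-app-scoring"},   # payload shape may vary by endpoint
)
response.raise_for_status()
print(response.json())
```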
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-rebuild-web-service.md
Last updated 03/08/2021
# Rebuild a Studio (classic) web service in Azure Machine Learning
-In this article, you learn how to rebuild a Studio (classic) web service as an **endpoint** in Azure Machine Learning.
+In this article, you learn how to rebuild a Studio (classic) web service as an **endpoint** in [Azure Machine Learning](../index.yml).
Use Azure Machine Learning pipeline endpoints to make predictions, retrain models, or run any generic pipeline. The REST endpoint lets you run pipelines from any platform.
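A hedged sketch of the publishing step with the Python SDK (v1) is shown below. It assumes a `Pipeline` object like the one sketched earlier in this migration series, and the name and description strings are placeholders:

```python
# Publish an existing pipeline so it gets a REST endpoint (SDK v1 sketch; `pipeline` assumed defined).
published = pipeline.publish(
    name="credit-risk-scoring",
    description="Rebuilt Studio (classic) web service",
)
print(published.endpoint)   # REST URL a client application can POST to
```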
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-register-dataset.md
Last updated 02/04/2021
# Migrate a Studio (classic) dataset to Azure Machine Learning
-In this article, you learn how to migrate a Studio (classic) dataset to Azure Machine Learning. For more information on migrating from Studio (classic), see [the migration overview article](migrate-overview.md).
+In this article, you learn how to migrate a Studio (classic) dataset to [Azure Machine Learning](../index.yml). For more information on migrating from Studio (classic), see [the migration overview article](migrate-overview.md).
You have three options to migrate a dataset to Azure Machine Learning. Read each section to determine which option is best for your scenario.
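If you choose the code-first route, registering exported Studio (classic) data as an Azure Machine Learning dataset with the Python SDK (v1) might look like the following hedged sketch; the datastore path and dataset name are placeholders:

```python
# Register a migrated CSV file as a tabular dataset (SDK v1 sketch; path and name are placeholders).
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()
datastore = ws.get_default_datastore()
dataset = Dataset.Tabular.from_delimited_files(path=(datastore, "migrated/studio-classic-data.csv"))
dataset.register(workspace=ws, name="studio-classic-data", create_new_version=True)
```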
machine-learning Model Progression Experiment To Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/model-progression-experiment-to-web-service.md
Title: 'ML Studio (classic): How a model becomes a web service - Azure'
-description: An overview of the mechanics of how your Azure Machine Learning Studio (classic) model progresses from a development experiment to a Web service.
+description: An overview of the mechanics of how your Machine Learning Studio (classic) model progresses from a development experiment to a Web service.
Last updated 03/20/2017
# How a Machine Learning Studio (classic) model progresses from an experiment to a Web service
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-Azure Machine Learning Studio (classic) provides an interactive canvas that allows you to develop, run, test, and iterate an ***experiment*** representing a predictive analysis model. There are a wide variety of modules available that can:
+Machine Learning Studio (classic) provides an interactive canvas that allows you to develop, run, test, and iterate an ***experiment*** representing a predictive analysis model. There are a wide variety of modules available that can:
* Input data into your experiment * Manipulate the data
Azure Machine Learning Studio (classic) provides an interactive canvas that allo
* Evaluate the results * Output final values
-Once you're satisfied with your experiment, you can deploy it as a ***Classic Azure Machine Learning Web service*** or a ***New Azure Machine Learning Web service*** so that users can send it new data and receive back results.
+Once you're satisfied with your experiment, you can deploy it as a ***Machine Learning (classic) Web service*** or an ***Azure Machine Learning Web service*** so that users can send it new data and receive back results.
In this article, we give an overview of the mechanics of how your Machine Learning model progresses from a development experiment to an operationalized Web service.
In this article, we give an overview of the mechanics of how your Machine Learni
> >
-While Azure Machine Learning Studio (classic) is designed to help you develop and deploy a *predictive analysis model*, it's possible to use Studio (classic) to develop an experiment that doesn't include a predictive analysis model. For example, an experiment might just input data, manipulate it, and then output the results. Just like a predictive analysis experiment, you can deploy this non-predictive experiment as a Web service, but it's a simpler process because the experiment isn't training or scoring a machine learning model. While it's not the typical to use Studio (classic) in this way, we'll include it in the discussion so that we can give a complete explanation of how Studio (classic) works.
+While Machine Learning Studio (classic) is designed to help you develop and deploy a *predictive analysis model*, it's possible to use Studio (classic) to develop an experiment that doesn't include a predictive analysis model. For example, an experiment might just input data, manipulate it, and then output the results. Just like a predictive analysis experiment, you can deploy this non-predictive experiment as a Web service, but it's a simpler process because the experiment isn't training or scoring a machine learning model. While it's not typical to use Studio (classic) in this way, we'll include it in the discussion so that we can give a complete explanation of how Studio (classic) works.
## Developing and deploying a predictive Web service Here are the stages that a typical solution follows as you develop and deploy it using Machine Learning Studio (classic):
Here are the stages that a typical solution follows as you develop and deploy it
*Figure 1 - Stages of a typical predictive analysis model* ### The training experiment
-The ***training experiment*** is the initial phase of developing your Web service in Machine Learning Studio (classic). The purpose of the training experiment is to give you a place to develop, test, iterate, and eventually train a machine learning model. You can even train multiple models simultaneously as you look for the best solution, but once you're done experimenting you'll select a single trained model and eliminate the rest from the experiment. For an example of developing a predictive analysis experiment, see [Develop a predictive analytics solution for credit risk assessment in Azure Machine Learning Studio (classic)](tutorial-part1-credit-risk.md).
+The ***training experiment*** is the initial phase of developing your Web service in Machine Learning Studio (classic). The purpose of the training experiment is to give you a place to develop, test, iterate, and eventually train a machine learning model. You can even train multiple models simultaneously as you look for the best solution, but once you're done experimenting you'll select a single trained model and eliminate the rest from the experiment. For an example of developing a predictive analysis experiment, see [Develop a predictive analytics solution for credit risk assessment in Machine Learning Studio (classic)](tutorial-part1-credit-risk.md).
### The predictive experiment Once you have a trained model in your training experiment, click **Set Up Web Service** and select **Predictive Web Service** in Machine Learning Studio (classic) to initiate the process of converting your training experiment to a ***predictive experiment***. The purpose of the predictive experiment is to use your trained model to score new data, with the goal of eventually becoming operationalized as an Azure Web service.
There may be more changes you want to make to get your predictive experiment rea
In this conversion process, the training experiment is not discarded. When the process is complete, you have two tabs in Studio (classic): one for the training experiment and one for the predictive experiment. This way you can make changes to the training experiment before you deploy your Web service and rebuild the predictive experiment. Or you can save a copy of the training experiment to start another line of experimentation. > [!NOTE]
-> When you click **Predictive Web Service** you start an automatic process to convert your training experiment to a predictive experiment, and this works well in most cases. If your training experiment is complex (for example, you have multiple paths for training that you join together), you might prefer to do this conversion manually. For more information, see [How to prepare your model for deployment in Azure Machine Learning Studio (classic)](deploy-a-machine-learning-web-service.md).
+> When you click **Predictive Web Service** you start an automatic process to convert your training experiment to a predictive experiment, and this works well in most cases. If your training experiment is complex (for example, you have multiple paths for training that you join together), you might prefer to do this conversion manually. For more information, see [How to prepare your model for deployment in Machine Learning Studio (classic)](deploy-a-machine-learning-web-service.md).
> > ### The web service
-Once you're satisfied that your predictive experiment is ready, you can deploy your service as either a Classic Web service or a New Web service based on Azure Resource Manager. To operationalize your model by deploying it as a *Classic Machine Learning Web service*, click **Deploy Web Service** and select **Deploy Web Service [Classic]**. To deploy as *New Machine Learning Web service*, click **Deploy Web Service** and select **Deploy Web Service [New]**. Users can now send data to your model using the Web service REST API and receive back the results. For more information, see [How to consume an Azure Machine Learning Web service](consume-web-services.md).
+Once you're satisfied that your predictive experiment is ready, you can deploy your service as either a Classic Web service or a New Web service based on Azure Resource Manager. To operationalize your model by deploying it as a *Classic Machine Learning Web service*, click **Deploy Web Service** and select **Deploy Web Service [Classic]**. To deploy as *New Machine Learning Web service*, click **Deploy Web Service** and select **Deploy Web Service [New]**. Users can now send data to your model using the Web service REST API and receive back the results. For more information, see [How to consume a Machine Learning Web service](consume-web-services.md).
## The non-typical case: creating a non-predictive Web service If your experiment does not train a predictive analysis model, then you don't need to create both a training experiment and a scoring experiment - there's just one experiment, and you can deploy it as a Web service. Machine Learning Studio (classic) detects whether your experiment contains a predictive model by analyzing the modules you've used.
If you want to make changes to your original predictive experiment, such as sele
## Next steps For more details on the process of developing and experiment, see the following articles:
-* converting the experiment - [How to prepare your model for deployment in Azure Machine Learning Studio (classic)](deploy-a-machine-learning-web-service.md)
-* deploying the Web service - [Deploy an Azure Machine Learning web service](deploy-a-machine-learning-web-service.md)
+* converting the experiment - [How to prepare your model for deployment in Machine Learning Studio (classic)](deploy-a-machine-learning-web-service.md)
+* deploying the Web service - [Deploy a Machine Learning web service](deploy-a-machine-learning-web-service.md)
* retraining the model - [Retrain Machine Learning models programmatically](./retrain-machine-learning-model.md) For examples of the whole process, see:
-* [Machine learning tutorial: Create your first experiment in Azure Machine Learning Studio (classic)](create-experiment.md)
-* [Walkthrough: Develop a predictive analytics solution for credit risk assessment in Azure Machine Learning](tutorial-part1-credit-risk.md)
+* [Machine learning tutorial: Create your first experiment in Machine Learning Studio (classic)](create-experiment.md)
+* [Walkthrough: Develop a predictive analytics solution for credit risk assessment in Machine Learning Studio (classic)](tutorial-part1-credit-risk.md)
machine-learning Powershell Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/powershell-module.md
Title: 'ML Studio (classic): PowerShell modules - Azure'
-description: Use PowerShell to create and manage Azure Machine Learning Studio (classic) workspaces, experiments, web services, and more.
+description: Use PowerShell to create and manage Machine Learning Studio (classic) workspaces, experiments, web services, and more.
Last updated 04/25/2019
-# PowerShell modules for Azure Machine Learning Studio (classic)
+# PowerShell modules for Machine Learning Studio (classic)
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
You can interact with Studio (classic) resources using three PowerShell modules:
* [Azure PowerShell Az](#az-rm) released in 2018, includes all functionality of AzureRM, although with different cmdlet names * [AzureRM](#az-rm) released in 2016, replaced by PowerShell Az
-* [Azure Machine Learning PowerShell classic](#classic) released in 2016
+* [Machine Learning PowerShell classic](#classic) released in 2016
Although these PowerShell modules have some similarities, each is designed for particular scenarios. This article describes the differences between the PowerShell modules, and helps you decide which ones to choose.
machine-learning R Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/r-get-started.md
Title: Use R with Machine Learning Studio (classic) - Azure
-description: Use this R programming tutorial to get started with Azure Machine Learning Studio (classic) in R to create a forecasting solution.
+description: Use this R programming tutorial to get started with Machine Learning Studio (classic) in R to create a forecasting solution.
Last updated 03/01/2019
-# Get started with Azure Machine Learning Studio (classic) in R
+# Get started with Machine Learning Studio (classic) in R
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
<!-- Stephen F Elston, Ph.D. -->
-In this tutorial, you learn how to use Azure Machine Learning Studio (classic) to create, test, and execute R code. In the end, you'll have a complete forecasting solution.
+In this tutorial, you learn how to use Machine Learning Studio (classic) to create, test, and execute R code. In the end, you'll have a complete forecasting solution.
> [!div class="checklist"] > * Create code for data cleaning and transformation.
cadairydata <- maml.mapInputPort(1)
str(cadairydata) pairs(~ Cotagecheese.Prod + Icecream.Prod + Milk.Prod + N.CA.Fat.Price, data = cadairydata) ## The following line should be executed only when running in
-## Azure Machine Learning Studio (classic)
+## Machine Learning Studio (classic)
maml.mapOutputPort('cadairydata') ```
We already discussed loading datasets in [Load the dataset](#loading). After you
str(cadairydata) pairs(~ Cotagecheese.Prod + Icecream.Prod + Milk.Prod + N.CA.Fat.Price, data = cadairydata) ## The following line should be executed only when running in
- ## Azure Machine Learning Studio (classic)
+ ## Machine Learning Studio (classic)
maml.mapOutputPort('cadairydata') ```
cadairydata <- maml.mapInputPort(1)
cadairydata$Month <- as.factor(cadairydata$Month) str(cadairydata) # Check the result ## The following line should be executed only when running in
-## Azure Machine Learning Studio (classic)
+## Machine Learning Studio (classic)
maml.mapOutputPort('cadairydata') ```
outframe
## WARNING!
-## The following line works only in Azure Machine Learning Studio (classic)
+## The following line works only in Machine Learning Studio (classic)
## When running in RStudio, this code will result in an error #maml.mapOutputPort('outframe') ```
rowNames = c("Trend Model", "Seasonal Model"),
RMS.df ## The following line should be executed only when running in
-## Azure Machine Learning Studio (classic)
+## Machine Learning Studio (classic)
maml.mapOutputPort('RMS.df') ```
machine-learning Retrain Classic Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/retrain-classic-web-service.md
Title: 'ML Studio (classic): retrain classic web service - Azure'
-description: Learn how to retrain a model and update a classic web service to use the newly trained model in Azure Machine Learning Studio (classic).
+description: Learn how to retrain a model and update a classic web service to use the newly trained model in Machine Learning Studio (classic).
Last updated 02/14/2019
# Retrain and deploy a classic Studio (classic) web service
-**APPLIES TO:** ![Green check mark.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
Retraining machine learning models is one way to ensure they stay accurate and based on the most relevant data available. This article will show you how to retrain a classic Studio (classic) web service. For a guide on how to retrain a new Studio (classic) web service, [view this how-to article.](retrain-machine-learning-model.md)
machine-learning Retrain Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/retrain-machine-learning-model.md
Title: 'ML Studio (classic): retrain a web service - Azure'
-description: Learn how to update a web service to use a newly trained machine learning model in Azure Machine Learning Studio (classic).
+description: Learn how to update a web service to use a newly trained machine learning model in Machine Learning Studio (classic).
Use the following steps to deploy a retraining web service:
Now, you deploy the training experiment as a retraining web service that outputs a trained model and model evaluation results. 1. At the bottom of the experiment canvas, click **Set Up Web Service**
-1. Select **Deploy Web Service [New]**. The Azure Machine Learning Web Services portal opens to the **Deploy Web Service** page.
+1. Select **Deploy Web Service [New]**. The Machine Learning Web Services portal opens to the **Deploy Web Service** page.
1. Type a name for your web service and choose a payment plan. 1. Select **Deploy**.
Use the following steps to call the retraining APIs:
Add the NuGet package Microsoft.AspNet.WebApi.Client, as specified in the comments. To add the reference to Microsoft.WindowsAzure.Storage.dll, you might need to install the [client library for Azure Storage services](https://www.nuget.org/packages/WindowsAzure.Storage).
-The following screenshot shows the **Consume** page in the Azure Machine Learning Web Services portal.
+The following screenshot shows the **Consume** page in the Machine Learning Web Services portal.
![Consume page](media/retrain-machine-learning/machine-learning-retrain-models-consume-page.png)
Type : Microsoft.MachineLearning/webServices
Tags : {} ```
-Alternatively, to determine the resource group name of an existing web service, sign in to the Azure Machine Learning Web Services portal. Select the web service. The resource group name is the fifth element of the URL of the web service, just after the *resourceGroups* element. In the following example, the resource group name is Default-MachineLearning-SouthCentralUS.
+Alternatively, to determine the resource group name of an existing web service, sign in to the Machine Learning Web Services portal. Select the web service. The resource group name is the fifth element of the URL of the web service, just after the *resourceGroups* element. In the following example, the resource group name is Default-MachineLearning-SouthCentralUS.
`https://services.azureml.net/subscriptions/<subscription ID>/resourceGroups/Default-MachineLearning-SouthCentralUS/providers/Microsoft.MachineLearning/webServices/RetrainSamplePre.2016.8.17.0.3.51.237`
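Because the resource group name is simply the path segment that follows `resourceGroups`, you can also pull it out of the URL programmatically; a minimal sketch using the example URL above:

```python
# Extract the resource group name from a web service URL (the segment after "resourceGroups").
url = (
    "https://services.azureml.net/subscriptions/<subscription ID>/resourceGroups/"
    "Default-MachineLearning-SouthCentralUS/providers/Microsoft.MachineLearning/"
    "webServices/RetrainSamplePre.2016.8.17.0.3.51.237"
)
parts = url.split("/")
resource_group = parts[parts.index("resourceGroups") + 1]
print(resource_group)   # Default-MachineLearning-SouthCentralUS
```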
machine-learning Sample Experiments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/sample-experiments.md
Title: 'ML Studio (classic): start experiments from examples - Azure'
-description: Learn how to use example machine learning experiments to create new experiments with Azure AI Gallery and Azure Machine Learning Studio (classic).
+description: Learn how to use example machine learning experiments to create new experiments with Azure AI Gallery and Machine Learning Studio (classic).
Last updated 01/05/2018
-# Create Azure Machine Learning Studio (classic) experiments from working examples in Azure AI Gallery
+# Create Machine Learning Studio (classic) experiments from working examples in Azure AI Gallery
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
Last updated 01/05/2018
Learn how to start with example experiments from [Azure AI Gallery](https://gallery.azure.ai/) instead of creating machine learning experiments from scratch. You can use the examples to build your own machine learning solution.
-The gallery has example experiments by the Microsoft Azure Machine Learning Studio (classic) team as well as examples shared by the Machine Learning community. You also can ask questions or post comments about experiments.
+The gallery has example experiments by the Machine Learning Studio (classic) team as well as examples shared by the Machine Learning community. You also can ask questions or post comments about experiments.
To see how to use the gallery, watch the 3-minute video [Copy other people's work to do data science](data-science-for-beginners-copy-other-peoples-work-to-do-data-science.md) from the series [Data Science for Beginners](data-science-for-beginners-the-5-questions-data-science-answers.md).
machine-learning Studio Classic Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/studio-classic-overview.md
Last updated 08/19/2020
# What can I do with Machine Learning Studio (classic)?
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
[!INCLUDE [Designer notice](../../../includes/designer-notice.md)]
A module may have a set of parameters that you can use to configure the module's
For some help navigating through the large library of machine learning algorithms available, see [How to choose algorithms for Microsoft Machine Learning Studio (classic)](../how-to-select-algorithms.md). ## Deploying a predictive analytics web service
-Once your predictive analytics model is ready, you can deploy it as a web service right from Machine Learning Studio (classic). For more information on this process, see [Deploy an Azure Machine Learning web service](deploy-a-machine-learning-web-service.md).
+Once your predictive analytics model is ready, you can deploy it as a web service right from Machine Learning Studio (classic). For more information on this process, see [Deploy a Machine Learning web service](deploy-a-machine-learning-web-service.md).
## Next steps You can learn the basics of predictive analytics and machine learning using a [step-by-step quickstart](create-experiment.md) and by [building on samples](sample-experiments.md).
machine-learning Support Aml Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/support-aml-studio.md
Title: 'ML Studio (classic) tutorial support & training - Azure'
-description: Get support and training and provide feedback for Azure Machine Learning Studio (classic)
+description: Get support and training and provide feedback for Machine Learning Studio (classic).
Last updated 01/18/2019
-# Get support and training for Azure Machine Learning Studio (classic)
+# Get support and training for Machine Learning Studio (classic)
-**APPLIES TO:** ![Green check mark.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-This article provides information on how to learn more about Azure Machine Learning Studio (classic) and get support for your issues and questions.
+This article provides information on how to learn more about Machine Learning Studio (classic) and get support for your issues and questions.
## Learn more about Studio (classic)
Check out these support resources:
+ **Technical support for Azure Customers**: [Submit and manage support requests](../../azure-portal/supportability/how-to-create-azure-support-request.md) through the Azure portal.
-+ **User forum**: Ask questions, answer questions, and connect with other users in the [Azure Machine Learning Studio (classic) support forum](/answers/topics/azure-machine-learning.html).
++ **User forum**: Ask questions, answer questions, and connect with other users in the [Machine Learning Studio (classic) support forum](/answers/topics/azure-machine-learning.html).
+ **Stack Overflow**: Visit the Azure Machine Learning community on [StackOverflow](https://stackoverflow.com/questions/tagged/azure-machine-learning) tagged with "Azure-Machine-Learning".
machine-learning Tutorial Part1 Credit Risk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/tutorial-part1-credit-risk.md
Title: 'ML Studio (classic) tutorial: Predict credit risk - Azure'
-description: A detailed tutorial showing how to create a predictive analytics solution for credit risk assessment in Azure Machine Learning Studio (classic). This tutorial is part one of a three-part tutorial series. It shows how to create a workspace, upload data, and create an experiment.
+description: A detailed tutorial showing how to create a predictive analytics solution for credit risk assessment in Machine Learning Studio (classic).
keywords: credit risk, predictive analytics solution,risk assessment
Last updated 02/11/2019
-# Tutorial 1: Predict credit risk - Azure Machine Learning Studio (classic)
+# Tutorial 1: Predict credit risk - Machine Learning Studio (classic)
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
[!INCLUDE [Designer notice](../../../includes/designer-notice.md)]
-In this tutorial, you take an extended look at the process of developing a predictive analytics solution. You develop a simple model in Machine Learning Studio (classic). You then deploy the model as an Azure Machine Learning web service. This deployed model can make predictions using new data. This tutorial is **part one of a three-part tutorial series**.
+In this tutorial, you take an extended look at the process of developing a predictive analytics solution. You develop a simple model in Machine Learning Studio (classic). You then deploy the model as a Machine Learning web service. This deployed model can make predictions using new data. This tutorial is **part one of a three-part tutorial series**.
Suppose you need to predict an individual's credit risk based on the information they gave on a credit application.
-Credit risk assessment is a complex problem, but this tutorial will simplify it a bit. You'll use it as an example of how you can create a predictive analytics solution using Microsoft Azure Machine Learning Studio (classic). You'll use Azure Machine Learning Studio (classic) and a Machine Learning web service for this solution.
+Credit risk assessment is a complex problem, but this tutorial will simplify it a bit. You'll use it as an example of how you can create a predictive analytics solution using Machine Learning Studio (classic). You'll use Machine Learning Studio (classic) and a Machine Learning web service for this solution.
In this three-part tutorial, you start with publicly available credit risk data. You then develop and train a predictive model. Finally you deploy the model as a web service.
You can then use this experiment to [train models in part 2](tutorial-part2-cred
This tutorial assumes that you've used Machine Learning Studio (classic) at least once before, and that you have some understanding of machine learning concepts. But it doesn't assume you're an expert in either.
-If you've never used **Azure Machine Learning Studio (classic)** before, you might want to start with the quickstart, [Create your first data science experiment in Azure Machine Learning Studio (classic)](create-experiment.md). The quickstart takes you through Machine Learning Studio (classic) for the first time. It shows you the basics of how to drag-and-drop modules onto your experiment, connect them together, run the experiment, and look at the results.
+If you've never used **Machine Learning Studio (classic)** before, you might want to start with the quickstart, [Create your first data science experiment in Machine Learning Studio (classic)](create-experiment.md). The quickstart takes you through Machine Learning Studio (classic) for the first time. It shows you the basics of how to drag-and-drop modules onto your experiment, connect them together, run the experiment, and look at the results.
> [!TIP]
If you've never used **Azure Machine Learning Studio (classic)** before, you mig
## Create a Machine Learning Studio (classic) workspace
-To use Machine Learning Studio (classic), you need to have a Microsoft Azure Machine Learning Studio (classic) workspace. This workspace contains the tools you need to create, manage, and publish experiments.
+To use Machine Learning Studio (classic), you need to have a Machine Learning Studio (classic) workspace. This workspace contains the tools you need to create, manage, and publish experiments.
-To create a workspace, see [Create and share an Azure Machine Learning Studio (classic) workspace](create-workspace.md).
+To create a workspace, see [Create and share a Machine Learning Studio (classic) workspace](create-workspace.md).
After your workspace is created, open Machine Learning Studio (classic) ([https://studio.azureml.net/Home](https://studio.azureml.net/Home)). If you have more than one workspace, you can select the workspace in the toolbar in the upper-right corner of the window.
You can manage datasets that you've uploaded to Studio (classic) by clicking the
![Manage datasets](./media/tutorial-part1-credit-risk/dataset-list.png)
-For more information about importing other types of data into an experiment, see [Import your training data into Azure Machine Learning Studio (classic)](import-data.md).
+For more information about importing other types of data into an experiment, see [Import your training data into Machine Learning Studio (classic)](import-data.md).
## Create an experiment
machine-learning Tutorial Part2 Credit Risk Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/tutorial-part2-credit-risk-train.md
Title: 'ML Studio (classic) tutorial 2: Train credit risk models - Azure'
-description: A detailed tutorial showing how to create a predictive analytics solution for credit risk assessment in Azure Machine Learning Studio (classic). This tutorial is part two of a three-part tutorial series. It shows how to train and evaluate models.
+description: This tutorial is part two of a three-part tutorial series for Machine Learning Studio (classic). It shows how to train and evaluate models.
keywords: credit risk, predictive analytics solution,risk assessment
Last updated 02/11/2019
-# Tutorial 2: Train credit risk models - Azure Machine Learning Studio (classic)
+# Tutorial 2: Train credit risk models - Machine Learning Studio (classic)
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-In this tutorial, you take an extended look at the process of developing a predictive analytics solution. You develop a simple model in Machine Learning Studio (classic). You then deploy the model as an Azure Machine Learning web service. This deployed model can make predictions using new data. This tutorial is **part two of a three-part tutorial series**.
+In this tutorial, you take an extended look at the process of developing a predictive analytics solution. You develop a simple model in Machine Learning Studio (classic). You then deploy the model as a Machine Learning web service. This deployed model can make predictions using new data. This tutorial is **part two of a three-part tutorial series**.
Suppose you need to predict an individual's credit risk based on the information they gave on a credit application.
-Credit risk assessment is a complex problem, but this tutorial will simplify it a bit. You'll use it as an example of how you can create a predictive analytics solution using Microsoft Azure Machine Learning Studio (classic). You'll use Azure Machine Learning Studio (classic) and a Machine Learning web service for this solution.
+Credit risk assessment is a complex problem, but this tutorial will simplify it a bit. You'll use it as an example of how you can create a predictive analytics solution using Machine Learning Studio (classic). You'll use Machine Learning Studio (classic) and a Machine Learning web service for this solution.
In this three-part tutorial, you start with publicly available credit risk data. You then develop and train a predictive model. Finally you deploy the model as a web service.
Complete [part one of the tutorial](tutorial-part1-credit-risk.md).
## <a name="train"></a>Train multiple models
-One of the benefits of using Azure Machine Learning Studio (classic) for creating machine learning models is the ability to try more than one type of model at a time in a single experiment and compare the results. This type of experimentation helps you find the best solution for your problem.
+One of the benefits of using Machine Learning Studio (classic) for creating machine learning models is the ability to try more than one type of model at a time in a single experiment and compare the results. This type of experimentation helps you find the best solution for your problem.
In the experiment we're developing in this tutorial, you'll create two different types of models and then compare their scoring results to decide which algorithm you want to use in our final experiment. There are various models you could choose from. To see the models available, expand the **Machine Learning** node in the module palette, and then expand **Initialize Model** and the nodes beneath it. For the purposes of this experiment, you'll select the [Two-Class Support Vector Machine][two-class-support-vector-machine] (SVM) and the [Two-Class Boosted Decision Tree][two-class-boosted-decision-tree] modules.
-> [!TIP]
-> To get help deciding which Machine Learning algorithm best suits the particular problem you're trying to solve, see [How to choose algorithms for Microsoft Azure Machine Learning Studio (classic)](../how-to-select-algorithms.md).
->
->
You'll add both the [Two-Class Boosted Decision Tree][two-class-boosted-decision-tree] module and [Two-Class Support Vector Machine][two-class-support-vector-machine] module in this experiment.
To the right of the graph, click **Scored dataset** or **Scored dataset to compa
By examining these values, you can decide which model is closest to giving you the results you're looking for. You can go back and iterate on your experiment by changing parameter values in the different models. The science and art of interpreting these results and tuning the model performance is outside the scope of this tutorial. For additional help, you might read the following articles:
-- [How to evaluate model performance in Azure Machine Learning Studio (classic)](evaluate-model-performance.md)
-- [Choose parameters to optimize your algorithms in Azure Machine Learning Studio (classic)](algorithm-parameters-optimize.md)
-- [Interpret model results in Azure Machine Learning Studio (classic)](interpret-model-results.md)
+- [How to evaluate model performance in Machine Learning Studio (classic)](evaluate-model-performance.md)
+- [Choose parameters to optimize your algorithms in Machine Learning Studio (classic)](algorithm-parameters-optimize.md)
+- [Interpret model results in Machine Learning Studio (classic)](interpret-model-results.md)
> [!TIP]
> Each time you run the experiment a record of that iteration is kept in the Run History. You can view these iterations, and return to any of them, by clicking **VIEW RUN HISTORY** below the canvas. You can also click **Prior Run** in the **Properties** pane to return to the iteration immediately preceding the one you have open.
The science and art of interpreting these results and tuning the model performan
> You can make a copy of any iteration of your experiment by clicking **SAVE AS** below the canvas.
> Use the experiment's **Summary** and **Description** properties to keep a record of what you've tried in your experiment iterations.
>
-> For more information, see [Manage experiment iterations in Azure Machine Learning Studio (classic)](manage-experiment-iterations.md).
+> For more information, see [Manage experiment iterations in Machine Learning Studio (classic)](manage-experiment-iterations.md).
> >
machine-learning Tutorial Part3 Credit Risk Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/tutorial-part3-credit-risk-deploy.md
Title: 'ML Studio (classic) tutorial 3: Deploy credit risk models - Azure'
-description: A detailed tutorial showing how to create a predictive analytics solution for credit risk assessment in Azure Machine Learning Studio (classic). This tutorial is part three of a three-part tutorial series. It shows how to deploy a model as a web service.
+description: This tutorial is part three of a three-part tutorial series for Machine Learning Studio (classic). It shows how to deploy a model as a web service.
keywords: credit risk, predictive analytics solution,risk assessment, deploy, web service
Last updated 07/27/2020
-# Tutorial 3: Deploy credit risk model - Azure Machine Learning Studio (classic)
+# Tutorial 3: Deploy credit risk model - Machine Learning Studio (classic)
-**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
+**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-In this tutorial, you take an extended look at the process of developing a predictive analytics solution. You develop a simple model in Machine Learning Studio (classic). You then deploy the model as an Azure Machine Learning web service. This deployed model can make predictions using new data. This tutorial is **part three of a three-part tutorial series**.
+In this tutorial, you take an extended look at the process of developing a predictive analytics solution. You develop a simple model in Machine Learning Studio (classic). You then deploy the model as a Machine Learning web service. This deployed model can make predictions using new data. This tutorial is **part three of a three-part tutorial series**.
Suppose you need to predict an individual's credit risk based on the information they gave on a credit application.
-Credit risk assessment is a complex problem, but this tutorial will simplify it a bit. You'll use it as an example of how you can create a predictive analytics solution using Microsoft Azure Machine Learning Studio (classic). You'll use Azure Machine Learning Studio (classic) and a Machine Learning web service for this solution.
+Credit risk assessment is a complex problem, but this tutorial will simplify it a bit. You'll use it as an example of how you can create a predictive analytics solution using Machine Learning Studio (classic). You'll use Machine Learning Studio (classic) and a Machine Learning web service for this solution.
In this three-part tutorial, you start with publicly available credit risk data. You then develop and train a predictive model. Finally you deploy the model as a web service.
To get this model ready for deployment, you need to convert this training experi
you could do this manually, but fortunately all three steps can be accomplished by clicking **Set Up Web Service** at the bottom of the experiment canvas (and selecting the **Predictive Web Service** option).
> [!TIP]
-> If you want more details on what happens when you convert a training experiment to a predictive experiment, see [How to prepare your model for deployment in Azure Machine Learning Studio (classic)](deploy-a-machine-learning-web-service.md).
+> If you want more details on what happens when you convert a training experiment to a predictive experiment, see [How to prepare your model for deployment in Machine Learning Studio (classic)](deploy-a-machine-learning-web-service.md).
When you click **Set Up Web Service**, several things happen:
You can configure the service by clicking the **CONFIGURATION** tab. Here you ca
### Deploy as a New web service
> [!NOTE]
-> To deploy a New web service you must have sufficient permissions in the subscription to which you are deploying the web service. For more information, see [Manage a web service using the Azure Machine Learning Web Services portal](manage-new-webservice.md).
+> To deploy a New web service you must have sufficient permissions in the subscription to which you are deploying the web service. For more information, see [Manage a web service using the Machine Learning Web Services portal](manage-new-webservice.md).
To deploy a New web service derived from our experiment:
-1. Click **Deploy Web Service** below the canvas and select **Deploy Web Service [New]**. Machine Learning Studio (classic) transfers you to the Azure Machine Learning web services **Deploy Experiment** page.
+1. Click **Deploy Web Service** below the canvas and select **Deploy Web Service [New]**. Machine Learning Studio (classic) transfers you to the Machine Learning web services **Deploy Experiment** page.
1. Enter a name for the web service.
You can test a Classic web service either in **Machine Learning Studio (classic)
You can test a New web service only in the **Machine Learning Web Services** portal.
> [!TIP]
-> When testing in the Azure Machine Learning Web Services portal, you can have the portal create sample data that you can use to test the Request-Response service. On the **Configure** page, select "Yes" for **Sample Data Enabled?**. When you open the Request-Response tab on the **Test** page, the portal fills in sample data taken from the original credit risk dataset.
+> When testing in the Machine Learning Web Services portal, you can have the portal create sample data that you can use to test the Request-Response service. On the **Configure** page, select "Yes" for **Sample Data Enabled?**. When you open the Request-Response tab on the **Test** page, the portal fills in sample data taken from the original credit risk dataset.
### Test a Classic web service
You can test a Classic web service in Machine Learning Studio (classic) or in th
#### Test in the Machine Learning Web Services portal
-1. On the **DASHBOARD** page for the web service, click the **Test preview** link under **Default Endpoint**. The test page in the Azure Machine Learning Web Services portal for the web service endpoint opens and asks you for the input data for the service. These are the same columns that appeared in the original credit risk dataset.
+1. On the **DASHBOARD** page for the web service, click the **Test preview** link under **Default Endpoint**. The test page in the Machine Learning Web Services portal for the web service endpoint opens and asks you for the input data for the service. These are the same columns that appeared in the original credit risk dataset.
2. Click **Test Request-Response**.
You can test a Classic web service in Machine Learning Studio (classic) or in th
You can test a New web service only in the Machine Learning Web Services portal.
-1. In the [Azure Machine Learning Web Services](https://services.azureml.net/quickstart) portal, click **Test** at the top of the page. The **Test** page opens and you can input data for the service. The input fields displayed correspond to the columns that appeared in the original credit risk dataset.
+1. In the [Machine Learning Web Services](https://services.azureml.net/quickstart) portal, click **Test** at the top of the page. The **Test** page opens and you can input data for the service. The input fields displayed correspond to the columns that appeared in the original credit risk dataset.
1. Enter a set of data and then click **Test Request-Response**.
The results of the test are displayed on the right-hand side of the page in the
## Manage the web service
-Once you've deployed your web service, whether Classic or New, you can manage it from the [Microsoft Azure Machine Learning Web Services](https://services.azureml.net/quickstart) portal.
+Once you've deployed your web service, whether Classic or New, you can manage it from the [Machine Learning Web Services](https://services.azureml.net/quickstart) portal.
To monitor the performance of your web service:
-1. Sign in to the [Microsoft Azure Machine Learning Web Services](https://services.azureml.net/quickstart) portal
+1. Sign in to the [Machine Learning Web Services](https://services.azureml.net/quickstart) portal
1. Click **Web services**
1. Click your web service
1. Click the **Dashboard**
The Web service is an Azure web service that can receive and return data using R
> [!NOTE]
> Feature column names in Studio (classic) are **case sensitive**. Make sure your input data for invoking the web service has the same column names as in the training dataset.
-For more information on accessing and consuming the web service, see [Consume an Azure Machine Learning Web service with a web app template](./consume-web-services.md).
+For more information on accessing and consuming the web service, see [Consume a Machine Learning Web service with a web app template](./consume-web-services.md).
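As a quick illustration of consuming the Request-Response (RRS) endpoint outside a web app template, here is a minimal Python sketch using the `requests` package. It is hedged, not the tutorial's own sample: the endpoint URL, API key, and the handful of column names shown are placeholders, and a real call must supply every column from your training dataset with identical (case-sensitive) names, copied from the web service's API help or **Consume** page.

```python
import requests

# Placeholders: copy the real URL and API key from your web service's API help / Consume page.
URL = "https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0&details=true"
API_KEY = "<your-api-key>"

# Classic Request-Response payload shape. The column names below are illustrative;
# include the full, exactly cased set of columns from your training dataset.
payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["Status of checking account", "Duration in months", "Credit history"],
            "Values": [["A11", "6", "A34"]],
        }
    },
    "GlobalParameters": {},
}

response = requests.post(
    URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
)
response.raise_for_status()
print(response.json())  # scored results returned by the web service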
In this tutorial, you completed these steps:
You can also develop a custom application to access the web service using starter code provided for you in R, C#, and Python programming languages.
> [!div class="nextstepaction"]
-> [Consume an Azure Machine Learning Web service](consume-web-services.md)
+> [Consume a Machine Learning Web service](consume-web-services.md)
<!-- Module References --> [evaluate-model]: /azure/machine-learning/studio-module-reference/evaluate-model
machine-learning Use Data From An On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/use-data-from-an-on-premises-sql-server.md
Title: 'ML Studio (classic): On-premises SQL Server - Azure'
-description: Use data from a SQL Server database to perform advanced analytics with Azure Machine Learning Studio (classic).
+description: Use data from a SQL Server database to perform advanced analytics with Machine Learning Studio (classic).
Last updated 03/13/2017
-# Perform analytics with Azure Machine Learning Studio (classic) using a SQL Server database
+# Perform analytics with Machine Learning Studio (classic) using a SQL Server database
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-Often enterprises that work with on-premises data would like to take advantage of the scale and agility of the cloud for their machine learning workloads. But they don't want to disrupt their current business processes and workflows by moving their on-premises data to the cloud. Azure Machine Learning Studio (classic) now supports reading your data from a SQL Server database and then training and scoring a model with this data. You no longer have to manually copy and sync the data between the cloud and your on-premises server. Instead, the **Import Data** module in Azure Machine Learning Studio (classic) can now read directly from your SQL Server database for your training and scoring jobs.
+Often enterprises that work with on-premises data would like to take advantage of the scale and agility of the cloud for their machine learning workloads. But they don't want to disrupt their current business processes and workflows by moving their on-premises data to the cloud. Machine Learning Studio (classic) now supports reading your data from a SQL Server database and then training and scoring a model with this data. You no longer have to manually copy and sync the data between the cloud and your on-premises server. Instead, the **Import Data** module in Machine Learning Studio (classic) can now read directly from your SQL Server database for your training and scoring jobs.
-This article provides an overview of how to ingress SQL Server data into Azure Machine Learning Studio (classic). It assumes that you're familiar with Studio (classic) concepts like workspaces, modules, datasets, experiments, *etc.*.
+This article provides an overview of how to ingress SQL Server data into Machine Learning Studio (classic). It assumes that you're familiar with Studio (classic) concepts like workspaces, modules, datasets, experiments, *etc.*.
> [!NOTE]
> This feature is not available for free workspaces. For more
-> information about Machine Learning pricing and tiers, see [Azure Machine
+> information about Machine Learning pricing and tiers, see [Machine
> Learning
-> Pricing](https://azure.microsoft.com/pricing/details/machine-learning/).
+> Studio (classic) Pricing](https://azure.microsoft.com/pricing/details/machine-learning-studio/).
> >
This article provides an overview of how to ingress SQL Server data into Azure M
## Install the Data Factory Self-hosted Integration Runtime
-To access a SQL Server database in Azure Machine Learning Studio (classic), you need
+To access a SQL Server database in Machine Learning Studio (classic), you need
to download and install the Data Factory Self-hosted Integration Runtime, formerly known as the Data Management Gateway. When you configure the connection in Machine Learning Studio (classic), you have the opportunity to download and install the Integration Runtime (IR) using the **Download and register data gateway** dialog described below.
Consider the following when setting up and using a Data Factory Self-hosted Inte
* You configure an IR for only one workspace at a time. Currently, IRs can't be shared across workspaces.
* You can configure multiple IRs for a single workspace. For example, you may want to use an IR that's connected to your test data sources during development and a production IR when you're ready to operationalize.
* The IR does not need to be on the same machine as the data source. But staying closer to the data source reduces the time for the gateway to connect to the data source. We recommend that you install the IR on a machine that's different from the one that hosts the on-premises data source so that the gateway and data source don't compete for resources.
-* If you already have an IR installed on your computer serving Power BI or Azure Data Factory scenarios, install a separate IR for Azure Machine Learning Studio (classic) on another computer.
+* If you already have an IR installed on your computer serving Power BI or Azure Data Factory scenarios, install a separate IR for Machine Learning Studio (classic) on another computer.
> [!NOTE]
> You can't run Data Factory Self-hosted Integration Runtime and Power BI Gateway on the same computer.
>
>
-* You need to use the Data Factory Self-hosted Integration Runtime for Azure Machine Learning Studio (classic) even if you are using Azure ExpressRoute for other data. You should treat your data source as an on-premises data source (that's behind a firewall) even when you use ExpressRoute. Use the Data Factory Self-hosted Integration Runtime to establish connectivity between Machine Learning and the data source.
+* You need to use the Data Factory Self-hosted Integration Runtime for Machine Learning Studio (classic) even if you are using Azure ExpressRoute for other data. You should treat your data source as an on-premises data source (that's behind a firewall) even when you use ExpressRoute. Use the Data Factory Self-hosted Integration Runtime to establish connectivity between Machine Learning and the data source.
You can find detailed information on installation prerequisites, installation steps, and troubleshooting tips in the article [Integration Runtime in Data Factory](../../data-factory/concepts-integration-runtime.md).
-## <span id="using-the-data-gateway-step-by-step-walk" class="anchor"><span id="_Toc450838866" class="anchor"></span></span>Ingress data from your SQL Server database into Azure Machine Learning
+## <span id="using-the-data-gateway-step-by-step-walk" class="anchor"><span id="_Toc450838866" class="anchor"></span></span>Ingress data from your SQL Server database into Machine Learning
In this walkthrough, you will set up an Azure Data Factory Integration Runtime in an Azure Machine Learning workspace, configure it, and then read data from a SQL Server database.
SQL Server database.
The first step is to create and set up the gateway to access your SQL database.
-1. Log in to [Azure Machine Learning
+1. Log in to [Machine Learning
Studio (classic)](https://studio.azureml.net/Home/) and select the workspace that you want to work in.
2. Click the **SETTINGS** blade on the left, and then click the **DATA
SQL database.
![Data Management Gateway Manager](./media/use-data-from-an-on-premises-sql-server/data-gateway-configuration-manager-registered.png)
- Azure Machine Learning Studio (classic) also gets updated when the registration is successful.
+ Machine Learning Studio (classic) also gets updated when the registration is successful.
![Gateway registration successful](./media/use-data-from-an-on-premises-sql-server/gateway-registered.png)
11. In the **Download and register data gateway** dialog, click the
SQL database.
![Enable verbose logging](./media/use-data-from-an-on-premises-sql-server/data-gateway-configuration-manager-verbose-logging.png)
-This completes the gateway setup process in Azure Machine Learning Studio (classic).
+This completes the gateway setup process in Machine Learning Studio (classic).
You're now ready to use your on-premises data. You can create and set up multiple gateways in Studio (classic) for each workspace. For example, you may have a gateway that you want to connect to your test data sources during development, and a different gateway
-for your production data sources. Azure Machine Learning Studio (classic) gives you the
+for your production data sources. Machine Learning Studio (classic) gives you the
flexibility to set up multiple gateways depending upon your corporate environment. Currently you can't share a gateway between workspaces and only one gateway can be installed on a single computer. For more information, see [Move data between on-premises sources and cloud with Data Management Gateway](../../data-factory/tutorial-hybrid-copy-portal.md).
an experiment that inputs the data from the SQL Server database.
**+NEW** in the lower-left corner, and select **Blank Experiment** (or select one of several sample experiments available).
2. Find and drag the **Import Data** module to the experiment canvas.
-3. Click **Save as** below the canvas. Enter "Azure Machine Learning Studio (classic)
+3. Click **Save as** below the canvas. Enter "Machine Learning Studio (classic)
On-Premises SQL Server Tutorial" for the experiment name, select the workspace, and click the **OK** check mark.
an experiment that inputs the data from the SQL Server database.
![Enter database credentials](./media/use-data-from-an-on-premises-sql-server/database-credentials.png)
- The message "values required" changes to "values set" with a green check mark. You only need to enter the credentials once unless the database information or password changes. Azure Machine Learning Studio (classic) uses the certificate you provided when you installed the gateway to encrypt the credentials in the cloud. Azure never stores on-premises credentials without encryption.
+ The message "values required" changes to "values set" with a green check mark. You only need to enter the credentials once unless the database information or password changes. Machine Learning Studio (classic) uses the certificate you provided when you installed the gateway to encrypt the credentials in the cloud. Azure never stores on-premises credentials without encryption.
![Import Data module properties](./media/use-data-from-an-on-premises-sql-server/import-data-properties-entered.png)
8. Click **RUN** to run the experiment.
machine-learning Use Sample Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/use-sample-datasets.md
Last updated 01/19/2018
-# Use the sample datasets in Azure Machine Learning Studio (classic)
+# Use the sample datasets in Machine Learning Studio (classic)
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
[top]: #machine-learning-sample-datasets
-When you create a new workspace in Azure Machine Learning Studio (classic), a number of sample datasets and experiments are included by default. Many of these sample datasets are used by the sample models in the [Azure AI Gallery](https://gallery.azure.ai/). Others are included as examples of various types of data typically used in machine learning.
+When you create a new workspace in Machine Learning Studio (classic), a number of sample datasets and experiments are included by default. Many of these sample datasets are used by the sample models in the [Azure AI Gallery](https://gallery.azure.ai/). Others are included as examples of various types of data typically used in machine learning.
Some of these datasets are available in Azure Blob storage. For these datasets, the following table provides a direct link. You can use these datasets in your experiments by using the [Import Data][import-data] module.
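If you want to inspect one of the Blob-hosted sample datasets outside Studio (classic), the direct links in the table below can be read with any HTTP-capable tool. A minimal sketch with pandas follows, assuming the pandas package is installed and the direct link (taken from the weather dataset entry below) is still reachable.

```python
import pandas as pd

# Direct link to the sample weather dataset listed in the table below.
url = "https://az754797.vo.msecnd.net/data/WeatherDataset.csv"

df = pd.read_csv(url)
print(df.shape)   # number of rows and columns
print(df.head())  # first few observations
```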
A collection of simulated energy profiles, based on 12 different building shapes
<td> Passenger flight on-time performance data taken from the TranStats data collection of the U.S. Department of Transportation (<a href="https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time">On-Time</a>). <p></p>
-The dataset covers the time period April-October 2013. Before uploading to Azure Machine Learning Studio (classic), the dataset was processed as follows:
+The dataset covers the time period April-October 2013. Before uploading to Machine Learning Studio (classic), the dataset was processed as follows:
<ul> <li>The dataset was filtered to cover only the 70 busiest airports in the continental US</li> <li>Canceled flights were labeled as delayed by more than 15 minutes</li>
Also, note that the number of background events (h, for hadronic showers) is und
<td> Hourly land-based weather observations from NOAA (<a href="https://az754797.vo.msecnd.net/data/WeatherDataset.csv">merged data from 201304 to 201310</a>). <p></p>
-The weather data covers observations made from airport weather stations, covering the time period April-October 2013. Before uploading to Azure Machine Learning Studio (classic), the dataset was processed as follows:
+The weather data covers observations made from airport weather stations, covering the time period April-October 2013. Before uploading to Machine Learning Studio (classic), the dataset was processed as follows:
<ul> <li>Weather station IDs were mapped to corresponding airport IDs</li> <li>Weather stations not associated with the 70 busiest airports were filtered out</li>
The weather data covers observations made from airport weather stations, coverin
<td> Data is derived from Wikipedia (<a href="https://www.wikipedia.org/">https://www.wikipedia.org/</a>) based on articles of each S&P 500 company, stored as XML data. <p></p>
-Before uploading to Azure Machine Learning Studio (classic), the dataset was processed as follows:
+Before uploading to Machine Learning Studio (classic), the dataset was processed as follows:
<ul> <li>Extract text content for each specific company</li> <li>Remove wiki formatting</li>
machine-learning Version Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/version-control.md
Title: 'ML Studio (classic): Application lifecycle management - Azure'
-description: Apply Application Lifecycle Management best practices in Azure Machine Learning Studio (classic)
+description: Apply Application Lifecycle Management best practices in Machine Learning Studio (classic)
Last updated 10/27/2016
-# Application Lifecycle Management in Azure Machine Learning Studio (classic)
+# Application Lifecycle Management in Machine Learning Studio (classic)
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-Azure Machine Learning Studio (classic) is a tool for developing machine learning experiments that are operationalized in the Azure cloud platform. It is like the Visual Studio IDE and scalable cloud service merged into a single platform. You can incorporate standard Application Lifecycle Management (ALM) practices from versioning various assets to automated execution and deployment, into Azure Machine Learning Studio (classic). This article discusses some of the options and approaches.
+Machine Learning Studio (classic) is a tool for developing machine learning experiments that are operationalized in the Azure cloud platform. It is like the Visual Studio IDE and scalable cloud service merged into a single platform. You can incorporate standard Application Lifecycle Management (ALM) practices from versioning various assets to automated execution and deployment, into Machine Learning Studio (classic). This article discusses some of the options and approaches.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
Azure Machine Learning Studio (classic) is a tool for developing machine learnin
There are two recommended ways to version your experiments. You can either rely on the built-in run history or export the experiment in a JSON format so as to manage it externally. Each approach comes with its pros and cons.
### Experiment snapshots using Run History
-In the execution model of the Azure Machine Learning Studio (classic) learning experiment, an immutable snapshot of the experiment is submitted to the job scheduler whenever you click **Run** in the experiment editor. To view this list of snapshots, click **Run History** on the command bar in the experiment editor view.
+In the execution model of the Machine Learning Studio (classic) learning experiment, an immutable snapshot of the experiment is submitted to the job scheduler whenever you click **Run** in the experiment editor. To view this list of snapshots, click **Run History** on the command bar in the experiment editor view.
![Run History button](./media/version-control/runhistory.png)
You can then open the snapshot in Locked mode by clicking the name of the experi
![Run History list](./media/version-control/runhistorylist.png)
-After it's opened, you can save the snapshot experiment as a new experiment and then modify it. If your experiment snapshot contains assets such as trained models, transforms, or datasets that have updated versions, the snapshot retains the references to the original version when the snapshot was taken. If you save the locked snapshot as a new experiment, Azure Machine Learning Studio (classic) detects the existence of a newer version of these assets, and automatically updates them in the new experiment.
+After it's opened, you can save the snapshot experiment as a new experiment and then modify it. If your experiment snapshot contains assets such as trained models, transforms, or datasets that have updated versions, the snapshot retains the references to the original version when the snapshot was taken. If you save the locked snapshot as a new experiment, Machine Learning Studio (classic) detects the existence of a newer version of these assets, and automatically updates them in the new experiment.
If you delete the experiment, all snapshots of that experiment are deleted.
### Export/import experiment in JSON format
-The run history snapshots keep an immutable version of the experiment in Azure Machine Learning Studio (classic) every time it is submitted to run. You can also save a local copy of the experiment and check it in to your favorite source control system, such as Team Foundation Server, and later on re-create an experiment from that local file. You can use the [Azure Machine Learning PowerShell](https://aka.ms/amlps) commandlets [*Export-AmlExperimentGraph*](https://github.com/hning86/azuremlps#export-amlexperimentgraph) and [*Import-AmlExperimentGraph*](https://github.com/hning86/azuremlps#import-amlexperimentgraph) to accomplish that.
+The run history snapshots keep an immutable version of the experiment in Machine Learning Studio (classic) every time it is submitted to run. You can also save a local copy of the experiment and check it in to your favorite source control system, such as Team Foundation Server, and later on re-create an experiment from that local file. You can use the [Machine Learning Studio (classic) PowerShell](https://aka.ms/amlps) commandlets [*Export-AmlExperimentGraph*](https://github.com/hning86/azuremlps#export-amlexperimentgraph) and [*Import-AmlExperimentGraph*](https://github.com/hning86/azuremlps#import-amlexperimentgraph) to accomplish that.
The JSON file is a textual representation of the experiment graph, which might include a reference to assets in the workspace such as a dataset or a trained model. It doesn't contain a serialized version of the asset. If you attempt to import the JSON document back into the workspace, the referenced assets must already exist with the same asset IDs that are referenced in the experiment. Otherwise you cannot access the imported experiment.
## Versioning trained model
-A trained model in Azure Machine Learning Studio (classic) is serialized into a format known as an iLearner file (`.iLearner`), and is stored in the Azure Blob storage account associated with the workspace. One way to get a copy of the iLearner file is through the retraining API. [This article](./retrain-machine-learning-model.md) explains how the retraining API works. The high-level steps:
+A trained model in Machine Learning Studio (classic) is serialized into a format known as an iLearner file (`.iLearner`), and is stored in the Azure Blob storage account associated with the workspace. One way to get a copy of the iLearner file is through the retraining API. [This article](./retrain-machine-learning-model.md) explains how the retraining API works. The high-level steps:
1. Set up your training experiment.
2. Add a web service output port to the Train Model module, or the module that produces the trained model, such as Tune Model Hyperparameter or Create R Model.
After you have the iLearner file containing the trained model, you can then empl
The saved iLearner file can then be used for scoring through deployed web services.
## Versioning web service
-You can deploy two types of web services from an Azure Machine Learning Studio (classic) experiment. The classic web service is tightly coupled with the experiment as well as the workspace. The new web service uses the Azure Resource Manager framework, and it is no longer coupled with the original experiment or the workspace.
+You can deploy two types of web services from a Machine Learning Studio (classic) experiment. The classic web service is tightly coupled with the experiment as well as the workspace. The new web service uses the Azure Resource Manager framework, and it is no longer coupled with the original experiment or the workspace.
### Classic web service
To version a classic web service, you can take advantage of the web service endpoint construct. Here is a typical flow:
If you create a new Azure Resource Manager-based web service, the endpoint const
After you have the exported WSD file and version control it, you can also deploy the WSD as a new web service in a different web service plan in a different Azure region. Just make sure you supply the proper storage account configuration as well as the new web service plan ID. To patch in different iLearner files, you can modify the WSD file and update the location reference of the trained model, and deploy it as a new web service.
## Automate experiment execution and deployment
-An important aspect of ALM is to be able to automate the execution and deployment process of the application. In Azure Machine Learning Studio (classic), you can accomplish this by using the [PowerShell module](https://aka.ms/amlps). Here is an example of end-to-end steps that are relevant to a standard ALM automated execution/deployment process by using the [Azure Machine Learning Studio (classic) PowerShell module](https://aka.ms/amlps). Each step is linked to one or more PowerShell commandlets that you can use to accomplish that step.
+An important aspect of ALM is to be able to automate the execution and deployment process of the application. In Machine Learning Studio (classic), you can accomplish this by using the [PowerShell module](https://aka.ms/amlps). Here is an example of end-to-end steps that are relevant to a standard ALM automated execution/deployment process by using the [Machine Learning Studio (classic) PowerShell module](https://aka.ms/amlps). Each step is linked to one or more PowerShell commandlets that you can use to accomplish that step.
1. [Upload a dataset](https://github.com/hning86/azuremlps#upload-amldataset).
2. Copy a training experiment into the workspace from a [workspace](https://github.com/hning86/azuremlps#copy-amlexperiment) or from [Gallery](https://github.com/hning86/azuremlps#copy-amlexperimentfromgallery), or [import](https://github.com/hning86/azuremlps#import-amlexperimentgraph) an [exported](https://github.com/hning86/azuremlps#export-amlexperimentgraph) experiment from local disk.
An important aspect of ALM is to be able to automate the execution and deploymen
10. Test the web service [RRS](https://github.com/hning86/azuremlps#invoke-amlwebservicerrsendpoint) or [BES](https://github.com/hning86/azuremlps#invoke-amlwebservicebesendpoint) endpoint.
## Next steps
-* Download the [Azure Machine Learning Studio (classic) PowerShell](https://aka.ms/amlps) module and start to automate your ALM tasks.
+* Download the [Machine Learning Studio (classic) PowerShell](https://aka.ms/amlps) module and start to automate your ALM tasks.
* Learn how to [create and manage large number of ML models by using just a single experiment](create-models-and-endpoints-with-powershell.md) through PowerShell and retraining API.
-* Learn more about [deploying Azure Machine Learning web services](deploy-a-machine-learning-web-service.md).
+* Learn more about [deploying Machine Learning web services](deploy-a-machine-learning-web-service.md).
machine-learning Web Service Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-service-parameters.md
Title: 'ML Studio (classic): Web service parameters - Azure'
-description: How to use Azure Machine Learning Web Service Parameters to modify the behavior of your model when the web service is accessed.
+description: How to use Machine Learning Web Service Parameters to modify the behavior of your model when the web service is accessed.
Last updated 01/12/2017
-# Use Azure Machine Learning Studio (classic) web service parameters
+# Use Machine Learning Studio (classic) web service parameters
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-An Azure Machine Learning web service is created by publishing an experiment that contains modules with configurable parameters. In some cases, you may want to change the module behavior while the web service is running. *Web Service Parameters* allow you to do this task.
+A Machine Learning web service is created by publishing an experiment that contains modules with configurable parameters. In some cases, you may want to change the module behavior while the web service is running. *Web Service Parameters* allow you to do this task.
A common example is setting up the [Import Data][reader] module so that the user of the published web service can specify a different data source when the web service is accessed. Or configuring the [Export Data][writer] module so that a different destination can be specified. Some other examples include changing the number of bits for the [Feature Hashing][feature-hashing] module or the number of desired features for the [Filter-Based Feature Selection][filter-based-feature-selection] module.
You can decide whether to provide a default value for the Web Service Parameter.
The API documentation for the web service includes information for the web service user on how to specify the Web Service Parameter programmatically when accessing the web service.
> [!NOTE]
-> The API documentation for a classic web service is provided through the **API help page** link in the web service **DASHBOARD** in Machine Learning Studio (classic). The API documentation for a new web service is provided through the [Azure Machine Learning Web Services](https://services.azureml.net/Quickstart) portal on the **Consume** and **Swagger API** pages for your web service.
+> The API documentation for a classic web service is provided through the **API help page** link in the web service **DASHBOARD** in Machine Learning Studio (classic). The API documentation for a new web service is provided through the [Machine Learning Web Services](https://services.azureml.net/Quickstart) portal on the **Consume** and **Swagger API** pages for your web service.
> >
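To make the programmatic side concrete, here is a hedged Python sketch of a Request-Response call that sets a Web Service Parameter through the `GlobalParameters` section of the request body. The endpoint URL, API key, input columns, and the parameter name "Data table name" are all placeholders; use whatever names you exposed when you defined the parameters in your experiment and the values shown in the service's API documentation.

```python
import requests

# Placeholders: copy the URL and key from the web service's API help / Consume page.
URL = "https://<region>.services.azureml.net/workspaces/<ws-id>/services/<service-id>/execute?api-version=2.0&details=true"
API_KEY = "<your-api-key>"

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["col1", "col2"],
            "Values": [["value1", "value2"]],
        }
    },
    # Keys here must match the Web Service Parameter names exposed in the experiment.
    # "Data table name" is an illustrative name for an Export Data destination parameter.
    "GlobalParameters": {"Data table name": "dbo.ScoredLabels2"},
}

response = requests.post(
    URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
print(response.json())
```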
As an example, let's assume we have an experiment with an [Export Data][writer]
7. Click **Deploy Web Service** and select **Deploy Web Service [Classic]** or **Deploy Web Service [New]** to deploy the web service.
> [!NOTE]
-> To deploy a New web service you must have sufficient permissions in the subscription to which you deploying the web service. For more information see, [Manage a Web service using the Azure Machine Learning Web Services portal](manage-new-webservice.md).
+> To deploy a New web service you must have sufficient permissions in the subscription to which you are deploying the web service. For more information, see [Manage a Web service using the Machine Learning Web Services portal](manage-new-webservice.md).
The user of the web service can now specify a new destination for the [Export Data][writer] module when accessing the web service.
## More information
For a more detailed example, see the [Web Service Parameters](/archive/blogs/machinelearning/azureml-web-service-parameters) entry in the [Machine Learning Blog](/archive/blogs/machinelearning/azureml-web-service-parameters).
-For more information on accessing a Machine Learning web service, see [How to consume an Azure Machine Learning Web service](consume-web-services.md).
+For more information on accessing a Machine Learning web service, see [How to consume a Machine Learning Web service](consume-web-services.md).
<!-- Module References --> [feature-hashing]: /azure/machine-learning/studio-module-reference/feature-hashing
machine-learning Web Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-services-logging.md
Last updated 06/15/2017
-# Enable logging for Azure Machine Learning Studio (classic) web services
+# Enable logging for Machine Learning Studio (classic) web services
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
This document provides information on the logging capability of Machine Learning
## How to enable logging for a Web service
-You enable logging from the [Azure Machine Learning Studio (classic) Web Services](https://services.azureml.net) portal.
+You enable logging from the [Machine Learning Studio (classic) Web Services](https://services.azureml.net) portal.
-1. Sign in to the Azure Machine Learning Studio (classic) Web Services portal at [https://services.azureml.net](https://services.azureml.net). For a Classic web service, you can also get to the portal by clicking **New Web Services Experience** on the Machine Learning Studio (classic) Web Services page in Studio (classic).
+1. Sign in to the Machine Learning Studio (classic) Web Services portal at [https://services.azureml.net](https://services.azureml.net). For a Classic web service, you can also get to the portal by clicking **New Web Services Experience** on the Machine Learning Studio (classic) Web Services page in Studio (classic).
![New Web Services Experience link](./media/web-services-logging/new-web-services-experience-link.png)
machine-learning Web Services That Use Import Export Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-services-that-use-import-export-modules.md
Last updated 03/28/2017
-# Deploy Azure Machine Learning Studio (classic) web services that use Data Import and Data Export modules
+# Deploy Machine Learning Studio (classic) web services that use Data Import and Data Export modules
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
Next you set up the predictive experiment from which you deploy your web service
9. In the **Data table name field**, type dbo.ScoredLabels. If the table does not exist, it is created when the experiment is run or the web service is called.
10. In the **Comma separated list of datatable columns** field, type ScoredLabels.
-When you write an application that calls the final web service, you may want to specify a different input query or destination table at run time. To configure these inputs and outputs, use the Web Service Parameters feature to set the *Import Data* module *Data source* property and the *Export Data* mode data destination property. For more information on Web Service Parameters, see the [Azure Machine Learning Studio Web Service Parameters entry](/archive/blogs/machinelearning/azureml-web-service-parameters) on the Cortana Intelligence and Machine Learning Blog.
+When you write an application that calls the final web service, you may want to specify a different input query or destination table at run time. To configure these inputs and outputs, use the Web Service Parameters feature to set the *Import Data* module *Data source* property and the *Export Data* module data destination property. For more information on Web Service Parameters, see the [Machine Learning Studio (classic) Web Service Parameters entry](/archive/blogs/machinelearning/azureml-web-service-parameters) on the Cortana Intelligence and Machine Learning Blog.
To configure the Web Service Parameters for the import query and the destination table:
On completion of the run, a new table is added to the database containing the sc
### Deploy a New Web Service
> [!NOTE]
-> To deploy a New web service you must have sufficient permissions in the subscription to which you deploying the web service. For more information, see [Manage a Web service using the Azure Machine Learning Web Services portal](manage-new-webservice.md).
+> To deploy a New web service you must have sufficient permissions in the subscription to which you are deploying the web service. For more information, see [Manage a Web service using the Machine Learning Web Services portal](manage-new-webservice.md).
To deploy as a New Web Service and create an application to consume it:
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
The data is read as a data frame:
### Azure Synapse Analytics and databases
Azure Synapse Analytics is an elastic data warehouse as a service with an enterprise-class SQL Server experience.
-You can provision Azure Synapse Analytics by following the instructions in [this article](../../synapse-analytics/sql-data-warehouse/create-data-warehouse-portal.md). After you provision Azure Synapse Analytics, you can use [this walkthrough](../team-data-science-process/sqldw-walkthrough.md) to do data upload, exploration, and modeling by using data within Azure Synapse Analytics.
+You can provision Azure Synapse Analytics by following the instructions in [this article](../../synapse-analytics/sql-data-warehouse/create-data-warehouse-portal.md). After you provision Azure Synapse Analytics, you can use [this walkthrough](/azure/architecture/data-science-process/sqldw-walkthrough) to do data upload, exploration, and modeling by using data within Azure Synapse Analytics.
#### Azure Cosmos DB
Azure Cosmos DB is a NoSQL database in the cloud. You can use it to work with documents like JSON, and to store and query the documents.
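As a quick illustration, the following is a minimal Python sketch that stores and queries JSON documents with the `azure-cosmos` SDK. The account URL, key, and the database and container names are placeholders, and this is only one of several client libraries you could use from the Data Science Virtual Machine.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholders: use your Cosmos DB account URL and primary key.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<primary-key>")

database = client.create_database_if_not_exists(id="dsvm-samples")
container = database.create_container_if_not_exists(
    id="documents", partition_key=PartitionKey(path="/category")
)

# Store a JSON document.
container.upsert_item({"id": "1", "category": "demo", "score": 0.87})

# Query documents back with a parameterized SQL-like query.
items = container.query_items(
    query="SELECT c.id, c.score FROM c WHERE c.category = @category",
    parameters=[{"name": "@category", "value": "demo"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item)
```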
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-optimize-cost.md
You can also configure the amount of time the node is idle before scale down. By
+ If you perform less iterative experimentation, reduce this time to save costs.
+ If you perform highly iterative dev/test experimentation, you might need to increase the time so you aren't paying for constant scaling up and down after each change to your training script or environment.
-AmlCompute clusters can be configured for your changing workload requirements in Azure portal, using the [AmlCompute SDK class](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute), [AmlCompute CLI](/cli/azure/ml/computetarget/create#az_ml_computetarget_create_amlcompute), with the [REST APIs](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/stable).
+AmlCompute clusters can be configured for your changing workload requirements in Azure portal, using the [AmlCompute SDK class](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute), [AmlCompute CLI](/cli/azure/ml(v1)/computetarget/create#az_ml_v1__computetarget_create_amlcompute), with the [REST APIs](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/stable).
```azurecli
az ml computetarget create amlcompute --name testcluster --vm-size Standard_NC6 --min-nodes 0 --max-nodes 5 --idle-seconds-before-scaledown 300
```
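The same configuration can be expressed with the Python SDK's `AmlCompute` class. This is a hedged sketch, not the article's own sample: it assumes `azureml-core` is installed and a workspace `config.json` is available, and it mirrors the cluster name and VM size from the CLI example above.

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # assumes a config.json for your workspace is present

# Scale to zero when idle and cap the cluster at five nodes, as in the CLI example.
compute_config = AmlCompute.provisioning_configuration(
    vm_size="Standard_NC6",
    min_nodes=0,
    max_nodes=5,
    idle_seconds_before_scaledown=300,
)

cluster = ComputeTarget.create(ws, "testcluster", compute_config)
cluster.wait_for_completion(show_output=True)

# Later, adjust the same settings on the existing cluster as your workload changes.
cluster.update(min_nodes=0, max_nodes=10, idle_seconds_before_scaledown=600)
```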
One of the key methods of optimizing cost and performance is by parallelizing th
## Set data retention & deletion policies
-Every time a pipeline is executed, intermediate datasets are generated at each step. Over time, these intermediate datasets take up space in your storage account. Consider setting up policies to manage your data throughout its lifecycle to archive and delete your datasets. For more information, see [optimize costs by automating Azure Blob Storage access tiers](/storage/blobs/storage-lifecycle-management-concepts.md).
+Every time a pipeline is executed, intermediate datasets are generated at each step. Over time, these intermediate datasets take up space in your storage account. Consider setting up policies to manage your data throughout its lifecycle to archive and delete your datasets. For more information, see [optimize costs by automating Azure Blob Storage access tiers](/storage/blobs/storage-lifecycle-management-concepts).
## Deploy resources to the same region
For hybrid cloud scenarios like those using ExpressRoute, it can sometimes be mo
## Next steps
- [Plan to manage costs for Azure Machine Learning](concept-plan-manage-cost.md)
-- [Manage budgets, costs, and quota for Azure Machine Learning at organizational scale](/azure/cloud-adoption-framework/ready/azure-best-practices/optimize-ai-machine-learning-cost)
+- [Manage budgets, costs, and quota for Azure Machine Learning at organizational scale](/azure/cloud-adoption-framework/ready/azure-best-practices/optimize-ai-machine-learning-cost)
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-azure-machine-learning-cli.md
The following commands demonstrate how to work with machine learning pipelines:
+ Schedule a pipeline:
```azurecli-interactive
- az ml pipeline create-schedule -n myschedule -e myexpereiment -i mypipelineid -y myschedule.yml
+ az ml pipeline create-schedule -n myschedule -e myexperiment -i mypipelineid -y myschedule.yml
```
For more information, see [az ml pipeline create-schedule](/cli/azure/ml(v1)/pipeline#az_ml_pipeline_create-schedule).
machine-learning Agile Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/agile-development.md
- Title: Agile development of data science projects - Team Data Science Process
-description: Execute a data science project in a systematic, version controlled, and collaborative way within a project team by using the Team Data Science Process.
- Previously updated : 01/10/2020
-# Agile development of data science projects
-
-This document describes how developers can execute a data science project in a systematic, version controlled, and collaborative way within a project team by using the [Team Data Science Process](overview.md) (TDSP). The TDSP is a framework developed by Microsoft that provides a structured sequence of activities to efficiently execute cloud-based, predictive analytics solutions. For an outline of the roles and tasks that are handled by a data science team standardizing on the TDSP, see [Team Data Science Process roles and tasks](roles-tasks.md).
-
-This article includes instructions on how to:
-- Do *sprint planning* for work items involved in a project.
-- Add *work items* to sprints.
-- Create and use an *agile-derived work item template* that specifically aligns with TDSP lifecycle stages.
-
-The following instructions outline the steps needed to set up a TDSP team environment using Azure Boards and Azure Repos in Azure DevOps. The instructions use Azure DevOps because that is how to implement TDSP at Microsoft. If your group uses a different code hosting platform, the team lead tasks generally don't change, but the way to complete the tasks is different. For example, linking a work item with a Git branch might not be the same with GitHub as it is with Azure Repos.
-
-The following figure illustrates a typical sprint planning, coding, and source-control workflow for a data science project:
-
-![Team Data Science Process](./media/agile-development/1-project-execute.png)
-
-## <a name='Terminology-1'></a>Work item types
-
-In the TDSP sprint planning framework, there are four frequently used *work item* types: *Features*, *User Stories*, *Tasks*, and *Bugs*. The backlog for all work items is at the project level, not the Git repository level.
-
-Here are the definitions for the work item types:
--- **Feature**: A Feature corresponds to a project engagement. Different engagements with a client are different Features, and it's best to consider different phases of a project as different Features. If you choose a schema such as *\<ClientName>-\<EngagementName>* to name your Features, you can easily recognize the context of the project and engagement from the names themselves.
-
-- **User Story**: User Stories are work items needed to complete a Feature end-to-end. Examples of User Stories include:
- - Get data
- - Explore data
- - Generate features
- - Build models
- - Operationalize models
- - Retrain models
-
-- **Task**: Tasks are assignable work items that need to be done to complete a specific User Story. For example, Tasks in the User Story *Get data* could be:
- - Get SQL Server credentials
- - Upload data to Azure Synapse Analytics
-
-- **Bug**: Bugs are issues in existing code or documents that must be fixed to complete a Task. If Bugs are caused by missing work items, they can escalate to be User Stories or Tasks. -
-Data scientists may feel more comfortable using an agile template that replaces Features, User Stories, and Tasks with TDSP lifecycle stages and substages. To create an agile-derived template that specifically aligns with the TDSP lifecycle stages, see [Use an agile TDSP work template](#set-up-agile-dsp-6).
-
-> [!NOTE]
-> TDSP borrows the concepts of Features, User Stories, Tasks, and Bugs from software code management (SCM). The TDSP concepts might differ slightly from their conventional SCM definitions.
-
-## <a name='SprintPlanning-2'></a>Plan sprints
-
-Many data scientists are engaged with multiple projects, which can take months to complete and proceed at different paces. Sprint planning is useful for project prioritization, and resource planning and allocation. In Azure Boards, you can easily create, manage, and track work items for your projects, and conduct sprint planning to ensure projects are moving forward as expected.
-
-For more information about sprint planning, see [Scrum sprints](https://en.wikipedia.org/wiki/Scrum_(software_development)#Sprint).
-
-For more information about sprint planning in Azure Boards, see [Assign backlog items to a sprint](/azure/devops/boards/sprints/assign-work-sprint).
-
-## <a name='AddFeature-3'></a>Add a Feature to the backlog
-
-After your project and project code repository are created, you can add a Feature to the backlog to represent the work for your project.
-
-1. From your project page, select **Boards** > **Backlogs** in the left navigation.
-
-1. On the **Backlog** tab, if the work item type in the top bar is **Stories**, drop down and select **Features**. Then select **New Work Item.**
-
- ![Select New Work Item](./media/agile-development/2-sprint-team-overview.png)
-
-1. Enter a title for the Feature, usually your project name, and then select **Add to top**.
-
- ![Enter a title and select Add to top](./media/agile-development/3-sprint-team-add-work.png)
-
-1. From the **Backlog** list, select and open the new Feature. Fill in the description, assign a team member, and set planning parameters.
-
- You can also link the Feature to the project's Azure Repos code repository by selecting **Add link** under the **Development** section.
-
- After you edit the Feature, select **Save & Close**.
-
- ![Edit Feature and select Save & Close](./media/agile-development/3a-add-link-repo.png)
-
-## <a name='AddStoryunderfeature-4'></a>Add a User Story to the Feature
-
-Under the Feature, you can add User Stories to describe major steps needed to complete the project.
-
-To add a new User Story to a Feature:
-
-1. On the **Backlog** tab, select the **+** to the left of the Feature.
-
- ![Add a new User Story under the Feature](./media/agile-development/4-sprint-add-story.png)
-
-1. Give the User Story a title, and edit details such as assignment, status, description, comments, planning, and priority.
-
- You can also link the User Story to a branch of the project's Azure Repos code repository by selecting **Add link** under the **Development** section. Select the repository and branch you want to link the work item to, and then select **OK**.
-
- ![Add link](./media/agile-development/5-sprint-edit-story.png)
-
-1. When you're finished editing the User Story, select **Save & Close**.
-
-## <a name='AddTaskunderstory-5'></a>Add a Task to a User Story
-
-Tasks are specific detailed steps that are needed to complete each User Story. After all Tasks of a User Story are completed, the User Story should be completed too.
-
-To add a Task to a User Story, select the **+** next to the User Story item, and select **Task**. Fill in the title and other information in the Task.
-
-![Add a Task to a User Story](./media/agile-development/7-sprint-add-task.png)
-
-After you create Features, User Stories, and Tasks, you can view them in the **Backlogs** or **Boards** views to track their status.
-
-![Backlogs view](./media/agile-development/8-sprint-backlog-view.png)
-
-![Boards view](./media/agile-development/8a-sprint-board-view.png)
-
-## <a name='set-up-agile-dsp-6'></a>Use an agile TDSP work template
-
-Data scientists may feel more comfortable using an agile template that replaces Features, User Stories, and Tasks with TDSP lifecycle stages and substages. In Azure Boards, you can create an agile-derived template that uses TDSP lifecycle stages to create and track work items. The following steps walk through setting up a data science-specific agile process template and creating data science work items based on the template.
-
-### Set up an Agile Data Science Process template
-
-1. From your Azure DevOps organization main page, select **Organization settings** from the left navigation.
-
-1. In the **Organization Settings** left navigation, under **Boards**, select **Process**.
-
-1. In the **All processes** pane, select the **...** next to **Agile**, and then select **Create inherited process**.
-
- ![Create inherited process from Agile](./media/agile-development/10-settings.png)
-
-1. In the **Create inherited process from Agile** dialog, enter the name *AgileDataScienceProcess*, and select **Create process**.
-
- ![Create AgileDataScienceProcess process](./media/agile-development/11-agileds.png)
-
-1. In **All processes**, select the new **AgileDataScienceProcess**.
-
-1. On the **Work item types** tab, disable **Epic**, **Feature**, **User Story**, and **Task** by selecting the **...** next to each item and then selecting **Disable**.
-
- ![Disable work item types](./media/agile-development/12-disable.png)
-
-1. In **All processes**, select the **Backlog levels** tab. Under **Portfolios backlogs**, select the **...** next to **Epic (disabled)**, and then select **Edit/Rename**.
-
-1. In the **Edit backlog level** dialog box:
- 1. Under **Name**, replace **Epic** with *TDSP Projects*.
- 1. Under **Work item types on this backlog level**, select **New work item type**, enter *TDSP Project*, and select **Add**.
- 1. Under **Default work item type**, drop down and select **TDSP Project**.
- 1. Select **Save**.
-
- ![Set Portfolio backlog level](./media/agile-development/13-rename.png)
-
-1. Follow the same steps to rename **Features** to *TDSP Stages*, and add the following new work item types:
-
- - *Business Understanding*
- - *Data Acquisition*
- - *Modeling*
- - *Deployment*
-
-1. Under **Requirement backlog**, rename **Stories** to *TDSP Substages*, add the new work item type *TDSP Substage*, and set the default work item type to **TDSP Substage**.
-
-1. Under **Iteration backlog**, add a new work item type *TDSP Task*, and set it to be the default work item type.
-
-After you complete the steps, the backlog levels should look like this:
-
- ![TDSP template backlog levels](./media/agile-development/14-template.png)
-
-### Create Agile Data Science Process work items
-
-You can use the data science process template to create TDSP projects and track work items that correspond to TDSP lifecycle stages.
-
-1. From your Azure DevOps organization main page, select **New project**.
-
-1. In the **Create new project** dialog, give your project a name, and then select **Advanced**.
-
-1. Under **Work item process**, drop down and select **AgileDataScienceProcess**, and then select **Create**.
-
- ![Create a TDSP project](./media/agile-development/15-newproject.png)
-
-1. In the newly created project, select **Boards** > **Backlogs** in the left navigation.
-
-1. To make TDSP Projects visible, select the **Configure team settings** icon. In the **Settings** screen, select the **TDSP Projects** check box, and then select **Save and close**.
-
- ![Select TDSP Projects check box](./media/agile-development/16-enabledsprojects1.png)
-
-1. To create a data science-specific TDSP Project, select **TDSP Projects** in the top bar, and then select **New work item**.
-
-1. In the popup, give the TDSP Project work item a name, and select **Add to top**.
-
- ![Create data science project work item](./media/agile-development/17-dsworkitems0.png)
-
-1. To add a work item under the TDSP Project, select the **+** next to the project, and then select the type of work item to create.
-
- ![Select data science work item type](./media/agile-development/17-dsworkitems1.png)
-
-1. Fill in the details in the new work item, and select **Save & Close**.
-
-1. Continue to select the **+** symbols next to work items to add new TDSP Stages, Substages, and Tasks.
-
-Here is an example of how the data science project work items should appear in **Backlogs** view:
-
-![18](./media/agile-development/18-workitems1.png)
--
-## Next steps
-
-[Collaborative coding with Git](collaborative-coding-with-git.md) describes how to do collaborative code development for data science projects using Git as the shared code development framework, and how to link these coding activities to the work planned with the agile process.
-
-[Example walkthroughs](walkthroughs.md) lists walkthroughs of specific scenarios, with links and thumbnail descriptions. The linked scenarios illustrate how to combine cloud and on-premises tools and services into workflows or pipelines to create intelligent applications.
-
-Additional resources on agile processes:
-- [Agile process](/azure/devops/boards/work-items/guidance/agile-process)
-- [Agile process work item types and workflow](/azure/devops/boards/work-items/guidance/agile-process-workflow)
machine-learning Apps Anomaly Detection Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/apps-anomaly-detection-api.md
- Title: Azure Machine Learning Anomaly Detection API - Team Data Science Process
-description: Anomaly Detection API is an example built with Microsoft Azure Machine Learning that detects anomalies in time series data with numerical values that are uniformly spaced in time.
- Previously updated: 01/10/2020
-# Machine Learning Anomaly Detection API
-
-> [!NOTE]
-> This item is under maintenance. We encourage you to use the [Anomaly Detector API service](https://azure.microsoft.com/services/cognitive-services/anomaly-detector/) powered by a gallery of Machine Learning algorithms under Azure Cognitive Services to detect anomalies from business, operational, and IoT metrics.
-
-## Overview
-[Anomaly Detection API](https://gallery.azure.ai/MachineLearningAPI/Anomaly-Detection-2) is an example built with Azure Machine Learning that detects anomalies in time series data with numerical values that are uniformly spaced in time.
-
-This API can detect the following types of anomalous patterns in time series data:
-
-* **Positive and negative trends**: For example, when monitoring memory usage in computing, an upward trend may be of interest because it can indicate a memory leak.
-* **Changes in the dynamic range of values**: For example, when monitoring the exceptions thrown by a cloud service, any changes in the dynamic range of values could indicate instability in the health of the service.
-* **Spikes and Dips**: For example, when monitoring the number of login failures in a service or number of checkouts in an e-commerce site, spikes or dips could indicate abnormal behavior.
-
-These machine learning detectors track changes in values over time and report ongoing changes as anomaly scores. They don't require ad hoc threshold tuning, and their scores can be used to control the false positive rate. The anomaly detection API is useful in scenarios such as service monitoring (tracking KPIs over time), usage monitoring (metrics such as the number of searches or clicks), and performance monitoring (counters like memory, CPU, and file reads over time).
-
-The Anomaly Detection offering comes with useful tools to get you started.
-
-* The [web application](https://anomalydetection-aml.azurewebsites.net/) helps you evaluate and visualize the results of anomaly detection APIs on your data.
-
-> [!NOTE]
-> Try **IT Anomaly Insights solution** powered by [this API](https://gallery.azure.ai/MachineLearningAPI/Anomaly-Detection-2)
->
-<!-- This Solution is no longer available
-> To get this end to end solution deployed to your Azure subscription <a href="https://gallery.cortanaintelligence.com/Solution/Anomaly-Detection-Pre-Configured-Solution-1" target="_blank">**Start here >**</a>
-->
-## API Deployment
-In order to use the API, you must deploy it to your Azure subscription where it will be hosted as an Azure Machine Learning web service. You can do this from the [Azure AI Gallery](https://gallery.azure.ai/MachineLearningAPI/Anomaly-Detection-2). This will deploy two Azure Machine Learning Studio (classic) Web Services (and their related resources) to your Azure subscription - one for anomaly detection with seasonality detection, and one without seasonality detection. Once the deployment has completed, you will be able to manage your APIs from the [Azure Machine Learning Studio (classic) web services](https://services.azureml.net/webservices/) page. From this page, you will be able to find your endpoint locations, API keys, as well as sample code for calling the API. More detailed instructions are available [here](../classic/manage-new-webservice.md).
-
-## Scaling the API
-By default, your deployment will have a free Dev/Test billing plan that includes 1,000 transactions/month and 2 compute hours/month. You can upgrade to another plan as per your needs. Details on the pricing of different plans are available [here](https://azure.microsoft.com/pricing/details/machine-learning/) under "Production Web API pricing".
-
-## Managing AML Plans
-You can manage your billing plan [here](https://services.azureml.net/plans/). The plan name will be based on the resource group name you chose when deploying the API, plus a string that is unique to your subscription. Instructions on how to upgrade your plan are available [here](../classic/manage-new-webservice.md) under the "Managing billing plans" section.
-
-## API Definition
-The web service provides a REST-based API over HTTPS that can be consumed in different ways, including a web or mobile application, R, Python, Excel, and so on. You send your time series data to this service via a REST API call, and the service runs a combination of the anomaly detectors described below.
-
-## Calling the API
-In order to call the API, you will need to know the endpoint location and API key. These two requirements, along with sample code for calling the API, are available from the [Azure Machine Learning Studio (classic) web services](https://services.azureml.net/webservices/) page. Navigate to the desired API, and then click the "Consume" tab to find them. You can call the API as a Swagger API (that is, with the URL parameter `format=swagger`) or as a non-Swagger API (that is, without the `format` URL parameter). The sample code uses the Swagger format. Below is an example request and response in non-Swagger format. These examples are for the seasonality endpoint. The non-seasonality endpoint is similar.
-
-### Sample Request Body
-The request contains two objects: `Inputs` and `GlobalParameters`. In the example request below, some parameters are sent explicitly while others are not (scroll down for a full list of parameters for each endpoint). Parameters that are not sent explicitly in the request will use the default values given below.
-
-```json
-{
- "Inputs": {
- "input1": {
- "ColumnNames": ["Time", "Data"],
- "Values": [
- ["5/30/2010 18:07:00", "1"],
- ["5/30/2010 18:08:00", "1.4"],
- ["5/30/2010 18:09:00", "1.1"]
- ]
- }
- },
- "GlobalParameters": {
- "tspikedetector.sensitivity": "3",
- "zspikedetector.sensitivity": "3",
- "bileveldetector.sensitivity": "3.25",
- "detectors.spikesdips": "Both"
- }
-}
-```
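-
-For reference, here is a minimal sketch (not part of the official sample code) of how you might send this request body from Python with the `requests` library. The endpoint URL and API key are placeholders; copy the real values from the web service's "Consume" tab. The `details=true` parameter is included so that the response contains the `ColumnNames` field described below.
-
-```python
-# Minimal sketch: POST the non-Swagger request body above to a deployed endpoint.
-# The endpoint URL and API key below are placeholders, not real values.
-import requests
-
-endpoint = (
-    "https://<region>.services.azureml.net/workspaces/<workspace-id>"
-    "/services/<service-id>/execute?api-version=2.0&details=true"
-)
-api_key = "<your-api-key>"
-
-body = {
-    "Inputs": {
-        "input1": {
-            "ColumnNames": ["Time", "Data"],
-            "Values": [
-                ["5/30/2010 18:07:00", "1"],
-                ["5/30/2010 18:08:00", "1.4"],
-                ["5/30/2010 18:09:00", "1.1"],
-            ],
-        }
-    },
-    "GlobalParameters": {"tspikedetector.sensitivity": "3"},
-}
-
-headers = {"Authorization": "Bearer " + api_key, "Content-Type": "application/json"}
-response = requests.post(endpoint, headers=headers, json=body)
-response.raise_for_status()
-response_json = response.json()
-print(response_json)
-```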
-
-### Sample Response
-In order to see the `ColumnNames` field, you must include `details=true` as a URL parameter in your request. See the tables below for the meaning behind each of these fields.
-
-```json
-{
- "Results": {
- "output1": {
- "type": "table",
- "value": {
- "Values": [
- ["5/30/2010 6:07:00 PM", "1", "1", "0", "0", "-0.687952590518378", "0", "-0.687952590518378", "0", "-0.687952590518378", "0"],
- ["5/30/2010 6:08:00 PM", "1.4", "1.4", "0", "0", "-1.07030497733224", "0", "-0.884548154298423", "0", "-1.07030497733224", "0"],
- ["5/30/2010 6:09:00 PM", "1.1", "1.1", "0", "0", "-1.30229513613974", "0", "-1.173800281031", "0", "-1.30229513613974", "0"]
- ],
- "ColumnNames": ["Time", "OriginalData", "ProcessedData", "TSpike", "ZSpike", "BiLevelChangeScore", "BiLevelChangeAlert", "PosTrendScore", "PosTrendAlert", "NegTrendScore", "NegTrendAlert"],
- "ColumnTypes": ["DateTime", "Double", "Double", "Double", "Double", "Double", "Int32", "Double", "Int32", "Double", "Int32"]
- }
- }
- }
-}
-```
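-
-As a convenience, the following sketch shows one way to turn the `Values` and `ColumnNames` arrays in this response into a pandas DataFrame for inspection. It assumes `response_json` holds the parsed response, as in the request example earlier; the column handling is an illustration, not part of the official sample code.
-
-```python
-# Sketch: load the tabular response above into a pandas DataFrame.
-# Assumes response_json = response.json() from the request example.
-import pandas as pd
-
-output = response_json["Results"]["output1"]["value"]
-df = pd.DataFrame(output["Values"], columns=output["ColumnNames"])
-
-# Scores and alerts come back as strings; cast everything except Time to float.
-numeric_cols = [c for c in df.columns if c != "Time"]
-df[numeric_cols] = df[numeric_cols].astype(float)
-print(df[["Time", "BiLevelChangeScore", "BiLevelChangeAlert"]])
-```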
--
-## Score API
-The Score API is used for running anomaly detection on non-seasonal time series data. The API runs a number of anomaly detectors on the data and returns their anomaly scores.
-The figure below shows an example of anomalies that the Score API can detect. This time series has two distinct level changes, and three spikes. The red dots show the time at which the level change is detected, while the black dots show the detected spikes.
-![Score API][1]
-
-### Detectors
-The anomaly detection API supports detectors in three broad categories. Details on specific input parameters and outputs for each detector can be found in the following table.
-
-| Detector Category | Detector | Description | Input Parameters | Outputs |
-| --- | --- | --- | --- | --- |
-| Spike Detectors |TSpike Detector |Detect spikes and dips based on how far the values are from the first and third quartiles |*tspikedetector.sensitivity:* takes an integer value in the range 1-10, default: 3; higher values catch only more extreme values, making the detector less sensitive |TSpike: binary values: '1' if a spike/dip is detected, '0' otherwise |
-| Spike Detectors | ZSpike Detector |Detect spikes and dips based on how far the datapoints are from their mean |*zspikedetector.sensitivity:* takes an integer value in the range 1-10, default: 3; higher values catch only more extreme values, making the detector less sensitive |ZSpike: binary values: '1' if a spike/dip is detected, '0' otherwise |
-| Slow Trend Detector |Slow Trend Detector |Detect a slow positive trend as per the set sensitivity |*trenddetector.sensitivity:* threshold on the detector score (default: 3.25; 3.25-5 is a reasonable range to select from; higher values are less sensitive) |tscore: floating-point number representing the anomaly score on trend |
-| Level Change Detectors | Bidirectional Level Change Detector |Detect both upward and downward level changes as per the set sensitivity |*bileveldetector.sensitivity:* threshold on the detector score (default: 3.25; 3.25-5 is a reasonable range to select from; higher values are less sensitive) |rpscore: floating-point number representing the anomaly score on upward and downward level change |
-
-### Parameters
-More detailed information on these input parameters is listed in the table below:
-
-| Input Parameters | Description | Default Setting | Type | Valid Range | Suggested Range |
-| --- | --- | --- | --- | --- | --- |
-| detectors.historywindow |History (in # of data points) used for anomaly score computation |500 |integer |10-2000 |Time-series dependent |
-| detectors.spikesdips | Whether to detect only spikes, only dips, or both |Both |enumerated |Both, Spikes, Dips |Both |
-| bileveldetector.sensitivity |Sensitivity for bidirectional level change detector. |3.25 |double |None |3.25-5 (Lower values mean more sensitive) |
-| trenddetector.sensitivity |Sensitivity for positive trend detector. |3.25 |double |None |3.25-5 (Lower values mean more sensitive) |
-| tspikedetector.sensitivity |Sensitivity for TSpike Detector |3 |integer |1-10 |3-5 (Lower values mean more sensitive) |
-| zspikedetector.sensitivity |Sensitivity for ZSpike Detector |3 |integer |1-10 |3-5 (Lower values mean more sensitive) |
-| postprocess.tailRows |Number of the latest data points to be kept in the output results |0 |integer |0 (keep all data points), or specify number of points to keep in results |N/A |
-
-### Output
-The API runs all detectors on your time series data and returns anomaly scores and binary spike indicators for each point in time. The table below lists outputs from the API.
-
-| Outputs | Description |
-| --- | --- |
-| Time |Timestamps from raw data, or aggregated (and/or) imputed data if aggregation (and/or) missing data imputation is applied |
-| Data |Values from raw data, or aggregated (and/or) imputed data if aggregation (and/or) missing data imputation is applied |
-| TSpike |Binary indicator to indicate whether a spike is detected by TSpike Detector |
-| ZSpike |Binary indicator to indicate whether a spike is detected by ZSpike Detector |
-| rpscore |A floating number representing anomaly score on bidirectional level change |
-| rpalert |1/0 value indicating there is a bidirectional level change anomaly based on the input sensitivity |
-| tscore |A floating number representing anomaly score on positive trend |
-| talert |1/0 value indicating there is a positive trend anomaly based on the input sensitivity |
-
-## ScoreWithSeasonality API
-The ScoreWithSeasonality API is used for running anomaly detection on time series that have seasonal patterns. This API is useful to detect deviations in seasonal patterns.
-The following figure shows an example of anomalies detected in a seasonal time series. The time series has one spike (the first black dot), two dips (the second black dot and one at the end), and one level change (red dot). Both the dip in the middle of the time series and the level change are only discernable after seasonal components are removed from the series.
-![Seasonality API][2]
-
-### Detectors
-The detectors in the seasonality endpoint are similar to the ones in the non-seasonality endpoint, but with slightly different parameter names (listed below).
-
-### Parameters
-
-More detailed information on these input parameters is listed in the table below:
-
-| Input Parameters | Description | Default Setting | Type | Valid Range | Suggested Range |
-| --- | --- | --- | --- | --- | --- |
-| preprocess.aggregationInterval |Aggregation interval in seconds for aggregating input time series |0 (no aggregation is performed) |integer |0: skip aggregation, > 0 otherwise |5 minutes to 1 day, time-series dependent |
-| preprocess.aggregationFunc |Function used for aggregating data into the specified AggregationInterval |mean |enumerated |mean, sum, length |N/A |
-| preprocess.replaceMissing |Values used to impute missing data |lkv (last known value) |enumerated |zero, lkv, mean |N/A |
-| detectors.historywindow |History (in # of data points) used for anomaly score computation |500 |integer |10-2000 |Time-series dependent |
-| detectors.spikesdips | Whether to detect only spikes, only dips, or both |Both |enumerated |Both, Spikes, Dips |Both |
-| bileveldetector.sensitivity |Sensitivity for bidirectional level change detector. |3.25 |double |None |3.25-5 (Lower values mean more sensitive) |
-| postrenddetector.sensitivity |Sensitivity for positive trend detector. |3.25 |double |None |3.25-5 (Lower values mean more sensitive) |
-| negtrenddetector.sensitivity |Sensitivity for negative trend detector. |3.25 |double |None |3.25-5 (Lower values mean more sensitive) |
-| tspikedetector.sensitivity |Sensitivity for TSpike Detector |3 |integer |1-10 |3-5 (Lower values mean more sensitive) |
-| zspikedetector.sensitivity |Sensitivity for ZSpike Detector |3 |integer |1-10 |3-5 (Lower values mean more sensitive) |
-| seasonality.enable |Whether seasonality analysis is to be performed |true |boolean |true, false |Time-series dependent |
-| seasonality.numSeasonality |Maximum number of periodic cycles to be detected |1 |integer |1, 2 |1-2 |
-| seasonality.transform |Whether seasonal (and) trend components shall be removed before applying anomaly detection |deseason |enumerated |none, deseason, deseasontrend |N/A |
-| postprocess.tailRows |Number of the latest data points to be kept in the output results |0 |integer |0 (keep all data points), or specify number of points to keep in results |N/A |
-
-### Output
-The API runs all detectors on your time series data and returns anomaly scores and binary spike indicators for each point in time. The table below lists outputs from the API.
-
-| Outputs | Description |
-| --- | --- |
-| Time |Timestamps from raw data, or aggregated (and/or) imputed data if aggregation (and/or) missing data imputation is applied |
-| OriginalData |Values from raw data, or aggregated (and/or) imputed data if aggregation (and/or) missing data imputation is applied |
-| ProcessedData |One of the following: <ul><li>Seasonally adjusted time series, if significant seasonality has been detected and the deseason option is selected</li><li>Seasonally adjusted and detrended time series, if significant seasonality has been detected and the deseasontrend option is selected</li><li>Otherwise, the same as OriginalData</li></ul> |
-| TSpike |Binary indicator to indicate whether a spike is detected by TSpike Detector |
-| ZSpike |Binary indicator to indicate whether a spike is detected by ZSpike Detector |
-| BiLevelChangeScore |A floating number representing anomaly score on level change |
-| BiLevelChangeAlert |1/0 value indicating there is a level change anomaly based on the input sensitivity |
-| PosTrendScore |A floating number representing anomaly score on positive trend |
-| PosTrendAlert |1/0 value indicating there is a positive trend anomaly based on the input sensitivity |
-| NegTrendScore |A floating number representing anomaly score on negative trend |
-| NegTrendAlert |1/0 value indicating there is a negative trend anomaly based on the input sensitivity |
-
-[1]: ./media/apps-anomaly-detection-api/anomaly-detection-score.png
-[2]: ./media/apps-anomaly-detection-api/anomaly-detection-seasonal.png
machine-learning Automated Data Pipeline Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/automated-data-pipeline-cheat-sheet.md
- Title: Azure Machine Learning data pipeline cheat sheet - Team Data Science Process
-description: A printable cheat sheet that shows you how to set up an automated data pipeline to your Azure Machine Learning web service whether your data is on-premises, streaming, in Azure, or in a third-party cloud service.
- Previously updated: 01/10/2020
-# Cheat sheet for an automated data pipeline for Azure Machine Learning predictions
-The **Microsoft Azure Machine Learning automated data pipeline cheat sheet** helps you navigate through the
-technology you can use to get your data to your Machine Learning web service where it can be scored by your predictive analytics model.
-
-Depending on whether your data is on-premises, in the cloud, or real-time streaming, there are different mechanisms available to move the data to your web service endpoint for scoring.
-This cheat sheet walks you through the decisions you need to make, and it offers links to articles that can help you develop your solution.
-
-## Download the Machine Learning automated data pipeline cheat sheet
-Once you download the cheat sheet, you can print it in tabloid size (11 x 17 in.).
-
-Download the cheat sheet here: **[Microsoft Azure Machine Learning automated data pipeline cheat sheet](https://download.microsoft.com/download/C/C/7/CC726F8B-2E6F-4C20-9B6F-AFBEE8253023/microsoft-machine-learning-operationalization-cheat-sheet_v1.pdf)**
-
-![Microsoft Azure Machine Learning Studio (classic) Capabilities Overview][op-cheat-sheet]
-
-[op-cheat-sheet]: ./media/automated-data-pipeline-cheat-sheet/machine-learning-automated-data-pipeline-cheat-sheet_v1.1.png
--
-## More help with Machine Learning Studio
-* For an overview of Microsoft Azure Machine Learning, see [Introduction to machine learning on Microsoft Azure](../classic/index.yml).
-* For an explanation of how to deploy a scoring web service, see [Deploy an Azure Machine Learning web service](../classic/deploy-a-machine-learning-web-service.md).
-* For a discussion of how to consume a scoring web service, see [How to consume an Azure Machine Learning Web service](../classic/consume-web-services.md).
machine-learning Ci Cd Flask https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/ci-cd-flask.md
- Title: Create a CI/CD pipeline with Azure Pipelines - Team Data Science Process
-description: "Create a continuous integration and continuous delivery pipeline for Artificial Intelligence (AI) applications using Docker and Kubernetes."
- Previously updated: 01/10/2020
-# Create CI/CD pipelines for AI apps using Azure Pipelines, Docker, and Kubernetes
-
-An Artificial Intelligence (AI) application is application code embedded with a pretrained machine learning (ML) model. There are always two streams of work for an AI application: Data scientists build the ML model, and app developers build the app and expose it to end users to consume. This article describes how to implement a continuous integration and continuous delivery (CI/CD) pipeline for an AI application that embeds the ML model into the app source code. The sample code and tutorial use a Python Flask web application, and fetch a pretrained model from a private Azure blob storage account. You could also use an AWS S3 storage account.
-
-> [!NOTE]
-> The following process is one of several ways to do CI/CD. There are alternatives to this tooling and the prerequisites.
-
-## Source code, tutorial, and prerequisites
-
-You can download [source code](https://github.com/Azure/DevOps-For-AI-Apps) and a [detailed tutorial](https://github.com/Azure/DevOps-For-AI-Apps/blob/master/Tutorial.md) from GitHub. Follow the tutorial steps to implement a CI/CD pipeline for your own application.
-
-To use the downloaded source code and tutorial, you need the following prerequisites:
-- The [source code repository](https://github.com/Azure/DevOps-For-AI-Apps) forked to your GitHub account
-- An [Azure DevOps Organization](/azure/devops/organizations/accounts/create-organization-msa-or-work-student)
-- [Azure CLI](/cli/azure/install-azure-cli)
-- An [Azure Container Service for Kubernetes (AKS) cluster](/previous-versions/azure/container-service/kubernetes/container-service-tutorial-kubernetes-deploy-cluster)
-- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) to run commands and fetch configuration from the AKS cluster
-- An [Azure Container Registry (ACR) account](../../container-registry/container-registry-get-started-portal.md)
-## CI/CD pipeline summary
-
-Each new Git commit kicks off the Build pipeline. The build securely pulls the latest ML model from a blob storage account, and packages it with the app code in a single container. This decoupling of the application development and data science workstreams ensures that the production app is always running the latest code with the latest ML model. If the app passes testing, the pipeline securely stores the build image in a Docker container in ACR. The release pipeline then deploys the container using AKS.
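-
-As an illustration of the "pulls the latest ML model from a blob storage account" step, a build task could run a short Python script like the one below. This is a hedged sketch rather than code from the sample repository; the container name, blob name, and `AZURE_STORAGE_CONNECTION_STRING` variable are assumptions that you would replace with your own values (for example, a secret pipeline variable).
-
-```python
-# Sketch: download the latest trained model from Azure Blob Storage during the build.
-# Container/blob names and the connection-string variable are assumptions.
-import os
-from azure.storage.blob import BlobServiceClient
-
-conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # injected as a secret build variable
-service = BlobServiceClient.from_connection_string(conn_str)
-blob = service.get_blob_client(container="models", blob="model.pkl")
-
-# Write the model next to the app code so it gets packaged into the container image.
-with open("app/model.pkl", "wb") as model_file:
-    model_file.write(blob.download_blob().readall())
-```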
-
-## CI/CD pipeline steps
-
-The following diagram and steps describe the CI/CD pipeline architecture:
-
-![CI/CD pipeline architecture](./media/ci-cd-flask/architecture.png)
-
-1. Developers work on the application code in the IDE of their choice.
-2. The developers commit the code to Azure Repos, GitHub, or other Git source control provider.
-3. Separately, data scientists work on developing their ML model.
-4. The data scientists publish the finished model to a model repository, in this case a blob storage account.
-5. Azure Pipelines kicks off a build based on the Git commit.
-6. The Build pipeline pulls the latest ML model from blob storage and creates a container.
-7. The pipeline pushes the build image to the private image repository in ACR.
-8. The Release pipeline kicks off based on the successful build.
-9. The pipeline pulls the latest image from ACR and deploys it across the Kubernetes cluster on AKS.
-10. User requests for the app go through the DNS server.
-11. The DNS server passes the requests to a load balancer, and sends responses back to the users.
-
-## See also
-- [Team Data Science Process (TDSP)](./index.yml)
-- [Azure Machine Learning (AML)](../index.yml)
-- [Azure DevOps](https://azure.microsoft.com/services/devops/)
-- [Azure Kubernetes Services (AKS)](../../aks/intro-kubernetes.md)
machine-learning Code Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/code-test.md
- Title: Test data science code with Azure DevOps Services - Team Data Science Process
-description: Data science code testing on Azure with the UCI adult income prediction dataset with the Team Data Science Process and Azure DevOps Services
- Previously updated: 01/10/2020
-# Data science code testing on Azure with the Team Data Science Process and Azure DevOps Services
-This article gives preliminary guidelines for testing code in a data science workflow. Such testing gives data scientists a systematic and efficient way to check the quality and expected outcome of their code. We use a Team Data Science Process (TDSP) [project that uses the UCI Adult Income dataset](https://github.com/Azure/MachineLearningSamples-TDSPUCIAdultIncome) that we published earlier to show how code testing can be done.
-
-## Introduction on code testing
-"Unit testing" is a longstanding practice for software development. But for data science, it's often not clear what "unit testing" means and how you should test code for different stages of a data science lifecycle, such as:
-
-* Data preparation
-* Data quality examination
-* Modeling
-* Model deployment
-
-This article replaces the term "unit testing" with "code testing." Code testing refers to the functions that assess whether the code for a given step of a data science lifecycle is producing results "as expected." The person who writes the test defines what "as expected" means, depending on the outcome of the function, such as a data quality check or a model.
-
-This article provides references as useful resources.
-
-## Azure DevOps for the testing framework
-This article describes how to perform and automate testing by using Azure DevOps. You might decide to use alternative tools. We also show how to set up an automatic build by using Azure DevOps and build agents. For build agents, we use Azure Data Science Virtual Machines (DSVMs).
-
-## Flow of code testing
-The overall workflow of testing code in a data science project looks like this:
-
-![Flow chart of code testing](./media/code-test/test-flow-chart.PNG)
-
-
-## Detailed steps
-
-Use the following steps to set up and run code testing and an automated build by using a build agent and Azure DevOps:
-
-1. Create a project in the Visual Studio desktop application:
-
- !["Create new project" screen in Visual Studio](./media/code-test/create_project.PNG)
-
- After you create your project, you'll find it in Solution Explorer in the right pane:
-
- ![Steps for creating a project](./media/code-test/create_python_project_in_vs.PNG)
-
- ![Solution Explorer](./media/code-test/solution_explorer_in_vs.PNG)
-
-1. Feed your project code into the Azure DevOps project code repository:
-
- ![Project code repository](./media/code-test/create_repo.PNG)
-
-1. Suppose you've done some data preparation work, such as data ingestion, feature engineering, and creating label columns. You want to make sure your code is generating the results that you expect. Here's some code that you can use to test whether the data-processing code is working properly:
-
- * Check that column names are right:
-
- ![Code for matching column names](./media/code-test/check_column_names.PNG)
-
- * Check that response levels are right:
-
- ![Code for matching levels](./media/code-test/check_response_levels.PNG)
-
- * Check that response percentage is reasonable:
-
- ![Code for response percentage](./media/code-test/check_response_percentage.PNG)
-
- * Check the missing rate of each column in the data:
-
- ![Code for missing rate](./media/code-test/check_missing_rate.PNG)
--
-1. After you've done the data processing and feature engineering work, and you've trained a good model, make sure that the model you trained can score new datasets correctly. You can use the following two tests to check the prediction levels and distribution of label values:
-
- * Check prediction levels:
-
- ![Code for checking prediction levels](./media/code-test/check_prediction_levels.PNG)
-
- * Check the distribution of prediction values:
-
- ![Code for checking prediction values](./media/code-test/check_prediction_values.PNG)
-
-1. Put all test functions together into a Python script called **test_funcs.py**:
-
- ![Python script for test functions](./media/code-test/create_file_test_func.PNG)
--
-1. After the test codes are prepared, you can set up the testing environment in Visual Studio.
-
-   Create a Python file called **test1.py**. In this file, create a class that includes all the tests you want to do. The following example shows six tests prepared. (A minimal sketch of such a test class also appears after this procedure.)
-
- ![Python file with a list of tests in a class](./media/code-test/create_file_test1_class.PNG)
-
-1. Those tests can be automatically discovered if you put **codetest.testCase** after your class name. Open Test Explorer in the right pane, and select **Run All**. All the tests will run sequentially and will tell you if the test is successful or not.
-
- ![Running the tests](./media/code-test/run_tests.PNG)
-
-1. Check in your code to the project repository by using Git commands. Your most recent work will be reflected shortly in Azure DevOps.
-
- ![Git commands for checking in code](./media/code-test/git_check_in.PNG)
-
- ![Most recent work in Azure DevOps](./media/code-test/git_check_in_most_recent_work.PNG)
-
-1. Set up automatic build and test in Azure DevOps:
-
- a. In the project repository, select **Build and Release**, and then select **+New** to create a new build process.
-
- ![Selections for starting a new build process](./media/code-test/create_new_build.PNG)
-
- b. Follow the prompts to select your source code location, project name, repository, and branch information.
-
- ![Source, name, repository, and branch information](./media/code-test/fill_in_build_info.PNG)
-
- c. Select a template. Because there's no Python project template, start by selecting **Empty process**.
-
- ![List of templates and "Empty process" button](./media/code-test/start_empty_process_template.PNG)
-
- d. Name the build and select the agent. You can choose the default here if you want to use a DSVM to complete the build process. For more information about setting agents, see [Build and release agents](/azure/devops/pipelines/agents/agents).
-
- ![Build and agent selections](./media/code-test/select_agent.PNG)
-
- e. Select **+** in the left pane, to add a task for this build phase. Because we're going to run the Python script **test1.py** to complete all the checks, this task is using a PowerShell command to run Python code.
-
- !["Add tasks" pane with PowerShell selected](./media/code-test/add_task_powershell.PNG)
-
- f. In the PowerShell details, fill in the required information, such as the name and version of PowerShell. Choose **Inline Script** as the type.
-
- In the box under **Inline Script**, you can type **python test1.py**. Make sure the environment variable is set up correctly for Python. If you need a different version or kernel of Python, you can explicitly specify the path as shown in the figure:
-
- ![PowerShell details](./media/code-test/powershell_scripts.PNG)
-
- g. Select **Save & queue** to complete the build pipeline process.
-
- !["Save & queue" button](./media/code-test/save_and_queue_build_definition.PNG)
-
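-For reference, here is a minimal sketch of what a **test1.py**-style test class might look like, using `unittest` and pandas (as mentioned in the procedure above). The dataset path, column names, and thresholds are hypothetical; adapt them to your own project and checks.
-
-```python
-# Hypothetical sketch of a test1.py test class; path, columns, and thresholds are examples.
-import unittest
-
-import pandas as pd
-
-
-class DataChecks(unittest.TestCase):
-    def setUp(self):
-        # Load the processed dataset produced by the data-preparation step.
-        self.df = pd.read_csv("data/processed/adult_income.csv")
-
-    def test_column_names(self):
-        expected = {"age", "education", "hours_per_week", "income_label"}
-        self.assertTrue(expected.issubset(self.df.columns))
-
-    def test_label_levels(self):
-        # The binary label should contain only the two expected levels.
-        self.assertEqual(set(self.df["income_label"].unique()), {0, 1})
-
-    def test_positive_rate_is_reasonable(self):
-        rate = self.df["income_label"].mean()
-        self.assertTrue(0.05 < rate < 0.95)
-
-    def test_missing_rate(self):
-        # No column should be more than 30 percent missing.
-        self.assertTrue((self.df.isnull().mean() < 0.3).all())
-
-
-if __name__ == "__main__":
-    unittest.main()
-```
-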
-Now every time a new commit is pushed to the code repository, the build process starts automatically. You can configure the build trigger for any branch you choose. The process runs the **test1.py** file on the agent machine to make sure that everything defined in the code runs correctly.
-
-If alerts are set up correctly, you'll be notified in email when the build is finished. You can also check the build status in Azure DevOps. If it fails, you can check the details of the build and find out which piece is broken.
-
-![Email notification of build success](./media/code-test/email_build_succeed.PNG)
-
-![Azure DevOps notification of build success](./media/code-test/vs_online_build_succeed.PNG)
-
-## Next steps
-* See the [UCI income prediction repository](https://github.com/Azure/MachineLearningSamples-TDSPUCIAdultIncome) for concrete examples of unit tests for data science scenarios.
-* Follow the preceding outline and examples from the UCI income prediction scenario in your own data science projects.
-
-## References
-* [Team Data Science Process](./index.yml)
-* [Visual Studio Testing Tools](https://www.visualstudio.com/vs/features/testing-tools/)
-* [Azure DevOps Testing Resources](https://www.visualstudio.com/team-services/)
-* [Data Science Virtual Machines](https://azure.microsoft.com/services/virtual-machines/data-science-virtual-machines/)
machine-learning Collaborative Coding With Git https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/collaborative-coding-with-git.md
- Title: Collaborative coding with Git - Team Data Science Process
-description: How to do collaborative code development for data science projects using Git with agile planning.
- Previously updated: 01/10/2020
-# Collaborative coding with Git
-
-This article describes how to use Git as the collaborative code development framework for data science projects. The article covers how to link code in Azure Repos to [agile development](agile-development.md) work items in Azure Boards, how to do code reviews, and how to create and merge pull requests for changes.
-
-## <a name='Linkaworkitemwithagitbranch-1'></a>Link a work item to an Azure Repos branch
-
-Azure DevOps provides a convenient way to connect an Azure Boards User Story or Task work item with an Azure Repos Git repository branch. You can link your User Story or Task directly to the code associated with it.
-
-To connect a work item to a new branch, select the **Actions** ellipsis (**...**) next to the work item, and on the context menu, scroll to and select **New branch**.
-
-![1](./media/collaborative-coding-with-git/1-sprint-board-view.png)
-
-In the **Create a branch** dialog, provide the new branch name and the base Azure Repos Git repository and branch. The base repository must be in the same Azure DevOps project as the work item. The base branch can be any existing branch. Select **Create branch**.
-
-![2](./media/collaborative-coding-with-git/2-create-a-branch.png)
-
-You can also create a new branch using the following Git bash command in Windows or Linux:
-
-```bash
-git checkout -b <new branch name> <base branch name>
-
-```
-If you don't specify a \<base branch name>, the new branch is based on `main`.
-
-To switch to your working branch, run the following command:
-
-```bash
-git checkout <working branch name>
-```
-
-After you switch to the working branch, you can start developing code or documentation artifacts to complete the work item. Running `git checkout main` switches you back to the `main` branch.
-
-It's a good practice to create a Git branch for each User Story work item. Then, for each Task work item, you can create a branch based on the User Story branch. Organize the branches in a hierarchy that corresponds to the User Story-Task relationship when you have multiple people working on different User Stories for the same project, or on different Tasks for the same User Story. You can minimize conflicts by having each team member work on a different branch, or on different code or other artifacts when sharing a branch.
-
-The following diagram shows the recommended branching strategy for TDSP. You might not need as many branches as shown here, especially when only one or two people work on a project, or only one person works on all Tasks of a User Story. But separating the development branch from the primary branch is always a good practice, and can help prevent the release branch from being interrupted by development activities. For a complete description of the Git branch model, see [A Successful Git Branching Model](https://nvie.com/posts/a-successful-git-branching-model/).
-
-![3](./media/collaborative-coding-with-git/3-git-branches.png)
-
-You can also link a work item to an existing branch. On the **Detail** page of a work item, select **Add link**. Then select an existing branch to link the work item to, and select **OK**.
-
-![4](./media/collaborative-coding-with-git/4-link-to-an-existing-branch.png)
-
-## <a name='WorkonaBranchandCommittheChanges-2'></a>Work on the branch and commit changes
-
-After you make a change for your work item, such as adding an R script file to your local machine's `script` branch, you can commit the change and push it from your local branch to the upstream working branch by using the following Git bash commands:
-
-```bash
-git status
-git add .
-git commit -m "added an R script file"
-git push origin script
-```
-
-![5](./media/collaborative-coding-with-git/5-sprint-push-to-branch.png)
-
-## <a name='CreateapullrequestonVSTS-3'></a>Create a pull request
-
-After one or more commits and pushes, when you're ready to merge your current working branch into its base branch, you can create and submit a *pull request* in Azure Repos.
-
-From the main page of your Azure DevOps project, point to **Repos** > **Pull requests** in the left navigation. Then select either of the **New pull request** buttons, or the **Create a pull request** link.
-
-![6](./media/collaborative-coding-with-git/6-spring-create-pull-request.png)
-
-On the **New Pull Request** screen, if necessary, navigate to the Git repository and branch you want to merge your changes into. Add or change any other information you want. Under **Reviewers**, add the names of the reviewers, and then select **Create**.
-
-![7](./media/collaborative-coding-with-git/7-spring-send-pull-request.png)
-
-## <a name='ReviewandMerge-4'></a>Review and merge
-
-Once you create the pull request, your reviewers get an email notification to review the pull request. The reviewers test whether the changes work, and check the changes with the requester if possible. The reviewers can make comments, request changes, and approve or reject the pull request based on their assessment.
-
-![8](./media/collaborative-coding-with-git/8-add_comments.png)
-
-After the reviewers approve the changes, you or someone else with merge permissions can merge the working branch to its base branch. Select **Complete**, and then select **Complete merge** in the **Complete pull request** dialog. You can choose to delete the working branch after it has merged.
-
-![10](./media/collaborative-coding-with-git/10-spring-complete-pullrequest.png)
-
-Confirm that the request is marked as **COMPLETED**.
-
-![11](./media/collaborative-coding-with-git/11-spring-merge-pullrequest.png)
-
-When you go back to **Repos** in the left navigation, you can see that you've been switched to the main branch since the `script` branch was deleted.
-
-![12](./media/collaborative-coding-with-git/12-spring-branch-deleted.png)
-
-You can also use the following Git bash commands to merge the `script` working branch to its base branch and delete the working branch after merging:
-
-```bash
-git checkout main
-git merge script
-git branch -d script
-```
-
-![13](./media/collaborative-coding-with-git/13-spring-branch-deleted-commandline.png)
-
-## Next steps
-
-[Execute data science tasks](execute-data-science-tasks.md) shows how to use utilities to complete several common data science tasks, such as interactive data exploration, data analysis, reporting, and model creation.
-
-[Example walkthroughs](walkthroughs.md) lists walkthroughs of specific scenarios, with links and thumbnail descriptions. The linked scenarios illustrate how to combine cloud and on-premises tools and services into workflows or pipelines to create intelligent applications.
-
machine-learning Create Features Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/create-features-hive.md
- Title: Create features for data in an Azure HDInsight Hadoop cluster - Team Data Science Process
-description: Examples of Hive queries that generate features in data stored in an Azure HDInsight Hadoop cluster.
- Previously updated: 01/10/2020
-# Create features for data in a Hadoop cluster using Hive queries
-This document shows how to create features for data stored in an Azure HDInsight Hadoop cluster using Hive queries. These Hive queries use embedded Hive User-Defined Functions (UDFs), the scripts for which are provided.
-
-The operations needed to create features can be memory intensive. The performance of Hive queries becomes more critical in such cases and can be improved by tuning certain parameters. The tuning of these parameters is discussed in the final section.
-
-Examples of queries that are specific to the [NYC Taxi Trip Data](https://chriswhong.com/open-data/foil_nyc_taxi/) scenarios are also provided in the [GitHub repository](https://github.com/Azure/Azure-MachineLearning-DataScience/tree/master/Misc/DataScienceProcess/DataScienceScripts). These queries already have the data schema specified and are ready to be submitted to run. The final section also discusses parameters that users can tune to improve the performance of the Hive queries.
-
-This task is a step in the [Team Data Science Process (TDSP)](./index.yml).
-
-## Prerequisites
-This article assumes that you have:
-
-* Created an Azure storage account. If you need instructions, see [Create an Azure Storage account](../../storage/common/storage-account-create.md)
-* Provisioned a customized Hadoop cluster with the HDInsight service. If you need instructions, see [Customize Azure HDInsight Hadoop Clusters for Advanced Analytics](../../hdinsight/spark/apache-spark-jupyter-spark-sql.md).
-* The data has been uploaded to Hive tables in Azure HDInsight Hadoop clusters. If it has not, follow [Create and load data to Hive tables](move-hive-tables.md) to upload data to Hive tables first.
-* Enabled remote access to the cluster. If you need instructions, see [Access the Head Node of Hadoop Cluster](../../hdinsight/spark/apache-spark-jupyter-spark-sql.md).
-
-## <a name="hive-featureengineering"></a>Feature generation
-In this section, several examples of ways in which features can be generated using Hive queries are described. Once you have generated additional features, you can either add them as columns to the existing table, or create a new table with the additional features and a primary key that can then be joined with the original table. Here are the examples presented:
-
-1. [Frequency-based Feature Generation](#hive-frequencyfeature)
-2. [Risks of Categorical Variables in Binary Classification](#hive-riskfeature)
-3. [Extract features from Datetime Field](#hive-datefeatures)
-4. [Extract features from Text Field](#hive-textfeatures)
-5. [Calculate distance between GPS coordinates](#hive-gpsdistance)
-
-### <a name="hive-frequencyfeature"></a>Frequency-based feature generation
-It is often useful to calculate the frequencies of the levels of a categorical variable, or the frequencies of certain combinations of levels from multiple categorical variables. Users can use the following script to calculate these frequencies:
-
-```hiveql
-select
- a.<column_name1>, a.<column_name2>, a.sub_count/sum(a.sub_count) over () as frequency
-from
-(
- select
- <column_name1>,<column_name2>, count(*) as sub_count
- from <databasename>.<tablename> group by <column_name1>, <column_name2>
-)a
-order by frequency desc;
-```
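-
-For intuition, the same frequency computation looks like this in pandas. This is a hedged sketch for illustration only; the DataFrame and its columns are made up (loosely modeled on the NYC taxi data), and the Hive query above remains the way to run it at scale on the cluster.
-
-```python
-# Sketch: frequency of each level combination of two categorical columns in pandas,
-# equivalent in spirit to the Hive query above. The DataFrame below is made up.
-import pandas as pd
-
-df = pd.DataFrame({
-    "payment_type": ["CSH", "CRD", "CSH", "CRD", "CRD"],
-    "rate_code": [1, 1, 2, 1, 1],
-})
-
-frequency = (
-    df.groupby(["payment_type", "rate_code"])
-    .size()
-    .div(len(df))
-    .sort_values(ascending=False)
-    .rename("frequency")
-)
-print(frequency)
-```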
--
-### <a name="hive-riskfeature"></a>Risks of categorical variables in binary classification
-In binary classification, non-numeric categorical variables must be converted into numeric features when the models being used only take numeric features. This conversion is done by replacing each non-numeric level with a numeric risk. This section shows some generic Hive queries that calculate the risk values (log odds) of a categorical variable.
-
-```hiveql
-set smooth_param1=1;
-set smooth_param2=20;
-select
- <column_name1>,<column_name2>,
- ln((sum_target+${hiveconf:smooth_param1})/(record_count-sum_target+${hiveconf:smooth_param2}-${hiveconf:smooth_param1})) as risk
-from
- (
- select
-        <column_name1>, <column_name2>, sum(binary_target) as sum_target, sum(1) as record_count
- from
- (
- select
- <column_name1>, <column_name2>, if(target_column>0,1,0) as binary_target
- from <databasename>.<tablename>
- )a
- group by <column_name1>, <column_name2>
- )b
-```
-
-In this example, variables `smooth_param1` and `smooth_param2` are set to smooth the risk values calculated from the data. Risks have a range between -Inf and Inf. A risk > 0 indicates that the probability that the target is equal to 1 is greater than 0.5.
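-
-To make the formula concrete, here is a quick numeric check with made-up counts: for a level with 30 positive targets out of 100 records and the default smoothing values, the risk works out to about -1.05, so that level implies P(target = 1) below 0.5.
-
-```python
-# Quick numeric check of the smoothed log-odds (risk) formula from the query above.
-# The counts are made up for illustration.
-import math
-
-smooth_param1, smooth_param2 = 1, 20  # default smoothing values set in the query
-sum_target, record_count = 30, 100    # hypothetical level: 30 positives in 100 rows
-
-risk = math.log((sum_target + smooth_param1) /
-                (record_count - sum_target + smooth_param2 - smooth_param1))
-print(round(risk, 2))  # about -1.05; below 0, so P(target = 1) < 0.5 for this level
-```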
-
-After the risk table is calculated, users can assign risk values to a data table by joining that table with the risk table. The Hive joining query was provided in a previous section.
-
-### <a name="hive-datefeatures"></a>Extract features from datetime fields
-Hive comes with a set of UDFs for processing datetime fields. In Hive, the default datetime format is 'yyyy-MM-dd 00:00:00' ('1970-01-01 12:21:32' for example). This section shows examples that extract the day of the month and the month from a datetime field, as well as examples that convert a datetime string in a format other than the default into a datetime string in the default format.
-
-```hiveql
-select day(<datetime field>), month(<datetime field>)
-from <databasename>.<tablename>;
-```
-
-This Hive query assumes that the *\<datetime field>* is in the default datetime format.
-
-If a datetime field is not in the default format, you need to convert the datetime field into a Unix time stamp first, and then convert the Unix time stamp into a datetime string in the default format. Once the datetime is in the default format, users can apply the embedded datetime UDFs to extract features.
-
-```hiveql
-select from_unixtime(unix_timestamp(<datetime field>,'<pattern of the datetime field>'))
-from <databasename>.<tablename>;
-```
-
-In this query, if the *\<datetime field>* has a pattern like *03/26/2015 12:04:39*, the *\<pattern of the datetime field>* should be `'MM/dd/yyyy HH:mm:ss'`. To test it, users can run:
-
-```hiveql
-select from_unixtime(unix_timestamp('05/15/2015 09:32:10','MM/dd/yyyy HH:mm:ss'))
-from hivesampletable limit 1;
-```
-
-The *hivesampletable* in this query comes preinstalled on all Azure HDInsight Hadoop clusters by default when the clusters are provisioned.
-
-